From patchwork Tue Mar 19 09:21:06 2019
X-Patchwork-Submitter: Paul Durrant
X-Patchwork-Id: 10859171
From: Paul Durrant
Date: Tue, 19 Mar 2019 09:21:06 +0000
Message-ID: <20190319092116.1525-2-paul.durrant@citrix.com>
In-Reply-To: <20190319092116.1525-1-paul.durrant@citrix.com>
Subject: [Xen-devel] [PATCH v9 01/11] viridian: add init hooks
Cc: Andrew Cooper, Paul Durrant, Wei Liu, Jan Beulich, Roger Pau Monné

This patch adds domain and vcpu init hooks for viridian features. The init
hooks do not yet do anything; the functionality will be added by subsequent
patches.

Signed-off-by: Paul Durrant
Reviewed-by: Wei Liu
Acked-by: Jan Beulich
---
Cc: Andrew Cooper
Cc: "Roger Pau Monné"

v5:
 - Put the call to viridian_domain_deinit() back into
   hvm_domain_relinquish_resources() where it should be

v3:
 - Re-instate call from domain deinit to vcpu deinit
 - Move deinit calls to avoid introducing new labels

v2:
 - Remove call from domain deinit to vcpu deinit
---
 xen/arch/x86/hvm/hvm.c               | 10 ++++++++++
 xen/arch/x86/hvm/viridian/viridian.c | 10 ++++++++++
 xen/include/asm-x86/hvm/viridian.h   |  3 +++
 3 files changed, 23 insertions(+)

diff --git a/xen/arch/x86/hvm/hvm.c b/xen/arch/x86/hvm/hvm.c
index 8adbb61b57..11ce21fc08 100644
--- a/xen/arch/x86/hvm/hvm.c
+++ b/xen/arch/x86/hvm/hvm.c
@@ -666,6 +666,10 @@ int hvm_domain_initialise(struct domain *d)
     if ( hvm_tsc_scaling_supported )
         d->arch.hvm.tsc_scaling_ratio = hvm_default_tsc_scaling_ratio;
 
+    rc = viridian_domain_init(d);
+    if ( rc )
+        goto fail2;
+
     rc = hvm_funcs.domain_initialise(d);
     if ( rc != 0 )
         goto fail2;
@@ -687,6 +691,7 @@ int hvm_domain_initialise(struct domain *d)
     hvm_destroy_cacheattr_region_list(d);
     destroy_perdomain_mapping(d, PERDOMAIN_VIRT_START, 0);
  fail:
+    viridian_domain_deinit(d);
     return rc;
 }
 
@@ -1526,6 +1531,10 @@ int hvm_vcpu_initialise(struct vcpu *v)
          && (rc = nestedhvm_vcpu_initialise(v)) < 0 ) /* teardown: nestedhvm_vcpu_destroy */
         goto fail5;
 
+    rc = viridian_vcpu_init(v);
+    if ( rc )
+        goto fail5;
+
     rc = hvm_all_ioreq_servers_add_vcpu(d, v);
     if ( rc != 0 )
         goto fail6;
@@ -1553,6 +1562,7 @@ int hvm_vcpu_initialise(struct vcpu *v)
  fail2:
     hvm_vcpu_cacheattr_destroy(v);
  fail1:
+    viridian_vcpu_deinit(v);
     return rc;
 }
 
diff --git a/xen/arch/x86/hvm/viridian/viridian.c b/xen/arch/x86/hvm/viridian/viridian.c
index 425af56856..5b0eb8a8c7 100644
--- a/xen/arch/x86/hvm/viridian/viridian.c
+++ b/xen/arch/x86/hvm/viridian/viridian.c
@@ -417,6 +417,16 @@ int guest_rdmsr_viridian(const struct vcpu *v, uint32_t idx, uint64_t *val)
     return X86EMUL_OKAY;
 }
 
+int viridian_vcpu_init(struct vcpu *v)
+{
+    return 0;
+}
+
+int viridian_domain_init(struct domain *d)
+{
+    return 0;
+}
+
 void viridian_vcpu_deinit(struct vcpu *v)
 {
     viridian_synic_wrmsr(v, HV_X64_MSR_VP_ASSIST_PAGE, 0);

diff --git a/xen/include/asm-x86/hvm/viridian.h b/xen/include/asm-x86/hvm/viridian.h
index ec5ef8d3f9..f072838955 100644
--- a/xen/include/asm-x86/hvm/viridian.h
+++ b/xen/include/asm-x86/hvm/viridian.h
@@ -80,6 +80,9 @@ viridian_hypercall(struct cpu_user_regs *regs);
 void viridian_time_ref_count_freeze(struct domain *d);
 void viridian_time_ref_count_thaw(struct domain *d);
 
+int viridian_vcpu_init(struct vcpu *v);
+int viridian_domain_init(struct domain *d);
+
 void viridian_vcpu_deinit(struct vcpu *v);
 void viridian_domain_deinit(struct domain *d);

From patchwork Tue Mar 19 09:21:07 2019
X-Patchwork-Submitter: Paul Durrant
X-Patchwork-Id: 10859179
From: Paul Durrant
Date: Tue, 19 Mar 2019 09:21:07 +0000
Message-ID: <20190319092116.1525-3-paul.durrant@citrix.com>
In-Reply-To: <20190319092116.1525-1-paul.durrant@citrix.com>
Subject: [Xen-devel] [PATCH v9 02/11] viridian: separately allocate domain and vcpu structures
Cc: Andrew Cooper, Paul Durrant, Wei Liu, Jan Beulich, Roger Pau Monné

Currently the viridian_domain and viridian_vcpu structures are inline in
the hvm_domain and hvm_vcpu structures respectively. Subsequent patches
will need to add sizable extra fields to the viridian structures, which
would cause the PAGE_SIZE limit of the overall vcpu structure to be
exceeded. This patch therefore uses the new init hooks to separately
allocate the structures, and converts the 'viridian' fields in hvm_domain
and hvm_vcpu to be pointers to these allocations. These separate
allocations also allow some vcpu and domain pointers to become const.

Ideally, now that they are no longer inline, the allocations of the
viridian structures could be made conditional on whether the toolstack is
going to configure the viridian enlightenments. However, the toolstack is
currently unable to convey this information to the domain creation code,
so such an enhancement is deferred until that becomes possible.

NOTE: The patch also introduces the 'is_viridian_vcpu' macro to avoid
      introducing a second evaluation of 'is_viridian_domain' with an
      open-coded 'v->domain' argument. This macro will be further used
      in a subsequent patch.
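The allocate-on-init / free-on-deinit pattern described above can be reduced to a minimal, self-contained analogue. This is not Xen code: the struct and function names are hypothetical, `calloc` stands in for Xen's `xzalloc` (zeroed allocation), and the free-and-NULL step mirrors Xen's `XFREE()` macro; the NULL check in deinit is what keeps the error path of a failed init safe.

```c
#include <stdlib.h>

/* Hypothetical stand-in for Xen's per-vcpu viridian state. */
struct viridian_state {
    int apic_assist_pending;
    unsigned long vp_assist_msr;
};

/* Hypothetical container; the real field is v->arch.hvm.viridian. */
struct vcpu_like {
    struct viridian_state *viridian;   /* a pointer now, not inline */
};

/* Analogue of viridian_vcpu_init(): allocate zeroed state. */
int vcpu_state_init(struct vcpu_like *v)
{
    if ( v->viridian )                 /* patch uses ASSERT(!...) here */
        return -1;

    v->viridian = calloc(1, sizeof(*v->viridian));  /* xzalloc() in Xen */
    return v->viridian ? 0 : -1;       /* -ENOMEM in the patch */
}

/* Analogue of viridian_vcpu_deinit(): safe to call even if init failed. */
void vcpu_state_deinit(struct vcpu_like *v)
{
    if ( !v->viridian )
        return;

    free(v->viridian);                 /* XFREE() frees and NULLs */
    v->viridian = NULL;
}
```

Because deinit tolerates a NULL pointer and init refuses a non-NULL one, the pair can sit on a common error path (as in hvm_vcpu_initialise()) without double-free or double-init hazards.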
Signed-off-by: Paul Durrant
Reviewed-by: Wei Liu
Acked-by: Jan Beulich
---
Cc: Andrew Cooper
Cc: "Roger Pau Monné"

v4:
 - Const-ify some vcpu and domain pointers

v2:
 - use XFREE()
 - expand commit comment to point out why allocations are unconditional
---
 xen/arch/x86/hvm/viridian/private.h  |  2 +-
 xen/arch/x86/hvm/viridian/synic.c    | 46 ++++++++---------
 xen/arch/x86/hvm/viridian/time.c     | 38 +++++++-------
 xen/arch/x86/hvm/viridian/viridian.c | 75 ++++++++++++++++++----------
 xen/include/asm-x86/hvm/domain.h     |  2 +-
 xen/include/asm-x86/hvm/hvm.h        |  4 ++
 xen/include/asm-x86/hvm/vcpu.h       |  2 +-
 xen/include/asm-x86/hvm/viridian.h   | 10 ++--
 8 files changed, 101 insertions(+), 78 deletions(-)

diff --git a/xen/arch/x86/hvm/viridian/private.h b/xen/arch/x86/hvm/viridian/private.h
index 398b22f12d..46174f48cd 100644
--- a/xen/arch/x86/hvm/viridian/private.h
+++ b/xen/arch/x86/hvm/viridian/private.h
@@ -89,7 +89,7 @@ void viridian_time_load_domain_ctxt(
 void viridian_dump_guest_page(const struct vcpu *v, const char *name,
                               const struct viridian_page *vp);
 
-void viridian_map_guest_page(struct vcpu *v, struct viridian_page *vp);
+void viridian_map_guest_page(const struct vcpu *v, struct viridian_page *vp);
 void viridian_unmap_guest_page(struct viridian_page *vp);
 
 #endif /* X86_HVM_VIRIDIAN_PRIVATE_H */
diff --git a/xen/arch/x86/hvm/viridian/synic.c b/xen/arch/x86/hvm/viridian/synic.c
index a6ebbbc9f5..28eda7798c 100644
--- a/xen/arch/x86/hvm/viridian/synic.c
+++ b/xen/arch/x86/hvm/viridian/synic.c
@@ -28,9 +28,9 @@ typedef union _HV_VP_ASSIST_PAGE
     uint8_t ReservedZBytePadding[PAGE_SIZE];
 } HV_VP_ASSIST_PAGE;
 
-void viridian_apic_assist_set(struct vcpu *v)
+void viridian_apic_assist_set(const struct vcpu *v)
 {
-    HV_VP_ASSIST_PAGE *ptr = v->arch.hvm.viridian.vp_assist.ptr;
+    HV_VP_ASSIST_PAGE *ptr = v->arch.hvm.viridian->vp_assist.ptr;
 
     if ( !ptr )
         return;
@@ -40,40 +40,40 @@ void viridian_apic_assist_set(struct vcpu *v)
      * wrong and the VM will most likely hang so force a crash now
      * to make the problem clear.
      */
-    if ( v->arch.hvm.viridian.apic_assist_pending )
+    if ( v->arch.hvm.viridian->apic_assist_pending )
         domain_crash(v->domain);
 
-    v->arch.hvm.viridian.apic_assist_pending = true;
+    v->arch.hvm.viridian->apic_assist_pending = true;
     ptr->ApicAssist.no_eoi = 1;
 }
 
-bool viridian_apic_assist_completed(struct vcpu *v)
+bool viridian_apic_assist_completed(const struct vcpu *v)
 {
-    HV_VP_ASSIST_PAGE *ptr = v->arch.hvm.viridian.vp_assist.ptr;
+    HV_VP_ASSIST_PAGE *ptr = v->arch.hvm.viridian->vp_assist.ptr;
 
     if ( !ptr )
         return false;
 
-    if ( v->arch.hvm.viridian.apic_assist_pending &&
+    if ( v->arch.hvm.viridian->apic_assist_pending &&
          !ptr->ApicAssist.no_eoi )
     {
         /* An EOI has been avoided */
-        v->arch.hvm.viridian.apic_assist_pending = false;
+        v->arch.hvm.viridian->apic_assist_pending = false;
         return true;
     }
 
     return false;
 }
 
-void viridian_apic_assist_clear(struct vcpu *v)
+void viridian_apic_assist_clear(const struct vcpu *v)
 {
-    HV_VP_ASSIST_PAGE *ptr = v->arch.hvm.viridian.vp_assist.ptr;
+    HV_VP_ASSIST_PAGE *ptr = v->arch.hvm.viridian->vp_assist.ptr;
 
     if ( !ptr )
         return;
 
     ptr->ApicAssist.no_eoi = 0;
-    v->arch.hvm.viridian.apic_assist_pending = false;
+    v->arch.hvm.viridian->apic_assist_pending = false;
 }
 
 int viridian_synic_wrmsr(struct vcpu *v, uint32_t idx, uint64_t val)
@@ -95,12 +95,12 @@ int viridian_synic_wrmsr(struct vcpu *v, uint32_t idx, uint64_t val)
 
     case HV_X64_MSR_VP_ASSIST_PAGE:
         /* release any previous mapping */
-        viridian_unmap_guest_page(&v->arch.hvm.viridian.vp_assist);
-        v->arch.hvm.viridian.vp_assist.msr.raw = val;
-        viridian_dump_guest_page(v, "VP_ASSIST",
-                                 &v->arch.hvm.viridian.vp_assist);
-        if ( v->arch.hvm.viridian.vp_assist.msr.fields.enabled )
-            viridian_map_guest_page(v, &v->arch.hvm.viridian.vp_assist);
+        viridian_unmap_guest_page(&v->arch.hvm.viridian->vp_assist);
+        v->arch.hvm.viridian->vp_assist.msr.raw = val;
+        viridian_dump_guest_page(v, "VP_ASSIST",
+                                 &v->arch.hvm.viridian->vp_assist);
+        if ( v->arch.hvm.viridian->vp_assist.msr.fields.enabled )
+            viridian_map_guest_page(v, &v->arch.hvm.viridian->vp_assist);
         break;
 
     default:
@@ -132,7 +132,7 @@ int viridian_synic_rdmsr(const struct vcpu *v, uint32_t idx, uint64_t *val)
         break;
 
     case HV_X64_MSR_VP_ASSIST_PAGE:
-        *val = v->arch.hvm.viridian.vp_assist.msr.raw;
+        *val = v->arch.hvm.viridian->vp_assist.msr.raw;
         break;
 
     default:
@@ -146,18 +146,18 @@ int viridian_synic_rdmsr(const struct vcpu *v, uint32_t idx, uint64_t *val)
 void viridian_synic_save_vcpu_ctxt(const struct vcpu *v,
                                    struct hvm_viridian_vcpu_context *ctxt)
 {
-    ctxt->apic_assist_pending = v->arch.hvm.viridian.apic_assist_pending;
-    ctxt->vp_assist_msr = v->arch.hvm.viridian.vp_assist.msr.raw;
+    ctxt->apic_assist_pending = v->arch.hvm.viridian->apic_assist_pending;
+    ctxt->vp_assist_msr = v->arch.hvm.viridian->vp_assist.msr.raw;
 }
 
 void viridian_synic_load_vcpu_ctxt(
     struct vcpu *v, const struct hvm_viridian_vcpu_context *ctxt)
 {
-    v->arch.hvm.viridian.vp_assist.msr.raw = ctxt->vp_assist_msr;
-    if ( v->arch.hvm.viridian.vp_assist.msr.fields.enabled )
-        viridian_map_guest_page(v, &v->arch.hvm.viridian.vp_assist);
+    v->arch.hvm.viridian->vp_assist.msr.raw = ctxt->vp_assist_msr;
+    if ( v->arch.hvm.viridian->vp_assist.msr.fields.enabled )
+        viridian_map_guest_page(v, &v->arch.hvm.viridian->vp_assist);
 
-    v->arch.hvm.viridian.apic_assist_pending = ctxt->apic_assist_pending;
+    v->arch.hvm.viridian->apic_assist_pending = ctxt->apic_assist_pending;
 }
 
 /*
diff --git a/xen/arch/x86/hvm/viridian/time.c b/xen/arch/x86/hvm/viridian/time.c
index 840a82b457..a7e94aadf0 100644
--- a/xen/arch/x86/hvm/viridian/time.c
+++ b/xen/arch/x86/hvm/viridian/time.c
@@ -27,7 +27,7 @@ typedef struct _HV_REFERENCE_TSC_PAGE
 
 static void dump_reference_tsc(const struct domain *d)
 {
-    const union viridian_page_msr *rt = &d->arch.hvm.viridian.reference_tsc;
+    const union viridian_page_msr *rt = &d->arch.hvm.viridian->reference_tsc;
 
     if ( !rt->fields.enabled )
         return;
@@ -38,7 +38,7 @@ static void dump_reference_tsc(const struct domain *d)
 
 static void update_reference_tsc(struct domain *d, bool initialize)
 {
-    unsigned long gmfn = d->arch.hvm.viridian.reference_tsc.fields.pfn;
+    unsigned long gmfn = d->arch.hvm.viridian->reference_tsc.fields.pfn;
     struct page_info *page = get_page_from_gfn(d, gmfn, NULL, P2M_ALLOC);
     HV_REFERENCE_TSC_PAGE *p;
 
@@ -107,7 +107,7 @@ static void update_reference_tsc(struct domain *d, bool initialize)
     put_page_and_type(page);
 }
 
-static int64_t raw_trc_val(struct domain *d)
+static int64_t raw_trc_val(const struct domain *d)
 {
     uint64_t tsc;
     struct time_scale tsc_to_ns;
@@ -119,21 +119,19 @@ static int64_t raw_trc_val(struct domain *d)
     return scale_delta(tsc, &tsc_to_ns) / 100ul;
 }
 
-void viridian_time_ref_count_freeze(struct domain *d)
+void viridian_time_ref_count_freeze(const struct domain *d)
 {
-    struct viridian_time_ref_count *trc;
-
-    trc = &d->arch.hvm.viridian.time_ref_count;
+    struct viridian_time_ref_count *trc =
+        &d->arch.hvm.viridian->time_ref_count;
 
     if ( test_and_clear_bit(_TRC_running, &trc->flags) )
         trc->val = raw_trc_val(d) + trc->off;
 }
 
-void viridian_time_ref_count_thaw(struct domain *d)
+void viridian_time_ref_count_thaw(const struct domain *d)
 {
-    struct viridian_time_ref_count *trc;
-
-    trc = &d->arch.hvm.viridian.time_ref_count;
+    struct viridian_time_ref_count *trc =
+        &d->arch.hvm.viridian->time_ref_count;
 
     if ( !d->is_shutting_down &&
          !test_and_set_bit(_TRC_running, &trc->flags) )
@@ -150,9 +148,9 @@ int viridian_time_wrmsr(struct vcpu *v, uint32_t idx, uint64_t val)
         if ( !(viridian_feature_mask(d) & HVMPV_reference_tsc) )
             return X86EMUL_EXCEPTION;
 
-        d->arch.hvm.viridian.reference_tsc.raw = val;
+        d->arch.hvm.viridian->reference_tsc.raw = val;
         dump_reference_tsc(d);
-        if ( d->arch.hvm.viridian.reference_tsc.fields.enabled )
+        if ( d->arch.hvm.viridian->reference_tsc.fields.enabled )
             update_reference_tsc(d, true);
         break;
 
@@ -189,13 +187,13 @@ int viridian_time_rdmsr(const struct vcpu *v, uint32_t idx, uint64_t *val)
         if ( !(viridian_feature_mask(d) & HVMPV_reference_tsc) )
             return X86EMUL_EXCEPTION;
 
-        *val = d->arch.hvm.viridian.reference_tsc.raw;
+        *val = d->arch.hvm.viridian->reference_tsc.raw;
         break;
 
     case HV_X64_MSR_TIME_REF_COUNT:
     {
         struct viridian_time_ref_count *trc =
-            &d->arch.hvm.viridian.time_ref_count;
+            &d->arch.hvm.viridian->time_ref_count;
 
         if ( !(viridian_feature_mask(d) & HVMPV_time_ref_count) )
             return X86EMUL_EXCEPTION;
@@ -219,17 +217,17 @@ int viridian_time_rdmsr(const struct vcpu *v, uint32_t idx, uint64_t *val)
 void viridian_time_save_domain_ctxt(
     const struct domain *d, struct hvm_viridian_domain_context *ctxt)
 {
-    ctxt->time_ref_count = d->arch.hvm.viridian.time_ref_count.val;
-    ctxt->reference_tsc = d->arch.hvm.viridian.reference_tsc.raw;
+    ctxt->time_ref_count = d->arch.hvm.viridian->time_ref_count.val;
+    ctxt->reference_tsc = d->arch.hvm.viridian->reference_tsc.raw;
 }
 
 void viridian_time_load_domain_ctxt(
     struct domain *d, const struct hvm_viridian_domain_context *ctxt)
 {
-    d->arch.hvm.viridian.time_ref_count.val = ctxt->time_ref_count;
-    d->arch.hvm.viridian.reference_tsc.raw = ctxt->reference_tsc;
+    d->arch.hvm.viridian->time_ref_count.val = ctxt->time_ref_count;
+    d->arch.hvm.viridian->reference_tsc.raw = ctxt->reference_tsc;
 
-    if ( d->arch.hvm.viridian.reference_tsc.fields.enabled )
+    if ( d->arch.hvm.viridian->reference_tsc.fields.enabled )
         update_reference_tsc(d, false);
 }
diff --git a/xen/arch/x86/hvm/viridian/viridian.c b/xen/arch/x86/hvm/viridian/viridian.c
index 5b0eb8a8c7..7839718ef4 100644
--- a/xen/arch/x86/hvm/viridian/viridian.c
+++ b/xen/arch/x86/hvm/viridian/viridian.c
@@ -146,7 +146,7 @@ void cpuid_viridian_leaves(const struct vcpu *v, uint32_t leaf,
          * Hypervisor information, but only if the guest has set its
          * own version number.
          */
-        if ( d->arch.hvm.viridian.guest_os_id.raw == 0 )
+        if ( d->arch.hvm.viridian->guest_os_id.raw == 0 )
             break;
         res->a = viridian_build;
         res->b = ((uint32_t)viridian_major << 16) | viridian_minor;
@@ -191,8 +191,8 @@ void cpuid_viridian_leaves(const struct vcpu *v, uint32_t leaf,
 
     case 4:
         /* Recommended hypercall usage. */
-        if ( (d->arch.hvm.viridian.guest_os_id.raw == 0) ||
-             (d->arch.hvm.viridian.guest_os_id.fields.os < 4) )
+        if ( (d->arch.hvm.viridian->guest_os_id.raw == 0) ||
+             (d->arch.hvm.viridian->guest_os_id.fields.os < 4) )
             break;
         res->a = CPUID4A_RELAX_TIMER_INT;
         if ( viridian_feature_mask(d) & HVMPV_hcall_remote_tlb_flush )
@@ -224,7 +224,7 @@ static void dump_guest_os_id(const struct domain *d)
 {
     const union viridian_guest_os_id_msr *goi;
 
-    goi = &d->arch.hvm.viridian.guest_os_id;
+    goi = &d->arch.hvm.viridian->guest_os_id;
 
     printk(XENLOG_G_INFO
            "d%d: VIRIDIAN GUEST_OS_ID: vendor: %x os: %x major: %x minor: %x sp: %x build: %x\n",
@@ -238,7 +238,7 @@ static void dump_hypercall(const struct domain *d)
 {
     const union viridian_page_msr *hg;
 
-    hg = &d->arch.hvm.viridian.hypercall_gpa;
+    hg = &d->arch.hvm.viridian->hypercall_gpa;
 
     printk(XENLOG_G_INFO "d%d: VIRIDIAN HYPERCALL: enabled: %x pfn: %lx\n",
            d->domain_id,
@@ -247,7 +247,7 @@ static void dump_hypercall(const struct domain *d)
 
 static void enable_hypercall_page(struct domain *d)
 {
-    unsigned long gmfn = d->arch.hvm.viridian.hypercall_gpa.fields.pfn;
+    unsigned long gmfn = d->arch.hvm.viridian->hypercall_gpa.fields.pfn;
     struct page_info *page = get_page_from_gfn(d, gmfn, NULL, P2M_ALLOC);
     uint8_t *p;
 
@@ -288,14 +288,14 @@ int guest_wrmsr_viridian(struct vcpu *v, uint32_t idx, uint64_t val)
     switch ( idx )
     {
     case HV_X64_MSR_GUEST_OS_ID:
-        d->arch.hvm.viridian.guest_os_id.raw = val;
+        d->arch.hvm.viridian->guest_os_id.raw = val;
         dump_guest_os_id(d);
         break;
 
     case HV_X64_MSR_HYPERCALL:
-        d->arch.hvm.viridian.hypercall_gpa.raw = val;
+        d->arch.hvm.viridian->hypercall_gpa.raw = val;
         dump_hypercall(d);
-        if ( d->arch.hvm.viridian.hypercall_gpa.fields.enabled )
+        if ( d->arch.hvm.viridian->hypercall_gpa.fields.enabled )
             enable_hypercall_page(d);
         break;
 
@@ -317,10 +317,10 @@ int guest_wrmsr_viridian(struct vcpu *v, uint32_t idx, uint64_t val)
     case HV_X64_MSR_CRASH_P3:
     case HV_X64_MSR_CRASH_P4:
         BUILD_BUG_ON(HV_X64_MSR_CRASH_P4 - HV_X64_MSR_CRASH_P0 >=
-                     ARRAY_SIZE(v->arch.hvm.viridian.crash_param));
+                     ARRAY_SIZE(v->arch.hvm.viridian->crash_param));
 
         idx -= HV_X64_MSR_CRASH_P0;
-        v->arch.hvm.viridian.crash_param[idx] = val;
+        v->arch.hvm.viridian->crash_param[idx] = val;
         break;
 
     case HV_X64_MSR_CRASH_CTL:
@@ -337,11 +337,11 @@ int guest_wrmsr_viridian(struct vcpu *v, uint32_t idx, uint64_t val)
         spin_unlock(&d->shutdown_lock);
 
         gprintk(XENLOG_WARNING, "VIRIDIAN CRASH: %lx %lx %lx %lx %lx\n",
-                v->arch.hvm.viridian.crash_param[0],
-                v->arch.hvm.viridian.crash_param[1],
-                v->arch.hvm.viridian.crash_param[2],
-                v->arch.hvm.viridian.crash_param[3],
-                v->arch.hvm.viridian.crash_param[4]);
+                v->arch.hvm.viridian->crash_param[0],
+                v->arch.hvm.viridian->crash_param[1],
+                v->arch.hvm.viridian->crash_param[2],
+                v->arch.hvm.viridian->crash_param[3],
+                v->arch.hvm.viridian->crash_param[4]);
         break;
     }
 
@@ -364,11 +364,11 @@ int guest_rdmsr_viridian(const struct vcpu *v, uint32_t idx, uint64_t *val)
     switch ( idx )
     {
     case HV_X64_MSR_GUEST_OS_ID:
-        *val = d->arch.hvm.viridian.guest_os_id.raw;
+        *val = d->arch.hvm.viridian->guest_os_id.raw;
         break;
 
     case HV_X64_MSR_HYPERCALL:
-        *val = d->arch.hvm.viridian.hypercall_gpa.raw;
+        *val = d->arch.hvm.viridian->hypercall_gpa.raw;
         break;
 
     case HV_X64_MSR_VP_INDEX:
@@ -393,10 +393,10 @@ int guest_rdmsr_viridian(const struct vcpu *v, uint32_t idx, uint64_t *val)
     case HV_X64_MSR_CRASH_P3:
     case HV_X64_MSR_CRASH_P4:
         BUILD_BUG_ON(HV_X64_MSR_CRASH_P4 - HV_X64_MSR_CRASH_P0 >=
-                     ARRAY_SIZE(v->arch.hvm.viridian.crash_param));
+                     ARRAY_SIZE(v->arch.hvm.viridian->crash_param));
 
         idx -= HV_X64_MSR_CRASH_P0;
-        *val = v->arch.hvm.viridian.crash_param[idx];
+        *val = v->arch.hvm.viridian->crash_param[idx];
         break;
 
     case HV_X64_MSR_CRASH_CTL:
@@ -419,17 +419,33 @@ int guest_rdmsr_viridian(const struct vcpu *v, uint32_t idx, uint64_t *val)
 
 int viridian_vcpu_init(struct vcpu *v)
 {
+    ASSERT(!v->arch.hvm.viridian);
+    v->arch.hvm.viridian = xzalloc(struct viridian_vcpu);
+    if ( !v->arch.hvm.viridian )
+        return -ENOMEM;
+
     return 0;
 }
 
 int viridian_domain_init(struct domain *d)
 {
+    ASSERT(!d->arch.hvm.viridian);
+    d->arch.hvm.viridian = xzalloc(struct viridian_domain);
+    if ( !d->arch.hvm.viridian )
+        return -ENOMEM;
+
     return 0;
 }
 
 void viridian_vcpu_deinit(struct vcpu *v)
 {
-    viridian_synic_wrmsr(v, HV_X64_MSR_VP_ASSIST_PAGE, 0);
+    if ( !v->arch.hvm.viridian )
+        return;
+
+    if ( is_viridian_vcpu(v) )
+        viridian_synic_wrmsr(v, HV_X64_MSR_VP_ASSIST_PAGE, 0);
+
+    XFREE(v->arch.hvm.viridian);
 }
 
 void viridian_domain_deinit(struct domain *d)
@@ -438,6 +454,11 @@ void viridian_domain_deinit(struct domain *d)
 
     for_each_vcpu ( d, v )
         viridian_vcpu_deinit(v);
+
+    if ( !d->arch.hvm.viridian )
+        return;
+
+    XFREE(d->arch.hvm.viridian);
 }
 
 /*
@@ -591,7 +612,7 @@ void viridian_dump_guest_page(const struct vcpu *v, const char *name,
            v, name, (unsigned long)vp->msr.fields.pfn);
 }
 
-void viridian_map_guest_page(struct vcpu *v, struct viridian_page *vp)
+void viridian_map_guest_page(const struct vcpu *v, struct viridian_page *vp)
 {
     struct domain *d = v->domain;
     unsigned long gmfn = vp->msr.fields.pfn;
@@ -645,8 +666,8 @@ static int viridian_save_domain_ctxt(struct vcpu *v,
 {
     const struct domain *d = v->domain;
     struct hvm_viridian_domain_context ctxt = {
-        .hypercall_gpa = d->arch.hvm.viridian.hypercall_gpa.raw,
-        .guest_os_id = d->arch.hvm.viridian.guest_os_id.raw,
+        .hypercall_gpa = d->arch.hvm.viridian->hypercall_gpa.raw,
+        .guest_os_id = d->arch.hvm.viridian->guest_os_id.raw,
     };
 
     if ( !is_viridian_domain(d) )
@@ -665,8 +686,8 @@ static int viridian_load_domain_ctxt(struct domain *d,
     if ( hvm_load_entry_zeroextend(VIRIDIAN_DOMAIN, h, &ctxt) != 0 )
         return -EINVAL;
 
-    d->arch.hvm.viridian.hypercall_gpa.raw = ctxt.hypercall_gpa;
-    d->arch.hvm.viridian.guest_os_id.raw = ctxt.guest_os_id;
+    d->arch.hvm.viridian->hypercall_gpa.raw = ctxt.hypercall_gpa;
+    d->arch.hvm.viridian->guest_os_id.raw = ctxt.guest_os_id;
 
     viridian_time_load_domain_ctxt(d, &ctxt);
 
@@ -680,7 +701,7 @@ static int viridian_save_vcpu_ctxt(struct vcpu *v, hvm_domain_context_t *h)
 {
     struct hvm_viridian_vcpu_context ctxt = {};
 
-    if ( !is_viridian_domain(v->domain) )
+    if ( !is_viridian_vcpu(v) )
         return 0;
 
     viridian_synic_save_vcpu_ctxt(v, &ctxt);
diff --git a/xen/include/asm-x86/hvm/domain.h b/xen/include/asm-x86/hvm/domain.h
index 3e7331817f..6c7c4f5aa6 100644
--- a/xen/include/asm-x86/hvm/domain.h
+++ b/xen/include/asm-x86/hvm/domain.h
@@ -154,7 +154,7 @@ struct hvm_domain {
     /* hypervisor intercepted msix table */
     struct list_head msixtbl_list;
 
-    struct viridian_domain viridian;
+    struct viridian_domain *viridian;
 
     bool_t hap_enabled;
     bool_t mem_sharing_enabled;
diff --git a/xen/include/asm-x86/hvm/hvm.h b/xen/include/asm-x86/hvm/hvm.h
index 53ffebb2c5..37c3567a57 100644
--- a/xen/include/asm-x86/hvm/hvm.h
+++ b/xen/include/asm-x86/hvm/hvm.h
@@ -463,6 +463,9 @@ static inline bool hvm_get_guest_bndcfgs(struct vcpu *v, u64 *val)
 #define is_viridian_domain(d) \
     (is_hvm_domain(d) && (viridian_feature_mask(d) & HVMPV_base_freq))
 
+#define is_viridian_vcpu(v) \
+    is_viridian_domain((v)->domain)
+
 #define has_viridian_time_ref_count(d) \
     (is_viridian_domain(d) && (viridian_feature_mask(d) & HVMPV_time_ref_count))
 
@@ -762,6 +765,7 @@ static inline bool hvm_has_set_descriptor_access_exiting(void)
 }
 
 #define is_viridian_domain(d) ((void)(d), false)
+#define is_viridian_vcpu(v) ((void)(v), false)
 #define has_viridian_time_ref_count(d) ((void)(d), false)
 #define hvm_long_mode_active(v) ((void)(v), false)
 #define hvm_get_guest_time(v) ((void)(v), 0)
diff --git a/xen/include/asm-x86/hvm/vcpu.h b/xen/include/asm-x86/hvm/vcpu.h
index 6c84d5a5a6..d1589f3a96 100644
--- a/xen/include/asm-x86/hvm/vcpu.h
+++ b/xen/include/asm-x86/hvm/vcpu.h
@@ -205,7 +205,7 @@ struct hvm_vcpu {
     /* Pending hw/sw interrupt (.vector = -1 means nothing pending). */
     struct x86_event inject_event;
 
-    struct viridian_vcpu viridian;
+    struct viridian_vcpu *viridian;
 };
 
 #endif /* __ASM_X86_HVM_VCPU_H__ */
diff --git a/xen/include/asm-x86/hvm/viridian.h b/xen/include/asm-x86/hvm/viridian.h
index f072838955..c562424332 100644
--- a/xen/include/asm-x86/hvm/viridian.h
+++ b/xen/include/asm-x86/hvm/viridian.h
@@ -77,8 +77,8 @@ int guest_rdmsr_viridian(const struct vcpu *v, uint32_t idx, uint64_t *val);
 int
 viridian_hypercall(struct cpu_user_regs *regs);
 
-void viridian_time_ref_count_freeze(struct domain *d);
-void viridian_time_ref_count_thaw(struct domain *d);
+void viridian_time_ref_count_freeze(const struct domain *d);
+void viridian_time_ref_count_thaw(const struct domain *d);
 
 int viridian_vcpu_init(struct vcpu *v);
 int viridian_domain_init(struct domain *d);
@@ -86,9 +86,9 @@ int viridian_domain_init(struct domain *d);
 void viridian_vcpu_deinit(struct vcpu *v);
 void viridian_domain_deinit(struct domain *d);
 
-void viridian_apic_assist_set(struct vcpu *v);
-bool viridian_apic_assist_completed(struct vcpu *v);
-void viridian_apic_assist_clear(struct vcpu *v);
+void viridian_apic_assist_set(const struct vcpu *v);
+bool viridian_apic_assist_completed(const struct vcpu *v);
+void viridian_apic_assist_clear(const struct vcpu *v);
 
 #endif /* __ASM_X86_HVM_VIRIDIAN_H__ */

From patchwork Tue Mar 19 09:21:08 2019
X-Patchwork-Submitter: Paul Durrant
X-Patchwork-Id: 10859177
From: Paul Durrant
Date: Tue, 19 Mar 2019 09:21:08 +0000
Message-ID: <20190319092116.1525-4-paul.durrant@citrix.com>
In-Reply-To: <20190319092116.1525-1-paul.durrant@citrix.com>
Subject: [Xen-devel] [PATCH v9 03/11] viridian: use stack variables for viridian_vcpu and viridian_domain...
Cc: Andrew Cooper, Paul Durrant, Wei Liu, Jan Beulich, Roger Pau Monné

...where there is more than one dereference inside a function. This
shortens the code and makes it more readable. No functional change.

Signed-off-by: Paul Durrant
Reviewed-by: Jan Beulich
---
Cc: Andrew Cooper
Cc: Wei Liu
Cc: "Roger Pau Monné"

v4:
 - New in v4
---
 xen/arch/x86/hvm/viridian/synic.c    | 49 ++++++++++++++++------------
 xen/arch/x86/hvm/viridian/time.c     | 27 ++++++++-------
 xen/arch/x86/hvm/viridian/viridian.c | 47 +++++++++++++-------------
 3 files changed, 69 insertions(+), 54 deletions(-)

diff --git a/xen/arch/x86/hvm/viridian/synic.c b/xen/arch/x86/hvm/viridian/synic.c
index 28eda7798c..f3d9f7ae74 100644
--- a/xen/arch/x86/hvm/viridian/synic.c
+++ b/xen/arch/x86/hvm/viridian/synic.c
@@ -30,7 +30,8 @@ typedef union _HV_VP_ASSIST_PAGE
 
 void viridian_apic_assist_set(const struct vcpu *v)
 {
-    HV_VP_ASSIST_PAGE *ptr = v->arch.hvm.viridian->vp_assist.ptr;
+    struct viridian_vcpu *vv = v->arch.hvm.viridian;
+    HV_VP_ASSIST_PAGE *ptr = vv->vp_assist.ptr;
 
     if ( !ptr )
         return;
@@ -40,25 +41,25 @@ void viridian_apic_assist_set(const struct vcpu *v)
      * wrong and the VM will most likely hang so force a crash now
      * to make the problem clear.
      */
-    if ( v->arch.hvm.viridian->apic_assist_pending )
+    if ( vv->apic_assist_pending )
         domain_crash(v->domain);
 
-    v->arch.hvm.viridian->apic_assist_pending = true;
+    vv->apic_assist_pending = true;
     ptr->ApicAssist.no_eoi = 1;
 }
 
 bool viridian_apic_assist_completed(const struct vcpu *v)
 {
-    HV_VP_ASSIST_PAGE *ptr = v->arch.hvm.viridian->vp_assist.ptr;
+    struct viridian_vcpu *vv = v->arch.hvm.viridian;
+    HV_VP_ASSIST_PAGE *ptr = vv->vp_assist.ptr;
 
     if ( !ptr )
         return false;
 
-    if ( v->arch.hvm.viridian->apic_assist_pending &&
-         !ptr->ApicAssist.no_eoi )
+    if ( vv->apic_assist_pending && !ptr->ApicAssist.no_eoi )
     {
         /* An EOI has been avoided */
-        v->arch.hvm.viridian->apic_assist_pending = false;
+        vv->apic_assist_pending = false;
         return true;
     }
 
@@ -67,17 +68,20 @@ bool viridian_apic_assist_completed(const struct vcpu *v)
 
 void viridian_apic_assist_clear(const struct vcpu *v)
 {
-    HV_VP_ASSIST_PAGE *ptr = v->arch.hvm.viridian->vp_assist.ptr;
+    struct viridian_vcpu *vv = v->arch.hvm.viridian;
+    HV_VP_ASSIST_PAGE *ptr = vv->vp_assist.ptr;
 
     if ( !ptr )
         return;
 
     ptr->ApicAssist.no_eoi = 0;
-    v->arch.hvm.viridian->apic_assist_pending = false;
+    vv->apic_assist_pending = false;
 }
 
 int viridian_synic_wrmsr(struct vcpu *v, uint32_t idx, uint64_t val)
 {
+    struct viridian_vcpu *vv = v->arch.hvm.viridian;
+
     switch ( idx )
     {
     case HV_X64_MSR_EOI:
@@ -95,12 +99,11 @@ int viridian_synic_wrmsr(struct vcpu *v, uint32_t idx, uint64_t val)
 
     case HV_X64_MSR_VP_ASSIST_PAGE:
         /* release any previous mapping */
-        viridian_unmap_guest_page(&v->arch.hvm.viridian->vp_assist);
-        v->arch.hvm.viridian->vp_assist.msr.raw = val;
-        viridian_dump_guest_page(v, "VP_ASSIST",
-                                 &v->arch.hvm.viridian->vp_assist);
-        if ( v->arch.hvm.viridian->vp_assist.msr.fields.enabled )
-            viridian_map_guest_page(v, &v->arch.hvm.viridian->vp_assist);
+        viridian_unmap_guest_page(&vv->vp_assist);
+        vv->vp_assist.msr.raw = val;
+        viridian_dump_guest_page(v, "VP_ASSIST", &vv->vp_assist);
+        if ( vv->vp_assist.msr.fields.enabled )
+            viridian_map_guest_page(v, &vv->vp_assist);
         break;
 
     default:
@@ -146,18 +149,22 @@ int viridian_synic_rdmsr(const struct vcpu *v, uint32_t idx, uint64_t *val)
 void viridian_synic_save_vcpu_ctxt(const struct vcpu *v,
                                    struct hvm_viridian_vcpu_context *ctxt)
 {
-    ctxt->apic_assist_pending = v->arch.hvm.viridian->apic_assist_pending;
-    ctxt->vp_assist_msr = v->arch.hvm.viridian->vp_assist.msr.raw;
+    const struct viridian_vcpu *vv = v->arch.hvm.viridian;
+
+    ctxt->apic_assist_pending = vv->apic_assist_pending;
+    ctxt->vp_assist_msr = vv->vp_assist.msr.raw;
 }
 
 void viridian_synic_load_vcpu_ctxt(
     struct vcpu *v, const struct hvm_viridian_vcpu_context *ctxt)
 {
-    v->arch.hvm.viridian->vp_assist.msr.raw = ctxt->vp_assist_msr;
-    if ( v->arch.hvm.viridian->vp_assist.msr.fields.enabled )
-        viridian_map_guest_page(v, &v->arch.hvm.viridian->vp_assist);
+    struct viridian_vcpu *vv = v->arch.hvm.viridian;
+
+    vv->vp_assist.msr.raw = ctxt->vp_assist_msr;
+    if ( vv->vp_assist.msr.fields.enabled )
+        viridian_map_guest_page(v, &vv->vp_assist);
 
-    v->arch.hvm.viridian->apic_assist_pending = ctxt->apic_assist_pending;
+    vv->apic_assist_pending = ctxt->apic_assist_pending;
 }
 
 /*
diff --git a/xen/arch/x86/hvm/viridian/time.c b/xen/arch/x86/hvm/viridian/time.c
index a7e94aadf0..76f9612001 100644
--- a/xen/arch/x86/hvm/viridian/time.c
+++ b/xen/arch/x86/hvm/viridian/time.c
@@ -141,6 +141,7 @@ void viridian_time_ref_count_thaw(const struct domain *d)
 int viridian_time_wrmsr(struct vcpu *v, uint32_t idx, uint64_t val)
 {
     struct domain *d = v->domain;
+    struct viridian_domain *vd = d->arch.hvm.viridian;
 
     switch ( idx )
     {
@@ -148,9 +149,9 @@ int viridian_time_wrmsr(struct vcpu *v, uint32_t idx, uint64_t val)
         if ( !(viridian_feature_mask(d) & HVMPV_reference_tsc) )
             return X86EMUL_EXCEPTION;
 
-        d->arch.hvm.viridian->reference_tsc.raw = val;
+        vd->reference_tsc.raw = val;
         dump_reference_tsc(d);
-        if ( d->arch.hvm.viridian->reference_tsc.fields.enabled )
+        if ( vd->reference_tsc.fields.enabled )
update_reference_tsc(d, true); break; @@ -165,7 +166,8 @@ int viridian_time_wrmsr(struct vcpu *v, uint32_t idx, uint64_t val) int viridian_time_rdmsr(const struct vcpu *v, uint32_t idx, uint64_t *val) { - struct domain *d = v->domain; + const struct domain *d = v->domain; + struct viridian_domain *vd = d->arch.hvm.viridian; switch ( idx ) { @@ -187,13 +189,12 @@ int viridian_time_rdmsr(const struct vcpu *v, uint32_t idx, uint64_t *val) if ( !(viridian_feature_mask(d) & HVMPV_reference_tsc) ) return X86EMUL_EXCEPTION; - *val = d->arch.hvm.viridian->reference_tsc.raw; + *val = vd->reference_tsc.raw; break; case HV_X64_MSR_TIME_REF_COUNT: { - struct viridian_time_ref_count *trc = - &d->arch.hvm.viridian->time_ref_count; + struct viridian_time_ref_count *trc = &vd->time_ref_count; if ( !(viridian_feature_mask(d) & HVMPV_time_ref_count) ) return X86EMUL_EXCEPTION; @@ -217,17 +218,21 @@ int viridian_time_rdmsr(const struct vcpu *v, uint32_t idx, uint64_t *val) void viridian_time_save_domain_ctxt( const struct domain *d, struct hvm_viridian_domain_context *ctxt) { - ctxt->time_ref_count = d->arch.hvm.viridian->time_ref_count.val; - ctxt->reference_tsc = d->arch.hvm.viridian->reference_tsc.raw; + const struct viridian_domain *vd = d->arch.hvm.viridian; + + ctxt->time_ref_count = vd->time_ref_count.val; + ctxt->reference_tsc = vd->reference_tsc.raw; } void viridian_time_load_domain_ctxt( struct domain *d, const struct hvm_viridian_domain_context *ctxt) { - d->arch.hvm.viridian->time_ref_count.val = ctxt->time_ref_count; - d->arch.hvm.viridian->reference_tsc.raw = ctxt->reference_tsc; + struct viridian_domain *vd = d->arch.hvm.viridian; + + vd->time_ref_count.val = ctxt->time_ref_count; + vd->reference_tsc.raw = ctxt->reference_tsc; - if ( d->arch.hvm.viridian->reference_tsc.fields.enabled ) + if ( vd->reference_tsc.fields.enabled ) update_reference_tsc(d, false); } diff --git a/xen/arch/x86/hvm/viridian/viridian.c b/xen/arch/x86/hvm/viridian/viridian.c index 
7839718ef4..710470fed7 100644 --- a/xen/arch/x86/hvm/viridian/viridian.c +++ b/xen/arch/x86/hvm/viridian/viridian.c @@ -122,6 +122,7 @@ void cpuid_viridian_leaves(const struct vcpu *v, uint32_t leaf, uint32_t subleaf, struct cpuid_leaf *res) { const struct domain *d = v->domain; + const struct viridian_domain *vd = d->arch.hvm.viridian; ASSERT(is_viridian_domain(d)); ASSERT(leaf >= 0x40000000 && leaf < 0x40000100); @@ -146,7 +147,7 @@ void cpuid_viridian_leaves(const struct vcpu *v, uint32_t leaf, * Hypervisor information, but only if the guest has set its * own version number. */ - if ( d->arch.hvm.viridian->guest_os_id.raw == 0 ) + if ( vd->guest_os_id.raw == 0 ) break; res->a = viridian_build; res->b = ((uint32_t)viridian_major << 16) | viridian_minor; @@ -191,8 +192,7 @@ void cpuid_viridian_leaves(const struct vcpu *v, uint32_t leaf, case 4: /* Recommended hypercall usage. */ - if ( (d->arch.hvm.viridian->guest_os_id.raw == 0) || - (d->arch.hvm.viridian->guest_os_id.fields.os < 4) ) + if ( vd->guest_os_id.raw == 0 || vd->guest_os_id.fields.os < 4 ) break; res->a = CPUID4A_RELAX_TIMER_INT; if ( viridian_feature_mask(d) & HVMPV_hcall_remote_tlb_flush ) @@ -281,21 +281,23 @@ static void enable_hypercall_page(struct domain *d) int guest_wrmsr_viridian(struct vcpu *v, uint32_t idx, uint64_t val) { + struct viridian_vcpu *vv = v->arch.hvm.viridian; struct domain *d = v->domain; + struct viridian_domain *vd = d->arch.hvm.viridian; ASSERT(is_viridian_domain(d)); switch ( idx ) { case HV_X64_MSR_GUEST_OS_ID: - d->arch.hvm.viridian->guest_os_id.raw = val; + vd->guest_os_id.raw = val; dump_guest_os_id(d); break; case HV_X64_MSR_HYPERCALL: - d->arch.hvm.viridian->hypercall_gpa.raw = val; + vd->hypercall_gpa.raw = val; dump_hypercall(d); - if ( d->arch.hvm.viridian->hypercall_gpa.fields.enabled ) + if ( vd->hypercall_gpa.fields.enabled ) enable_hypercall_page(d); break; @@ -317,10 +319,10 @@ int guest_wrmsr_viridian(struct vcpu *v, uint32_t idx, uint64_t val) case 
HV_X64_MSR_CRASH_P3: case HV_X64_MSR_CRASH_P4: BUILD_BUG_ON(HV_X64_MSR_CRASH_P4 - HV_X64_MSR_CRASH_P0 >= - ARRAY_SIZE(v->arch.hvm.viridian->crash_param)); + ARRAY_SIZE(vv->crash_param)); idx -= HV_X64_MSR_CRASH_P0; - v->arch.hvm.viridian->crash_param[idx] = val; + vv->crash_param[idx] = val; break; case HV_X64_MSR_CRASH_CTL: @@ -337,11 +339,8 @@ int guest_wrmsr_viridian(struct vcpu *v, uint32_t idx, uint64_t val) spin_unlock(&d->shutdown_lock); gprintk(XENLOG_WARNING, "VIRIDIAN CRASH: %lx %lx %lx %lx %lx\n", - v->arch.hvm.viridian->crash_param[0], - v->arch.hvm.viridian->crash_param[1], - v->arch.hvm.viridian->crash_param[2], - v->arch.hvm.viridian->crash_param[3], - v->arch.hvm.viridian->crash_param[4]); + vv->crash_param[0], vv->crash_param[1], vv->crash_param[2], + vv->crash_param[3], vv->crash_param[4]); break; } @@ -357,18 +356,20 @@ int guest_wrmsr_viridian(struct vcpu *v, uint32_t idx, uint64_t val) int guest_rdmsr_viridian(const struct vcpu *v, uint32_t idx, uint64_t *val) { - struct domain *d = v->domain; + const struct viridian_vcpu *vv = v->arch.hvm.viridian; + const struct domain *d = v->domain; + const struct viridian_domain *vd = d->arch.hvm.viridian; ASSERT(is_viridian_domain(d)); switch ( idx ) { case HV_X64_MSR_GUEST_OS_ID: - *val = d->arch.hvm.viridian->guest_os_id.raw; + *val = vd->guest_os_id.raw; break; case HV_X64_MSR_HYPERCALL: - *val = d->arch.hvm.viridian->hypercall_gpa.raw; + *val = vd->hypercall_gpa.raw; break; case HV_X64_MSR_VP_INDEX: @@ -393,10 +394,10 @@ int guest_rdmsr_viridian(const struct vcpu *v, uint32_t idx, uint64_t *val) case HV_X64_MSR_CRASH_P3: case HV_X64_MSR_CRASH_P4: BUILD_BUG_ON(HV_X64_MSR_CRASH_P4 - HV_X64_MSR_CRASH_P0 >= - ARRAY_SIZE(v->arch.hvm.viridian->crash_param)); + ARRAY_SIZE(vv->crash_param)); idx -= HV_X64_MSR_CRASH_P0; - *val = v->arch.hvm.viridian->crash_param[idx]; + *val = vv->crash_param[idx]; break; case HV_X64_MSR_CRASH_CTL: @@ -665,9 +666,10 @@ static int viridian_save_domain_ctxt(struct vcpu *v, 
hvm_domain_context_t *h) { const struct domain *d = v->domain; + const struct viridian_domain *vd = d->arch.hvm.viridian; struct hvm_viridian_domain_context ctxt = { - .hypercall_gpa = d->arch.hvm.viridian->hypercall_gpa.raw, - .guest_os_id = d->arch.hvm.viridian->guest_os_id.raw, + .hypercall_gpa = vd->hypercall_gpa.raw, + .guest_os_id = vd->guest_os_id.raw, }; if ( !is_viridian_domain(d) ) @@ -681,13 +683,14 @@ static int viridian_save_domain_ctxt(struct vcpu *v, static int viridian_load_domain_ctxt(struct domain *d, hvm_domain_context_t *h) { + struct viridian_domain *vd = d->arch.hvm.viridian; struct hvm_viridian_domain_context ctxt; if ( hvm_load_entry_zeroextend(VIRIDIAN_DOMAIN, h, &ctxt) != 0 ) return -EINVAL; - d->arch.hvm.viridian->hypercall_gpa.raw = ctxt.hypercall_gpa; - d->arch.hvm.viridian->guest_os_id.raw = ctxt.guest_os_id; + vd->hypercall_gpa.raw = ctxt.hypercall_gpa; + vd->guest_os_id.raw = ctxt.guest_os_id; viridian_time_load_domain_ctxt(d, &ctxt); From patchwork Tue Mar 19 09:21:09 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 8bit X-Patchwork-Submitter: Paul Durrant X-Patchwork-Id: 10859167 Return-Path: Received: from mail.wl.linuxfoundation.org (pdx-wl-mail.web.codeaurora.org [172.30.200.125]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id B45581708 for ; Tue, 19 Mar 2019 09:23:25 +0000 (UTC) Received: from mail.wl.linuxfoundation.org (localhost [127.0.0.1]) by mail.wl.linuxfoundation.org (Postfix) with ESMTP id 950CB29248 for ; Tue, 19 Mar 2019 09:23:25 +0000 (UTC) Received: by mail.wl.linuxfoundation.org (Postfix, from userid 486) id 85F932939F; Tue, 19 Mar 2019 09:23:25 +0000 (UTC) X-Spam-Checker-Version: SpamAssassin 3.3.1 (2010-03-16) on pdx-wl-mail.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-5.2 required=2.0 tests=BAYES_00,MAILING_LIST_MULTI, RCVD_IN_DNSWL_MED autolearn=ham version=3.3.1 Received: from lists.xenproject.org (lists.xenproject.org 
From: Paul Durrant
Date: Tue, 19 Mar 2019 09:21:09 +0000
Message-ID: <20190319092116.1525-5-paul.durrant@citrix.com>
Subject: [Xen-devel] [PATCH v9 04/11] viridian: make 'fields' struct anonymous...

...inside viridian_page_msr and viridian_guest_os_id_msr unions. There's no need to name it and the code is shortened by not doing so. No functional change.
Signed-off-by: Paul Durrant Reviewed-by: Jan Beulich --- Cc: Andrew Cooper Cc: Wei Liu Cc: "Roger Pau Monné" v4: - New in v4 --- xen/arch/x86/hvm/viridian/synic.c | 4 ++-- xen/arch/x86/hvm/viridian/time.c | 10 +++++----- xen/arch/x86/hvm/viridian/viridian.c | 20 +++++++++----------- xen/include/asm-x86/hvm/viridian.h | 4 ++-- 4 files changed, 18 insertions(+), 20 deletions(-) diff --git a/xen/arch/x86/hvm/viridian/synic.c b/xen/arch/x86/hvm/viridian/synic.c index f3d9f7ae74..05d971b365 100644 --- a/xen/arch/x86/hvm/viridian/synic.c +++ b/xen/arch/x86/hvm/viridian/synic.c @@ -102,7 +102,7 @@ int viridian_synic_wrmsr(struct vcpu *v, uint32_t idx, uint64_t val) viridian_unmap_guest_page(&vv->vp_assist); vv->vp_assist.msr.raw = val; viridian_dump_guest_page(v, "VP_ASSIST", &vv->vp_assist); - if ( vv->vp_assist.msr.fields.enabled ) + if ( vv->vp_assist.msr.enabled ) viridian_map_guest_page(v, &vv->vp_assist); break; @@ -161,7 +161,7 @@ void viridian_synic_load_vcpu_ctxt( struct viridian_vcpu *vv = v->arch.hvm.viridian; vv->vp_assist.msr.raw = ctxt->vp_assist_msr; - if ( vv->vp_assist.msr.fields.enabled ) + if ( vv->vp_assist.msr.enabled ) viridian_map_guest_page(v, &vv->vp_assist); vv->apic_assist_pending = ctxt->apic_assist_pending; diff --git a/xen/arch/x86/hvm/viridian/time.c b/xen/arch/x86/hvm/viridian/time.c index 76f9612001..909a3fb9e3 100644 --- a/xen/arch/x86/hvm/viridian/time.c +++ b/xen/arch/x86/hvm/viridian/time.c @@ -29,16 +29,16 @@ static void dump_reference_tsc(const struct domain *d) { const union viridian_page_msr *rt = &d->arch.hvm.viridian->reference_tsc; - if ( !rt->fields.enabled ) + if ( !rt->enabled ) return; printk(XENLOG_G_INFO "d%d: VIRIDIAN REFERENCE_TSC: pfn: %lx\n", - d->domain_id, (unsigned long)rt->fields.pfn); + d->domain_id, (unsigned long)rt->pfn); } static void update_reference_tsc(struct domain *d, bool initialize) { - unsigned long gmfn = d->arch.hvm.viridian->reference_tsc.fields.pfn; + unsigned long gmfn = 
d->arch.hvm.viridian->reference_tsc.pfn; struct page_info *page = get_page_from_gfn(d, gmfn, NULL, P2M_ALLOC); HV_REFERENCE_TSC_PAGE *p; @@ -151,7 +151,7 @@ int viridian_time_wrmsr(struct vcpu *v, uint32_t idx, uint64_t val) vd->reference_tsc.raw = val; dump_reference_tsc(d); - if ( vd->reference_tsc.fields.enabled ) + if ( vd->reference_tsc.enabled ) update_reference_tsc(d, true); break; @@ -232,7 +232,7 @@ void viridian_time_load_domain_ctxt( vd->time_ref_count.val = ctxt->time_ref_count; vd->reference_tsc.raw = ctxt->reference_tsc; - if ( vd->reference_tsc.fields.enabled ) + if ( vd->reference_tsc.enabled ) update_reference_tsc(d, false); } diff --git a/xen/arch/x86/hvm/viridian/viridian.c b/xen/arch/x86/hvm/viridian/viridian.c index 710470fed7..1a20d68aaf 100644 --- a/xen/arch/x86/hvm/viridian/viridian.c +++ b/xen/arch/x86/hvm/viridian/viridian.c @@ -192,7 +192,7 @@ void cpuid_viridian_leaves(const struct vcpu *v, uint32_t leaf, case 4: /* Recommended hypercall usage. */ - if ( vd->guest_os_id.raw == 0 || vd->guest_os_id.fields.os < 4 ) + if ( vd->guest_os_id.raw == 0 || vd->guest_os_id.os < 4 ) break; res->a = CPUID4A_RELAX_TIMER_INT; if ( viridian_feature_mask(d) & HVMPV_hcall_remote_tlb_flush ) @@ -228,10 +228,8 @@ static void dump_guest_os_id(const struct domain *d) printk(XENLOG_G_INFO "d%d: VIRIDIAN GUEST_OS_ID: vendor: %x os: %x major: %x minor: %x sp: %x build: %x\n", - d->domain_id, - goi->fields.vendor, goi->fields.os, - goi->fields.major, goi->fields.minor, - goi->fields.service_pack, goi->fields.build_number); + d->domain_id, goi->vendor, goi->os, goi->major, goi->minor, + goi->service_pack, goi->build_number); } static void dump_hypercall(const struct domain *d) @@ -242,12 +240,12 @@ static void dump_hypercall(const struct domain *d) printk(XENLOG_G_INFO "d%d: VIRIDIAN HYPERCALL: enabled: %x pfn: %lx\n", d->domain_id, - hg->fields.enabled, (unsigned long)hg->fields.pfn); + hg->enabled, (unsigned long)hg->pfn); } static void 
enable_hypercall_page(struct domain *d) { - unsigned long gmfn = d->arch.hvm.viridian->hypercall_gpa.fields.pfn; + unsigned long gmfn = d->arch.hvm.viridian->hypercall_gpa.pfn; struct page_info *page = get_page_from_gfn(d, gmfn, NULL, P2M_ALLOC); uint8_t *p; @@ -297,7 +295,7 @@ int guest_wrmsr_viridian(struct vcpu *v, uint32_t idx, uint64_t val) case HV_X64_MSR_HYPERCALL: vd->hypercall_gpa.raw = val; dump_hypercall(d); - if ( vd->hypercall_gpa.fields.enabled ) + if ( vd->hypercall_gpa.enabled ) enable_hypercall_page(d); break; @@ -606,17 +604,17 @@ out: void viridian_dump_guest_page(const struct vcpu *v, const char *name, const struct viridian_page *vp) { - if ( !vp->msr.fields.enabled ) + if ( !vp->msr.enabled ) return; printk(XENLOG_G_INFO "%pv: VIRIDIAN %s: pfn: %lx\n", - v, name, (unsigned long)vp->msr.fields.pfn); + v, name, (unsigned long)vp->msr.pfn); } void viridian_map_guest_page(const struct vcpu *v, struct viridian_page *vp) { struct domain *d = v->domain; - unsigned long gmfn = vp->msr.fields.pfn; + unsigned long gmfn = vp->msr.pfn; struct page_info *page; if ( vp->ptr ) diff --git a/xen/include/asm-x86/hvm/viridian.h b/xen/include/asm-x86/hvm/viridian.h index c562424332..abbbb36092 100644 --- a/xen/include/asm-x86/hvm/viridian.h +++ b/xen/include/asm-x86/hvm/viridian.h @@ -17,7 +17,7 @@ union viridian_page_msr uint64_t enabled:1; uint64_t reserved_preserved:11; uint64_t pfn:48; - } fields; + }; }; struct viridian_page @@ -44,7 +44,7 @@ union viridian_guest_os_id_msr uint64_t major:8; uint64_t os:8; uint64_t vendor:16; - } fields; + }; }; struct viridian_time_ref_count From patchwork Tue Mar 19 09:21:10 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 8bit X-Patchwork-Submitter: Paul Durrant X-Patchwork-Id: 10859165 Return-Path: Received: from mail.wl.linuxfoundation.org (pdx-wl-mail.web.codeaurora.org [172.30.200.125]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id 83AD11708 for ; Tue, 19 
From: Paul Durrant
Date: Tue, 19 Mar 2019 09:21:10 +0000
Message-ID: <20190319092116.1525-6-paul.durrant@citrix.com>
Subject: [Xen-devel] [PATCH v9 05/11] viridian: extend init/deinit hooks into synic and time modules
2.1.23 Precedence: list List-Id: Xen developer discussion List-Unsubscribe: , List-Post: List-Help: List-Subscribe: , Cc: Andrew Cooper , Paul Durrant , Wei Liu , Jan Beulich , =?utf-8?q?Roger_Pau_Monn=C3=A9?= Errors-To: xen-devel-bounces@lists.xenproject.org Sender: "Xen-devel" X-Virus-Scanned: ClamAV using ClamSMTP This patch simply adds domain and vcpu init/deinit hooks into the synic and time modules and wires them into viridian_[domain|vcpu]_[init|deinit](). Only one of the hooks is currently needed (to unmap the 'VP Assist' page) but subsequent patches will make use of the others. NOTE: To perform the unmap of the VP Assist page, viridian_unmap_guest_page() is now directly called in the new viridian_synic_vcpu_deinit() function (which is safe even if is_viridian_vcpu() evaluates to false). This replaces the slightly hacky mechanism of faking a zero write to the HV_X64_MSR_VP_ASSIST_PAGE MSR in viridian_cpu_deinit(). Signed-off-by: Paul Durrant Reviewed-by: Jan Beulich Reviewed-by: Wei Liu --- Cc: Andrew Cooper Cc: "Roger Pau Monné" v4: - Constify vcpu and domain pointers v2: - Pay attention to sync and time init hook return values --- xen/arch/x86/hvm/viridian/private.h | 12 +++++++++ xen/arch/x86/hvm/viridian/synic.c | 19 ++++++++++++++ xen/arch/x86/hvm/viridian/time.c | 18 ++++++++++++++ xen/arch/x86/hvm/viridian/viridian.c | 37 ++++++++++++++++++++++++++-- 4 files changed, 84 insertions(+), 2 deletions(-) diff --git a/xen/arch/x86/hvm/viridian/private.h b/xen/arch/x86/hvm/viridian/private.h index 46174f48cd..8c029f62c6 100644 --- a/xen/arch/x86/hvm/viridian/private.h +++ b/xen/arch/x86/hvm/viridian/private.h @@ -74,6 +74,12 @@ int viridian_synic_wrmsr(struct vcpu *v, uint32_t idx, uint64_t val); int viridian_synic_rdmsr(const struct vcpu *v, uint32_t idx, uint64_t *val); +int viridian_synic_vcpu_init(const struct vcpu *v); +int viridian_synic_domain_init(const struct domain *d); + +void viridian_synic_vcpu_deinit(const struct vcpu *v); +void 
viridian_synic_domain_deinit(const struct domain *d); + void viridian_synic_save_vcpu_ctxt(const struct vcpu *v, struct hvm_viridian_vcpu_context *ctxt); void viridian_synic_load_vcpu_ctxt( @@ -82,6 +88,12 @@ void viridian_synic_load_vcpu_ctxt( int viridian_time_wrmsr(struct vcpu *v, uint32_t idx, uint64_t val); int viridian_time_rdmsr(const struct vcpu *v, uint32_t idx, uint64_t *val); +int viridian_time_vcpu_init(const struct vcpu *v); +int viridian_time_domain_init(const struct domain *d); + +void viridian_time_vcpu_deinit(const struct vcpu *v); +void viridian_time_domain_deinit(const struct domain *d); + void viridian_time_save_domain_ctxt( const struct domain *d, struct hvm_viridian_domain_context *ctxt); void viridian_time_load_domain_ctxt( diff --git a/xen/arch/x86/hvm/viridian/synic.c b/xen/arch/x86/hvm/viridian/synic.c index 05d971b365..4b00dbe1b3 100644 --- a/xen/arch/x86/hvm/viridian/synic.c +++ b/xen/arch/x86/hvm/viridian/synic.c @@ -146,6 +146,25 @@ int viridian_synic_rdmsr(const struct vcpu *v, uint32_t idx, uint64_t *val) return X86EMUL_OKAY; } +int viridian_synic_vcpu_init(const struct vcpu *v) +{ + return 0; +} + +int viridian_synic_domain_init(const struct domain *d) +{ + return 0; +} + +void viridian_synic_vcpu_deinit(const struct vcpu *v) +{ + viridian_unmap_guest_page(&v->arch.hvm.viridian->vp_assist); +} + +void viridian_synic_domain_deinit(const struct domain *d) +{ +} + void viridian_synic_save_vcpu_ctxt(const struct vcpu *v, struct hvm_viridian_vcpu_context *ctxt) { diff --git a/xen/arch/x86/hvm/viridian/time.c b/xen/arch/x86/hvm/viridian/time.c index 909a3fb9e3..48aca7e0ab 100644 --- a/xen/arch/x86/hvm/viridian/time.c +++ b/xen/arch/x86/hvm/viridian/time.c @@ -215,6 +215,24 @@ int viridian_time_rdmsr(const struct vcpu *v, uint32_t idx, uint64_t *val) return X86EMUL_OKAY; } +int viridian_time_vcpu_init(const struct vcpu *v) +{ + return 0; +} + +int viridian_time_domain_init(const struct domain *d) +{ + return 0; +} + +void 
viridian_time_vcpu_deinit(const struct vcpu *v) +{ +} + +void viridian_time_domain_deinit(const struct domain *d) +{ +} + void viridian_time_save_domain_ctxt( const struct domain *d, struct hvm_viridian_domain_context *ctxt) { diff --git a/xen/arch/x86/hvm/viridian/viridian.c b/xen/arch/x86/hvm/viridian/viridian.c index 1a20d68aaf..f9a509d918 100644 --- a/xen/arch/x86/hvm/viridian/viridian.c +++ b/xen/arch/x86/hvm/viridian/viridian.c @@ -418,22 +418,52 @@ int guest_rdmsr_viridian(const struct vcpu *v, uint32_t idx, uint64_t *val) int viridian_vcpu_init(struct vcpu *v) { + int rc; + ASSERT(!v->arch.hvm.viridian); v->arch.hvm.viridian = xzalloc(struct viridian_vcpu); if ( !v->arch.hvm.viridian ) return -ENOMEM; + rc = viridian_synic_vcpu_init(v); + if ( rc ) + goto fail; + + rc = viridian_time_vcpu_init(v); + if ( rc ) + goto fail; + return 0; + + fail: + viridian_vcpu_deinit(v); + + return rc; } int viridian_domain_init(struct domain *d) { + int rc; + ASSERT(!d->arch.hvm.viridian); d->arch.hvm.viridian = xzalloc(struct viridian_domain); if ( !d->arch.hvm.viridian ) return -ENOMEM; + rc = viridian_synic_domain_init(d); + if ( rc ) + goto fail; + + rc = viridian_time_domain_init(d); + if ( rc ) + goto fail; + return 0; + + fail: + viridian_domain_deinit(d); + + return rc; } void viridian_vcpu_deinit(struct vcpu *v) @@ -441,8 +471,8 @@ void viridian_vcpu_deinit(struct vcpu *v) if ( !v->arch.hvm.viridian ) return; - if ( is_viridian_vcpu(v) ) - viridian_synic_wrmsr(v, HV_X64_MSR_VP_ASSIST_PAGE, 0); + viridian_time_vcpu_deinit(v); + viridian_synic_vcpu_deinit(v); XFREE(v->arch.hvm.viridian); } @@ -457,6 +487,9 @@ void viridian_domain_deinit(struct domain *d) if ( !d->arch.hvm.viridian ) return; + viridian_time_domain_deinit(d); + viridian_synic_domain_deinit(d); + XFREE(d->arch.hvm.viridian); } From patchwork Tue Mar 19 09:21:11 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 8bit X-Patchwork-Submitter: Paul Durrant 
X-Patchwork-Id: 10859163
From: Paul Durrant
Date: Tue, 19 Mar 2019 09:21:11 +0000
Message-ID: <20190319092116.1525-7-paul.durrant@citrix.com>
<20190319092116.1525-1-paul.durrant@citrix.com> References: <20190319092116.1525-1-paul.durrant@citrix.com> MIME-Version: 1.0 Subject: [Xen-devel] [PATCH v9 06/11] viridian: add missing context save helpers into synic and time modules X-BeenThere: xen-devel@lists.xenproject.org X-Mailman-Version: 2.1.23 Precedence: list List-Id: Xen developer discussion List-Unsubscribe: , List-Post: List-Help: List-Subscribe: , Cc: Andrew Cooper , Paul Durrant , Wei Liu , Jan Beulich , =?utf-8?q?Roger_Pau_Monn=C3=A9?= Errors-To: xen-devel-bounces@lists.xenproject.org Sender: "Xen-devel" X-Virus-Scanned: ClamAV using ClamSMTP Currently the time module lacks vcpu context save helpers and the synic module lacks domain context save helpers. These helpers are not yet required but subsequent patches will require at least some of them so this patch completes the set to avoid introducing them in an ad-hoc way. Signed-off-by: Paul Durrant Reviewed-by: Wei Liu --- Cc: Jan Beulich Cc: Andrew Cooper Cc: "Roger Pau Monné" v3: - Add missing callers so that they are not added in an ad-hoc way --- xen/arch/x86/hvm/viridian/private.h | 10 ++++++++++ xen/arch/x86/hvm/viridian/synic.c | 10 ++++++++++ xen/arch/x86/hvm/viridian/time.c | 10 ++++++++++ xen/arch/x86/hvm/viridian/viridian.c | 4 ++++ 4 files changed, 34 insertions(+) diff --git a/xen/arch/x86/hvm/viridian/private.h b/xen/arch/x86/hvm/viridian/private.h index 8c029f62c6..5078b2d2ab 100644 --- a/xen/arch/x86/hvm/viridian/private.h +++ b/xen/arch/x86/hvm/viridian/private.h @@ -85,6 +85,11 @@ void viridian_synic_save_vcpu_ctxt(const struct vcpu *v, void viridian_synic_load_vcpu_ctxt( struct vcpu *v, const struct hvm_viridian_vcpu_context *ctxt); +void viridian_synic_save_domain_ctxt( + const struct domain *d, struct hvm_viridian_domain_context *ctxt); +void viridian_synic_load_domain_ctxt( + struct domain *d, const struct hvm_viridian_domain_context *ctxt); + int viridian_time_wrmsr(struct vcpu *v, uint32_t idx, uint64_t val); int 
viridian_time_rdmsr(const struct vcpu *v, uint32_t idx, uint64_t *val); @@ -94,6 +99,11 @@ int viridian_time_domain_init(const struct domain *d); void viridian_time_vcpu_deinit(const struct vcpu *v); void viridian_time_domain_deinit(const struct domain *d); +void viridian_time_save_vcpu_ctxt( + const struct vcpu *v, struct hvm_viridian_vcpu_context *ctxt); +void viridian_time_load_vcpu_ctxt( + struct vcpu *v, const struct hvm_viridian_vcpu_context *ctxt); + void viridian_time_save_domain_ctxt( const struct domain *d, struct hvm_viridian_domain_context *ctxt); void viridian_time_load_domain_ctxt( diff --git a/xen/arch/x86/hvm/viridian/synic.c b/xen/arch/x86/hvm/viridian/synic.c index 4b00dbe1b3..b8dab4b246 100644 --- a/xen/arch/x86/hvm/viridian/synic.c +++ b/xen/arch/x86/hvm/viridian/synic.c @@ -186,6 +186,16 @@ void viridian_synic_load_vcpu_ctxt( vv->apic_assist_pending = ctxt->apic_assist_pending; } +void viridian_synic_save_domain_ctxt( + const struct domain *d, struct hvm_viridian_domain_context *ctxt) +{ +} + +void viridian_synic_load_domain_ctxt( + struct domain *d, const struct hvm_viridian_domain_context *ctxt) +{ +} + /* * Local variables: * mode: C diff --git a/xen/arch/x86/hvm/viridian/time.c b/xen/arch/x86/hvm/viridian/time.c index 48aca7e0ab..4399e62f54 100644 --- a/xen/arch/x86/hvm/viridian/time.c +++ b/xen/arch/x86/hvm/viridian/time.c @@ -233,6 +233,16 @@ void viridian_time_domain_deinit(const struct domain *d) { } +void viridian_time_save_vcpu_ctxt( + const struct vcpu *v, struct hvm_viridian_vcpu_context *ctxt) +{ +} + +void viridian_time_load_vcpu_ctxt( + struct vcpu *v, const struct hvm_viridian_vcpu_context *ctxt) +{ +} + void viridian_time_save_domain_ctxt( const struct domain *d, struct hvm_viridian_domain_context *ctxt) { diff --git a/xen/arch/x86/hvm/viridian/viridian.c b/xen/arch/x86/hvm/viridian/viridian.c index f9a509d918..742a988252 100644 --- a/xen/arch/x86/hvm/viridian/viridian.c +++ b/xen/arch/x86/hvm/viridian/viridian.c @@ -707,6 
+707,7 @@ static int viridian_save_domain_ctxt(struct vcpu *v, return 0; viridian_time_save_domain_ctxt(d, &ctxt); + viridian_synic_save_domain_ctxt(d, &ctxt); return (hvm_save_entry(VIRIDIAN_DOMAIN, 0, h, &ctxt) != 0); } @@ -723,6 +724,7 @@ static int viridian_load_domain_ctxt(struct domain *d, vd->hypercall_gpa.raw = ctxt.hypercall_gpa; vd->guest_os_id.raw = ctxt.guest_os_id; + viridian_synic_load_domain_ctxt(d, &ctxt); viridian_time_load_domain_ctxt(d, &ctxt); return 0; @@ -738,6 +740,7 @@ static int viridian_save_vcpu_ctxt(struct vcpu *v, hvm_domain_context_t *h) if ( !is_viridian_vcpu(v) ) return 0; + viridian_time_save_vcpu_ctxt(v, &ctxt); viridian_synic_save_vcpu_ctxt(v, &ctxt); return hvm_save_entry(VIRIDIAN_VCPU, v->vcpu_id, h, &ctxt); @@ -764,6 +767,7 @@ static int viridian_load_vcpu_ctxt(struct domain *d, return -EINVAL; viridian_synic_load_vcpu_ctxt(v, &ctxt); + viridian_time_load_vcpu_ctxt(v, &ctxt); return 0; } From patchwork Tue Mar 19 09:21:12 2019 X-Patchwork-Submitter: Paul Durrant X-Patchwork-Id: 10859169 From: Paul Durrant To: Date: Tue, 19 Mar 2019 09:21:12 +0000 Message-ID: <20190319092116.1525-8-paul.durrant@citrix.com> In-Reply-To: <20190319092116.1525-1-paul.durrant@citrix.com> References: <20190319092116.1525-1-paul.durrant@citrix.com> Subject: [Xen-devel] [PATCH v9 07/11] viridian: use viridian_map/unmap_guest_page() for reference tsc page Whilst the reference tsc page does not currently need to be kept mapped after it is initially set up (or updated after migrate), the code can be simplified by using the common guest page map/unmap and dump functions.
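As background to the simplification just described: a guest consumes the reference tsc page via the sequence/scale/offset protocol. The sketch below is illustrative C, not code from this patch; the struct mirrors only the HV_REFERENCE_TSC_PAGE fields needed here, and `read_ref_time()` together with the 128-bit multiply (a GCC/Clang extension) are assumptions for illustration.

```c
#include <stdint.h>

/* Subset of the HV_REFERENCE_TSC_PAGE layout relevant to timekeeping. */
typedef struct {
    volatile uint32_t TscSequence; /* 0 or 0xFFFFFFFF mean "invalid" */
    uint32_t Reserved1;
    uint64_t TscScale;             /* 64.64 fixed-point multiplier */
    int64_t TscOffset;             /* offset in 100ns units */
} ref_tsc_page_t;

/*
 * Convert a raw TSC reading to reference time (100ns units) using the
 * TLFS formula: time = ((tsc * TscScale) >> 64) + TscOffset.
 * The loop re-reads the sequence number to detect a concurrent update
 * of the page by the hypervisor.
 */
static uint64_t read_ref_time(const ref_tsc_page_t *p, uint64_t tsc)
{
    uint32_t seq;
    uint64_t scale;
    int64_t off;

    do {
        seq = p->TscSequence;
        scale = p->TscScale;
        off = p->TscOffset;
    } while ( seq != p->TscSequence );

    return (uint64_t)(((unsigned __int128)tsc * scale) >> 64) + off;
}
```

Note that in this series the hypervisor-side equivalent of this scaling is done by `scale_delta()` in `raw_trc_val()`; the guest-side read above is what the page contents are ultimately for.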
New functionality added by a subsequent patch will also require the page to be kept mapped for the lifetime of the domain. NOTE: Because the reference tsc page is per-domain rather than per-vcpu, this patch also changes viridian_map_guest_page() to take a domain pointer rather than a vcpu pointer. The domain pointer cannot be const, unlike the vcpu pointer. Signed-off-by: Paul Durrant Reviewed-by: Wei Liu --- Cc: Jan Beulich Cc: Andrew Cooper Cc: "Roger Pau Monné" --- xen/arch/x86/hvm/viridian/private.h | 2 +- xen/arch/x86/hvm/viridian/synic.c | 6 ++- xen/arch/x86/hvm/viridian/time.c | 56 +++++++++------------------- xen/arch/x86/hvm/viridian/viridian.c | 3 +- xen/include/asm-x86/hvm/viridian.h | 2 +- 5 files changed, 25 insertions(+), 44 deletions(-) diff --git a/xen/arch/x86/hvm/viridian/private.h b/xen/arch/x86/hvm/viridian/private.h index 5078b2d2ab..96a784b840 100644 --- a/xen/arch/x86/hvm/viridian/private.h +++ b/xen/arch/x86/hvm/viridian/private.h @@ -111,7 +111,7 @@ void viridian_time_load_domain_ctxt( void viridian_dump_guest_page(const struct vcpu *v, const char *name, const struct viridian_page *vp); -void viridian_map_guest_page(const struct vcpu *v, struct viridian_page *vp); +void viridian_map_guest_page(struct domain *d, struct viridian_page *vp); void viridian_unmap_guest_page(struct viridian_page *vp); #endif /* X86_HVM_VIRIDIAN_PRIVATE_H */ diff --git a/xen/arch/x86/hvm/viridian/synic.c b/xen/arch/x86/hvm/viridian/synic.c index b8dab4b246..fb560bc162 100644 --- a/xen/arch/x86/hvm/viridian/synic.c +++ b/xen/arch/x86/hvm/viridian/synic.c @@ -81,6 +81,7 @@ void viridian_apic_assist_clear(const struct vcpu *v) int viridian_synic_wrmsr(struct vcpu *v, uint32_t idx, uint64_t val) { struct viridian_vcpu *vv = v->arch.hvm.viridian; + struct domain *d = v->domain; switch ( idx ) { @@ -103,7 +104,7 @@ int viridian_synic_wrmsr(struct vcpu *v, uint32_t idx, uint64_t val) vv->vp_assist.msr.raw = val; viridian_dump_guest_page(v, "VP_ASSIST", &vv->vp_assist); if (
vv->vp_assist.msr.enabled ) - viridian_map_guest_page(v, &vv->vp_assist); + viridian_map_guest_page(d, &vv->vp_assist); break; default: @@ -178,10 +179,11 @@ void viridian_synic_load_vcpu_ctxt( struct vcpu *v, const struct hvm_viridian_vcpu_context *ctxt) { struct viridian_vcpu *vv = v->arch.hvm.viridian; + struct domain *d = v->domain; vv->vp_assist.msr.raw = ctxt->vp_assist_msr; if ( vv->vp_assist.msr.enabled ) - viridian_map_guest_page(v, &vv->vp_assist); + viridian_map_guest_page(d, &vv->vp_assist); vv->apic_assist_pending = ctxt->apic_assist_pending; } diff --git a/xen/arch/x86/hvm/viridian/time.c b/xen/arch/x86/hvm/viridian/time.c index 4399e62f54..16fe41d411 100644 --- a/xen/arch/x86/hvm/viridian/time.c +++ b/xen/arch/x86/hvm/viridian/time.c @@ -25,33 +25,10 @@ typedef struct _HV_REFERENCE_TSC_PAGE uint64_t Reserved2[509]; } HV_REFERENCE_TSC_PAGE, *PHV_REFERENCE_TSC_PAGE; -static void dump_reference_tsc(const struct domain *d) -{ - const union viridian_page_msr *rt = &d->arch.hvm.viridian->reference_tsc; - - if ( !rt->enabled ) - return; - - printk(XENLOG_G_INFO "d%d: VIRIDIAN REFERENCE_TSC: pfn: %lx\n", - d->domain_id, (unsigned long)rt->pfn); -} - static void update_reference_tsc(struct domain *d, bool initialize) { - unsigned long gmfn = d->arch.hvm.viridian->reference_tsc.pfn; - struct page_info *page = get_page_from_gfn(d, gmfn, NULL, P2M_ALLOC); - HV_REFERENCE_TSC_PAGE *p; - - if ( !page || !get_page_type(page, PGT_writable_page) ) - { - if ( page ) - put_page(page); - gdprintk(XENLOG_WARNING, "Bad GMFN %#"PRI_gfn" (MFN %#"PRI_mfn")\n", - gmfn, mfn_x(page ? 
page_to_mfn(page) : INVALID_MFN)); - return; - } - - p = __map_domain_page(page); + const struct viridian_page *rt = &d->arch.hvm.viridian->reference_tsc; + HV_REFERENCE_TSC_PAGE *p = rt->ptr; if ( initialize ) clear_page(p); @@ -82,7 +59,7 @@ static void update_reference_tsc(struct domain *d, bool initialize) printk(XENLOG_G_INFO "d%d: VIRIDIAN REFERENCE_TSC: invalidated\n", d->domain_id); - goto out; + return; } /* @@ -100,11 +77,6 @@ static void update_reference_tsc(struct domain *d, bool initialize) if ( p->TscSequence == 0xFFFFFFFF || p->TscSequence == 0 ) /* Avoid both 'invalid' values */ p->TscSequence = 1; - - out: - unmap_domain_page(p); - - put_page_and_type(page); } static int64_t raw_trc_val(const struct domain *d) @@ -149,10 +121,14 @@ int viridian_time_wrmsr(struct vcpu *v, uint32_t idx, uint64_t val) if ( !(viridian_feature_mask(d) & HVMPV_reference_tsc) ) return X86EMUL_EXCEPTION; - vd->reference_tsc.raw = val; - dump_reference_tsc(d); - if ( vd->reference_tsc.enabled ) + viridian_unmap_guest_page(&vd->reference_tsc); + vd->reference_tsc.msr.raw = val; + viridian_dump_guest_page(v, "REFERENCE_TSC", &vd->reference_tsc); + if ( vd->reference_tsc.msr.enabled ) + { + viridian_map_guest_page(d, &vd->reference_tsc); update_reference_tsc(d, true); + } break; default: @@ -189,7 +165,7 @@ int viridian_time_rdmsr(const struct vcpu *v, uint32_t idx, uint64_t *val) if ( !(viridian_feature_mask(d) & HVMPV_reference_tsc) ) return X86EMUL_EXCEPTION; - *val = vd->reference_tsc.raw; + *val = vd->reference_tsc.msr.raw; break; case HV_X64_MSR_TIME_REF_COUNT: @@ -231,6 +207,7 @@ void viridian_time_vcpu_deinit(const struct vcpu *v) void viridian_time_domain_deinit(const struct domain *d) { + viridian_unmap_guest_page(&d->arch.hvm.viridian->reference_tsc); } void viridian_time_save_vcpu_ctxt( @@ -249,7 +226,7 @@ void viridian_time_save_domain_ctxt( const struct viridian_domain *vd = d->arch.hvm.viridian; ctxt->time_ref_count = vd->time_ref_count.val; - 
ctxt->reference_tsc = vd->reference_tsc.raw; + ctxt->reference_tsc = vd->reference_tsc.msr.raw; } void viridian_time_load_domain_ctxt( @@ -258,10 +235,13 @@ void viridian_time_load_domain_ctxt( struct viridian_domain *vd = d->arch.hvm.viridian; vd->time_ref_count.val = ctxt->time_ref_count; - vd->reference_tsc.raw = ctxt->reference_tsc; + vd->reference_tsc.msr.raw = ctxt->reference_tsc; - if ( vd->reference_tsc.enabled ) + if ( vd->reference_tsc.msr.enabled ) + { + viridian_map_guest_page(d, &vd->reference_tsc); update_reference_tsc(d, false); + } } /* diff --git a/xen/arch/x86/hvm/viridian/viridian.c b/xen/arch/x86/hvm/viridian/viridian.c index 742a988252..2b045ed88f 100644 --- a/xen/arch/x86/hvm/viridian/viridian.c +++ b/xen/arch/x86/hvm/viridian/viridian.c @@ -644,9 +644,8 @@ void viridian_dump_guest_page(const struct vcpu *v, const char *name, v, name, (unsigned long)vp->msr.pfn); } -void viridian_map_guest_page(const struct vcpu *v, struct viridian_page *vp) +void viridian_map_guest_page(struct domain *d, struct viridian_page *vp) { - struct domain *d = v->domain; unsigned long gmfn = vp->msr.pfn; struct page_info *page; diff --git a/xen/include/asm-x86/hvm/viridian.h b/xen/include/asm-x86/hvm/viridian.h index abbbb36092..c65c044191 100644 --- a/xen/include/asm-x86/hvm/viridian.h +++ b/xen/include/asm-x86/hvm/viridian.h @@ -65,7 +65,7 @@ struct viridian_domain union viridian_guest_os_id_msr guest_os_id; union viridian_page_msr hypercall_gpa; struct viridian_time_ref_count time_ref_count; - union viridian_page_msr reference_tsc; + struct viridian_page reference_tsc; }; void cpuid_viridian_leaves(const struct vcpu *v, uint32_t leaf, From patchwork Tue Mar 19 09:21:13 2019 X-Patchwork-Submitter: Paul Durrant X-Patchwork-Id: 10859173 From: Paul Durrant To: Date: Tue, 19 Mar 2019 09:21:13 +0000 Message-ID: <20190319092116.1525-9-paul.durrant@citrix.com> In-Reply-To: <20190319092116.1525-1-paul.durrant@citrix.com> References: <20190319092116.1525-1-paul.durrant@citrix.com> Subject: [Xen-devel] [PATCH v9 08/11] viridian: stop directly calling
viridian_time_ref_count_freeze/thaw()... ...from arch_domain_shutdown/pause/unpause(). A subsequent patch will introduce an implementation of synthetic timers which will also need freeze/thaw hooks, so make the exported hooks more generic and call through to (re-named and static) time_ref_count_freeze/thaw functions. NOTE: This patch also introduces a new time_ref_count() helper to return the current counter value. This is currently only used by the MSR read handler but the synthetic timer code will also need to use it. Signed-off-by: Paul Durrant Reviewed-by: Wei Liu Acked-by: Jan Beulich --- Cc: Andrew Cooper Cc: "Roger Pau Monné" --- xen/arch/x86/domain.c | 12 ++++++------ xen/arch/x86/hvm/viridian/time.c | 24 +++++++++++++++++++++--- xen/include/asm-x86/hvm/viridian.h | 4 ++-- 3 files changed, 29 insertions(+), 11 deletions(-) diff --git a/xen/arch/x86/domain.c b/xen/arch/x86/domain.c index 8d579e2cf9..02afa7518e 100644 --- a/xen/arch/x86/domain.c +++ b/xen/arch/x86/domain.c @@ -657,20 +657,20 @@ void arch_domain_destroy(struct domain *d) void arch_domain_shutdown(struct domain *d) { - if ( has_viridian_time_ref_count(d) ) - viridian_time_ref_count_freeze(d); + if ( is_viridian_domain(d) ) + viridian_time_domain_freeze(d); } void arch_domain_pause(struct domain *d) { - if ( has_viridian_time_ref_count(d) ) - viridian_time_ref_count_freeze(d); + if ( is_viridian_domain(d) ) + viridian_time_domain_freeze(d); } void arch_domain_unpause(struct domain *d) { - if ( has_viridian_time_ref_count(d) ) - viridian_time_ref_count_thaw(d); + if ( is_viridian_domain(d) ) + viridian_time_domain_thaw(d);
} int arch_domain_soft_reset(struct domain *d) diff --git a/xen/arch/x86/hvm/viridian/time.c b/xen/arch/x86/hvm/viridian/time.c index 16fe41d411..71291d921c 100644 --- a/xen/arch/x86/hvm/viridian/time.c +++ b/xen/arch/x86/hvm/viridian/time.c @@ -91,7 +91,7 @@ static int64_t raw_trc_val(const struct domain *d) return scale_delta(tsc, &tsc_to_ns) / 100ul; } -void viridian_time_ref_count_freeze(const struct domain *d) +static void time_ref_count_freeze(const struct domain *d) { struct viridian_time_ref_count *trc = &d->arch.hvm.viridian->time_ref_count; @@ -100,7 +100,7 @@ void viridian_time_ref_count_freeze(const struct domain *d) trc->val = raw_trc_val(d) + trc->off; } -void viridian_time_ref_count_thaw(const struct domain *d) +static void time_ref_count_thaw(const struct domain *d) { struct viridian_time_ref_count *trc = &d->arch.hvm.viridian->time_ref_count; @@ -110,6 +110,24 @@ void viridian_time_ref_count_thaw(const struct domain *d) trc->off = (int64_t)trc->val - raw_trc_val(d); } +static int64_t time_ref_count(const struct domain *d) +{ + struct viridian_time_ref_count *trc = + &d->arch.hvm.viridian->time_ref_count; + + return raw_trc_val(d) + trc->off; +} + +void viridian_time_domain_freeze(const struct domain *d) +{ + time_ref_count_freeze(d); +} + +void viridian_time_domain_thaw(const struct domain *d) +{ + time_ref_count_thaw(d); +} + int viridian_time_wrmsr(struct vcpu *v, uint32_t idx, uint64_t val) { struct domain *d = v->domain; @@ -179,7 +197,7 @@ int viridian_time_rdmsr(const struct vcpu *v, uint32_t idx, uint64_t *val) printk(XENLOG_G_INFO "d%d: VIRIDIAN MSR_TIME_REF_COUNT: accessed\n", d->domain_id); - *val = raw_trc_val(d) + trc->off; + *val = time_ref_count(d); break; } diff --git a/xen/include/asm-x86/hvm/viridian.h b/xen/include/asm-x86/hvm/viridian.h index c65c044191..8146e2fc46 100644 --- a/xen/include/asm-x86/hvm/viridian.h +++ b/xen/include/asm-x86/hvm/viridian.h @@ -77,8 +77,8 @@ int guest_rdmsr_viridian(const struct vcpu *v, uint32_t idx, 
uint64_t *val); int viridian_hypercall(struct cpu_user_regs *regs); -void viridian_time_ref_count_freeze(const struct domain *d); -void viridian_time_ref_count_thaw(const struct domain *d); +void viridian_time_domain_freeze(const struct domain *d); +void viridian_time_domain_thaw(const struct domain *d); int viridian_vcpu_init(struct vcpu *v); int viridian_domain_init(struct domain *d); From patchwork Tue Mar 19 09:21:14 2019 X-Patchwork-Submitter: Paul Durrant X-Patchwork-Id: 10859181 From: Paul Durrant To: Date: Tue, 19 Mar 2019 09:21:14 +0000 Message-ID: <20190319092116.1525-10-paul.durrant@citrix.com> In-Reply-To: <20190319092116.1525-1-paul.durrant@citrix.com> References: <20190319092116.1525-1-paul.durrant@citrix.com> Subject: [Xen-devel] [PATCH v9 09/11] viridian: add implementation of synthetic interrupt MSRs This patch introduces an implementation of the SCONTROL, SVERSION, SIEFP, SIMP, EOM and SINT0-15 SynIC MSRs. No message source is added and, as such, nothing will yet generate a synthetic interrupt. A subsequent patch will add an implementation of synthetic timers which will need the infrastructure added by this patch to deliver expiry messages to the guest. NOTE: A 'synic' option is added to the toolstack viridian enlightenments enumeration but is deliberately not documented, as enabling these SynIC registers without a message source is only useful for debugging.
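To make the SINT register semantics above concrete, the SINTx MSR format manipulated by the wrmsr/rdmsr handlers in this patch can be sketched as below. This is illustrative C, not the Xen definition: the union name, the field layout (vector in bits 7:0, mask at bit 16, auto-EOI at bit 17, polling at bit 18, per the Hyper-V TLFS) and the LSB-first bitfield allocation are assumptions stated for illustration only.

```c
#include <stdint.h>

/*
 * Illustrative SynIC SINTx MSR layout (Hyper-V TLFS): bits 7:0 hold the
 * interrupt vector, bit 16 the mask, bit 17 auto-EOI, bit 18 polling.
 * Bitfield ordering assumes a GCC-style LSB-first ABI on x86.
 */
typedef union {
    uint64_t raw;
    struct {
        uint64_t vector:8;
        uint64_t reserved1:8;
        uint64_t mask:1;
        uint64_t auto_eoi:1;
        uint64_t polling:1;
        uint64_t reserved2:45;
    };
} sint_msr_t;

/* Mirrors the wrmsr-side validity check: vectors must be 0x10-0xff. */
static int sint_vector_valid(sint_msr_t s)
{
    return s.vector >= 0x10;
}
```

A freshly initialised vcpu leaves every SINT with `mask` set and the vector zero, which is why the load/init paths in the diff skip any vector below 0x10 when rebuilding the vector-to-SINT mapping.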
Signed-off-by: Paul Durrant Acked-by: Wei Liu Reviewed-by: Jan Beulich --- Cc: Ian Jackson Cc: Andrew Cooper Cc: George Dunlap Cc: Julien Grall Cc: Konrad Rzeszutek Wilk Cc: Stefano Stabellini Cc: Tim Deegan Cc: "Roger Pau Monné" v8: - Squash in https://lists.xenproject.org/archives/html/xen-devel/2019-03/msg01332.html v7: - Fix out label indentation v6: - Address further comments from Jan v4: - Address comments from Jan v3: - Add the 'SintPollingModeAvailable' bit in CPUID leaf 3 --- tools/libxl/libxl.h | 6 + tools/libxl/libxl_dom.c | 3 + tools/libxl/libxl_types.idl | 1 + xen/arch/x86/hvm/viridian/synic.c | 241 ++++++++++++++++++++++++- xen/arch/x86/hvm/viridian/viridian.c | 19 ++ xen/arch/x86/hvm/vlapic.c | 20 +- xen/include/asm-x86/hvm/hvm.h | 3 + xen/include/asm-x86/hvm/viridian.h | 26 +++ xen/include/public/arch-x86/hvm/save.h | 2 + xen/include/public/hvm/params.h | 7 +- 10 files changed, 323 insertions(+), 5 deletions(-) diff --git a/tools/libxl/libxl.h b/tools/libxl/libxl.h index a38e5cdba2..a923a380d3 100644 --- a/tools/libxl/libxl.h +++ b/tools/libxl/libxl.h @@ -318,6 +318,12 @@ */ #define LIBXL_HAVE_VIRIDIAN_CRASH_CTL 1 +/* + * LIBXL_HAVE_VIRIDIAN_SYNIC indicates that the 'synic' value + * is present in the viridian enlightenment enumeration. + */ +#define LIBXL_HAVE_VIRIDIAN_SYNIC 1 + /* * LIBXL_HAVE_BUILDINFO_HVM_ACPI_LAPTOP_SLATE indicates that * libxl_domain_build_info has the u.hvm.acpi_laptop_slate field. 
diff --git a/tools/libxl/libxl_dom.c b/tools/libxl/libxl_dom.c index 6160991af3..fb758d2ac3 100644 --- a/tools/libxl/libxl_dom.c +++ b/tools/libxl/libxl_dom.c @@ -317,6 +317,9 @@ static int hvm_set_viridian_features(libxl__gc *gc, uint32_t domid, if (libxl_bitmap_test(&enlightenments, LIBXL_VIRIDIAN_ENLIGHTENMENT_CRASH_CTL)) mask |= HVMPV_crash_ctl; + if (libxl_bitmap_test(&enlightenments, LIBXL_VIRIDIAN_ENLIGHTENMENT_SYNIC)) + mask |= HVMPV_synic; + if (mask != 0 && xc_hvm_param_set(CTX->xch, domid, diff --git a/tools/libxl/libxl_types.idl b/tools/libxl/libxl_types.idl index b685ac47ac..9860bcaf5f 100644 --- a/tools/libxl/libxl_types.idl +++ b/tools/libxl/libxl_types.idl @@ -235,6 +235,7 @@ libxl_viridian_enlightenment = Enumeration("viridian_enlightenment", [ (4, "hcall_remote_tlb_flush"), (5, "apic_assist"), (6, "crash_ctl"), + (7, "synic"), ]) libxl_hdtype = Enumeration("hdtype", [ diff --git a/xen/arch/x86/hvm/viridian/synic.c b/xen/arch/x86/hvm/viridian/synic.c index fb560bc162..84ab02694f 100644 --- a/xen/arch/x86/hvm/viridian/synic.c +++ b/xen/arch/x86/hvm/viridian/synic.c @@ -13,6 +13,7 @@ #include #include +#include #include "private.h" @@ -28,6 +29,37 @@ typedef union _HV_VP_ASSIST_PAGE uint8_t ReservedZBytePadding[PAGE_SIZE]; } HV_VP_ASSIST_PAGE; +typedef enum HV_MESSAGE_TYPE { + HvMessageTypeNone, + HvMessageTimerExpired = 0x80000010, +} HV_MESSAGE_TYPE; + +typedef struct HV_MESSAGE_FLAGS { + uint8_t MessagePending:1; + uint8_t Reserved:7; +} HV_MESSAGE_FLAGS; + +typedef struct HV_MESSAGE_HEADER { + HV_MESSAGE_TYPE MessageType; + uint16_t Reserved1; + HV_MESSAGE_FLAGS MessageFlags; + uint8_t PayloadSize; + uint64_t Reserved2; +} HV_MESSAGE_HEADER; + +#define HV_MESSAGE_SIZE 256 +#define HV_MESSAGE_MAX_PAYLOAD_QWORD_COUNT 30 + +typedef struct HV_MESSAGE { + HV_MESSAGE_HEADER Header; + uint64_t Payload[HV_MESSAGE_MAX_PAYLOAD_QWORD_COUNT]; +} HV_MESSAGE; + +void __init __maybe_unused build_assertions(void) +{ + BUILD_BUG_ON(sizeof(HV_MESSAGE) != 
HV_MESSAGE_SIZE); +} + void viridian_apic_assist_set(const struct vcpu *v) { struct viridian_vcpu *vv = v->arch.hvm.viridian; @@ -83,6 +115,8 @@ int viridian_synic_wrmsr(struct vcpu *v, uint32_t idx, uint64_t val) struct viridian_vcpu *vv = v->arch.hvm.viridian; struct domain *d = v->domain; + ASSERT(v == current || !v->is_running); + switch ( idx ) { case HV_X64_MSR_EOI: @@ -107,6 +141,76 @@ int viridian_synic_wrmsr(struct vcpu *v, uint32_t idx, uint64_t val) viridian_map_guest_page(d, &vv->vp_assist); break; + case HV_X64_MSR_SCONTROL: + if ( !(viridian_feature_mask(d) & HVMPV_synic) ) + return X86EMUL_EXCEPTION; + + vv->scontrol = val; + break; + + case HV_X64_MSR_SVERSION: + return X86EMUL_EXCEPTION; + + case HV_X64_MSR_SIEFP: + if ( !(viridian_feature_mask(d) & HVMPV_synic) ) + return X86EMUL_EXCEPTION; + + vv->siefp = val; + break; + + case HV_X64_MSR_SIMP: + if ( !(viridian_feature_mask(d) & HVMPV_synic) ) + return X86EMUL_EXCEPTION; + + viridian_unmap_guest_page(&vv->simp); + vv->simp.msr.raw = val; + viridian_dump_guest_page(v, "SIMP", &vv->simp); + if ( vv->simp.msr.enabled ) + viridian_map_guest_page(d, &vv->simp); + break; + + case HV_X64_MSR_EOM: + if ( !(viridian_feature_mask(d) & HVMPV_synic) ) + return X86EMUL_EXCEPTION; + + vv->msg_pending = 0; + break; + + case HV_X64_MSR_SINT0 ... HV_X64_MSR_SINT15: + { + unsigned int sintx = idx - HV_X64_MSR_SINT0; + union viridian_sint_msr new, *vs = + &array_access_nospec(vv->sint, sintx); + uint8_t vector; + + if ( !(viridian_feature_mask(d) & HVMPV_synic) ) + return X86EMUL_EXCEPTION; + + /* Vectors must be in the range 0x10-0xff inclusive */ + new.raw = val; + if ( new.vector < 0x10 ) + return X86EMUL_EXCEPTION; + + /* + * Invalidate any previous mapping by setting an out-of-range + * index before setting the new mapping. 
+ */ + vector = vs->vector; + vv->vector_to_sintx[vector] = ARRAY_SIZE(vv->sint); + + vector = new.vector; + vv->vector_to_sintx[vector] = sintx; + + printk(XENLOG_G_INFO "%pv: VIRIDIAN SINT%u: vector: %x\n", v, sintx, + vector); + + if ( new.polling ) + __clear_bit(sintx, &vv->msg_pending); + + *vs = new; + break; + } + default: gdprintk(XENLOG_INFO, "%s: unimplemented MSR %#x (%016"PRIx64")\n", __func__, idx, val); @@ -118,6 +222,9 @@ int viridian_synic_wrmsr(struct vcpu *v, uint32_t idx, uint64_t val) int viridian_synic_rdmsr(const struct vcpu *v, uint32_t idx, uint64_t *val) { + const struct viridian_vcpu *vv = v->arch.hvm.viridian; + const struct domain *d = v->domain; + switch ( idx ) { case HV_X64_MSR_EOI: @@ -131,14 +238,70 @@ int viridian_synic_rdmsr(const struct vcpu *v, uint32_t idx, uint64_t *val) *val = ((uint64_t)icr2 << 32) | icr; break; } + case HV_X64_MSR_TPR: *val = vlapic_get_reg(vcpu_vlapic(v), APIC_TASKPRI); break; case HV_X64_MSR_VP_ASSIST_PAGE: - *val = v->arch.hvm.viridian->vp_assist.msr.raw; + *val = vv->vp_assist.msr.raw; + break; + + case HV_X64_MSR_SCONTROL: + if ( !(viridian_feature_mask(d) & HVMPV_synic) ) + return X86EMUL_EXCEPTION; + + *val = vv->scontrol; + break; + + case HV_X64_MSR_SVERSION: + if ( !(viridian_feature_mask(d) & HVMPV_synic) ) + return X86EMUL_EXCEPTION; + + /* + * The specification says that the version number is 0x00000001 + * and should be in the lower 32-bits of the MSR, while the + * upper 32-bits are reserved... but it doesn't say what they + * should be set to. Assume everything but the bottom bit + * should be zero. 
+ *val = 1ul; + break; + + case HV_X64_MSR_SIEFP: + if ( !(viridian_feature_mask(d) & HVMPV_synic) ) + return X86EMUL_EXCEPTION; + + *val = vv->siefp; + break; + + case HV_X64_MSR_SIMP: + if ( !(viridian_feature_mask(d) & HVMPV_synic) ) + return X86EMUL_EXCEPTION; + + *val = vv->simp.msr.raw; break; + case HV_X64_MSR_EOM: + if ( !(viridian_feature_mask(d) & HVMPV_synic) ) + return X86EMUL_EXCEPTION; + + *val = 0; + break; + + case HV_X64_MSR_SINT0 ... HV_X64_MSR_SINT15: + { + unsigned int sintx = idx - HV_X64_MSR_SINT0; + const union viridian_sint_msr *vs = + &array_access_nospec(vv->sint, sintx); + + if ( !(viridian_feature_mask(d) & HVMPV_synic) ) + return X86EMUL_EXCEPTION; + + *val = vs->raw; + break; + } + default: gdprintk(XENLOG_INFO, "%s: unimplemented MSR %#x\n", __func__, idx); return X86EMUL_EXCEPTION; @@ -149,6 +312,20 @@ int viridian_synic_rdmsr(const struct vcpu *v, uint32_t idx, uint64_t *val) int viridian_synic_vcpu_init(const struct vcpu *v) { + struct viridian_vcpu *vv = v->arch.hvm.viridian; + unsigned int i; + + /* + * The specification says that all synthetic interrupts must be + * initially masked.
+ */ + for ( i = 0; i < ARRAY_SIZE(vv->sint); i++ ) + vv->sint[i].mask = 1; + + /* Initialize the mapping array with invalid values */ + for ( i = 0; i < ARRAY_SIZE(vv->vector_to_sintx); i++ ) + vv->vector_to_sintx[i] = ARRAY_SIZE(vv->sint); + return 0; } @@ -159,17 +336,59 @@ int viridian_synic_domain_init(const struct domain *d) void viridian_synic_vcpu_deinit(const struct vcpu *v) { - viridian_unmap_guest_page(&v->arch.hvm.viridian->vp_assist); + struct viridian_vcpu *vv = v->arch.hvm.viridian; + + viridian_unmap_guest_page(&vv->vp_assist); + viridian_unmap_guest_page(&vv->simp); } void viridian_synic_domain_deinit(const struct domain *d) { } +void viridian_synic_poll(const struct vcpu *v) +{ + /* There are currently no message sources */ +} + +bool viridian_synic_is_auto_eoi_sint(const struct vcpu *v, + unsigned int vector) +{ + const struct viridian_vcpu *vv = v->arch.hvm.viridian; + unsigned int sintx = vv->vector_to_sintx[vector]; + const union viridian_sint_msr *vs = + &array_access_nospec(vv->sint, sintx); + + if ( sintx >= ARRAY_SIZE(vv->sint) ) + return false; + + return vs->auto_eoi; +} + +void viridian_synic_ack_sint(const struct vcpu *v, unsigned int vector) +{ + struct viridian_vcpu *vv = v->arch.hvm.viridian; + unsigned int sintx = vv->vector_to_sintx[vector]; + + ASSERT(v == current); + + if ( sintx < ARRAY_SIZE(vv->sint) ) + __clear_bit(array_index_nospec(sintx, ARRAY_SIZE(vv->sint)), + &vv->msg_pending); +} + void viridian_synic_save_vcpu_ctxt(const struct vcpu *v, struct hvm_viridian_vcpu_context *ctxt) { const struct viridian_vcpu *vv = v->arch.hvm.viridian; + unsigned int i; + + BUILD_BUG_ON(ARRAY_SIZE(vv->sint) != ARRAY_SIZE(ctxt->sint_msr)); + + for ( i = 0; i < ARRAY_SIZE(vv->sint); i++ ) + ctxt->sint_msr[i] = vv->sint[i].raw; + + ctxt->simp_msr = vv->simp.msr.raw; ctxt->apic_assist_pending = vv->apic_assist_pending; ctxt->vp_assist_msr = vv->vp_assist.msr.raw; @@ -180,12 +399,30 @@ void viridian_synic_load_vcpu_ctxt( { struct viridian_vcpu 
*vv = v->arch.hvm.viridian; struct domain *d = v->domain; + unsigned int i; vv->vp_assist.msr.raw = ctxt->vp_assist_msr; if ( vv->vp_assist.msr.enabled ) viridian_map_guest_page(d, &vv->vp_assist); vv->apic_assist_pending = ctxt->apic_assist_pending; + + vv->simp.msr.raw = ctxt->simp_msr; + if ( vv->simp.msr.enabled ) + viridian_map_guest_page(d, &vv->simp); + + for ( i = 0; i < ARRAY_SIZE(vv->sint); i++ ) + { + uint8_t vector; + + vv->sint[i].raw = ctxt->sint_msr[i]; + + vector = vv->sint[i].vector; + if ( vector < 0x10 ) + continue; + + vv->vector_to_sintx[vector] = i; + } } void viridian_synic_save_domain_ctxt( diff --git a/xen/arch/x86/hvm/viridian/viridian.c b/xen/arch/x86/hvm/viridian/viridian.c index 2b045ed88f..f3166fbcd0 100644 --- a/xen/arch/x86/hvm/viridian/viridian.c +++ b/xen/arch/x86/hvm/viridian/viridian.c @@ -89,6 +89,7 @@ typedef union _HV_CRASH_CTL_REG_CONTENTS /* Viridian CPUID leaf 3, Hypervisor Feature Indication */ #define CPUID3D_CRASH_MSRS (1 << 10) +#define CPUID3D_SINT_POLLING (1 << 17) /* Viridian CPUID leaf 4: Implementation Recommendations. 
*/ #define CPUID4A_HCALL_REMOTE_TLB_FLUSH (1 << 2) @@ -178,6 +179,8 @@ void cpuid_viridian_leaves(const struct vcpu *v, uint32_t leaf, mask.AccessPartitionReferenceCounter = 1; if ( viridian_feature_mask(d) & HVMPV_reference_tsc ) mask.AccessPartitionReferenceTsc = 1; + if ( viridian_feature_mask(d) & HVMPV_synic ) + mask.AccessSynicRegs = 1; u.mask = mask; @@ -186,6 +189,8 @@ void cpuid_viridian_leaves(const struct vcpu *v, uint32_t leaf, if ( viridian_feature_mask(d) & HVMPV_crash_ctl ) res->d = CPUID3D_CRASH_MSRS; + if ( viridian_feature_mask(d) & HVMPV_synic ) + res->d |= CPUID3D_SINT_POLLING; break; } @@ -306,8 +311,16 @@ int guest_wrmsr_viridian(struct vcpu *v, uint32_t idx, uint64_t val) case HV_X64_MSR_ICR: case HV_X64_MSR_TPR: case HV_X64_MSR_VP_ASSIST_PAGE: + case HV_X64_MSR_SCONTROL: + case HV_X64_MSR_SVERSION: + case HV_X64_MSR_SIEFP: + case HV_X64_MSR_SIMP: + case HV_X64_MSR_EOM: + case HV_X64_MSR_SINT0 ... HV_X64_MSR_SINT15: return viridian_synic_wrmsr(v, idx, val); + case HV_X64_MSR_TSC_FREQUENCY: + case HV_X64_MSR_APIC_FREQUENCY: case HV_X64_MSR_REFERENCE_TSC: return viridian_time_wrmsr(v, idx, val); @@ -378,6 +391,12 @@ int guest_rdmsr_viridian(const struct vcpu *v, uint32_t idx, uint64_t *val) case HV_X64_MSR_ICR: case HV_X64_MSR_TPR: case HV_X64_MSR_VP_ASSIST_PAGE: + case HV_X64_MSR_SCONTROL: + case HV_X64_MSR_SVERSION: + case HV_X64_MSR_SIEFP: + case HV_X64_MSR_SIMP: + case HV_X64_MSR_EOM: + case HV_X64_MSR_SINT0 ... 
HV_X64_MSR_SINT15: return viridian_synic_rdmsr(v, idx, val); case HV_X64_MSR_TSC_FREQUENCY: diff --git a/xen/arch/x86/hvm/vlapic.c b/xen/arch/x86/hvm/vlapic.c index a1a43cd792..24e8e63c4f 100644 --- a/xen/arch/x86/hvm/vlapic.c +++ b/xen/arch/x86/hvm/vlapic.c @@ -461,10 +461,15 @@ void vlapic_EOI_set(struct vlapic *vlapic) void vlapic_handle_EOI(struct vlapic *vlapic, u8 vector) { - struct domain *d = vlapic_domain(vlapic); + struct vcpu *v = vlapic_vcpu(vlapic); + struct domain *d = v->domain; + + /* All synic SINTx vectors are edge triggered */ if ( vlapic_test_vector(vector, &vlapic->regs->data[APIC_TMR]) ) vioapic_update_EOI(d, vector); + else if ( has_viridian_synic(d) ) + viridian_synic_ack_sint(v, vector); hvm_dpci_msi_eoi(d, vector); } @@ -1301,6 +1306,13 @@ int vlapic_has_pending_irq(struct vcpu *v) if ( !vlapic_enabled(vlapic) ) return -1; + /* + * Poll the viridian message queues before checking the IRR since + * a synthetic interrupt may be asserted during the poll. + */ + if ( has_viridian_synic(v->domain) ) + viridian_synic_poll(v); + irr = vlapic_find_highest_irr(vlapic); if ( irr == -1 ) return -1; @@ -1360,8 +1372,12 @@ int vlapic_ack_pending_irq(struct vcpu *v, int vector, bool_t force_ack) } done: - vlapic_set_vector(vector, &vlapic->regs->data[APIC_ISR]); + if ( !has_viridian_synic(v->domain) || + !viridian_synic_is_auto_eoi_sint(v, vector) ) + vlapic_set_vector(vector, &vlapic->regs->data[APIC_ISR]); + vlapic_clear_irr(vector, vlapic); + return 1; } diff --git a/xen/include/asm-x86/hvm/hvm.h b/xen/include/asm-x86/hvm/hvm.h index 37c3567a57..f67e9dbd12 100644 --- a/xen/include/asm-x86/hvm/hvm.h +++ b/xen/include/asm-x86/hvm/hvm.h @@ -472,6 +472,9 @@ static inline bool hvm_get_guest_bndcfgs(struct vcpu *v, u64 *val) #define has_viridian_apic_assist(d) \ (is_viridian_domain(d) && (viridian_feature_mask(d) & HVMPV_apic_assist)) +#define has_viridian_synic(d) \ + (is_viridian_domain(d) && (viridian_feature_mask(d) & HVMPV_synic)) + static inline void 
hvm_inject_exception( unsigned int vector, unsigned int type, unsigned int insn_len, int error_code) diff --git a/xen/include/asm-x86/hvm/viridian.h b/xen/include/asm-x86/hvm/viridian.h index 8146e2fc46..03fc4c6b76 100644 --- a/xen/include/asm-x86/hvm/viridian.h +++ b/xen/include/asm-x86/hvm/viridian.h @@ -26,10 +26,31 @@ struct viridian_page void *ptr; }; +union viridian_sint_msr +{ + uint64_t raw; + struct + { + uint64_t vector:8; + uint64_t reserved_preserved1:8; + uint64_t mask:1; + uint64_t auto_eoi:1; + uint64_t polling:1; + uint64_t reserved_preserved2:45; + }; +}; + struct viridian_vcpu { struct viridian_page vp_assist; bool apic_assist_pending; + bool polled; + unsigned int msg_pending; + uint64_t scontrol; + uint64_t siefp; + struct viridian_page simp; + union viridian_sint_msr sint[16]; + uint8_t vector_to_sintx[256]; uint64_t crash_param[5]; }; @@ -90,6 +111,11 @@ void viridian_apic_assist_set(const struct vcpu *v); bool viridian_apic_assist_completed(const struct vcpu *v); void viridian_apic_assist_clear(const struct vcpu *v); +void viridian_synic_poll(const struct vcpu *v); +bool viridian_synic_is_auto_eoi_sint(const struct vcpu *v, + unsigned int vector); +void viridian_synic_ack_sint(const struct vcpu *v, unsigned int vector); + #endif /* __ASM_X86_HVM_VIRIDIAN_H__ */ /* diff --git a/xen/include/public/arch-x86/hvm/save.h b/xen/include/public/arch-x86/hvm/save.h index 40be84ecda..ec3e4df12c 100644 --- a/xen/include/public/arch-x86/hvm/save.h +++ b/xen/include/public/arch-x86/hvm/save.h @@ -602,6 +602,8 @@ struct hvm_viridian_vcpu_context { uint64_t vp_assist_msr; uint8_t apic_assist_pending; uint8_t _pad[7]; + uint64_t simp_msr; + uint64_t sint_msr[16]; }; DECLARE_HVM_SAVE_TYPE(VIRIDIAN_VCPU, 17, struct hvm_viridian_vcpu_context); diff --git a/xen/include/public/hvm/params.h b/xen/include/public/hvm/params.h index 72f633ef2d..e7e3c7c892 100644 --- a/xen/include/public/hvm/params.h +++ b/xen/include/public/hvm/params.h @@ -146,6 +146,10 @@ #define 
_HVMPV_crash_ctl 6 #define HVMPV_crash_ctl (1 << _HVMPV_crash_ctl) +/* Enable SYNIC MSRs */ +#define _HVMPV_synic 7 +#define HVMPV_synic (1 << _HVMPV_synic) + #define HVMPV_feature_mask \ (HVMPV_base_freq | \ HVMPV_no_freq | \ @@ -153,7 +157,8 @@ HVMPV_reference_tsc | \ HVMPV_hcall_remote_tlb_flush | \ HVMPV_apic_assist | \ - HVMPV_crash_ctl) + HVMPV_crash_ctl | \ + HVMPV_synic) #endif

From patchwork Tue Mar 19 09:21:15 2019
X-Patchwork-Submitter: Paul Durrant
X-Patchwork-Id: 10859201
From: Paul Durrant
Date: Tue, 19 Mar 2019 09:21:15 +0000
Message-ID: <20190319092116.1525-11-paul.durrant@citrix.com>
In-Reply-To: <20190319092116.1525-1-paul.durrant@citrix.com>
References: <20190319092116.1525-1-paul.durrant@citrix.com>
Subject: [Xen-devel] [PATCH v9 10/11] viridian: add implementation of synthetic timers
Cc: Stefano Stabellini, Wei Liu, Konrad Rzeszutek Wilk, George Dunlap, Andrew Cooper, Ian Jackson, Tim Deegan, Julien Grall, Paul Durrant, Jan Beulich, Roger Pau Monné

This patch introduces an implementation of the STIMER0-15_CONFIG/COUNT MSRs and hence the first SynIC message source. The new (and documented) 'stimer' viridian enlightenment group may be specified to enable this feature.

While in the neighbourhood, this patch adds a missing check for an attempt to write the time reference count MSR, which should result in an exception (but not be reported as an unimplemented MSR).

NOTE: It is necessary for correct operation that timer expiration and message delivery time-stamping use the same time source as the guest. The specification is ambiguous but testing with a Windows 10 1803 guest has shown that using the partition reference counter as a source whilst the guest is using RDTSC and the reference tsc page does not work correctly.
Therefore the time_now() function is used. This implements the algorithm for acquiring partition reference time that is documented in the specification.

Signed-off-by: Paul Durrant
Acked-by: Wei Liu
---
Cc: Ian Jackson
Cc: Andrew Cooper
Cc: George Dunlap
Cc: Jan Beulich
Cc: Julien Grall
Cc: Konrad Rzeszutek Wilk
Cc: Stefano Stabellini
Cc: Tim Deegan
Cc: "Roger Pau Monné"

v9:
 - Revert some of the changes in v8 to make sure that the timer config is only touched in current context or when the vcpu is not running

v8:
 - Squash in https://lists.xenproject.org/archives/html/xen-devel/2019-03/msg01333.html

v7:
 - Make sure missed count cannot be zero if expiration < now

v6:
 - Stop using the reference tsc page in time_now()
 - Address further comments from Jan

v5:
 - Fix time_now() to read TSC as the guest would see it

v4:
 - Address comments from Jan

v3:
 - Re-worked missed ticks calculation
---
 docs/man/xl.cfg.5.pod.in | 12 +-
 tools/libxl/libxl.h | 6 +
 tools/libxl/libxl_dom.c | 4 +
 tools/libxl/libxl_types.idl | 1 +
 xen/arch/x86/hvm/viridian/private.h | 9 +-
 xen/arch/x86/hvm/viridian/synic.c | 55 +++-
 xen/arch/x86/hvm/viridian/time.c | 389 ++++++++++++++++++++++++-
 xen/arch/x86/hvm/viridian/viridian.c | 5 +
 xen/include/asm-x86/hvm/viridian.h | 32 +-
 xen/include/public/arch-x86/hvm/save.h | 2 +
 xen/include/public/hvm/params.h | 7 +-
 11 files changed, 509 insertions(+), 13 deletions(-)

diff --git a/docs/man/xl.cfg.5.pod.in b/docs/man/xl.cfg.5.pod.in index ad81af1ed8..355c654693 100644 --- a/docs/man/xl.cfg.5.pod.in +++ b/docs/man/xl.cfg.5.pod.in @@ -2167,11 +2167,19 @@ This group incorporates the crash control MSRs. These enlightenments allow Windows to write crash information such that it can be logged by Xen. +=item B + +This set incorporates the SynIC and synthetic timer MSRs.
Windows will +use synthetic timers in preference to emulated HPET for a source of +ticks and hence enabling this group will ensure that ticks will be +consistent with use of an enlightened time source (B or +B). + =item B This is a special value that enables the default set of groups, which -is currently the B, B, B, B -and B groups. +is currently the B, B, B, B, +B and B groups. =item B diff --git a/tools/libxl/libxl.h b/tools/libxl/libxl.h index a923a380d3..c8f219b0d3 100644 --- a/tools/libxl/libxl.h +++ b/tools/libxl/libxl.h @@ -324,6 +324,12 @@ */ #define LIBXL_HAVE_VIRIDIAN_SYNIC 1 +/* + * LIBXL_HAVE_VIRIDIAN_STIMER indicates that the 'stimer' value + * is present in the viridian enlightenment enumeration. + */ +#define LIBXL_HAVE_VIRIDIAN_STIMER 1 + /* * LIBXL_HAVE_BUILDINFO_HVM_ACPI_LAPTOP_SLATE indicates that * libxl_domain_build_info has the u.hvm.acpi_laptop_slate field. diff --git a/tools/libxl/libxl_dom.c b/tools/libxl/libxl_dom.c index fb758d2ac3..2ee0f82ee7 100644 --- a/tools/libxl/libxl_dom.c +++ b/tools/libxl/libxl_dom.c @@ -269,6 +269,7 @@ static int hvm_set_viridian_features(libxl__gc *gc, uint32_t domid, libxl_bitmap_set(&enlightenments, LIBXL_VIRIDIAN_ENLIGHTENMENT_TIME_REF_COUNT); libxl_bitmap_set(&enlightenments, LIBXL_VIRIDIAN_ENLIGHTENMENT_APIC_ASSIST); libxl_bitmap_set(&enlightenments, LIBXL_VIRIDIAN_ENLIGHTENMENT_CRASH_CTL); + libxl_bitmap_set(&enlightenments, LIBXL_VIRIDIAN_ENLIGHTENMENT_STIMER); } libxl_for_each_set_bit(v, info->u.hvm.viridian_enable) { @@ -320,6 +321,9 @@ static int hvm_set_viridian_features(libxl__gc *gc, uint32_t domid, if (libxl_bitmap_test(&enlightenments, LIBXL_VIRIDIAN_ENLIGHTENMENT_SYNIC)) mask |= HVMPV_synic; + if (libxl_bitmap_test(&enlightenments, LIBXL_VIRIDIAN_ENLIGHTENMENT_STIMER)) + mask |= HVMPV_time_ref_count | HVMPV_synic | HVMPV_stimer; + if (mask != 0 && xc_hvm_param_set(CTX->xch, domid, diff --git a/tools/libxl/libxl_types.idl b/tools/libxl/libxl_types.idl index 9860bcaf5f..1cce249de4 100644 --- 
a/tools/libxl/libxl_types.idl +++ b/tools/libxl/libxl_types.idl @@ -236,6 +236,7 @@ libxl_viridian_enlightenment = Enumeration("viridian_enlightenment", [ (5, "apic_assist"), (6, "crash_ctl"), (7, "synic"), + (8, "stimer"), ]) libxl_hdtype = Enumeration("hdtype", [ diff --git a/xen/arch/x86/hvm/viridian/private.h b/xen/arch/x86/hvm/viridian/private.h index 96a784b840..c272c34cda 100644 --- a/xen/arch/x86/hvm/viridian/private.h +++ b/xen/arch/x86/hvm/viridian/private.h @@ -74,6 +74,11 @@ int viridian_synic_wrmsr(struct vcpu *v, uint32_t idx, uint64_t val); int viridian_synic_rdmsr(const struct vcpu *v, uint32_t idx, uint64_t *val); +bool viridian_synic_deliver_timer_msg(struct vcpu *v, unsigned int sintx, + unsigned int index, + uint64_t expiration, + uint64_t delivery); + int viridian_synic_vcpu_init(const struct vcpu *v); int viridian_synic_domain_init(const struct domain *d); @@ -93,7 +98,9 @@ void viridian_synic_load_domain_ctxt( int viridian_time_wrmsr(struct vcpu *v, uint32_t idx, uint64_t val); int viridian_time_rdmsr(const struct vcpu *v, uint32_t idx, uint64_t *val); -int viridian_time_vcpu_init(const struct vcpu *v); +void viridian_time_poll_timers(struct vcpu *v); + +int viridian_time_vcpu_init(struct vcpu *v); int viridian_time_domain_init(const struct domain *d); void viridian_time_vcpu_deinit(const struct vcpu *v); diff --git a/xen/arch/x86/hvm/viridian/synic.c b/xen/arch/x86/hvm/viridian/synic.c index 84ab02694f..2791021bcc 100644 --- a/xen/arch/x86/hvm/viridian/synic.c +++ b/xen/arch/x86/hvm/viridian/synic.c @@ -346,9 +346,60 @@ void viridian_synic_domain_deinit(const struct domain *d) { } -void viridian_synic_poll(const struct vcpu *v) +void viridian_synic_poll(struct vcpu *v) { - /* There are currently no message sources */ + viridian_time_poll_timers(v); +} + +bool viridian_synic_deliver_timer_msg(struct vcpu *v, unsigned int sintx, + unsigned int index, + uint64_t expiration, + uint64_t delivery) +{ + struct viridian_vcpu *vv = 
v->arch.hvm.viridian; + const union viridian_sint_msr *vs = &vv->sint[sintx]; + HV_MESSAGE *msg = vv->simp.ptr; + struct { + uint32_t TimerIndex; + uint32_t Reserved; + uint64_t ExpirationTime; + uint64_t DeliveryTime; + } payload = { + .TimerIndex = index, + .ExpirationTime = expiration, + .DeliveryTime = delivery, + }; + + if ( test_bit(sintx, &vv->msg_pending) ) + return false; + + /* + * To avoid using an atomic test-and-set, and barrier before calling + * vlapic_set_irq(), this function must be called in context of the + * vcpu receiving the message. + */ + ASSERT(v == current); + + msg += sintx; + + if ( msg->Header.MessageType != HvMessageTypeNone ) + { + msg->Header.MessageFlags.MessagePending = 1; + __set_bit(sintx, &vv->msg_pending); + return false; + } + + msg->Header.MessageType = HvMessageTimerExpired; + msg->Header.MessageFlags.MessagePending = 0; + msg->Header.PayloadSize = sizeof(payload); + + BUILD_BUG_ON(sizeof(payload) > sizeof(msg->Payload)); + memcpy(msg->Payload, &payload, sizeof(payload)); + + if ( !vs->mask ) + vlapic_set_irq(vcpu_vlapic(v), vs->vector, 0); + + return true; } bool viridian_synic_is_auto_eoi_sint(const struct vcpu *v, diff --git a/xen/arch/x86/hvm/viridian/time.c b/xen/arch/x86/hvm/viridian/time.c index 71291d921c..8e9dac5a5a 100644 --- a/xen/arch/x86/hvm/viridian/time.c +++ b/xen/arch/x86/hvm/viridian/time.c @@ -12,6 +12,7 @@ #include #include +#include #include #include "private.h" @@ -27,8 +28,10 @@ typedef struct _HV_REFERENCE_TSC_PAGE static void update_reference_tsc(struct domain *d, bool initialize) { - const struct viridian_page *rt = &d->arch.hvm.viridian->reference_tsc; + struct viridian_domain *vd = d->arch.hvm.viridian; + const struct viridian_page *rt = &vd->reference_tsc; HV_REFERENCE_TSC_PAGE *p = rt->ptr; + uint32_t seq; if ( initialize ) clear_page(p); @@ -59,6 +62,8 @@ static void update_reference_tsc(struct domain *d, bool initialize) printk(XENLOG_G_INFO "d%d: VIRIDIAN REFERENCE_TSC: invalidated\n", 
d->domain_id); + + vd->reference_tsc_valid = false; return; } @@ -72,11 +77,14 @@ static void update_reference_tsc(struct domain *d, bool initialize) * ticks per 100ns shifted left by 64. */ p->TscScale = ((10000ul << 32) / d->arch.tsc_khz) << 32; + smp_wmb(); + + seq = p->TscSequence + 1; + if ( seq == 0xFFFFFFFF || seq == 0 ) /* Avoid both 'invalid' values */ + seq = 1; - p->TscSequence++; - if ( p->TscSequence == 0xFFFFFFFF || - p->TscSequence == 0 ) /* Avoid both 'invalid' values */ - p->TscSequence = 1; + p->TscSequence = seq; + vd->reference_tsc_valid = true; } static int64_t raw_trc_val(const struct domain *d) @@ -118,18 +126,253 @@ static int64_t time_ref_count(const struct domain *d) return raw_trc_val(d) + trc->off; } +/* + * The specification says: "The partition reference time is computed + * by the following formula: + * + * ReferenceTime = ((VirtualTsc * TscScale) >> 64) + TscOffset + * + * The multiplication is a 64 bit multiplication, which results in a + * 128 bit number which is then shifted 64 times to the right to obtain + * the high 64 bits." + */ +static uint64_t scale_tsc(uint64_t tsc, uint64_t scale, uint64_t offset) +{ + uint64_t result; + + /* + * Quadword MUL takes an implicit operand in RAX, and puts the result + * in RDX:RAX. Because we only want the result of the multiplication + * after shifting right by 64 bits, we therefore only need the content + * of RDX. + */ + asm ( "mulq %[scale]" + : "+a" (tsc), "=d" (result) + : [scale] "rm" (scale) ); + + return result + offset; +} + +static uint64_t time_now(struct domain *d) +{ + uint64_t tsc, scale; + + /* + * If the reference TSC page is not enabled, or has been invalidated + * fall back to the partition reference counter. 
+ */ + if ( !d->arch.hvm.viridian->reference_tsc_valid ) + return time_ref_count(d); + + /* Otherwise compute reference time in the same way the guest would */ + tsc = hvm_get_guest_tsc(pt_global_vcpu_target(d)); + scale = ((10000ul << 32) / d->arch.tsc_khz) << 32; + + return scale_tsc(tsc, scale, 0); +} + +static void stop_stimer(struct viridian_stimer *vs) +{ + if ( !vs->started ) + return; + + stop_timer(&vs->timer); + vs->started = false; +} + +static void stimer_expire(void *data) +{ + struct viridian_stimer *vs = data; + struct vcpu *v = vs->v; + struct viridian_vcpu *vv = v->arch.hvm.viridian; + unsigned int stimerx = vs - &vv->stimer[0]; + + set_bit(stimerx, &vv->stimer_pending); + vcpu_kick(v); +} + +static void start_stimer(struct viridian_stimer *vs) +{ + const struct vcpu *v = vs->v; + struct viridian_vcpu *vv = v->arch.hvm.viridian; + unsigned int stimerx = vs - &vv->stimer[0]; + int64_t now = time_now(v->domain); + int64_t expiration; + s_time_t timeout; + + if ( !test_and_set_bit(stimerx, &vv->stimer_enabled) ) + printk(XENLOG_G_INFO "%pv: VIRIDIAN STIMER%u: enabled\n", v, + stimerx); + + if ( vs->config.periodic ) + { + /* + * The specification says that if the timer is lazy then we + * skip over any missed expirations so we can treat this case + * as the same as if the timer is currently stopped, i.e. we + * just schedule expiration to be 'count' ticks from now. + */ + if ( !vs->started || vs->config.lazy ) + expiration = now + vs->count; + else + { + unsigned int missed = 0; + + /* + * The timer is already started, so we're re-scheduling. + * Hence advance the timer expiration by one tick. 
+ */ + expiration = vs->expiration + vs->count; + + /* Now check to see if any expirations have been missed */ + if ( expiration - now <= 0 ) + missed = ((now - expiration) / vs->count) + 1; + + /* + * The specification says that if the timer is not lazy then + * a non-zero missed count should be used to reduce the period + * of the timer until it catches up, unless the count has + * reached a 'significant number', in which case the timer + * should be treated as lazy. Unfortunately the specification + * does not state what that number is so the choice of number + * here is a pure guess. + */ + if ( missed > 3 ) + expiration = now + vs->count; + else if ( missed ) + expiration = now + (vs->count / missed); + } + } + else + { + expiration = vs->count; + if ( expiration - now <= 0 ) + { + vs->expiration = expiration; + stimer_expire(vs); + return; + } + } + ASSERT(expiration - now > 0); + + vs->expiration = expiration; + timeout = (expiration - now) * 100ull; + + vs->started = true; + clear_bit(stimerx, &vv->stimer_pending); + migrate_timer(&vs->timer, v->processor); + set_timer(&vs->timer, timeout + NOW()); +} + +static void poll_stimer(struct vcpu *v, unsigned int stimerx) +{ + struct viridian_vcpu *vv = v->arch.hvm.viridian; + struct viridian_stimer *vs = &vv->stimer[stimerx]; + + /* + * Timer expiry may race with the timer being disabled. If the timer + * is disabled make sure the pending bit is cleared to avoid re- + * polling. 
+ */ + if ( !vs->config.enabled ) + { + clear_bit(stimerx, &vv->stimer_pending); + return; + } + + if ( !test_bit(stimerx, &vv->stimer_pending) ) + return; + + if ( !viridian_synic_deliver_timer_msg(v, vs->config.sintx, + stimerx, vs->expiration, + time_now(v->domain)) ) + return; + + clear_bit(stimerx, &vv->stimer_pending); + + if ( vs->config.periodic ) + start_stimer(vs); + else + vs->config.enabled = 0; +} + +void viridian_time_poll_timers(struct vcpu *v) +{ + struct viridian_vcpu *vv = v->arch.hvm.viridian; + unsigned int i; + + if ( !vv->stimer_pending ) + return; + + for ( i = 0; i < ARRAY_SIZE(vv->stimer); i++ ) + poll_stimer(v, i); +} + +void viridian_time_vcpu_freeze(struct vcpu *v) +{ + struct viridian_vcpu *vv = v->arch.hvm.viridian; + unsigned int i; + + if ( !is_viridian_vcpu(v) || + !(viridian_feature_mask(v->domain) & HVMPV_stimer) ) + return; + + for ( i = 0; i < ARRAY_SIZE(vv->stimer); i++ ) + { + struct viridian_stimer *vs = &vv->stimer[i]; + + if ( vs->started ) + stop_timer(&vs->timer); + } +} + +void viridian_time_vcpu_thaw(struct vcpu *v) +{ + struct viridian_vcpu *vv = v->arch.hvm.viridian; + unsigned int i; + + if ( !is_viridian_vcpu(v) || + !(viridian_feature_mask(v->domain) & HVMPV_stimer) ) + return; + + for ( i = 0; i < ARRAY_SIZE(vv->stimer); i++ ) + { + struct viridian_stimer *vs = &vv->stimer[i]; + + if ( vs->config.enabled ) + start_stimer(vs); + } +} + void viridian_time_domain_freeze(const struct domain *d) { + struct vcpu *v; + + if ( !is_viridian_domain(d) ) + return; + + for_each_vcpu ( d, v ) + viridian_time_vcpu_freeze(v); + time_ref_count_freeze(d); } void viridian_time_domain_thaw(const struct domain *d) { + struct vcpu *v; + + if ( !is_viridian_domain(d) ) + return; + time_ref_count_thaw(d); + + for_each_vcpu ( d, v ) + viridian_time_vcpu_thaw(v); } int viridian_time_wrmsr(struct vcpu *v, uint32_t idx, uint64_t val) { + struct viridian_vcpu *vv = v->arch.hvm.viridian; struct domain *d = v->domain; struct viridian_domain 
*vd = d->arch.hvm.viridian; @@ -149,6 +392,61 @@ int viridian_time_wrmsr(struct vcpu *v, uint32_t idx, uint64_t val) } break; + case HV_X64_MSR_TIME_REF_COUNT: + return X86EMUL_EXCEPTION; + + case HV_X64_MSR_STIMER0_CONFIG: + case HV_X64_MSR_STIMER1_CONFIG: + case HV_X64_MSR_STIMER2_CONFIG: + case HV_X64_MSR_STIMER3_CONFIG: + { + unsigned int stimerx = (idx - HV_X64_MSR_STIMER0_CONFIG) / 2; + struct viridian_stimer *vs = + &array_access_nospec(vv->stimer, stimerx); + + if ( !(viridian_feature_mask(d) & HVMPV_stimer) ) + return X86EMUL_EXCEPTION; + + stop_stimer(vs); + + vs->config.raw = val; + + if ( !vs->config.sintx ) + vs->config.enabled = 0; + + if ( vs->config.enabled ) + start_stimer(vs); + + break; + } + + case HV_X64_MSR_STIMER0_COUNT: + case HV_X64_MSR_STIMER1_COUNT: + case HV_X64_MSR_STIMER2_COUNT: + case HV_X64_MSR_STIMER3_COUNT: + { + unsigned int stimerx = (idx - HV_X64_MSR_STIMER0_CONFIG) / 2; + struct viridian_stimer *vs = + &array_access_nospec(vv->stimer, stimerx); + + if ( !(viridian_feature_mask(d) & HVMPV_stimer) ) + return X86EMUL_EXCEPTION; + + stop_stimer(vs); + + vs->count = val; + + if ( !vs->count ) + vs->config.enabled = 0; + else if ( vs->config.auto_enable ) + vs->config.enabled = 1; + + if ( vs->config.enabled ) + start_stimer(vs); + + break; + } + default: gdprintk(XENLOG_INFO, "%s: unimplemented MSR %#x (%016"PRIx64")\n", __func__, idx, val); @@ -160,6 +458,7 @@ int viridian_time_wrmsr(struct vcpu *v, uint32_t idx, uint64_t val) int viridian_time_rdmsr(const struct vcpu *v, uint32_t idx, uint64_t *val) { + const struct viridian_vcpu *vv = v->arch.hvm.viridian; const struct domain *d = v->domain; struct viridian_domain *vd = d->arch.hvm.viridian; @@ -201,6 +500,38 @@ int viridian_time_rdmsr(const struct vcpu *v, uint32_t idx, uint64_t *val) break; } + case HV_X64_MSR_STIMER0_CONFIG: + case HV_X64_MSR_STIMER1_CONFIG: + case HV_X64_MSR_STIMER2_CONFIG: + case HV_X64_MSR_STIMER3_CONFIG: + { + unsigned int stimerx = (idx - 
HV_X64_MSR_STIMER0_CONFIG) / 2; + const struct viridian_stimer *vs = + &array_access_nospec(vv->stimer, stimerx); + + if ( !(viridian_feature_mask(d) & HVMPV_stimer) ) + return X86EMUL_EXCEPTION; + + *val = vs->config.raw; + break; + } + + case HV_X64_MSR_STIMER0_COUNT: + case HV_X64_MSR_STIMER1_COUNT: + case HV_X64_MSR_STIMER2_COUNT: + case HV_X64_MSR_STIMER3_COUNT: + { + unsigned int stimerx = (idx - HV_X64_MSR_STIMER0_CONFIG) / 2; + const struct viridian_stimer *vs = + &array_access_nospec(vv->stimer, stimerx); + + if ( !(viridian_feature_mask(d) & HVMPV_stimer) ) + return X86EMUL_EXCEPTION; + + *val = vs->count; + break; + } + default: gdprintk(XENLOG_INFO, "%s: unimplemented MSR %#x\n", __func__, idx); return X86EMUL_EXCEPTION; @@ -209,8 +540,19 @@ int viridian_time_rdmsr(const struct vcpu *v, uint32_t idx, uint64_t *val) return X86EMUL_OKAY; } -int viridian_time_vcpu_init(const struct vcpu *v) +int viridian_time_vcpu_init(struct vcpu *v) { + struct viridian_vcpu *vv = v->arch.hvm.viridian; + unsigned int i; + + for ( i = 0; i < ARRAY_SIZE(vv->stimer); i++ ) + { + struct viridian_stimer *vs = &vv->stimer[i]; + + vs->v = v; + init_timer(&vs->timer, stimer_expire, vs, v->processor); + } + return 0; } @@ -221,6 +563,16 @@ int viridian_time_domain_init(const struct domain *d) void viridian_time_vcpu_deinit(const struct vcpu *v) { + struct viridian_vcpu *vv = v->arch.hvm.viridian; + unsigned int i; + + for ( i = 0; i < ARRAY_SIZE(vv->stimer); i++ ) + { + struct viridian_stimer *vs = &vv->stimer[i]; + + kill_timer(&vs->timer); + vs->v = NULL; + } } void viridian_time_domain_deinit(const struct domain *d) @@ -231,11 +583,36 @@ void viridian_time_domain_deinit(const struct domain *d) void viridian_time_save_vcpu_ctxt( const struct vcpu *v, struct hvm_viridian_vcpu_context *ctxt) { + const struct viridian_vcpu *vv = v->arch.hvm.viridian; + unsigned int i; + + BUILD_BUG_ON(ARRAY_SIZE(vv->stimer) != + ARRAY_SIZE(ctxt->stimer_config_msr)); + 
BUILD_BUG_ON(ARRAY_SIZE(vv->stimer) != + ARRAY_SIZE(ctxt->stimer_count_msr)); + + for ( i = 0; i < ARRAY_SIZE(vv->stimer); i++ ) + { + const struct viridian_stimer *vs = &vv->stimer[i]; + + ctxt->stimer_config_msr[i] = vs->config.raw; + ctxt->stimer_count_msr[i] = vs->count; + } } void viridian_time_load_vcpu_ctxt( struct vcpu *v, const struct hvm_viridian_vcpu_context *ctxt) { + struct viridian_vcpu *vv = v->arch.hvm.viridian; + unsigned int i; + + for ( i = 0; i < ARRAY_SIZE(vv->stimer); i++ ) + { + struct viridian_stimer *vs = &vv->stimer[i]; + + vs->config.raw = ctxt->stimer_config_msr[i]; + vs->count = ctxt->stimer_count_msr[i]; + } } void viridian_time_save_domain_ctxt( diff --git a/xen/arch/x86/hvm/viridian/viridian.c b/xen/arch/x86/hvm/viridian/viridian.c index f3166fbcd0..dce648bb4e 100644 --- a/xen/arch/x86/hvm/viridian/viridian.c +++ b/xen/arch/x86/hvm/viridian/viridian.c @@ -181,6 +181,8 @@ void cpuid_viridian_leaves(const struct vcpu *v, uint32_t leaf, mask.AccessPartitionReferenceTsc = 1; if ( viridian_feature_mask(d) & HVMPV_synic ) mask.AccessSynicRegs = 1; + if ( viridian_feature_mask(d) & HVMPV_stimer ) + mask.AccessSyntheticTimerRegs = 1; u.mask = mask; @@ -322,6 +324,8 @@ int guest_wrmsr_viridian(struct vcpu *v, uint32_t idx, uint64_t val) case HV_X64_MSR_TSC_FREQUENCY: case HV_X64_MSR_APIC_FREQUENCY: case HV_X64_MSR_REFERENCE_TSC: + case HV_X64_MSR_TIME_REF_COUNT: + case HV_X64_MSR_STIMER0_CONFIG ... HV_X64_MSR_STIMER3_COUNT: return viridian_time_wrmsr(v, idx, val); case HV_X64_MSR_CRASH_P0: @@ -403,6 +407,7 @@ int guest_rdmsr_viridian(const struct vcpu *v, uint32_t idx, uint64_t *val) case HV_X64_MSR_APIC_FREQUENCY: case HV_X64_MSR_REFERENCE_TSC: case HV_X64_MSR_TIME_REF_COUNT: + case HV_X64_MSR_STIMER0_CONFIG ... 
HV_X64_MSR_STIMER3_COUNT: return viridian_time_rdmsr(v, idx, val); case HV_X64_MSR_CRASH_P0: diff --git a/xen/include/asm-x86/hvm/viridian.h b/xen/include/asm-x86/hvm/viridian.h index 03fc4c6b76..54e46cc4c4 100644 --- a/xen/include/asm-x86/hvm/viridian.h +++ b/xen/include/asm-x86/hvm/viridian.h @@ -40,6 +40,32 @@ union viridian_sint_msr }; }; +union viridian_stimer_config_msr +{ + uint64_t raw; + struct + { + uint64_t enabled:1; + uint64_t periodic:1; + uint64_t lazy:1; + uint64_t auto_enable:1; + uint64_t vector:8; + uint64_t direct_mode:1; + uint64_t reserved_zero1:3; + uint64_t sintx:4; + uint64_t reserved_zero2:44; + }; +}; + +struct viridian_stimer { + struct vcpu *v; + struct timer timer; + union viridian_stimer_config_msr config; + uint64_t count; + uint64_t expiration; + bool started; +}; + struct viridian_vcpu { struct viridian_page vp_assist; @@ -51,6 +77,9 @@ struct viridian_vcpu struct viridian_page simp; union viridian_sint_msr sint[16]; uint8_t vector_to_sintx[256]; + struct viridian_stimer stimer[4]; + unsigned int stimer_enabled; + unsigned int stimer_pending; uint64_t crash_param[5]; }; @@ -87,6 +116,7 @@ struct viridian_domain union viridian_page_msr hypercall_gpa; struct viridian_time_ref_count time_ref_count; struct viridian_page reference_tsc; + bool reference_tsc_valid; }; void cpuid_viridian_leaves(const struct vcpu *v, uint32_t leaf, @@ -111,7 +141,7 @@ void viridian_apic_assist_set(const struct vcpu *v); bool viridian_apic_assist_completed(const struct vcpu *v); void viridian_apic_assist_clear(const struct vcpu *v); -void viridian_synic_poll(const struct vcpu *v); +void viridian_synic_poll(struct vcpu *v); bool viridian_synic_is_auto_eoi_sint(const struct vcpu *v, unsigned int vector); void viridian_synic_ack_sint(const struct vcpu *v, unsigned int vector); diff --git a/xen/include/public/arch-x86/hvm/save.h b/xen/include/public/arch-x86/hvm/save.h index ec3e4df12c..8344aa471f 100644 --- a/xen/include/public/arch-x86/hvm/save.h +++ 
b/xen/include/public/arch-x86/hvm/save.h @@ -604,6 +604,8 @@ struct hvm_viridian_vcpu_context { uint8_t _pad[7]; uint64_t simp_msr; uint64_t sint_msr[16]; + uint64_t stimer_config_msr[4]; + uint64_t stimer_count_msr[4]; }; DECLARE_HVM_SAVE_TYPE(VIRIDIAN_VCPU, 17, struct hvm_viridian_vcpu_context); diff --git a/xen/include/public/hvm/params.h b/xen/include/public/hvm/params.h index e7e3c7c892..e06b0942d0 100644 --- a/xen/include/public/hvm/params.h +++ b/xen/include/public/hvm/params.h @@ -150,6 +150,10 @@ #define _HVMPV_synic 7 #define HVMPV_synic (1 << _HVMPV_synic) +/* Enable STIMER MSRs */ +#define _HVMPV_stimer 8 +#define HVMPV_stimer (1 << _HVMPV_stimer) + #define HVMPV_feature_mask \ (HVMPV_base_freq | \ HVMPV_no_freq | \ @@ -158,7 +162,8 @@ HVMPV_hcall_remote_tlb_flush | \ HVMPV_apic_assist | \ HVMPV_crash_ctl | \ - HVMPV_synic) + HVMPV_synic | \ + HVMPV_stimer) #endif

From patchwork Tue Mar 19 09:21:16 2019
X-Patchwork-Submitter: Paul Durrant
X-Patchwork-Id: 10859199
From: Paul Durrant
Date: Tue, 19 Mar 2019 09:21:16 +0000
Message-ID: <20190319092116.1525-12-paul.durrant@citrix.com>
In-Reply-To: <20190319092116.1525-1-paul.durrant@citrix.com>
References: <20190319092116.1525-1-paul.durrant@citrix.com>
Subject: [Xen-devel] [PATCH v9 11/11] viridian: add implementation of the HvSendSyntheticClusterIpi hypercall
Cc: Stefano Stabellini, Wei Liu, Konrad Rzeszutek Wilk, George Dunlap, Andrew Cooper, Ian Jackson, Tim Deegan, Julien Grall, Paul Durrant, Jan Beulich, Roger Pau Monné

This patch adds an implementation of the hypercall as documented in the specification [1], section 10.5.2. This enlightenment, as with others, is advertised by CPUID leaf 0x40000004 and is under control of a new 'hcall_ipi' option in libxl.
If used, this enlightenment should mean the guest only takes a single
VMEXIT to issue IPIs to multiple vCPUs rather than the multiple VMEXITs
that would result from using the emulated local APIC.

[1] https://github.com/MicrosoftDocs/Virtualization-Documentation/raw/live/tlfs/Hypervisor%20Top%20Level%20Functional%20Specification%20v5.0C.pdf

Signed-off-by: Paul Durrant
Acked-by: Wei Liu
Reviewed-by: Jan Beulich
---
Cc: Ian Jackson
Cc: Andrew Cooper
Cc: George Dunlap
Cc: Julien Grall
Cc: Konrad Rzeszutek Wilk
Cc: Stefano Stabellini
Cc: Tim Deegan
Cc: "Roger Pau Monné"

v4:
 - Address comments from Jan

v3:
 - New in v3
---
 docs/man/xl.cfg.5.pod.in             |  6 +++
 tools/libxl/libxl.h                  |  6 +++
 tools/libxl/libxl_dom.c              |  3 ++
 tools/libxl/libxl_types.idl          |  1 +
 xen/arch/x86/hvm/viridian/viridian.c | 63 ++++++++++++++++++++++++++++
 xen/include/public/hvm/params.h      |  7 +++-
 6 files changed, 85 insertions(+), 1 deletion(-)

diff --git a/docs/man/xl.cfg.5.pod.in b/docs/man/xl.cfg.5.pod.in
index 355c654693..c7d70e618b 100644
--- a/docs/man/xl.cfg.5.pod.in
+++ b/docs/man/xl.cfg.5.pod.in
@@ -2175,6 +2175,12 @@ ticks and hence enabling this group will ensure that ticks will be
 consistent with use of an enlightened time source (B<time_ref_count> or
 B<reference_tsc>).
 
+=item B<hcall_ipi>
+
+This set incorporates use of a hypercall for interprocessor interrupts.
+This enlightenment may improve performance of Windows guests with multiple
+virtual CPUs.
+
 =item B<defaults>
 
 This is a special value that enables the default set of groups, which
diff --git a/tools/libxl/libxl.h b/tools/libxl/libxl.h
index c8f219b0d3..482499a6c0 100644
--- a/tools/libxl/libxl.h
+++ b/tools/libxl/libxl.h
@@ -330,6 +330,12 @@
  */
 #define LIBXL_HAVE_VIRIDIAN_STIMER 1
 
+/*
+ * LIBXL_HAVE_VIRIDIAN_HCALL_IPI indicates that the 'hcall_ipi' value
+ * is present in the viridian enlightenment enumeration.
+ */
+#define LIBXL_HAVE_VIRIDIAN_HCALL_IPI 1
+
 /*
  * LIBXL_HAVE_BUILDINFO_HVM_ACPI_LAPTOP_SLATE indicates that
  * libxl_domain_build_info has the u.hvm.acpi_laptop_slate field.
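As a usage sketch, the new 'hcall_ipi' group would be enabled from a guest
configuration via the existing viridian list in xl.cfg (an illustrative
fragment, not taken from the patch itself; the rest of the guest config is
assumed):

```
# Enable the default Viridian enlightenment groups plus the
# synthetic-cluster-IPI hypercall for a multi-vCPU Windows guest.
viridian = [ "defaults", "hcall_ipi" ]
```
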
diff --git a/tools/libxl/libxl_dom.c b/tools/libxl/libxl_dom.c
index 2ee0f82ee7..879c806139 100644
--- a/tools/libxl/libxl_dom.c
+++ b/tools/libxl/libxl_dom.c
@@ -324,6 +324,9 @@ static int hvm_set_viridian_features(libxl__gc *gc, uint32_t domid,
     if (libxl_bitmap_test(&enlightenments, LIBXL_VIRIDIAN_ENLIGHTENMENT_STIMER))
         mask |= HVMPV_time_ref_count | HVMPV_synic | HVMPV_stimer;
 
+    if (libxl_bitmap_test(&enlightenments, LIBXL_VIRIDIAN_ENLIGHTENMENT_HCALL_IPI))
+        mask |= HVMPV_hcall_ipi;
+
     if (mask != 0 &&
         xc_hvm_param_set(CTX->xch,
                          domid,
diff --git a/tools/libxl/libxl_types.idl b/tools/libxl/libxl_types.idl
index 1cce249de4..cb4702fd7a 100644
--- a/tools/libxl/libxl_types.idl
+++ b/tools/libxl/libxl_types.idl
@@ -237,6 +237,7 @@ libxl_viridian_enlightenment = Enumeration("viridian_enlightenment", [
     (6, "crash_ctl"),
     (7, "synic"),
     (8, "stimer"),
+    (9, "hcall_ipi"),
     ])
 
 libxl_hdtype = Enumeration("hdtype", [
diff --git a/xen/arch/x86/hvm/viridian/viridian.c b/xen/arch/x86/hvm/viridian/viridian.c
index dce648bb4e..4b06b78a27 100644
--- a/xen/arch/x86/hvm/viridian/viridian.c
+++ b/xen/arch/x86/hvm/viridian/viridian.c
@@ -28,6 +28,7 @@
 #define HvFlushVirtualAddressSpace 0x0002
 #define HvFlushVirtualAddressList  0x0003
 #define HvNotifyLongSpinWait       0x0008
+#define HvSendSyntheticClusterIpi  0x000b
 #define HvGetPartitionId           0x0046
 #define HvExtCallQueryCapabilities 0x8001
 
@@ -95,6 +96,7 @@ typedef union _HV_CRASH_CTL_REG_CONTENTS
 #define CPUID4A_HCALL_REMOTE_TLB_FLUSH (1 << 2)
 #define CPUID4A_MSR_BASED_APIC         (1 << 3)
 #define CPUID4A_RELAX_TIMER_INT        (1 << 5)
+#define CPUID4A_SYNTHETIC_CLUSTER_IPI  (1 << 10)
 
 /* Viridian CPUID leaf 6: Implementation HW features detected and in use */
 #define CPUID6A_APIC_OVERLAY    (1 << 0)
@@ -206,6 +208,8 @@ void cpuid_viridian_leaves(const struct vcpu *v, uint32_t leaf,
             res->a |= CPUID4A_HCALL_REMOTE_TLB_FLUSH;
         if ( !cpu_has_vmx_apic_reg_virt )
             res->a |= CPUID4A_MSR_BASED_APIC;
+        if ( viridian_feature_mask(d) & HVMPV_hcall_ipi )
+            res->a |= CPUID4A_SYNTHETIC_CLUSTER_IPI;
 
         /*
          * This value is the recommended number of attempts to try to
@@ -628,6 +632,65 @@ int viridian_hypercall(struct cpu_user_regs *regs)
         break;
     }
 
+    case HvSendSyntheticClusterIpi:
+    {
+        struct vcpu *v;
+        uint32_t vector;
+        uint64_t vcpu_mask;
+
+        status = HV_STATUS_INVALID_PARAMETER;
+
+        /* Get input parameters. */
+        if ( input.fast )
+        {
+            if ( input_params_gpa >> 32 )
+                break;
+
+            vector = input_params_gpa;
+            vcpu_mask = output_params_gpa;
+        }
+        else
+        {
+            struct {
+                uint32_t vector;
+                uint8_t target_vtl;
+                uint8_t reserved_zero[3];
+                uint64_t vcpu_mask;
+            } input_params;
+
+            if ( hvm_copy_from_guest_phys(&input_params, input_params_gpa,
+                                          sizeof(input_params)) !=
+                 HVMTRANS_okay )
+                break;
+
+            if ( input_params.target_vtl ||
+                 input_params.reserved_zero[0] ||
+                 input_params.reserved_zero[1] ||
+                 input_params.reserved_zero[2] )
+                break;
+
+            vector = input_params.vector;
+            vcpu_mask = input_params.vcpu_mask;
+        }
+
+        if ( vector < 0x10 || vector > 0xff )
+            break;
+
+        for_each_vcpu ( currd, v )
+        {
+            if ( v->vcpu_id >= (sizeof(vcpu_mask) * 8) )
+                break;
+
+            if ( !(vcpu_mask & (1ul << v->vcpu_id)) )
+                continue;
+
+            vlapic_set_irq(vcpu_vlapic(v), vector, 0);
+        }
+
+        status = HV_STATUS_SUCCESS;
+        break;
+    }
+
     default:
         gprintk(XENLOG_WARNING, "unimplemented hypercall %04x\n",
                 input.call_code);
diff --git a/xen/include/public/hvm/params.h b/xen/include/public/hvm/params.h
index e06b0942d0..36832e4b94 100644
--- a/xen/include/public/hvm/params.h
+++ b/xen/include/public/hvm/params.h
@@ -154,6 +154,10 @@
 #define _HVMPV_stimer 8
 #define HVMPV_stimer (1 << _HVMPV_stimer)
 
+/* Use Synthetic Cluster IPI Hypercall */
+#define _HVMPV_hcall_ipi 9
+#define HVMPV_hcall_ipi (1 << _HVMPV_hcall_ipi)
+
 #define HVMPV_feature_mask \
     (HVMPV_base_freq | \
      HVMPV_no_freq | \
@@ -163,7 +167,8 @@
      HVMPV_apic_assist | \
      HVMPV_crash_ctl | \
      HVMPV_synic | \
-     HVMPV_stimer)
+     HVMPV_stimer | \
+     HVMPV_hcall_ipi)
 
 #endif