From patchwork Mon Mar 18 11:20:55 2019
X-Patchwork-Submitter: Paul Durrant
X-Patchwork-Id: 10857405
From: Paul Durrant
To: xen-devel@lists.xenproject.org
Date: Mon, 18 Mar 2019 11:20:55 +0000
Message-ID: <20190318112059.21910-8-paul.durrant@citrix.com>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20190318112059.21910-1-paul.durrant@citrix.com>
References: <20190318112059.21910-1-paul.durrant@citrix.com>
Subject: [Xen-devel] [PATCH v8 07/11] viridian: use viridian_map/unmap_guest_page() for reference tsc page
Cc: Andrew Cooper, Paul Durrant, Wei Liu, Jan Beulich, Roger Pau Monné

Whilst the reference tsc page does not currently need to be kept mapped
after it is initially set up (or updated after migrate), the code can be
simplified by using the common guest page map/unmap and dump functions.
New functionality added by a subsequent patch will also require the page
to be kept mapped for the lifetime of the domain.

NOTE: Because the reference tsc page is per-domain rather than per-vcpu,
      this patch also changes viridian_map_guest_page() to take a domain
      pointer rather than a vcpu pointer. The domain pointer cannot be
      const, unlike the vcpu pointer.

Signed-off-by: Paul Durrant
Reviewed-by: Wei Liu
---
Cc: Jan Beulich
Cc: Andrew Cooper
Cc: "Roger Pau Monné"
---
 xen/arch/x86/hvm/viridian/private.h  |  2 +-
 xen/arch/x86/hvm/viridian/synic.c    |  6 ++-
 xen/arch/x86/hvm/viridian/time.c     | 56 +++++++++-------------------
 xen/arch/x86/hvm/viridian/viridian.c |  3 +-
 xen/include/asm-x86/hvm/viridian.h   |  2 +-
 5 files changed, 25 insertions(+), 44 deletions(-)

diff --git a/xen/arch/x86/hvm/viridian/private.h b/xen/arch/x86/hvm/viridian/private.h
index 5078b2d2ab..96a784b840 100644
--- a/xen/arch/x86/hvm/viridian/private.h
+++ b/xen/arch/x86/hvm/viridian/private.h
@@ -111,7 +111,7 @@ void viridian_time_load_domain_ctxt(
 void viridian_dump_guest_page(const struct vcpu *v, const char *name,
                               const struct viridian_page *vp);
 
-void viridian_map_guest_page(const struct vcpu *v, struct viridian_page *vp);
+void viridian_map_guest_page(struct domain *d, struct viridian_page *vp);
 void viridian_unmap_guest_page(struct viridian_page *vp);
 
 #endif /* X86_HVM_VIRIDIAN_PRIVATE_H */
diff --git a/xen/arch/x86/hvm/viridian/synic.c b/xen/arch/x86/hvm/viridian/synic.c
index b8dab4b246..fb560bc162 100644
--- a/xen/arch/x86/hvm/viridian/synic.c
+++ b/xen/arch/x86/hvm/viridian/synic.c
@@ -81,6 +81,7 @@ void viridian_apic_assist_clear(const struct vcpu *v)
 int viridian_synic_wrmsr(struct vcpu *v, uint32_t idx, uint64_t val)
 {
     struct viridian_vcpu *vv = v->arch.hvm.viridian;
+    struct domain *d = v->domain;
 
     switch ( idx )
     {
@@ -103,7 +104,7 @@ int viridian_synic_wrmsr(struct vcpu *v, uint32_t idx, uint64_t val)
         vv->vp_assist.msr.raw = val;
         viridian_dump_guest_page(v, "VP_ASSIST", &vv->vp_assist);
         if ( vv->vp_assist.msr.enabled )
-            viridian_map_guest_page(v, &vv->vp_assist);
+            viridian_map_guest_page(d, &vv->vp_assist);
         break;
 
     default:
@@ -178,10 +179,11 @@ void viridian_synic_load_vcpu_ctxt(
     struct vcpu *v, const struct hvm_viridian_vcpu_context *ctxt)
 {
     struct viridian_vcpu *vv = v->arch.hvm.viridian;
+    struct domain *d = v->domain;
 
     vv->vp_assist.msr.raw = ctxt->vp_assist_msr;
     if ( vv->vp_assist.msr.enabled )
-        viridian_map_guest_page(v, &vv->vp_assist);
+        viridian_map_guest_page(d, &vv->vp_assist);
 
     vv->apic_assist_pending = ctxt->apic_assist_pending;
 }
diff --git a/xen/arch/x86/hvm/viridian/time.c b/xen/arch/x86/hvm/viridian/time.c
index 4399e62f54..16fe41d411 100644
--- a/xen/arch/x86/hvm/viridian/time.c
+++ b/xen/arch/x86/hvm/viridian/time.c
@@ -25,33 +25,10 @@ typedef struct _HV_REFERENCE_TSC_PAGE
     uint64_t Reserved2[509];
 } HV_REFERENCE_TSC_PAGE, *PHV_REFERENCE_TSC_PAGE;
 
-static void dump_reference_tsc(const struct domain *d)
-{
-    const union viridian_page_msr *rt = &d->arch.hvm.viridian->reference_tsc;
-
-    if ( !rt->enabled )
-        return;
-
-    printk(XENLOG_G_INFO "d%d: VIRIDIAN REFERENCE_TSC: pfn: %lx\n",
-           d->domain_id, (unsigned long)rt->pfn);
-}
-
 static void update_reference_tsc(struct domain *d, bool initialize)
 {
-    unsigned long gmfn = d->arch.hvm.viridian->reference_tsc.pfn;
-    struct page_info *page = get_page_from_gfn(d, gmfn, NULL, P2M_ALLOC);
-    HV_REFERENCE_TSC_PAGE *p;
-
-    if ( !page || !get_page_type(page, PGT_writable_page) )
-    {
-        if ( page )
-            put_page(page);
-        gdprintk(XENLOG_WARNING, "Bad GMFN %#"PRI_gfn" (MFN %#"PRI_mfn")\n",
-                 gmfn, mfn_x(page ? page_to_mfn(page) : INVALID_MFN));
-        return;
-    }
-
-    p = __map_domain_page(page);
+    const struct viridian_page *rt = &d->arch.hvm.viridian->reference_tsc;
+    HV_REFERENCE_TSC_PAGE *p = rt->ptr;
 
     if ( initialize )
         clear_page(p);
@@ -82,7 +59,7 @@ static void update_reference_tsc(struct domain *d, bool initialize)
 
         printk(XENLOG_G_INFO "d%d: VIRIDIAN REFERENCE_TSC: invalidated\n",
                d->domain_id);
-        goto out;
+        return;
     }
 
     /*
@@ -100,11 +77,6 @@ static void update_reference_tsc(struct domain *d, bool initialize)
     if ( p->TscSequence == 0xFFFFFFFF ||
          p->TscSequence == 0 ) /* Avoid both 'invalid' values */
         p->TscSequence = 1;
-
- out:
-    unmap_domain_page(p);
-
-    put_page_and_type(page);
 }
 
 static int64_t raw_trc_val(const struct domain *d)
@@ -149,10 +121,14 @@ int viridian_time_wrmsr(struct vcpu *v, uint32_t idx, uint64_t val)
         if ( !(viridian_feature_mask(d) & HVMPV_reference_tsc) )
             return X86EMUL_EXCEPTION;
 
-        vd->reference_tsc.raw = val;
-        dump_reference_tsc(d);
-        if ( vd->reference_tsc.enabled )
+        viridian_unmap_guest_page(&vd->reference_tsc);
+        vd->reference_tsc.msr.raw = val;
+        viridian_dump_guest_page(v, "REFERENCE_TSC", &vd->reference_tsc);
+        if ( vd->reference_tsc.msr.enabled )
+        {
+            viridian_map_guest_page(d, &vd->reference_tsc);
             update_reference_tsc(d, true);
+        }
         break;
 
     default:
@@ -189,7 +165,7 @@ int viridian_time_rdmsr(const struct vcpu *v, uint32_t idx, uint64_t *val)
         if ( !(viridian_feature_mask(d) & HVMPV_reference_tsc) )
             return X86EMUL_EXCEPTION;
 
-        *val = vd->reference_tsc.raw;
+        *val = vd->reference_tsc.msr.raw;
         break;
 
     case HV_X64_MSR_TIME_REF_COUNT:
@@ -231,6 +207,7 @@ void viridian_time_vcpu_deinit(const struct vcpu *v)
 
 void viridian_time_domain_deinit(const struct domain *d)
 {
+    viridian_unmap_guest_page(&d->arch.hvm.viridian->reference_tsc);
 }
 
 void viridian_time_save_vcpu_ctxt(
@@ -249,7 +226,7 @@ void viridian_time_save_domain_ctxt(
     const struct viridian_domain *vd = d->arch.hvm.viridian;
 
     ctxt->time_ref_count = vd->time_ref_count.val;
-    ctxt->reference_tsc = vd->reference_tsc.raw;
+    ctxt->reference_tsc = vd->reference_tsc.msr.raw;
 }
 
 void viridian_time_load_domain_ctxt(
@@ -258,10 +235,13 @@ void viridian_time_load_domain_ctxt(
     struct viridian_domain *vd = d->arch.hvm.viridian;
 
     vd->time_ref_count.val = ctxt->time_ref_count;
-    vd->reference_tsc.raw = ctxt->reference_tsc;
+    vd->reference_tsc.msr.raw = ctxt->reference_tsc;
 
-    if ( vd->reference_tsc.enabled )
+    if ( vd->reference_tsc.msr.enabled )
+    {
+        viridian_map_guest_page(d, &vd->reference_tsc);
         update_reference_tsc(d, false);
+    }
 }
 
 /*
diff --git a/xen/arch/x86/hvm/viridian/viridian.c b/xen/arch/x86/hvm/viridian/viridian.c
index 742a988252..2b045ed88f 100644
--- a/xen/arch/x86/hvm/viridian/viridian.c
+++ b/xen/arch/x86/hvm/viridian/viridian.c
@@ -644,9 +644,8 @@ void viridian_dump_guest_page(const struct vcpu *v, const char *name,
            v, name, (unsigned long)vp->msr.pfn);
 }
 
-void viridian_map_guest_page(const struct vcpu *v, struct viridian_page *vp)
+void viridian_map_guest_page(struct domain *d, struct viridian_page *vp)
 {
-    struct domain *d = v->domain;
     unsigned long gmfn = vp->msr.pfn;
     struct page_info *page;
 
diff --git a/xen/include/asm-x86/hvm/viridian.h b/xen/include/asm-x86/hvm/viridian.h
index abbbb36092..c65c044191 100644
--- a/xen/include/asm-x86/hvm/viridian.h
+++ b/xen/include/asm-x86/hvm/viridian.h
@@ -65,7 +65,7 @@ struct viridian_domain
     union viridian_guest_os_id_msr guest_os_id;
     union viridian_page_msr hypercall_gpa;
     struct viridian_time_ref_count time_ref_count;
-    union viridian_page_msr reference_tsc;
+    struct viridian_page reference_tsc;
 };
 
 void cpuid_viridian_leaves(const struct vcpu *v, uint32_t leaf,
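
For readers unfamiliar with the viridian_page helpers this patch switches to,
the stand-alone C sketch below models the map/use/unmap lifecycle that the
WRMSR handler and viridian_time_domain_deinit() now follow for the reference
tsc page. It is only an illustration under simplifying assumptions: all
demo_* names are stand-ins and a malloc()ed buffer replaces the guest frame,
whereas the real viridian_map_guest_page() pins and maps the nominated gfn,
much like the code removed from update_reference_tsc() above.

/*
 * Illustrative, user-space model of the viridian_page map/use/unmap
 * lifecycle adopted by this patch.  Not hypervisor code.
 */
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

#define DEMO_PAGE_SIZE 4096

/* Shape of the MSR nominating the guest page (cf. union viridian_page_msr). */
struct demo_page_msr {
    bool enabled;
    uint64_t pfn;
};

/* Shape of struct viridian_page: the MSR value plus a long-lived mapping. */
struct demo_page {
    struct demo_page_msr msr;
    void *ptr;                       /* non-NULL while the page is mapped */
};

/* Stand-in for viridian_map_guest_page(): establish the mapping once. */
static void demo_map_guest_page(struct demo_page *vp)
{
    if ( vp->ptr )
        return;                      /* already mapped */
    vp->ptr = calloc(1, DEMO_PAGE_SIZE); /* real code maps the guest frame */
}

/* Stand-in for viridian_unmap_guest_page(): drop the mapping if present. */
static void demo_unmap_guest_page(struct demo_page *vp)
{
    free(vp->ptr);
    vp->ptr = NULL;
}

/* Mirrors the new update_reference_tsc(): write through the mapping. */
static void demo_update_page(struct demo_page *vp, bool initialize)
{
    uint32_t *seq = vp->ptr;         /* only called while mapped */

    if ( initialize )
        memset(vp->ptr, 0, DEMO_PAGE_SIZE);

    *seq += 1;                       /* e.g. bump TscSequence */
}

int main(void)
{
    struct demo_page reference_tsc = {
        .msr = { .enabled = true, .pfn = 0x1234 },
    };

    /* WRMSR path: unmap any old page, record the MSR, map and initialise. */
    demo_unmap_guest_page(&reference_tsc);
    if ( reference_tsc.msr.enabled )
    {
        demo_map_guest_page(&reference_tsc);
        demo_update_page(&reference_tsc, true);
    }

    /* The mapping now persists for later updates ... */
    printf("mapped at %p\n", reference_tsc.ptr);

    /* ... until domain teardown, where viridian_time_domain_deinit()
     * performs the final unmap. */
    demo_unmap_guest_page(&reference_tsc);

    return 0;
}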