From patchwork Mon Mar 18 11:20:49 2019
X-Patchwork-Submitter: Paul Durrant
X-Patchwork-Id: 10857397
From: Paul Durrant
Date: Mon, 18 Mar 2019 11:20:49 +0000
Message-ID: <20190318112059.21910-2-paul.durrant@citrix.com>
In-Reply-To: <20190318112059.21910-1-paul.durrant@citrix.com>
Subject: [Xen-devel] [PATCH v8 01/11] viridian: add init hooks
Cc: Andrew Cooper, Paul Durrant, Wei Liu, Jan Beulich, Roger Pau Monné

This patch adds domain and vcpu init hooks for viridian features. The init
hooks do not yet do anything; the functionality will be added by subsequent
patches.

Signed-off-by: Paul Durrant
Reviewed-by: Wei Liu
Acked-by: Jan Beulich
---
Cc: Andrew Cooper
Cc: "Roger Pau Monné"

v5:
 - Put the call to viridian_domain_deinit() back into
   hvm_domain_relinquish_resources() where it should be

v3:
 - Re-instate call from domain deinit to vcpu deinit
 - Move deinit calls to avoid introducing new labels

v2:
 - Remove call from domain deinit to vcpu deinit
---
 xen/arch/x86/hvm/hvm.c               | 10 ++++++++++
 xen/arch/x86/hvm/viridian/viridian.c | 10 ++++++++++
 xen/include/asm-x86/hvm/viridian.h   |  3 +++
 3 files changed, 23 insertions(+)

diff --git a/xen/arch/x86/hvm/hvm.c b/xen/arch/x86/hvm/hvm.c
index 8adbb61b57..11ce21fc08 100644
--- a/xen/arch/x86/hvm/hvm.c
+++ b/xen/arch/x86/hvm/hvm.c
@@ -666,6 +666,10 @@ int hvm_domain_initialise(struct domain *d)
     if ( hvm_tsc_scaling_supported )
         d->arch.hvm.tsc_scaling_ratio = hvm_default_tsc_scaling_ratio;
 
+    rc = viridian_domain_init(d);
+    if ( rc )
+        goto fail2;
+
     rc = hvm_funcs.domain_initialise(d);
     if ( rc != 0 )
         goto fail2;
@@ -687,6 +691,7 @@ int hvm_domain_initialise(struct domain *d)
     hvm_destroy_cacheattr_region_list(d);
     destroy_perdomain_mapping(d, PERDOMAIN_VIRT_START, 0);
  fail:
+    viridian_domain_deinit(d);
     return rc;
 }
 
@@ -1526,6 +1531,10 @@ int hvm_vcpu_initialise(struct vcpu *v)
          && (rc = nestedhvm_vcpu_initialise(v)) < 0 ) /* teardown: nestedhvm_vcpu_destroy */
         goto fail5;
 
+    rc = viridian_vcpu_init(v);
+    if ( rc )
+        goto fail5;
+
     rc = hvm_all_ioreq_servers_add_vcpu(d, v);
     if ( rc != 0 )
         goto fail6;
@@ -1553,6 +1562,7 @@ int hvm_vcpu_initialise(struct vcpu *v)
  fail2:
     hvm_vcpu_cacheattr_destroy(v);
  fail1:
+    viridian_vcpu_deinit(v);
     return rc;
 }
 
diff --git a/xen/arch/x86/hvm/viridian/viridian.c b/xen/arch/x86/hvm/viridian/viridian.c
index 425af56856..5b0eb8a8c7 100644
--- a/xen/arch/x86/hvm/viridian/viridian.c
+++ b/xen/arch/x86/hvm/viridian/viridian.c
@@ -417,6 +417,16 @@ int guest_rdmsr_viridian(const struct vcpu *v, uint32_t idx, uint64_t *val)
     return X86EMUL_OKAY;
 }
 
+int viridian_vcpu_init(struct vcpu *v)
+{
+    return 0;
+}
+
+int viridian_domain_init(struct domain *d)
+{
+    return 0;
+}
+
 void viridian_vcpu_deinit(struct vcpu *v)
 {
     viridian_synic_wrmsr(v, HV_X64_MSR_VP_ASSIST_PAGE, 0);
diff --git a/xen/include/asm-x86/hvm/viridian.h b/xen/include/asm-x86/hvm/viridian.h
index ec5ef8d3f9..f072838955 100644
--- a/xen/include/asm-x86/hvm/viridian.h
+++ b/xen/include/asm-x86/hvm/viridian.h
@@ -80,6 +80,9 @@ viridian_hypercall(struct cpu_user_regs *regs);
 void viridian_time_ref_count_freeze(struct domain *d);
 void viridian_time_ref_count_thaw(struct domain *d);
 
+int viridian_vcpu_init(struct vcpu *v);
+int viridian_domain_init(struct domain *d);
+
 void viridian_vcpu_deinit(struct vcpu *v);
 void viridian_domain_deinit(struct domain *d);
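A note on the error-path wiring in the patch above: hvm_domain_initialise() and
hvm_vcpu_initialise() unwind through shared failure labels, so
viridian_domain_deinit()/viridian_vcpu_deinit() can run on paths where the
corresponding init hook failed or never completed. The deinit hooks therefore
have to tolerate partially-initialised state. A minimal standalone sketch of
that goto-unwind idiom follows; the object/feature_* names are invented for
illustration and are not the Xen code:

    #include <stdlib.h>

    struct object {
        void *a_state;
        void *b_state;
    };

    /* Deinit must be safe even when the matching init failed or never ran. */
    static void feature_a_deinit(struct object *obj)
    {
        free(obj->a_state);     /* free(NULL) is a no-op */
        obj->a_state = NULL;
    }

    static int feature_a_init(struct object *obj)
    {
        obj->a_state = calloc(1, 64);
        return obj->a_state ? 0 : -1;
    }

    static int feature_b_init(struct object *obj)
    {
        obj->b_state = calloc(1, 64);
        return obj->b_state ? 0 : -1;
    }

    int object_initialise(struct object *obj)
    {
        int rc = feature_a_init(obj);

        if ( rc )
            goto fail;

        rc = feature_b_init(obj);
        if ( rc )
            goto fail;

        return 0;

     fail:
        /* Common error path: tear down everything, initialised or not. */
        feature_a_deinit(obj);
        return rc;
    }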
From patchwork Mon Mar 18 11:20:50 2019
X-Patchwork-Submitter: Paul Durrant
X-Patchwork-Id: 10857409
From: Paul Durrant
Date: Mon, 18 Mar 2019 11:20:50 +0000
Message-ID: <20190318112059.21910-3-paul.durrant@citrix.com>
In-Reply-To: <20190318112059.21910-1-paul.durrant@citrix.com>
Subject: [Xen-devel] [PATCH v8 02/11] viridian: separately allocate domain and vcpu structures
Cc: Andrew Cooper, Paul Durrant, Wei Liu, Jan Beulich, Roger Pau Monné

Currently the viridian_domain and viridian_vcpu structures are inline in
the hvm_domain and hvm_vcpu structures respectively. Subsequent patches
will need to add sizable extra fields to the viridian structures which
will cause the PAGE_SIZE limit of the overall vcpu structure to be
exceeded. This patch, therefore, uses the new init hooks to separately
allocate the structures and converts the 'viridian' fields in hvm_domain
and hvm_vcpu to be pointers to these allocations. These separate
allocations also allow some vcpu and domain pointers to become const.

Ideally, now that they are no longer inline, the allocations of the
viridian structures could be made conditional on whether the toolstack
is going to configure the viridian enlightenments. However the toolstack
is currently unable to convey this information to the domain creation
code, so such an enhancement is deferred until that becomes possible.

NOTE: The patch also introduces the 'is_viridian_vcpu' macro to avoid
      introducing a second evaluation of 'is_viridian_domain' with an
      open-coded 'v->domain' argument. This macro will also be used in
      a subsequent patch.
Signed-off-by: Paul Durrant
Reviewed-by: Wei Liu
Acked-by: Jan Beulich
---
Cc: Andrew Cooper
Cc: "Roger Pau Monné"

v4:
 - Const-ify some vcpu and domain pointers

v2:
 - use XFREE()
 - expand commit comment to point out why allocations are unconditional
---
 xen/arch/x86/hvm/viridian/private.h  |  2 +-
 xen/arch/x86/hvm/viridian/synic.c    | 46 ++++++++---------
 xen/arch/x86/hvm/viridian/time.c     | 38 +++++++-------
 xen/arch/x86/hvm/viridian/viridian.c | 75 ++++++++++++++++++----------
 xen/include/asm-x86/hvm/domain.h     |  2 +-
 xen/include/asm-x86/hvm/hvm.h        |  4 ++
 xen/include/asm-x86/hvm/vcpu.h       |  2 +-
 xen/include/asm-x86/hvm/viridian.h   | 10 ++--
 8 files changed, 101 insertions(+), 78 deletions(-)

diff --git a/xen/arch/x86/hvm/viridian/private.h b/xen/arch/x86/hvm/viridian/private.h
index 398b22f12d..46174f48cd 100644
--- a/xen/arch/x86/hvm/viridian/private.h
+++ b/xen/arch/x86/hvm/viridian/private.h
@@ -89,7 +89,7 @@ void viridian_time_load_domain_ctxt(
 void viridian_dump_guest_page(const struct vcpu *v, const char *name,
                               const struct viridian_page *vp);
 
-void viridian_map_guest_page(struct vcpu *v, struct viridian_page *vp);
+void viridian_map_guest_page(const struct vcpu *v, struct viridian_page *vp);
 void viridian_unmap_guest_page(struct viridian_page *vp);
 
 #endif /* X86_HVM_VIRIDIAN_PRIVATE_H */
diff --git a/xen/arch/x86/hvm/viridian/synic.c b/xen/arch/x86/hvm/viridian/synic.c
index a6ebbbc9f5..28eda7798c 100644
--- a/xen/arch/x86/hvm/viridian/synic.c
+++ b/xen/arch/x86/hvm/viridian/synic.c
@@ -28,9 +28,9 @@ typedef union _HV_VP_ASSIST_PAGE
     uint8_t ReservedZBytePadding[PAGE_SIZE];
 } HV_VP_ASSIST_PAGE;
 
-void viridian_apic_assist_set(struct vcpu *v)
+void viridian_apic_assist_set(const struct vcpu *v)
 {
-    HV_VP_ASSIST_PAGE *ptr = v->arch.hvm.viridian.vp_assist.ptr;
+    HV_VP_ASSIST_PAGE *ptr = v->arch.hvm.viridian->vp_assist.ptr;
 
     if ( !ptr )
         return;
@@ -40,40 +40,40 @@ void viridian_apic_assist_set(struct vcpu *v)
      * wrong and the VM will most likely hang so force a crash now
      * to make the problem clear.
      */
-    if ( v->arch.hvm.viridian.apic_assist_pending )
+    if ( v->arch.hvm.viridian->apic_assist_pending )
         domain_crash(v->domain);
 
-    v->arch.hvm.viridian.apic_assist_pending = true;
+    v->arch.hvm.viridian->apic_assist_pending = true;
     ptr->ApicAssist.no_eoi = 1;
 }
 
-bool viridian_apic_assist_completed(struct vcpu *v)
+bool viridian_apic_assist_completed(const struct vcpu *v)
 {
-    HV_VP_ASSIST_PAGE *ptr = v->arch.hvm.viridian.vp_assist.ptr;
+    HV_VP_ASSIST_PAGE *ptr = v->arch.hvm.viridian->vp_assist.ptr;
 
     if ( !ptr )
         return false;
 
-    if ( v->arch.hvm.viridian.apic_assist_pending &&
+    if ( v->arch.hvm.viridian->apic_assist_pending &&
          !ptr->ApicAssist.no_eoi )
     {
         /* An EOI has been avoided */
-        v->arch.hvm.viridian.apic_assist_pending = false;
+        v->arch.hvm.viridian->apic_assist_pending = false;
         return true;
     }
 
     return false;
 }
 
-void viridian_apic_assist_clear(struct vcpu *v)
+void viridian_apic_assist_clear(const struct vcpu *v)
 {
-    HV_VP_ASSIST_PAGE *ptr = v->arch.hvm.viridian.vp_assist.ptr;
+    HV_VP_ASSIST_PAGE *ptr = v->arch.hvm.viridian->vp_assist.ptr;
 
     if ( !ptr )
         return;
 
     ptr->ApicAssist.no_eoi = 0;
-    v->arch.hvm.viridian.apic_assist_pending = false;
+    v->arch.hvm.viridian->apic_assist_pending = false;
 }
 
 int viridian_synic_wrmsr(struct vcpu *v, uint32_t idx, uint64_t val)
@@ -95,12 +95,12 @@ int viridian_synic_wrmsr(struct vcpu *v, uint32_t idx, uint64_t val)
 
     case HV_X64_MSR_VP_ASSIST_PAGE:
         /* release any previous mapping */
-        viridian_unmap_guest_page(&v->arch.hvm.viridian.vp_assist);
-        v->arch.hvm.viridian.vp_assist.msr.raw = val;
+        viridian_unmap_guest_page(&v->arch.hvm.viridian->vp_assist);
+        v->arch.hvm.viridian->vp_assist.msr.raw = val;
         viridian_dump_guest_page(v, "VP_ASSIST",
-                                 &v->arch.hvm.viridian.vp_assist);
-        if ( v->arch.hvm.viridian.vp_assist.msr.fields.enabled )
-            viridian_map_guest_page(v, &v->arch.hvm.viridian.vp_assist);
+                                 &v->arch.hvm.viridian->vp_assist);
+        if ( v->arch.hvm.viridian->vp_assist.msr.fields.enabled )
+            viridian_map_guest_page(v, &v->arch.hvm.viridian->vp_assist);
         break;
 
     default:
@@ -132,7 +132,7 @@ int viridian_synic_rdmsr(const struct vcpu *v, uint32_t idx, uint64_t *val)
         break;
 
     case HV_X64_MSR_VP_ASSIST_PAGE:
-        *val = v->arch.hvm.viridian.vp_assist.msr.raw;
+        *val = v->arch.hvm.viridian->vp_assist.msr.raw;
         break;
 
     default:
@@ -146,18 +146,18 @@ int viridian_synic_rdmsr(const struct vcpu *v, uint32_t idx, uint64_t *val)
 void viridian_synic_save_vcpu_ctxt(const struct vcpu *v,
                                    struct hvm_viridian_vcpu_context *ctxt)
 {
-    ctxt->apic_assist_pending = v->arch.hvm.viridian.apic_assist_pending;
-    ctxt->vp_assist_msr = v->arch.hvm.viridian.vp_assist.msr.raw;
+    ctxt->apic_assist_pending = v->arch.hvm.viridian->apic_assist_pending;
+    ctxt->vp_assist_msr = v->arch.hvm.viridian->vp_assist.msr.raw;
 }
 
 void viridian_synic_load_vcpu_ctxt(
     struct vcpu *v, const struct hvm_viridian_vcpu_context *ctxt)
 {
-    v->arch.hvm.viridian.vp_assist.msr.raw = ctxt->vp_assist_msr;
-    if ( v->arch.hvm.viridian.vp_assist.msr.fields.enabled )
-        viridian_map_guest_page(v, &v->arch.hvm.viridian.vp_assist);
+    v->arch.hvm.viridian->vp_assist.msr.raw = ctxt->vp_assist_msr;
+    if ( v->arch.hvm.viridian->vp_assist.msr.fields.enabled )
+        viridian_map_guest_page(v, &v->arch.hvm.viridian->vp_assist);
 
-    v->arch.hvm.viridian.apic_assist_pending = ctxt->apic_assist_pending;
+    v->arch.hvm.viridian->apic_assist_pending = ctxt->apic_assist_pending;
 }
 
 /*
diff --git a/xen/arch/x86/hvm/viridian/time.c b/xen/arch/x86/hvm/viridian/time.c
index 840a82b457..a7e94aadf0 100644
--- a/xen/arch/x86/hvm/viridian/time.c
+++ b/xen/arch/x86/hvm/viridian/time.c
@@ -27,7 +27,7 @@ typedef struct _HV_REFERENCE_TSC_PAGE
 
 static void dump_reference_tsc(const struct domain *d)
 {
-    const union viridian_page_msr *rt = &d->arch.hvm.viridian.reference_tsc;
+    const union viridian_page_msr *rt = &d->arch.hvm.viridian->reference_tsc;
 
     if ( !rt->fields.enabled )
         return;
@@ -38,7 +38,7 @@ static void dump_reference_tsc(const struct domain *d)
 
 static void update_reference_tsc(struct domain *d, bool initialize)
 {
-    unsigned long gmfn = d->arch.hvm.viridian.reference_tsc.fields.pfn;
+    unsigned long gmfn = d->arch.hvm.viridian->reference_tsc.fields.pfn;
     struct page_info *page = get_page_from_gfn(d, gmfn, NULL, P2M_ALLOC);
     HV_REFERENCE_TSC_PAGE *p;
 
@@ -107,7 +107,7 @@ static void update_reference_tsc(struct domain *d, bool initialize)
     put_page_and_type(page);
 }
 
-static int64_t raw_trc_val(struct domain *d)
+static int64_t raw_trc_val(const struct domain *d)
 {
     uint64_t tsc;
     struct time_scale tsc_to_ns;
@@ -119,21 +119,19 @@ static int64_t raw_trc_val(struct domain *d)
     return scale_delta(tsc, &tsc_to_ns) / 100ul;
 }
 
-void viridian_time_ref_count_freeze(struct domain *d)
+void viridian_time_ref_count_freeze(const struct domain *d)
 {
-    struct viridian_time_ref_count *trc;
-
-    trc = &d->arch.hvm.viridian.time_ref_count;
+    struct viridian_time_ref_count *trc =
+        &d->arch.hvm.viridian->time_ref_count;
 
     if ( test_and_clear_bit(_TRC_running, &trc->flags) )
         trc->val = raw_trc_val(d) + trc->off;
 }
 
-void viridian_time_ref_count_thaw(struct domain *d)
+void viridian_time_ref_count_thaw(const struct domain *d)
 {
-    struct viridian_time_ref_count *trc;
-
-    trc = &d->arch.hvm.viridian.time_ref_count;
+    struct viridian_time_ref_count *trc =
+        &d->arch.hvm.viridian->time_ref_count;
 
     if ( !d->is_shutting_down &&
          !test_and_set_bit(_TRC_running, &trc->flags) )
@@ -150,9 +148,9 @@ int viridian_time_wrmsr(struct vcpu *v, uint32_t idx, uint64_t val)
         if ( !(viridian_feature_mask(d) & HVMPV_reference_tsc) )
             return X86EMUL_EXCEPTION;
 
-        d->arch.hvm.viridian.reference_tsc.raw = val;
+        d->arch.hvm.viridian->reference_tsc.raw = val;
         dump_reference_tsc(d);
-        if ( d->arch.hvm.viridian.reference_tsc.fields.enabled )
+        if ( d->arch.hvm.viridian->reference_tsc.fields.enabled )
             update_reference_tsc(d, true);
         break;
 
@@ -189,13 +187,13 @@ int viridian_time_rdmsr(const struct vcpu *v, uint32_t idx, uint64_t *val)
         if ( !(viridian_feature_mask(d) & HVMPV_reference_tsc) )
             return X86EMUL_EXCEPTION;
 
-        *val = d->arch.hvm.viridian.reference_tsc.raw;
+        *val = d->arch.hvm.viridian->reference_tsc.raw;
         break;
 
     case HV_X64_MSR_TIME_REF_COUNT:
     {
         struct viridian_time_ref_count *trc =
-            &d->arch.hvm.viridian.time_ref_count;
+            &d->arch.hvm.viridian->time_ref_count;
 
         if ( !(viridian_feature_mask(d) & HVMPV_time_ref_count) )
             return X86EMUL_EXCEPTION;
@@ -219,17 +217,17 @@ int viridian_time_rdmsr(const struct vcpu *v, uint32_t idx, uint64_t *val)
 void viridian_time_save_domain_ctxt(
     const struct domain *d, struct hvm_viridian_domain_context *ctxt)
 {
-    ctxt->time_ref_count = d->arch.hvm.viridian.time_ref_count.val;
-    ctxt->reference_tsc = d->arch.hvm.viridian.reference_tsc.raw;
+    ctxt->time_ref_count = d->arch.hvm.viridian->time_ref_count.val;
+    ctxt->reference_tsc = d->arch.hvm.viridian->reference_tsc.raw;
 }
 
 void viridian_time_load_domain_ctxt(
     struct domain *d, const struct hvm_viridian_domain_context *ctxt)
 {
-    d->arch.hvm.viridian.time_ref_count.val = ctxt->time_ref_count;
-    d->arch.hvm.viridian.reference_tsc.raw = ctxt->reference_tsc;
+    d->arch.hvm.viridian->time_ref_count.val = ctxt->time_ref_count;
+    d->arch.hvm.viridian->reference_tsc.raw = ctxt->reference_tsc;
 
-    if ( d->arch.hvm.viridian.reference_tsc.fields.enabled )
+    if ( d->arch.hvm.viridian->reference_tsc.fields.enabled )
         update_reference_tsc(d, false);
 }
 
diff --git a/xen/arch/x86/hvm/viridian/viridian.c b/xen/arch/x86/hvm/viridian/viridian.c
index 5b0eb8a8c7..7839718ef4 100644
--- a/xen/arch/x86/hvm/viridian/viridian.c
+++ b/xen/arch/x86/hvm/viridian/viridian.c
@@ -146,7 +146,7 @@ void cpuid_viridian_leaves(const struct vcpu *v, uint32_t leaf,
          * Hypervisor information, but only if the guest has set its
          * own version number.
          */
-        if ( d->arch.hvm.viridian.guest_os_id.raw == 0 )
+        if ( d->arch.hvm.viridian->guest_os_id.raw == 0 )
             break;
         res->a = viridian_build;
         res->b = ((uint32_t)viridian_major << 16) | viridian_minor;
@@ -191,8 +191,8 @@ void cpuid_viridian_leaves(const struct vcpu *v, uint32_t leaf,
 
     case 4:
         /* Recommended hypercall usage. */
-        if ( (d->arch.hvm.viridian.guest_os_id.raw == 0) ||
-             (d->arch.hvm.viridian.guest_os_id.fields.os < 4) )
+        if ( (d->arch.hvm.viridian->guest_os_id.raw == 0) ||
+             (d->arch.hvm.viridian->guest_os_id.fields.os < 4) )
             break;
         res->a = CPUID4A_RELAX_TIMER_INT;
         if ( viridian_feature_mask(d) & HVMPV_hcall_remote_tlb_flush )
@@ -224,7 +224,7 @@ static void dump_guest_os_id(const struct domain *d)
 {
     const union viridian_guest_os_id_msr *goi;
 
-    goi = &d->arch.hvm.viridian.guest_os_id;
+    goi = &d->arch.hvm.viridian->guest_os_id;
 
     printk(XENLOG_G_INFO
            "d%d: VIRIDIAN GUEST_OS_ID: vendor: %x os: %x major: %x minor: %x sp: %x build: %x\n",
@@ -238,7 +238,7 @@ static void dump_hypercall(const struct domain *d)
 {
     const union viridian_page_msr *hg;
 
-    hg = &d->arch.hvm.viridian.hypercall_gpa;
+    hg = &d->arch.hvm.viridian->hypercall_gpa;
 
     printk(XENLOG_G_INFO "d%d: VIRIDIAN HYPERCALL: enabled: %x pfn: %lx\n",
            d->domain_id,
@@ -247,7 +247,7 @@ static void dump_hypercall(const struct domain *d)
 
 static void enable_hypercall_page(struct domain *d)
 {
-    unsigned long gmfn = d->arch.hvm.viridian.hypercall_gpa.fields.pfn;
+    unsigned long gmfn = d->arch.hvm.viridian->hypercall_gpa.fields.pfn;
     struct page_info *page = get_page_from_gfn(d, gmfn, NULL, P2M_ALLOC);
     uint8_t *p;
 
@@ -288,14 +288,14 @@ int guest_wrmsr_viridian(struct vcpu *v, uint32_t idx, uint64_t val)
     switch ( idx )
     {
     case HV_X64_MSR_GUEST_OS_ID:
-        d->arch.hvm.viridian.guest_os_id.raw = val;
+        d->arch.hvm.viridian->guest_os_id.raw = val;
         dump_guest_os_id(d);
         break;
 
     case HV_X64_MSR_HYPERCALL:
-        d->arch.hvm.viridian.hypercall_gpa.raw = val;
+        d->arch.hvm.viridian->hypercall_gpa.raw = val;
         dump_hypercall(d);
-        if ( d->arch.hvm.viridian.hypercall_gpa.fields.enabled )
+        if ( d->arch.hvm.viridian->hypercall_gpa.fields.enabled )
             enable_hypercall_page(d);
         break;
 
@@ -317,10 +317,10 @@ int guest_wrmsr_viridian(struct vcpu *v, uint32_t idx, uint64_t val)
     case HV_X64_MSR_CRASH_P3:
     case HV_X64_MSR_CRASH_P4:
         BUILD_BUG_ON(HV_X64_MSR_CRASH_P4 - HV_X64_MSR_CRASH_P0 >=
-                     ARRAY_SIZE(v->arch.hvm.viridian.crash_param));
+                     ARRAY_SIZE(v->arch.hvm.viridian->crash_param));
 
         idx -= HV_X64_MSR_CRASH_P0;
-        v->arch.hvm.viridian.crash_param[idx] = val;
+        v->arch.hvm.viridian->crash_param[idx] = val;
         break;
 
     case HV_X64_MSR_CRASH_CTL:
@@ -337,11 +337,11 @@ int guest_wrmsr_viridian(struct vcpu *v, uint32_t idx, uint64_t val)
         spin_unlock(&d->shutdown_lock);
 
         gprintk(XENLOG_WARNING, "VIRIDIAN CRASH: %lx %lx %lx %lx %lx\n",
-                v->arch.hvm.viridian.crash_param[0],
-                v->arch.hvm.viridian.crash_param[1],
-                v->arch.hvm.viridian.crash_param[2],
-                v->arch.hvm.viridian.crash_param[3],
-                v->arch.hvm.viridian.crash_param[4]);
+                v->arch.hvm.viridian->crash_param[0],
+                v->arch.hvm.viridian->crash_param[1],
+                v->arch.hvm.viridian->crash_param[2],
+                v->arch.hvm.viridian->crash_param[3],
+                v->arch.hvm.viridian->crash_param[4]);
 
         break;
     }
@@ -364,11 +364,11 @@ int guest_rdmsr_viridian(const struct vcpu *v, uint32_t idx, uint64_t *val)
     switch ( idx )
     {
     case HV_X64_MSR_GUEST_OS_ID:
-        *val = d->arch.hvm.viridian.guest_os_id.raw;
+        *val = d->arch.hvm.viridian->guest_os_id.raw;
        break;
 
     case HV_X64_MSR_HYPERCALL:
-        *val = d->arch.hvm.viridian.hypercall_gpa.raw;
+        *val = d->arch.hvm.viridian->hypercall_gpa.raw;
         break;
 
     case HV_X64_MSR_VP_INDEX:
@@ -393,10 +393,10 @@ int guest_rdmsr_viridian(const struct vcpu *v, uint32_t idx, uint64_t *val)
     case HV_X64_MSR_CRASH_P3:
     case HV_X64_MSR_CRASH_P4:
         BUILD_BUG_ON(HV_X64_MSR_CRASH_P4 - HV_X64_MSR_CRASH_P0 >=
-                     ARRAY_SIZE(v->arch.hvm.viridian.crash_param));
+                     ARRAY_SIZE(v->arch.hvm.viridian->crash_param));
 
         idx -= HV_X64_MSR_CRASH_P0;
-        *val = v->arch.hvm.viridian.crash_param[idx];
+        *val = v->arch.hvm.viridian->crash_param[idx];
         break;
 
     case HV_X64_MSR_CRASH_CTL:
@@ -419,17 +419,33 @@ int guest_rdmsr_viridian(const struct vcpu *v, uint32_t idx, uint64_t *val)
 
 int viridian_vcpu_init(struct vcpu *v)
 {
+    ASSERT(!v->arch.hvm.viridian);
+    v->arch.hvm.viridian = xzalloc(struct viridian_vcpu);
+    if ( !v->arch.hvm.viridian )
+        return -ENOMEM;
+
     return 0;
 }
 
 int viridian_domain_init(struct domain *d)
 {
+    ASSERT(!d->arch.hvm.viridian);
+    d->arch.hvm.viridian = xzalloc(struct viridian_domain);
+    if ( !d->arch.hvm.viridian )
+        return -ENOMEM;
+
     return 0;
 }
 
 void viridian_vcpu_deinit(struct vcpu *v)
 {
-    viridian_synic_wrmsr(v, HV_X64_MSR_VP_ASSIST_PAGE, 0);
+    if ( !v->arch.hvm.viridian )
+        return;
+
+    if ( is_viridian_vcpu(v) )
+        viridian_synic_wrmsr(v, HV_X64_MSR_VP_ASSIST_PAGE, 0);
+
+    XFREE(v->arch.hvm.viridian);
 }
 
 void viridian_domain_deinit(struct domain *d)
@@ -438,6 +454,11 @@ void viridian_domain_deinit(struct domain *d)
 
     for_each_vcpu ( d, v )
         viridian_vcpu_deinit(v);
+
+    if ( !d->arch.hvm.viridian )
+        return;
+
+    XFREE(d->arch.hvm.viridian);
 }
 
 /*
@@ -591,7 +612,7 @@ void viridian_dump_guest_page(const struct vcpu *v, const char *name,
            v, name, (unsigned long)vp->msr.fields.pfn);
 }
 
-void viridian_map_guest_page(struct vcpu *v, struct viridian_page *vp)
+void viridian_map_guest_page(const struct vcpu *v, struct viridian_page *vp)
 {
     struct domain *d = v->domain;
     unsigned long gmfn = vp->msr.fields.pfn;
@@ -645,8 +666,8 @@ static int viridian_save_domain_ctxt(struct vcpu *v,
 {
     const struct domain *d = v->domain;
     struct hvm_viridian_domain_context ctxt = {
-        .hypercall_gpa = d->arch.hvm.viridian.hypercall_gpa.raw,
-        .guest_os_id = d->arch.hvm.viridian.guest_os_id.raw,
+        .hypercall_gpa = d->arch.hvm.viridian->hypercall_gpa.raw,
+        .guest_os_id = d->arch.hvm.viridian->guest_os_id.raw,
     };
 
     if ( !is_viridian_domain(d) )
@@ -665,8 +686,8 @@ static int viridian_load_domain_ctxt(struct domain *d,
     if ( hvm_load_entry_zeroextend(VIRIDIAN_DOMAIN, h, &ctxt) != 0 )
         return -EINVAL;
 
-    d->arch.hvm.viridian.hypercall_gpa.raw = ctxt.hypercall_gpa;
-    d->arch.hvm.viridian.guest_os_id.raw = ctxt.guest_os_id;
+    d->arch.hvm.viridian->hypercall_gpa.raw = ctxt.hypercall_gpa;
+    d->arch.hvm.viridian->guest_os_id.raw = ctxt.guest_os_id;
 
     viridian_time_load_domain_ctxt(d, &ctxt);
 
@@ -680,7 +701,7 @@ static int viridian_save_vcpu_ctxt(struct vcpu *v, hvm_domain_context_t *h)
 {
     struct hvm_viridian_vcpu_context ctxt = {};
 
-    if ( !is_viridian_domain(v->domain) )
+    if ( !is_viridian_vcpu(v) )
         return 0;
 
     viridian_synic_save_vcpu_ctxt(v, &ctxt);
diff --git a/xen/include/asm-x86/hvm/domain.h b/xen/include/asm-x86/hvm/domain.h
index 3e7331817f..6c7c4f5aa6 100644
--- a/xen/include/asm-x86/hvm/domain.h
+++ b/xen/include/asm-x86/hvm/domain.h
@@ -154,7 +154,7 @@ struct hvm_domain {
     /* hypervisor intercepted msix table */
     struct list_head msixtbl_list;
 
-    struct viridian_domain viridian;
+    struct viridian_domain *viridian;
 
     bool_t hap_enabled;
     bool_t mem_sharing_enabled;
diff --git a/xen/include/asm-x86/hvm/hvm.h b/xen/include/asm-x86/hvm/hvm.h
index 53ffebb2c5..37c3567a57 100644
--- a/xen/include/asm-x86/hvm/hvm.h
+++ b/xen/include/asm-x86/hvm/hvm.h
@@ -463,6 +463,9 @@ static inline bool hvm_get_guest_bndcfgs(struct vcpu *v, u64 *val)
 #define is_viridian_domain(d) \
     (is_hvm_domain(d) && (viridian_feature_mask(d) & HVMPV_base_freq))
 
+#define is_viridian_vcpu(v) \
+    is_viridian_domain((v)->domain)
+
 #define has_viridian_time_ref_count(d) \
     (is_viridian_domain(d) && (viridian_feature_mask(d) & HVMPV_time_ref_count))
 
@@ -762,6 +765,7 @@ static inline bool hvm_has_set_descriptor_access_exiting(void)
 }
 
 #define is_viridian_domain(d) ((void)(d), false)
+#define is_viridian_vcpu(v) ((void)(v), false)
 #define has_viridian_time_ref_count(d) ((void)(d), false)
 #define hvm_long_mode_active(v) ((void)(v), false)
 #define hvm_get_guest_time(v) ((void)(v), 0)
diff --git a/xen/include/asm-x86/hvm/vcpu.h b/xen/include/asm-x86/hvm/vcpu.h
index 6c84d5a5a6..d1589f3a96 100644
--- a/xen/include/asm-x86/hvm/vcpu.h
+++ b/xen/include/asm-x86/hvm/vcpu.h
@@ -205,7 +205,7 @@ struct hvm_vcpu {
     /* Pending hw/sw interrupt (.vector = -1 means nothing pending). */
     struct x86_event inject_event;
 
-    struct viridian_vcpu viridian;
+    struct viridian_vcpu *viridian;
 };
 
 #endif /* __ASM_X86_HVM_VCPU_H__ */
diff --git a/xen/include/asm-x86/hvm/viridian.h b/xen/include/asm-x86/hvm/viridian.h
index f072838955..c562424332 100644
--- a/xen/include/asm-x86/hvm/viridian.h
+++ b/xen/include/asm-x86/hvm/viridian.h
@@ -77,8 +77,8 @@ int guest_rdmsr_viridian(const struct vcpu *v, uint32_t idx, uint64_t *val);
 
 int viridian_hypercall(struct cpu_user_regs *regs);
 
-void viridian_time_ref_count_freeze(struct domain *d);
-void viridian_time_ref_count_thaw(struct domain *d);
+void viridian_time_ref_count_freeze(const struct domain *d);
+void viridian_time_ref_count_thaw(const struct domain *d);
 
 int viridian_vcpu_init(struct vcpu *v);
 int viridian_domain_init(struct domain *d);
@@ -86,9 +86,9 @@ int viridian_domain_init(struct domain *d);
 void viridian_vcpu_deinit(struct vcpu *v);
 void viridian_domain_deinit(struct domain *d);
 
-void viridian_apic_assist_set(struct vcpu *v);
-bool viridian_apic_assist_completed(struct vcpu *v);
-void viridian_apic_assist_clear(struct vcpu *v);
+void viridian_apic_assist_set(const struct vcpu *v);
+bool viridian_apic_assist_completed(const struct vcpu *v);
+void viridian_apic_assist_clear(const struct vcpu *v);
 
 #endif /* __ASM_X86_HVM_VIRIDIAN_H__ */
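The shape of the conversion just shown is worth spelling out: an embedded
sub-structure becomes a pointer, allocated zeroed at init (preserving the old
all-zeroes starting state) and freed with XFREE(), which also NULLs the
pointer so a repeated deinit stays harmless. A condensed standalone sketch of
the pattern, using plain calloc()/free() and invented names in place of Xen's
xzalloc()/XFREE() and the real viridian structures:

    #include <stdlib.h>

    struct feature_state {
        unsigned long data[512];    /* free to grow: no longer counts against
                                       the containing structure's size budget */
    };

    struct container {
        struct feature_state *feature;  /* was: struct feature_state feature; */
    };

    static int feature_init(struct container *c)
    {
        /* Zeroed allocation keeps the old inline-and-zeroed semantics. */
        c->feature = calloc(1, sizeof(*c->feature));
        return c->feature ? 0 : -1;     /* the Xen code returns -ENOMEM */
    }

    static void feature_deinit(struct container *c)
    {
        /* Safe if init failed or never ran; clearing the pointer (as Xen's
           XFREE() does) makes a second deinit a no-op as well. */
        free(c->feature);
        c->feature = NULL;
    }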
From patchwork Mon Mar 18 11:20:51 2019
X-Patchwork-Submitter: Paul Durrant
X-Patchwork-Id: 10857401
From: Paul Durrant
Date: Mon, 18 Mar 2019 11:20:51 +0000
Message-ID: <20190318112059.21910-4-paul.durrant@citrix.com>
In-Reply-To: <20190318112059.21910-1-paul.durrant@citrix.com>
Subject: [Xen-devel] [PATCH v8 03/11] viridian: use stack variables for viridian_vcpu and viridian_domain...
Cc: Andrew Cooper, Paul Durrant, Wei Liu, Jan Beulich, Roger Pau Monné

...where there is more than one dereference inside a function. This
shortens the code and makes it more readable. No functional change.

Signed-off-by: Paul Durrant
Reviewed-by: Jan Beulich
---
Cc: Andrew Cooper
Cc: Wei Liu
Cc: "Roger Pau Monné"

v4:
 - New in v4
---
 xen/arch/x86/hvm/viridian/synic.c    | 49 ++++++++++++++++------------
 xen/arch/x86/hvm/viridian/time.c     | 27 ++++++++-------
 xen/arch/x86/hvm/viridian/viridian.c | 47 +++++++++++++-------------
 3 files changed, 69 insertions(+), 54 deletions(-)

diff --git a/xen/arch/x86/hvm/viridian/synic.c b/xen/arch/x86/hvm/viridian/synic.c
index 28eda7798c..f3d9f7ae74 100644
--- a/xen/arch/x86/hvm/viridian/synic.c
+++ b/xen/arch/x86/hvm/viridian/synic.c
@@ -30,7 +30,8 @@ typedef union _HV_VP_ASSIST_PAGE
 
 void viridian_apic_assist_set(const struct vcpu *v)
 {
-    HV_VP_ASSIST_PAGE *ptr = v->arch.hvm.viridian->vp_assist.ptr;
+    struct viridian_vcpu *vv = v->arch.hvm.viridian;
+    HV_VP_ASSIST_PAGE *ptr = vv->vp_assist.ptr;
 
     if ( !ptr )
         return;
@@ -40,25 +41,25 @@ void viridian_apic_assist_set(const struct vcpu *v)
      * wrong and the VM will most likely hang so force a crash now
      * to make the problem clear.
      */
-    if ( v->arch.hvm.viridian->apic_assist_pending )
+    if ( vv->apic_assist_pending )
         domain_crash(v->domain);
 
-    v->arch.hvm.viridian->apic_assist_pending = true;
+    vv->apic_assist_pending = true;
     ptr->ApicAssist.no_eoi = 1;
 }
 
 bool viridian_apic_assist_completed(const struct vcpu *v)
 {
-    HV_VP_ASSIST_PAGE *ptr = v->arch.hvm.viridian->vp_assist.ptr;
+    struct viridian_vcpu *vv = v->arch.hvm.viridian;
+    HV_VP_ASSIST_PAGE *ptr = vv->vp_assist.ptr;
 
     if ( !ptr )
         return false;
 
-    if ( v->arch.hvm.viridian->apic_assist_pending &&
-         !ptr->ApicAssist.no_eoi )
+    if ( vv->apic_assist_pending && !ptr->ApicAssist.no_eoi )
     {
         /* An EOI has been avoided */
-        v->arch.hvm.viridian->apic_assist_pending = false;
+        vv->apic_assist_pending = false;
         return true;
     }
 
@@ -67,17 +68,20 @@ bool viridian_apic_assist_completed(const struct vcpu *v)
 
 void viridian_apic_assist_clear(const struct vcpu *v)
 {
-    HV_VP_ASSIST_PAGE *ptr = v->arch.hvm.viridian->vp_assist.ptr;
+    struct viridian_vcpu *vv = v->arch.hvm.viridian;
+    HV_VP_ASSIST_PAGE *ptr = vv->vp_assist.ptr;
 
     if ( !ptr )
         return;
 
     ptr->ApicAssist.no_eoi = 0;
-    v->arch.hvm.viridian->apic_assist_pending = false;
+    vv->apic_assist_pending = false;
 }
 
 int viridian_synic_wrmsr(struct vcpu *v, uint32_t idx, uint64_t val)
 {
+    struct viridian_vcpu *vv = v->arch.hvm.viridian;
+
     switch ( idx )
     {
     case HV_X64_MSR_EOI:
@@ -95,12 +99,11 @@ int viridian_synic_wrmsr(struct vcpu *v, uint32_t idx, uint64_t val)
 
     case HV_X64_MSR_VP_ASSIST_PAGE:
         /* release any previous mapping */
-        viridian_unmap_guest_page(&v->arch.hvm.viridian->vp_assist);
-        v->arch.hvm.viridian->vp_assist.msr.raw = val;
-        viridian_dump_guest_page(v, "VP_ASSIST",
-                                 &v->arch.hvm.viridian->vp_assist);
-        if ( v->arch.hvm.viridian->vp_assist.msr.fields.enabled )
-            viridian_map_guest_page(v, &v->arch.hvm.viridian->vp_assist);
+        viridian_unmap_guest_page(&vv->vp_assist);
+        vv->vp_assist.msr.raw = val;
+        viridian_dump_guest_page(v, "VP_ASSIST", &vv->vp_assist);
+        if ( vv->vp_assist.msr.fields.enabled )
+            viridian_map_guest_page(v, &vv->vp_assist);
         break;
 
     default:
@@ -146,18 +149,22 @@ int viridian_synic_rdmsr(const struct vcpu *v, uint32_t idx, uint64_t *val)
 void viridian_synic_save_vcpu_ctxt(const struct vcpu *v,
                                    struct hvm_viridian_vcpu_context *ctxt)
 {
-    ctxt->apic_assist_pending = v->arch.hvm.viridian->apic_assist_pending;
-    ctxt->vp_assist_msr = v->arch.hvm.viridian->vp_assist.msr.raw;
+    const struct viridian_vcpu *vv = v->arch.hvm.viridian;
+
+    ctxt->apic_assist_pending = vv->apic_assist_pending;
+    ctxt->vp_assist_msr = vv->vp_assist.msr.raw;
 }
 
 void viridian_synic_load_vcpu_ctxt(
     struct vcpu *v, const struct hvm_viridian_vcpu_context *ctxt)
 {
-    v->arch.hvm.viridian->vp_assist.msr.raw = ctxt->vp_assist_msr;
-    if ( v->arch.hvm.viridian->vp_assist.msr.fields.enabled )
-        viridian_map_guest_page(v, &v->arch.hvm.viridian->vp_assist);
+    struct viridian_vcpu *vv = v->arch.hvm.viridian;
+
+    vv->vp_assist.msr.raw = ctxt->vp_assist_msr;
+    if ( vv->vp_assist.msr.fields.enabled )
+        viridian_map_guest_page(v, &vv->vp_assist);
 
-    v->arch.hvm.viridian->apic_assist_pending = ctxt->apic_assist_pending;
+    vv->apic_assist_pending = ctxt->apic_assist_pending;
 }
 
 /*
diff --git a/xen/arch/x86/hvm/viridian/time.c b/xen/arch/x86/hvm/viridian/time.c
index a7e94aadf0..76f9612001 100644
--- a/xen/arch/x86/hvm/viridian/time.c
+++ b/xen/arch/x86/hvm/viridian/time.c
@@ -141,6 +141,7 @@ void viridian_time_ref_count_thaw(const struct domain *d)
 int viridian_time_wrmsr(struct vcpu *v, uint32_t idx, uint64_t val)
 {
     struct domain *d = v->domain;
+    struct viridian_domain *vd = d->arch.hvm.viridian;
 
     switch ( idx )
     {
@@ -148,9 +149,9 @@ int viridian_time_wrmsr(struct vcpu *v, uint32_t idx, uint64_t val)
         if ( !(viridian_feature_mask(d) & HVMPV_reference_tsc) )
             return X86EMUL_EXCEPTION;
 
-        d->arch.hvm.viridian->reference_tsc.raw = val;
+        vd->reference_tsc.raw = val;
         dump_reference_tsc(d);
-        if ( d->arch.hvm.viridian->reference_tsc.fields.enabled )
+        if ( vd->reference_tsc.fields.enabled )
             update_reference_tsc(d, true);
         break;
 
@@ -165,7 +166,8 @@ int viridian_time_wrmsr(struct vcpu *v, uint32_t idx, uint64_t val)
 
 int viridian_time_rdmsr(const struct vcpu *v, uint32_t idx, uint64_t *val)
 {
-    struct domain *d = v->domain;
+    const struct domain *d = v->domain;
+    struct viridian_domain *vd = d->arch.hvm.viridian;
 
     switch ( idx )
     {
@@ -187,13 +189,12 @@ int viridian_time_rdmsr(const struct vcpu *v, uint32_t idx, uint64_t *val)
         if ( !(viridian_feature_mask(d) & HVMPV_reference_tsc) )
             return X86EMUL_EXCEPTION;
 
-        *val = d->arch.hvm.viridian->reference_tsc.raw;
+        *val = vd->reference_tsc.raw;
         break;
 
     case HV_X64_MSR_TIME_REF_COUNT:
     {
-        struct viridian_time_ref_count *trc =
-            &d->arch.hvm.viridian->time_ref_count;
+        struct viridian_time_ref_count *trc = &vd->time_ref_count;
 
         if ( !(viridian_feature_mask(d) & HVMPV_time_ref_count) )
             return X86EMUL_EXCEPTION;
@@ -217,17 +218,21 @@ int viridian_time_rdmsr(const struct vcpu *v, uint32_t idx, uint64_t *val)
 void viridian_time_save_domain_ctxt(
     const struct domain *d, struct hvm_viridian_domain_context *ctxt)
 {
-    ctxt->time_ref_count = d->arch.hvm.viridian->time_ref_count.val;
-    ctxt->reference_tsc = d->arch.hvm.viridian->reference_tsc.raw;
+    const struct viridian_domain *vd = d->arch.hvm.viridian;
+
+    ctxt->time_ref_count = vd->time_ref_count.val;
+    ctxt->reference_tsc = vd->reference_tsc.raw;
 }
 
 void viridian_time_load_domain_ctxt(
     struct domain *d, const struct hvm_viridian_domain_context *ctxt)
 {
-    d->arch.hvm.viridian->time_ref_count.val = ctxt->time_ref_count;
-    d->arch.hvm.viridian->reference_tsc.raw = ctxt->reference_tsc;
+    struct viridian_domain *vd = d->arch.hvm.viridian;
+
+    vd->time_ref_count.val = ctxt->time_ref_count;
+    vd->reference_tsc.raw = ctxt->reference_tsc;
 
-    if ( d->arch.hvm.viridian->reference_tsc.fields.enabled )
+    if ( vd->reference_tsc.fields.enabled )
         update_reference_tsc(d, false);
 }
 
diff --git a/xen/arch/x86/hvm/viridian/viridian.c b/xen/arch/x86/hvm/viridian/viridian.c
index 7839718ef4..710470fed7 100644
--- a/xen/arch/x86/hvm/viridian/viridian.c
+++ b/xen/arch/x86/hvm/viridian/viridian.c
@@ -122,6 +122,7 @@ void cpuid_viridian_leaves(const struct vcpu *v, uint32_t leaf,
                            uint32_t subleaf, struct cpuid_leaf *res)
 {
     const struct domain *d = v->domain;
+    const struct viridian_domain *vd = d->arch.hvm.viridian;
 
     ASSERT(is_viridian_domain(d));
     ASSERT(leaf >= 0x40000000 && leaf < 0x40000100);
@@ -146,7 +147,7 @@ void cpuid_viridian_leaves(const struct vcpu *v, uint32_t leaf,
          * Hypervisor information, but only if the guest has set its
          * own version number.
          */
-        if ( d->arch.hvm.viridian->guest_os_id.raw == 0 )
+        if ( vd->guest_os_id.raw == 0 )
             break;
         res->a = viridian_build;
         res->b = ((uint32_t)viridian_major << 16) | viridian_minor;
@@ -191,8 +192,7 @@ void cpuid_viridian_leaves(const struct vcpu *v, uint32_t leaf,
 
     case 4:
         /* Recommended hypercall usage. */
-        if ( (d->arch.hvm.viridian->guest_os_id.raw == 0) ||
-             (d->arch.hvm.viridian->guest_os_id.fields.os < 4) )
+        if ( vd->guest_os_id.raw == 0 || vd->guest_os_id.fields.os < 4 )
             break;
         res->a = CPUID4A_RELAX_TIMER_INT;
         if ( viridian_feature_mask(d) & HVMPV_hcall_remote_tlb_flush )
@@ -281,21 +281,23 @@ static void enable_hypercall_page(struct domain *d)
 
 int guest_wrmsr_viridian(struct vcpu *v, uint32_t idx, uint64_t val)
 {
+    struct viridian_vcpu *vv = v->arch.hvm.viridian;
     struct domain *d = v->domain;
+    struct viridian_domain *vd = d->arch.hvm.viridian;
 
     ASSERT(is_viridian_domain(d));
 
     switch ( idx )
     {
     case HV_X64_MSR_GUEST_OS_ID:
-        d->arch.hvm.viridian->guest_os_id.raw = val;
+        vd->guest_os_id.raw = val;
         dump_guest_os_id(d);
         break;
 
     case HV_X64_MSR_HYPERCALL:
-        d->arch.hvm.viridian->hypercall_gpa.raw = val;
+        vd->hypercall_gpa.raw = val;
         dump_hypercall(d);
-        if ( d->arch.hvm.viridian->hypercall_gpa.fields.enabled )
+        if ( vd->hypercall_gpa.fields.enabled )
             enable_hypercall_page(d);
         break;
 
@@ -317,10 +319,10 @@ int guest_wrmsr_viridian(struct vcpu *v, uint32_t idx, uint64_t val)
     case HV_X64_MSR_CRASH_P3:
     case HV_X64_MSR_CRASH_P4:
         BUILD_BUG_ON(HV_X64_MSR_CRASH_P4 - HV_X64_MSR_CRASH_P0 >=
-                     ARRAY_SIZE(v->arch.hvm.viridian->crash_param));
+                     ARRAY_SIZE(vv->crash_param));
 
         idx -= HV_X64_MSR_CRASH_P0;
-        v->arch.hvm.viridian->crash_param[idx] = val;
+        vv->crash_param[idx] = val;
         break;
 
     case HV_X64_MSR_CRASH_CTL:
@@ -337,11 +339,8 @@ int guest_wrmsr_viridian(struct vcpu *v, uint32_t idx, uint64_t val)
         spin_unlock(&d->shutdown_lock);
 
         gprintk(XENLOG_WARNING, "VIRIDIAN CRASH: %lx %lx %lx %lx %lx\n",
-                v->arch.hvm.viridian->crash_param[0],
-                v->arch.hvm.viridian->crash_param[1],
-                v->arch.hvm.viridian->crash_param[2],
-                v->arch.hvm.viridian->crash_param[3],
-                v->arch.hvm.viridian->crash_param[4]);
+                vv->crash_param[0], vv->crash_param[1], vv->crash_param[2],
+                vv->crash_param[3], vv->crash_param[4]);
 
         break;
     }
@@ -357,18 +356,20 @@ int guest_wrmsr_viridian(struct vcpu *v, uint32_t idx, uint64_t val)
 int guest_rdmsr_viridian(const struct vcpu *v, uint32_t idx, uint64_t *val)
 {
-    struct domain *d = v->domain;
+    const struct viridian_vcpu *vv = v->arch.hvm.viridian;
+    const struct domain *d = v->domain;
+    const struct viridian_domain *vd = d->arch.hvm.viridian;
 
     ASSERT(is_viridian_domain(d));
 
     switch ( idx )
     {
     case HV_X64_MSR_GUEST_OS_ID:
-        *val = d->arch.hvm.viridian->guest_os_id.raw;
+        *val = vd->guest_os_id.raw;
         break;
 
     case HV_X64_MSR_HYPERCALL:
-        *val = d->arch.hvm.viridian->hypercall_gpa.raw;
+        *val = vd->hypercall_gpa.raw;
         break;
 
     case HV_X64_MSR_VP_INDEX:
@@ -393,10 +394,10 @@ int guest_rdmsr_viridian(const struct vcpu *v, uint32_t idx, uint64_t *val)
     case HV_X64_MSR_CRASH_P3:
     case HV_X64_MSR_CRASH_P4:
         BUILD_BUG_ON(HV_X64_MSR_CRASH_P4 - HV_X64_MSR_CRASH_P0 >=
-                     ARRAY_SIZE(v->arch.hvm.viridian->crash_param));
+                     ARRAY_SIZE(vv->crash_param));
 
         idx -= HV_X64_MSR_CRASH_P0;
-        *val = v->arch.hvm.viridian->crash_param[idx];
+        *val = vv->crash_param[idx];
         break;
 
     case HV_X64_MSR_CRASH_CTL:
@@ -665,9 +666,10 @@ static int viridian_save_domain_ctxt(struct vcpu *v,
                                      hvm_domain_context_t *h)
 {
     const struct domain *d = v->domain;
+    const struct viridian_domain *vd = d->arch.hvm.viridian;
     struct hvm_viridian_domain_context ctxt = {
-        .hypercall_gpa = d->arch.hvm.viridian->hypercall_gpa.raw,
-        .guest_os_id = d->arch.hvm.viridian->guest_os_id.raw,
+        .hypercall_gpa = vd->hypercall_gpa.raw,
+        .guest_os_id = vd->guest_os_id.raw,
     };
 
     if ( !is_viridian_domain(d) )
@@ -681,13 +683,14 @@ static int viridian_save_domain_ctxt(struct vcpu *v,
 static int viridian_load_domain_ctxt(struct domain *d,
                                      hvm_domain_context_t *h)
 {
+    struct viridian_domain *vd = d->arch.hvm.viridian;
     struct hvm_viridian_domain_context ctxt;
 
     if ( hvm_load_entry_zeroextend(VIRIDIAN_DOMAIN, h, &ctxt) != 0 )
         return -EINVAL;
 
-    d->arch.hvm.viridian->hypercall_gpa.raw = ctxt.hypercall_gpa;
-    d->arch.hvm.viridian->guest_os_id.raw = ctxt.guest_os_id;
+    vd->hypercall_gpa.raw = ctxt.hypercall_gpa;
+    vd->guest_os_id.raw = ctxt.guest_os_id;
 
     viridian_time_load_domain_ctxt(d, &ctxt);
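The transformation in the patch above is mechanical: wherever a function
dereferences v->arch.hvm.viridian (or d->arch.hvm.viridian) more than once,
the pointer is loaded into a local ('vv'/'vd') exactly once. A standalone
before/after sketch with invented types — the compiler typically emits the
same code either way, so the gain is purely in readability:

    #include <stdbool.h>

    struct state { bool pending; unsigned long raw; };
    struct arch  { struct state *st; };
    struct vcpu_like { struct arch arch; };

    /* Before: every access repeats the full dereference chain. */
    static void handle_before(struct vcpu_like *v)
    {
        v->arch.st->pending = true;
        v->arch.st->raw = 0;
    }

    /* After: one load into a stack variable, shorter at each use. */
    static void handle_after(struct vcpu_like *v)
    {
        struct state *st = v->arch.st;

        st->pending = true;
        st->raw = 0;
    }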
From patchwork Mon Mar 18 11:20:52 2019
X-Patchwork-Submitter: Paul Durrant
X-Patchwork-Id: 10857407
From: Paul Durrant
Date: Mon, 18 Mar 2019 11:20:52 +0000
Message-ID: <20190318112059.21910-5-paul.durrant@citrix.com>
In-Reply-To: <20190318112059.21910-1-paul.durrant@citrix.com>
Subject: [Xen-devel] [PATCH v8 04/11] viridian: make 'fields' struct anonymous...
Cc: Andrew Cooper, Paul Durrant, Wei Liu, Jan Beulich, Roger Pau Monné

...inside the viridian_page_msr and viridian_guest_os_id_msr unions. There's
no need to name it and the code is shortened by not doing so. No functional
change.
Signed-off-by: Paul Durrant
Reviewed-by: Jan Beulich
---
Cc: Andrew Cooper
Cc: Wei Liu
Cc: "Roger Pau Monné"

v4:
 - New in v4
---
 xen/arch/x86/hvm/viridian/synic.c    |  4 ++--
 xen/arch/x86/hvm/viridian/time.c     | 10 +++++-----
 xen/arch/x86/hvm/viridian/viridian.c | 20 +++++++++-----------
 xen/include/asm-x86/hvm/viridian.h   |  4 ++--
 4 files changed, 18 insertions(+), 20 deletions(-)

diff --git a/xen/arch/x86/hvm/viridian/synic.c b/xen/arch/x86/hvm/viridian/synic.c
index f3d9f7ae74..05d971b365 100644
--- a/xen/arch/x86/hvm/viridian/synic.c
+++ b/xen/arch/x86/hvm/viridian/synic.c
@@ -102,7 +102,7 @@ int viridian_synic_wrmsr(struct vcpu *v, uint32_t idx, uint64_t val)
         viridian_unmap_guest_page(&vv->vp_assist);
         vv->vp_assist.msr.raw = val;
         viridian_dump_guest_page(v, "VP_ASSIST", &vv->vp_assist);
-        if ( vv->vp_assist.msr.fields.enabled )
+        if ( vv->vp_assist.msr.enabled )
             viridian_map_guest_page(v, &vv->vp_assist);
         break;
 
@@ -161,7 +161,7 @@ void viridian_synic_load_vcpu_ctxt(
     struct viridian_vcpu *vv = v->arch.hvm.viridian;
 
     vv->vp_assist.msr.raw = ctxt->vp_assist_msr;
-    if ( vv->vp_assist.msr.fields.enabled )
+    if ( vv->vp_assist.msr.enabled )
         viridian_map_guest_page(v, &vv->vp_assist);
 
     vv->apic_assist_pending = ctxt->apic_assist_pending;
diff --git a/xen/arch/x86/hvm/viridian/time.c b/xen/arch/x86/hvm/viridian/time.c
index 76f9612001..909a3fb9e3 100644
--- a/xen/arch/x86/hvm/viridian/time.c
+++ b/xen/arch/x86/hvm/viridian/time.c
@@ -29,16 +29,16 @@ static void dump_reference_tsc(const struct domain *d)
 {
     const union viridian_page_msr *rt = &d->arch.hvm.viridian->reference_tsc;
 
-    if ( !rt->fields.enabled )
+    if ( !rt->enabled )
         return;
 
     printk(XENLOG_G_INFO "d%d: VIRIDIAN REFERENCE_TSC: pfn: %lx\n",
-           d->domain_id, (unsigned long)rt->fields.pfn);
+           d->domain_id, (unsigned long)rt->pfn);
 }
 
 static void update_reference_tsc(struct domain *d, bool initialize)
 {
-    unsigned long gmfn = d->arch.hvm.viridian->reference_tsc.fields.pfn;
+    unsigned long gmfn = d->arch.hvm.viridian->reference_tsc.pfn;
     struct page_info *page = get_page_from_gfn(d, gmfn, NULL, P2M_ALLOC);
     HV_REFERENCE_TSC_PAGE *p;
 
@@ -151,7 +151,7 @@ int viridian_time_wrmsr(struct vcpu *v, uint32_t idx, uint64_t val)
 
         vd->reference_tsc.raw = val;
         dump_reference_tsc(d);
-        if ( vd->reference_tsc.fields.enabled )
+        if ( vd->reference_tsc.enabled )
             update_reference_tsc(d, true);
         break;
 
@@ -232,7 +232,7 @@ void viridian_time_load_domain_ctxt(
     vd->time_ref_count.val = ctxt->time_ref_count;
     vd->reference_tsc.raw = ctxt->reference_tsc;
 
-    if ( vd->reference_tsc.fields.enabled )
+    if ( vd->reference_tsc.enabled )
         update_reference_tsc(d, false);
 }
 
diff --git a/xen/arch/x86/hvm/viridian/viridian.c b/xen/arch/x86/hvm/viridian/viridian.c
index 710470fed7..1a20d68aaf 100644
--- a/xen/arch/x86/hvm/viridian/viridian.c
+++ b/xen/arch/x86/hvm/viridian/viridian.c
@@ -192,7 +192,7 @@ void cpuid_viridian_leaves(const struct vcpu *v, uint32_t leaf,
 
     case 4:
         /* Recommended hypercall usage.
          */
-        if ( vd->guest_os_id.raw == 0 || vd->guest_os_id.fields.os < 4 )
+        if ( vd->guest_os_id.raw == 0 || vd->guest_os_id.os < 4 )
             break;
         res->a = CPUID4A_RELAX_TIMER_INT;
         if ( viridian_feature_mask(d) & HVMPV_hcall_remote_tlb_flush )
@@ -228,10 +228,8 @@ static void dump_guest_os_id(const struct domain *d)
 
     printk(XENLOG_G_INFO
            "d%d: VIRIDIAN GUEST_OS_ID: vendor: %x os: %x major: %x minor: %x sp: %x build: %x\n",
-           d->domain_id,
-           goi->fields.vendor, goi->fields.os,
-           goi->fields.major, goi->fields.minor,
-           goi->fields.service_pack, goi->fields.build_number);
+           d->domain_id, goi->vendor, goi->os, goi->major, goi->minor,
+           goi->service_pack, goi->build_number);
 }
 
 static void dump_hypercall(const struct domain *d)
@@ -242,12 +240,12 @@ static void dump_hypercall(const struct domain *d)
 
     printk(XENLOG_G_INFO "d%d: VIRIDIAN HYPERCALL: enabled: %x pfn: %lx\n",
            d->domain_id,
-           hg->fields.enabled, (unsigned long)hg->fields.pfn);
+           hg->enabled, (unsigned long)hg->pfn);
 }
 
 static void enable_hypercall_page(struct domain *d)
 {
-    unsigned long gmfn = d->arch.hvm.viridian->hypercall_gpa.fields.pfn;
+    unsigned long gmfn = d->arch.hvm.viridian->hypercall_gpa.pfn;
     struct page_info *page = get_page_from_gfn(d, gmfn, NULL, P2M_ALLOC);
     uint8_t *p;
 
@@ -297,7 +295,7 @@ int guest_wrmsr_viridian(struct vcpu *v, uint32_t idx, uint64_t val)
     case HV_X64_MSR_HYPERCALL:
         vd->hypercall_gpa.raw = val;
         dump_hypercall(d);
-        if ( vd->hypercall_gpa.fields.enabled )
+        if ( vd->hypercall_gpa.enabled )
             enable_hypercall_page(d);
         break;
 
@@ -606,17 +604,17 @@ out:
 void viridian_dump_guest_page(const struct vcpu *v, const char *name,
                               const struct viridian_page *vp)
 {
-    if ( !vp->msr.fields.enabled )
+    if ( !vp->msr.enabled )
         return;
 
     printk(XENLOG_G_INFO "%pv: VIRIDIAN %s: pfn: %lx\n",
-           v, name, (unsigned long)vp->msr.fields.pfn);
+           v, name, (unsigned long)vp->msr.pfn);
 }
 
 void viridian_map_guest_page(const struct vcpu *v, struct viridian_page *vp)
 {
     struct domain *d = v->domain;
-    unsigned long gmfn = vp->msr.fields.pfn;
+    unsigned long gmfn = vp->msr.pfn;
     struct page_info *page;
 
     if ( vp->ptr )
diff --git a/xen/include/asm-x86/hvm/viridian.h b/xen/include/asm-x86/hvm/viridian.h
index c562424332..abbbb36092 100644
--- a/xen/include/asm-x86/hvm/viridian.h
+++ b/xen/include/asm-x86/hvm/viridian.h
@@ -17,7 +17,7 @@ union viridian_page_msr
         uint64_t enabled:1;
         uint64_t reserved_preserved:11;
         uint64_t pfn:48;
-    } fields;
+    };
 };
 
 struct viridian_page
@@ -44,7 +44,7 @@ union viridian_guest_os_id_msr
         uint64_t major:8;
         uint64_t os:8;
         uint64_t vendor:16;
-    } fields;
+    };
 };
 
 struct viridian_time_ref_count
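The mechanics of the change above: when the struct member of a union is
anonymous, its bit-fields are addressed directly through the union, so
'msr.fields.pfn' shortens to 'msr.pfn' with no change in layout. Anonymous
structs/unions are standard since C11 (and an older GNU extension, which is
what Xen relies on). A standalone sketch using an invented union name but the
same bit widths as the patch:

    #include <stdint.h>

    union page_msr {
        uint64_t raw;
        struct {
            uint64_t enabled:1;
            uint64_t reserved_preserved:11;
            uint64_t pfn:48;
        };              /* anonymous: was '} fields;' */
    };

    int main(void)
    {
        union page_msr msr = { .raw = 0 };

        msr.enabled = 1;        /* was: msr.fields.enabled = 1; */
        msr.pfn = 0x1234;

        return msr.raw ? 0 : 1;
    }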
From: Paul Durrant
To: xen-devel@lists.xenproject.org
Date: Mon, 18 Mar 2019 11:20:53 +0000
Message-ID: <20190318112059.21910-6-paul.durrant@citrix.com>
In-Reply-To: <20190318112059.21910-1-paul.durrant@citrix.com>
References: <20190318112059.21910-1-paul.durrant@citrix.com>
Subject: [Xen-devel] [PATCH v8 05/11] viridian: extend init/deinit hooks into synic and time modules
Cc: Andrew Cooper, Paul Durrant, Wei Liu, Jan Beulich, Roger Pau Monné
This patch simply adds domain and vcpu init/deinit hooks into the synic
and time modules and wires them into viridian_[domain|vcpu]_[init|deinit]().
Only one of the hooks is currently needed (to unmap the 'VP Assist' page)
but subsequent patches will make use of the others.

NOTE: To perform the unmap of the VP Assist page,
      viridian_unmap_guest_page() is now directly called in the new
      viridian_synic_vcpu_deinit() function (which is safe even if
      is_viridian_vcpu() evaluates to false). This replaces the slightly
      hacky mechanism of faking a zero write to the
      HV_X64_MSR_VP_ASSIST_PAGE MSR in viridian_vcpu_deinit().
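The safety claim in the NOTE comes down to the unmap helper being a no-op
when no page is currently mapped. Below is a minimal standalone sketch of
that pattern, not the Xen code itself: the types are stubbed, malloc/free
stand in for the real guest-frame mapping, and the real
viridian_unmap_guest_page() is assumed to behave analogously by bailing
out when vp->ptr is NULL.

/*
 * Sketch: an unmap that is safe to call unconditionally on the
 * deinit path, whether or not a page was ever mapped.
 */
#include <stdio.h>
#include <stdlib.h>

struct guest_page { void *ptr; };

static void unmap_guest_page(struct guest_page *p)
{
    if ( !p->ptr )      /* never mapped, or already unmapped: no-op */
        return;
    free(p->ptr);
    p->ptr = NULL;
}

int main(void)
{
    struct guest_page vp_assist = { .ptr = NULL };

    unmap_guest_page(&vp_assist); /* safe even though nothing is mapped */
    vp_assist.ptr = malloc(4096); /* stand-in for mapping the page */
    unmap_guest_page(&vp_assist); /* tears down the mapping */
    unmap_guest_page(&vp_assist); /* idempotent */
    printf("ptr=%p\n", vp_assist.ptr);
    return 0;
}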
Signed-off-by: Paul Durrant
Reviewed-by: Jan Beulich
Reviewed-by: Wei Liu
---
Cc: Andrew Cooper
Cc: "Roger Pau Monné"

v4:
 - Constify vcpu and domain pointers

v2:
 - Pay attention to synic and time init hook return values
---
 xen/arch/x86/hvm/viridian/private.h  | 12 +++++++++
 xen/arch/x86/hvm/viridian/synic.c    | 19 ++++++++++++++
 xen/arch/x86/hvm/viridian/time.c     | 18 ++++++++++++++
 xen/arch/x86/hvm/viridian/viridian.c | 37 ++++++++++++++++++++++++++--
 4 files changed, 84 insertions(+), 2 deletions(-)

diff --git a/xen/arch/x86/hvm/viridian/private.h b/xen/arch/x86/hvm/viridian/private.h
index 46174f48cd..8c029f62c6 100644
--- a/xen/arch/x86/hvm/viridian/private.h
+++ b/xen/arch/x86/hvm/viridian/private.h
@@ -74,6 +74,12 @@ int viridian_synic_wrmsr(struct vcpu *v, uint32_t idx, uint64_t val);
 int viridian_synic_rdmsr(const struct vcpu *v, uint32_t idx, uint64_t *val);

+int viridian_synic_vcpu_init(const struct vcpu *v);
+int viridian_synic_domain_init(const struct domain *d);
+
+void viridian_synic_vcpu_deinit(const struct vcpu *v);
+void viridian_synic_domain_deinit(const struct domain *d);
+
 void viridian_synic_save_vcpu_ctxt(const struct vcpu *v,
                                    struct hvm_viridian_vcpu_context *ctxt);
 void viridian_synic_load_vcpu_ctxt(
@@ -82,6 +88,12 @@ void viridian_synic_load_vcpu_ctxt(
 int viridian_time_wrmsr(struct vcpu *v, uint32_t idx, uint64_t val);
 int viridian_time_rdmsr(const struct vcpu *v, uint32_t idx, uint64_t *val);

+int viridian_time_vcpu_init(const struct vcpu *v);
+int viridian_time_domain_init(const struct domain *d);
+
+void viridian_time_vcpu_deinit(const struct vcpu *v);
+void viridian_time_domain_deinit(const struct domain *d);
+
 void viridian_time_save_domain_ctxt(
     const struct domain *d, struct hvm_viridian_domain_context *ctxt);
 void viridian_time_load_domain_ctxt(

diff --git a/xen/arch/x86/hvm/viridian/synic.c b/xen/arch/x86/hvm/viridian/synic.c
index 05d971b365..4b00dbe1b3 100644
--- a/xen/arch/x86/hvm/viridian/synic.c
+++ b/xen/arch/x86/hvm/viridian/synic.c
@@ -146,6 +146,25 @@ int viridian_synic_rdmsr(const struct vcpu *v, uint32_t idx, uint64_t *val)
     return X86EMUL_OKAY;
 }

+int viridian_synic_vcpu_init(const struct vcpu *v)
+{
+    return 0;
+}
+
+int viridian_synic_domain_init(const struct domain *d)
+{
+    return 0;
+}
+
+void viridian_synic_vcpu_deinit(const struct vcpu *v)
+{
+    viridian_unmap_guest_page(&v->arch.hvm.viridian->vp_assist);
+}
+
+void viridian_synic_domain_deinit(const struct domain *d)
+{
+}
+
 void viridian_synic_save_vcpu_ctxt(const struct vcpu *v,
                                    struct hvm_viridian_vcpu_context *ctxt)
 {

diff --git a/xen/arch/x86/hvm/viridian/time.c b/xen/arch/x86/hvm/viridian/time.c
index 909a3fb9e3..48aca7e0ab 100644
--- a/xen/arch/x86/hvm/viridian/time.c
+++ b/xen/arch/x86/hvm/viridian/time.c
@@ -215,6 +215,24 @@ int viridian_time_rdmsr(const struct vcpu *v, uint32_t idx, uint64_t *val)
     return X86EMUL_OKAY;
 }

+int viridian_time_vcpu_init(const struct vcpu *v)
+{
+    return 0;
+}
+
+int viridian_time_domain_init(const struct domain *d)
+{
+    return 0;
+}
+
+void viridian_time_vcpu_deinit(const struct vcpu *v)
+{
+}
+
+void viridian_time_domain_deinit(const struct domain *d)
+{
+}
+
 void viridian_time_save_domain_ctxt(
     const struct domain *d, struct hvm_viridian_domain_context *ctxt)
 {

diff --git a/xen/arch/x86/hvm/viridian/viridian.c b/xen/arch/x86/hvm/viridian/viridian.c
index 1a20d68aaf..f9a509d918 100644
--- a/xen/arch/x86/hvm/viridian/viridian.c
+++ b/xen/arch/x86/hvm/viridian/viridian.c
@@ -418,22 +418,52 @@ int guest_rdmsr_viridian(const struct vcpu *v, uint32_t idx, uint64_t *val)

 int viridian_vcpu_init(struct vcpu *v)
 {
+    int rc;
+
     ASSERT(!v->arch.hvm.viridian);
     v->arch.hvm.viridian = xzalloc(struct viridian_vcpu);
     if ( !v->arch.hvm.viridian )
         return -ENOMEM;

+    rc = viridian_synic_vcpu_init(v);
+    if ( rc )
+        goto fail;
+
+    rc = viridian_time_vcpu_init(v);
+    if ( rc )
+        goto fail;
+
     return 0;
+
+ fail:
+    viridian_vcpu_deinit(v);
+
+    return rc;
 }

 int viridian_domain_init(struct domain *d)
 {
+    int rc;
+
     ASSERT(!d->arch.hvm.viridian);
     d->arch.hvm.viridian = xzalloc(struct viridian_domain);
     if ( !d->arch.hvm.viridian )
         return -ENOMEM;

+    rc = viridian_synic_domain_init(d);
+    if ( rc )
+        goto fail;
+
+    rc = viridian_time_domain_init(d);
+    if ( rc )
+        goto fail;
+
     return 0;
+
+ fail:
+    viridian_domain_deinit(d);
+
+    return rc;
 }

 void viridian_vcpu_deinit(struct vcpu *v)
@@ -441,8 +471,8 @@ void viridian_vcpu_deinit(struct vcpu *v)
     if ( !v->arch.hvm.viridian )
         return;

-    if ( is_viridian_vcpu(v) )
-        viridian_synic_wrmsr(v, HV_X64_MSR_VP_ASSIST_PAGE, 0);
+    viridian_time_vcpu_deinit(v);
+    viridian_synic_vcpu_deinit(v);

     XFREE(v->arch.hvm.viridian);
 }
@@ -457,6 +487,9 @@ void viridian_domain_deinit(struct domain *d)
     if ( !d->arch.hvm.viridian )
         return;

+    viridian_time_domain_deinit(d);
+    viridian_synic_domain_deinit(d);
+
     XFREE(d->arch.hvm.viridian);
 }

From patchwork Mon Mar 18 11:20:54 2019
X-Patchwork-Submitter: Paul Durrant
X-Patchwork-Id: 10857399
From: Paul Durrant
To: xen-devel@lists.xenproject.org
Date: Mon, 18 Mar 2019 11:20:54 +0000
Message-ID: <20190318112059.21910-7-paul.durrant@citrix.com>
In-Reply-To: <20190318112059.21910-1-paul.durrant@citrix.com>
References: <20190318112059.21910-1-paul.durrant@citrix.com>
Subject: [Xen-devel] [PATCH v8 06/11] viridian: add missing context save helpers into synic and time modules
Cc: Andrew Cooper, Paul Durrant, Wei Liu, Jan Beulich, Roger Pau Monné

Currently the time module lacks vcpu context save helpers and the synic
module lacks domain context save helpers. These helpers are not yet
required but subsequent patches will require at least some of them so
this patch completes the set to avoid introducing them in an ad-hoc way.
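To illustrate the shape of the completed set, here is a minimal
standalone sketch, not the Xen code itself: the context structs and
helper bodies are stubs, and the real helpers take vcpu/domain pointers
plus the public save-format structs. The point is simply that once every
module exports save/load for both vcpu and domain scope, the central
save/load paths can call all four pairs unconditionally.

#include <stdio.h>

struct vcpu_ctxt { unsigned long vp_assist_msr; };
struct domain_ctxt { unsigned long time_ref_count; };

/* synic module: vcpu helpers do work, domain helpers are empty for now */
static void synic_save_vcpu(struct vcpu_ctxt *c)         { c->vp_assist_msr = 1; }
static void synic_load_vcpu(const struct vcpu_ctxt *c)   { (void)c; }
static void synic_save_domain(struct domain_ctxt *c)     { (void)c; }
static void synic_load_domain(const struct domain_ctxt *c) { (void)c; }

/* time module: domain helpers do work, vcpu helpers are empty for now */
static void time_save_vcpu(struct vcpu_ctxt *c)          { (void)c; }
static void time_load_vcpu(const struct vcpu_ctxt *c)    { (void)c; }
static void time_save_domain(struct domain_ctxt *c)      { c->time_ref_count = 42; }
static void time_load_domain(const struct domain_ctxt *c) { (void)c; }

int main(void)
{
    struct vcpu_ctxt vc = {0};
    struct domain_ctxt dc = {0};

    /* Central code need not know which bodies are currently empty. */
    time_save_vcpu(&vc);    synic_save_vcpu(&vc);
    time_save_domain(&dc);  synic_save_domain(&dc);
    synic_load_vcpu(&vc);   time_load_vcpu(&vc);
    synic_load_domain(&dc); time_load_domain(&dc);

    printf("vp_assist_msr=%lu time_ref_count=%lu\n",
           vc.vp_assist_msr, dc.time_ref_count);
    return 0;
}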
Signed-off-by: Paul Durrant
Reviewed-by: Wei Liu
---
Cc: Jan Beulich
Cc: Andrew Cooper
Cc: "Roger Pau Monné"

v3:
 - Add missing callers so that they are not added in an ad-hoc way
---
 xen/arch/x86/hvm/viridian/private.h  | 10 ++++++++++
 xen/arch/x86/hvm/viridian/synic.c    | 10 ++++++++++
 xen/arch/x86/hvm/viridian/time.c     | 10 ++++++++++
 xen/arch/x86/hvm/viridian/viridian.c |  4 ++++
 4 files changed, 34 insertions(+)

diff --git a/xen/arch/x86/hvm/viridian/private.h b/xen/arch/x86/hvm/viridian/private.h
index 8c029f62c6..5078b2d2ab 100644
--- a/xen/arch/x86/hvm/viridian/private.h
+++ b/xen/arch/x86/hvm/viridian/private.h
@@ -85,6 +85,11 @@ void viridian_synic_save_vcpu_ctxt(const struct vcpu *v,
 void viridian_synic_load_vcpu_ctxt(
     struct vcpu *v, const struct hvm_viridian_vcpu_context *ctxt);

+void viridian_synic_save_domain_ctxt(
+    const struct domain *d, struct hvm_viridian_domain_context *ctxt);
+void viridian_synic_load_domain_ctxt(
+    struct domain *d, const struct hvm_viridian_domain_context *ctxt);
+
 int viridian_time_wrmsr(struct vcpu *v, uint32_t idx, uint64_t val);
 int viridian_time_rdmsr(const struct vcpu *v, uint32_t idx, uint64_t *val);

@@ -94,6 +99,11 @@ int viridian_time_domain_init(const struct domain *d);
 void viridian_time_vcpu_deinit(const struct vcpu *v);
 void viridian_time_domain_deinit(const struct domain *d);

+void viridian_time_save_vcpu_ctxt(
+    const struct vcpu *v, struct hvm_viridian_vcpu_context *ctxt);
+void viridian_time_load_vcpu_ctxt(
+    struct vcpu *v, const struct hvm_viridian_vcpu_context *ctxt);
+
 void viridian_time_save_domain_ctxt(
     const struct domain *d, struct hvm_viridian_domain_context *ctxt);
 void viridian_time_load_domain_ctxt(

diff --git a/xen/arch/x86/hvm/viridian/synic.c b/xen/arch/x86/hvm/viridian/synic.c
index 4b00dbe1b3..b8dab4b246 100644
--- a/xen/arch/x86/hvm/viridian/synic.c
+++ b/xen/arch/x86/hvm/viridian/synic.c
@@ -186,6 +186,16 @@ void viridian_synic_load_vcpu_ctxt(
     vv->apic_assist_pending = ctxt->apic_assist_pending;
 }

+void viridian_synic_save_domain_ctxt(
+    const struct domain *d, struct hvm_viridian_domain_context *ctxt)
+{
+}
+
+void viridian_synic_load_domain_ctxt(
+    struct domain *d, const struct hvm_viridian_domain_context *ctxt)
+{
+}
+
 /*
  * Local variables:
  * mode: C

diff --git a/xen/arch/x86/hvm/viridian/time.c b/xen/arch/x86/hvm/viridian/time.c
index 48aca7e0ab..4399e62f54 100644
--- a/xen/arch/x86/hvm/viridian/time.c
+++ b/xen/arch/x86/hvm/viridian/time.c
@@ -233,6 +233,16 @@ void viridian_time_domain_deinit(const struct domain *d)
 {
 }

+void viridian_time_save_vcpu_ctxt(
+    const struct vcpu *v, struct hvm_viridian_vcpu_context *ctxt)
+{
+}
+
+void viridian_time_load_vcpu_ctxt(
+    struct vcpu *v, const struct hvm_viridian_vcpu_context *ctxt)
+{
+}
+
 void viridian_time_save_domain_ctxt(
     const struct domain *d, struct hvm_viridian_domain_context *ctxt)
 {

diff --git a/xen/arch/x86/hvm/viridian/viridian.c b/xen/arch/x86/hvm/viridian/viridian.c
index f9a509d918..742a988252 100644
--- a/xen/arch/x86/hvm/viridian/viridian.c
+++ b/xen/arch/x86/hvm/viridian/viridian.c
@@ -707,6 +707,7 @@ static int viridian_save_domain_ctxt(struct vcpu *v,
         return 0;

     viridian_time_save_domain_ctxt(d, &ctxt);
+    viridian_synic_save_domain_ctxt(d, &ctxt);

     return (hvm_save_entry(VIRIDIAN_DOMAIN, 0, h, &ctxt) != 0);
 }
@@ -723,6 +724,7 @@ static int viridian_load_domain_ctxt(struct domain *d,
     vd->hypercall_gpa.raw = ctxt.hypercall_gpa;
     vd->guest_os_id.raw = ctxt.guest_os_id;

+    viridian_synic_load_domain_ctxt(d, &ctxt);
     viridian_time_load_domain_ctxt(d, &ctxt);
     return 0;
@@ -738,6 +740,7 @@ static int viridian_save_vcpu_ctxt(struct vcpu *v, hvm_domain_context_t *h)
     if ( !is_viridian_vcpu(v) )
         return 0;

+    viridian_time_save_vcpu_ctxt(v, &ctxt);
     viridian_synic_save_vcpu_ctxt(v, &ctxt);

     return hvm_save_entry(VIRIDIAN_VCPU, v->vcpu_id, h, &ctxt);
@@ -764,6 +767,7 @@ static int viridian_load_vcpu_ctxt(struct domain *d,
         return -EINVAL;

     viridian_synic_load_vcpu_ctxt(v, &ctxt);
+    viridian_time_load_vcpu_ctxt(v, &ctxt);

     return 0;
 }

From patchwork Mon Mar 18 11:20:55 2019
X-Patchwork-Submitter: Paul Durrant
X-Patchwork-Id: 10857405
From: Paul Durrant
To: xen-devel@lists.xenproject.org
Date: Mon, 18 Mar 2019 11:20:55 +0000
Message-ID: <20190318112059.21910-8-paul.durrant@citrix.com>
In-Reply-To: <20190318112059.21910-1-paul.durrant@citrix.com>
References: <20190318112059.21910-1-paul.durrant@citrix.com>
Subject: [Xen-devel] [PATCH v8 07/11] viridian: use viridian_map/unmap_guest_page() for reference tsc page
Cc: Andrew Cooper, Paul Durrant, Wei Liu, Jan Beulich, Roger Pau Monné

Whilst the reference tsc page does not currently need to be kept mapped
after it is initially set up (or updated after migrate), the code can be
simplified by using the common guest page map/unmap and dump functions.
New functionality added by a subsequent patch will also require the page
to be kept mapped for the lifetime of the domain.

NOTE: Because the reference tsc page is per-domain rather than per-vcpu
      this patch also changes viridian_map_guest_page() to take a domain
      pointer rather than a vcpu pointer. The domain pointer cannot be
      const, unlike the vcpu pointer.
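For illustration, here is a minimal standalone sketch of the common
map/unmap lifecycle the reference tsc page is being converted to. It is
not the Xen code: calloc/free stand in for pinning and mapping a guest
frame, and the bit layout mirrors the viridian_page_msr union from
earlier in the series. Writing the MSR always unmaps any previous page
first, then maps the new one only if the enabled bit is set, so the
mapping persists until deinit or the next write.

#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

union page_msr {
    uint64_t raw;
    struct { uint64_t enabled:1, reserved_preserved:11, pfn:48; };
};

struct guest_page {
    union page_msr msr;
    void *ptr; /* non-NULL while mapped */
};

static void unmap_guest_page(struct guest_page *p)
{
    free(p->ptr);       /* free(NULL) is a no-op */
    p->ptr = NULL;
}

static void map_guest_page(struct guest_page *p)
{
    p->ptr = calloc(1, 4096); /* stand-in for mapping msr.pfn */
}

static void wrmsr_page(struct guest_page *p, uint64_t val)
{
    unmap_guest_page(p);
    p->msr.raw = val;
    if ( p->msr.enabled )
        map_guest_page(p);
}

int main(void)
{
    struct guest_page ref_tsc = { .msr.raw = 0 };

    wrmsr_page(&ref_tsc, (0x1234u << 12) | 1); /* enable at pfn 0x1234 */
    printf("mapped=%d pfn=%#llx\n", ref_tsc.ptr != NULL,
           (unsigned long long)ref_tsc.msr.pfn);
    wrmsr_page(&ref_tsc, 0);                   /* disable -> unmapped */
    printf("mapped=%d\n", ref_tsc.ptr != NULL);
    return 0;
}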
Signed-off-by: Paul Durrant
Reviewed-by: Wei Liu
---
Cc: Jan Beulich
Cc: Andrew Cooper
Cc: "Roger Pau Monné"
---
 xen/arch/x86/hvm/viridian/private.h  |  2 +-
 xen/arch/x86/hvm/viridian/synic.c    |  6 ++-
 xen/arch/x86/hvm/viridian/time.c     | 56 +++++++++-------------------
 xen/arch/x86/hvm/viridian/viridian.c |  3 +-
 xen/include/asm-x86/hvm/viridian.h   |  2 +-
 5 files changed, 25 insertions(+), 44 deletions(-)

diff --git a/xen/arch/x86/hvm/viridian/private.h b/xen/arch/x86/hvm/viridian/private.h
index 5078b2d2ab..96a784b840 100644
--- a/xen/arch/x86/hvm/viridian/private.h
+++ b/xen/arch/x86/hvm/viridian/private.h
@@ -111,7 +111,7 @@ void viridian_time_load_domain_ctxt(
 void viridian_dump_guest_page(const struct vcpu *v, const char *name,
                               const struct viridian_page *vp);
-void viridian_map_guest_page(const struct vcpu *v, struct viridian_page *vp);
+void viridian_map_guest_page(struct domain *d, struct viridian_page *vp);
 void viridian_unmap_guest_page(struct viridian_page *vp);

 #endif /* X86_HVM_VIRIDIAN_PRIVATE_H */

diff --git a/xen/arch/x86/hvm/viridian/synic.c b/xen/arch/x86/hvm/viridian/synic.c
index b8dab4b246..fb560bc162 100644
--- a/xen/arch/x86/hvm/viridian/synic.c
+++ b/xen/arch/x86/hvm/viridian/synic.c
@@ -81,6 +81,7 @@ void viridian_apic_assist_clear(const struct vcpu *v)
 int viridian_synic_wrmsr(struct vcpu *v, uint32_t idx, uint64_t val)
 {
     struct viridian_vcpu *vv = v->arch.hvm.viridian;
+    struct domain *d = v->domain;

     switch ( idx )
     {
@@ -103,7 +104,7 @@ int viridian_synic_wrmsr(struct vcpu *v, uint32_t idx, uint64_t val)
         vv->vp_assist.msr.raw = val;
         viridian_dump_guest_page(v, "VP_ASSIST", &vv->vp_assist);
         if ( vv->vp_assist.msr.enabled )
-            viridian_map_guest_page(v, &vv->vp_assist);
+            viridian_map_guest_page(d, &vv->vp_assist);
         break;

     default:
@@ -178,10 +179,11 @@ void viridian_synic_load_vcpu_ctxt(
     struct vcpu *v, const struct hvm_viridian_vcpu_context *ctxt)
 {
     struct viridian_vcpu *vv = v->arch.hvm.viridian;
+    struct domain *d = v->domain;

     vv->vp_assist.msr.raw = ctxt->vp_assist_msr;
     if ( vv->vp_assist.msr.enabled )
-        viridian_map_guest_page(v, &vv->vp_assist);
+        viridian_map_guest_page(d, &vv->vp_assist);

     vv->apic_assist_pending = ctxt->apic_assist_pending;
 }

diff --git a/xen/arch/x86/hvm/viridian/time.c b/xen/arch/x86/hvm/viridian/time.c
index 4399e62f54..16fe41d411 100644
--- a/xen/arch/x86/hvm/viridian/time.c
+++ b/xen/arch/x86/hvm/viridian/time.c
@@ -25,33 +25,10 @@ typedef struct _HV_REFERENCE_TSC_PAGE
     uint64_t Reserved2[509];
 } HV_REFERENCE_TSC_PAGE, *PHV_REFERENCE_TSC_PAGE;

-static void dump_reference_tsc(const struct domain *d)
-{
-    const union viridian_page_msr *rt = &d->arch.hvm.viridian->reference_tsc;
-
-    if ( !rt->enabled )
-        return;
-
-    printk(XENLOG_G_INFO "d%d: VIRIDIAN REFERENCE_TSC: pfn: %lx\n",
-           d->domain_id, (unsigned long)rt->pfn);
-}
-
 static void update_reference_tsc(struct domain *d, bool initialize)
 {
-    unsigned long gmfn = d->arch.hvm.viridian->reference_tsc.pfn;
-    struct page_info *page = get_page_from_gfn(d, gmfn, NULL, P2M_ALLOC);
-    HV_REFERENCE_TSC_PAGE *p;
-
-    if ( !page || !get_page_type(page, PGT_writable_page) )
-    {
-        if ( page )
-            put_page(page);
-        gdprintk(XENLOG_WARNING, "Bad GMFN %#"PRI_gfn" (MFN %#"PRI_mfn")\n",
-                 gmfn, mfn_x(page ? page_to_mfn(page) : INVALID_MFN));
-        return;
-    }
-
-    p = __map_domain_page(page);
+    const struct viridian_page *rt = &d->arch.hvm.viridian->reference_tsc;
+    HV_REFERENCE_TSC_PAGE *p = rt->ptr;

     if ( initialize )
         clear_page(p);
@@ -82,7 +59,7 @@ static void update_reference_tsc(struct domain *d, bool initialize)
         printk(XENLOG_G_INFO "d%d: VIRIDIAN REFERENCE_TSC: invalidated\n",
                d->domain_id);
-        goto out;
+        return;
     }

     /*
@@ -100,11 +77,6 @@ static void update_reference_tsc(struct domain *d, bool initialize)
     if ( p->TscSequence == 0xFFFFFFFF ||
          p->TscSequence == 0 ) /* Avoid both 'invalid' values */
         p->TscSequence = 1;
-
- out:
-    unmap_domain_page(p);
-
-    put_page_and_type(page);
 }

@@ -149,10 +121,14 @@ int viridian_time_wrmsr(struct vcpu *v, uint32_t idx, uint64_t val)
         if ( !(viridian_feature_mask(d) & HVMPV_reference_tsc) )
             return X86EMUL_EXCEPTION;

-        vd->reference_tsc.raw = val;
-        dump_reference_tsc(d);
-        if ( vd->reference_tsc.enabled )
+        viridian_unmap_guest_page(&vd->reference_tsc);
+        vd->reference_tsc.msr.raw = val;
+        viridian_dump_guest_page(v, "REFERENCE_TSC", &vd->reference_tsc);
+        if ( vd->reference_tsc.msr.enabled )
+        {
+            viridian_map_guest_page(d, &vd->reference_tsc);
             update_reference_tsc(d, true);
+        }
         break;

     default:
@@ -189,7 +165,7 @@ int viridian_time_rdmsr(const struct vcpu *v, uint32_t idx, uint64_t *val)
         if ( !(viridian_feature_mask(d) & HVMPV_reference_tsc) )
             return X86EMUL_EXCEPTION;

-        *val = vd->reference_tsc.raw;
+        *val = vd->reference_tsc.msr.raw;
         break;

     case HV_X64_MSR_TIME_REF_COUNT:
@@ -231,6 +207,7 @@ void viridian_time_vcpu_deinit(const struct vcpu *v)

 void viridian_time_domain_deinit(const struct domain *d)
 {
+    viridian_unmap_guest_page(&d->arch.hvm.viridian->reference_tsc);
 }

 void viridian_time_save_vcpu_ctxt(
@@ -249,7 +226,7 @@ void viridian_time_save_domain_ctxt(
     const struct viridian_domain *vd = d->arch.hvm.viridian;

     ctxt->time_ref_count = vd->time_ref_count.val;
-    ctxt->reference_tsc = vd->reference_tsc.raw;
+    ctxt->reference_tsc = vd->reference_tsc.msr.raw;
 }

 void viridian_time_load_domain_ctxt(
@@ -258,10 +235,13 @@ void viridian_time_load_domain_ctxt(
     struct viridian_domain *vd = d->arch.hvm.viridian;

     vd->time_ref_count.val = ctxt->time_ref_count;
-    vd->reference_tsc.raw = ctxt->reference_tsc;
+    vd->reference_tsc.msr.raw = ctxt->reference_tsc;

-    if ( vd->reference_tsc.enabled )
+    if ( vd->reference_tsc.msr.enabled )
+    {
+        viridian_map_guest_page(d, &vd->reference_tsc);
         update_reference_tsc(d, false);
+    }
 }

diff --git a/xen/arch/x86/hvm/viridian/viridian.c b/xen/arch/x86/hvm/viridian/viridian.c
index 742a988252..2b045ed88f 100644
--- a/xen/arch/x86/hvm/viridian/viridian.c
+++ b/xen/arch/x86/hvm/viridian/viridian.c
@@ -644,9 +644,8 @@ void viridian_dump_guest_page(const struct vcpu *v, const char *name,
            v, name, (unsigned long)vp->msr.pfn);
 }

-void viridian_map_guest_page(const struct vcpu *v, struct viridian_page *vp)
+void viridian_map_guest_page(struct domain *d, struct viridian_page *vp)
 {
-    struct domain *d = v->domain;
     unsigned long gmfn = vp->msr.pfn;
     struct page_info *page;

     if ( vp->ptr )

diff --git a/xen/include/asm-x86/hvm/viridian.h b/xen/include/asm-x86/hvm/viridian.h
index abbbb36092..c65c044191 100644
--- a/xen/include/asm-x86/hvm/viridian.h
+++ b/xen/include/asm-x86/hvm/viridian.h
@@ -65,7 +65,7 @@ struct viridian_domain
     union viridian_guest_os_id_msr guest_os_id;
     union viridian_page_msr hypercall_gpa;
     struct viridian_time_ref_count time_ref_count;
-    union viridian_page_msr reference_tsc;
+    struct viridian_page reference_tsc;
 };

 void cpuid_viridian_leaves(const struct vcpu *v, uint32_t leaf,

From patchwork Mon Mar 18 11:20:56 2019
X-Patchwork-Submitter: Paul Durrant
X-Patchwork-Id: 10857393
tY0QzrfNsOQAUwhwn4o7i+6ZTSRIiG9y30e/h6t1Hq8MgY6GX/OpLsl44Bkzdx1ZuWqbzUgi FCJ6ai2/TInetU5gHS0uy/YTJoC6RazCIMFI1OtPBIdPW476c/qT1Zqigwq2854VsCXl7uRP iZZNx4mtuNlFO5pSVfgONeXbT6rCKUJEGcXza1K4GCCItTG2MsywLEdMdXtfJIpTGIEG6mQ6 OR89gTolh+UURK0iWS12rjpx4MsZWXglMk3rR4wIboIOIW/+zjzXpY1A/Uw18yZNXt/I9RKk wqnyd9w3SaxoFoecksKiEOf2qT4Lw2KRpGtGfqbLD6NQ1ZHdkNm4vODQI= X-IronPort-AV: E=Sophos;i="5.58,493,1544486400"; d="scan'208";a="80850946" From: Paul Durrant To: Date: Mon, 18 Mar 2019 11:20:56 +0000 Message-ID: <20190318112059.21910-9-paul.durrant@citrix.com> X-Mailer: git-send-email 2.20.1 In-Reply-To: <20190318112059.21910-1-paul.durrant@citrix.com> References: <20190318112059.21910-1-paul.durrant@citrix.com> MIME-Version: 1.0 Subject: [Xen-devel] [PATCH v8 08/11] viridian: stop directly calling viridian_time_ref_count_freeze/thaw()... X-BeenThere: xen-devel@lists.xenproject.org X-Mailman-Version: 2.1.23 Precedence: list List-Id: Xen developer discussion List-Unsubscribe: , List-Post: List-Help: List-Subscribe: , Cc: Andrew Cooper , Paul Durrant , Wei Liu , Jan Beulich , =?utf-8?q?Roger_Pau_Monn=C3=A9?= Errors-To: xen-devel-bounces@lists.xenproject.org Sender: "Xen-devel" X-Virus-Scanned: ClamAV using ClamSMTP ...from arch_domain_shutdown/pause/unpause(). A subsequent patch will introduce an implementaion of synthetic timers which will also need freeze/thaw hooks, so make the exported hooks more generic and call through to (re-named and static) time_ref_count_freeze/thaw functions. NOTE: This patch also introduces a new time_ref_count() helper to return the current counter value. This is currently only used by the MSR read handler but the synthetic timer code will also need to use it. Signed-off-by: Paul Durrant Reviewed-by: Wei Liu Acked-by: Jan Beulich --- Cc: Andrew Cooper Cc: "Roger Pau Monné" --- xen/arch/x86/domain.c | 12 ++++++------ xen/arch/x86/hvm/viridian/time.c | 24 +++++++++++++++++++++--- xen/include/asm-x86/hvm/viridian.h | 4 ++-- 3 files changed, 29 insertions(+), 11 deletions(-) diff --git a/xen/arch/x86/domain.c b/xen/arch/x86/domain.c index 8d579e2cf9..02afa7518e 100644 --- a/xen/arch/x86/domain.c +++ b/xen/arch/x86/domain.c @@ -657,20 +657,20 @@ void arch_domain_destroy(struct domain *d) void arch_domain_shutdown(struct domain *d) { - if ( has_viridian_time_ref_count(d) ) - viridian_time_ref_count_freeze(d); + if ( is_viridian_domain(d) ) + viridian_time_domain_freeze(d); } void arch_domain_pause(struct domain *d) { - if ( has_viridian_time_ref_count(d) ) - viridian_time_ref_count_freeze(d); + if ( is_viridian_domain(d) ) + viridian_time_domain_freeze(d); } void arch_domain_unpause(struct domain *d) { - if ( has_viridian_time_ref_count(d) ) - viridian_time_ref_count_thaw(d); + if ( is_viridian_domain(d) ) + viridian_time_domain_thaw(d); } int arch_domain_soft_reset(struct domain *d) diff --git a/xen/arch/x86/hvm/viridian/time.c b/xen/arch/x86/hvm/viridian/time.c index 16fe41d411..71291d921c 100644 --- a/xen/arch/x86/hvm/viridian/time.c +++ b/xen/arch/x86/hvm/viridian/time.c @@ -91,7 +91,7 @@ static int64_t raw_trc_val(const struct domain *d) return scale_delta(tsc, &tsc_to_ns) / 100ul; } -void viridian_time_ref_count_freeze(const struct domain *d) +static void time_ref_count_freeze(const struct domain *d) { struct viridian_time_ref_count *trc = &d->arch.hvm.viridian->time_ref_count; @@ -100,7 +100,7 @@ void viridian_time_ref_count_freeze(const struct domain *d) trc->val = raw_trc_val(d) + trc->off; } -void viridian_time_ref_count_thaw(const struct domain *d) +static void 
Signed-off-by: Paul Durrant
Reviewed-by: Wei Liu
Acked-by: Jan Beulich
---
Cc: Andrew Cooper
Cc: "Roger Pau Monné"
---
 xen/arch/x86/domain.c              | 12 ++++++------
 xen/arch/x86/hvm/viridian/time.c   | 24 +++++++++++++++++++---
 xen/include/asm-x86/hvm/viridian.h |  4 ++--
 3 files changed, 29 insertions(+), 11 deletions(-)

diff --git a/xen/arch/x86/domain.c b/xen/arch/x86/domain.c
index 8d579e2cf9..02afa7518e 100644
--- a/xen/arch/x86/domain.c
+++ b/xen/arch/x86/domain.c
@@ -657,20 +657,20 @@ void arch_domain_destroy(struct domain *d)

 void arch_domain_shutdown(struct domain *d)
 {
-    if ( has_viridian_time_ref_count(d) )
-        viridian_time_ref_count_freeze(d);
+    if ( is_viridian_domain(d) )
+        viridian_time_domain_freeze(d);
 }

 void arch_domain_pause(struct domain *d)
 {
-    if ( has_viridian_time_ref_count(d) )
-        viridian_time_ref_count_freeze(d);
+    if ( is_viridian_domain(d) )
+        viridian_time_domain_freeze(d);
 }

 void arch_domain_unpause(struct domain *d)
 {
-    if ( has_viridian_time_ref_count(d) )
-        viridian_time_ref_count_thaw(d);
+    if ( is_viridian_domain(d) )
+        viridian_time_domain_thaw(d);
 }

 int arch_domain_soft_reset(struct domain *d)

diff --git a/xen/arch/x86/hvm/viridian/time.c b/xen/arch/x86/hvm/viridian/time.c
index 16fe41d411..71291d921c 100644
--- a/xen/arch/x86/hvm/viridian/time.c
+++ b/xen/arch/x86/hvm/viridian/time.c
@@ -91,7 +91,7 @@ static int64_t raw_trc_val(const struct domain *d)
     return scale_delta(tsc, &tsc_to_ns) / 100ul;
 }

-void viridian_time_ref_count_freeze(const struct domain *d)
+static void time_ref_count_freeze(const struct domain *d)
 {
     struct viridian_time_ref_count *trc =
         &d->arch.hvm.viridian->time_ref_count;
@@ -100,7 +100,7 @@ void viridian_time_ref_count_freeze(const struct domain *d)
         trc->val = raw_trc_val(d) + trc->off;
 }

-void viridian_time_ref_count_thaw(const struct domain *d)
+static void time_ref_count_thaw(const struct domain *d)
 {
     struct viridian_time_ref_count *trc =
         &d->arch.hvm.viridian->time_ref_count;
@@ -110,6 +110,24 @@ void viridian_time_ref_count_thaw(const struct domain *d)
         trc->off = (int64_t)trc->val - raw_trc_val(d);
 }

+static int64_t time_ref_count(const struct domain *d)
+{
+    struct viridian_time_ref_count *trc =
+        &d->arch.hvm.viridian->time_ref_count;
+
+    return raw_trc_val(d) + trc->off;
+}
+
+void viridian_time_domain_freeze(const struct domain *d)
+{
+    time_ref_count_freeze(d);
+}
+
+void viridian_time_domain_thaw(const struct domain *d)
+{
+    time_ref_count_thaw(d);
+}
+
 int viridian_time_wrmsr(struct vcpu *v, uint32_t idx, uint64_t val)
 {
     struct domain *d = v->domain;
@@ -179,7 +197,7 @@ int viridian_time_rdmsr(const struct vcpu *v, uint32_t idx, uint64_t *val)
             printk(XENLOG_G_INFO "d%d: VIRIDIAN MSR_TIME_REF_COUNT: accessed\n",
                    d->domain_id);

-        *val = raw_trc_val(d) + trc->off;
+        *val = time_ref_count(d);
         break;
     }

diff --git a/xen/include/asm-x86/hvm/viridian.h b/xen/include/asm-x86/hvm/viridian.h
index c65c044191..8146e2fc46 100644
--- a/xen/include/asm-x86/hvm/viridian.h
+++ b/xen/include/asm-x86/hvm/viridian.h
@@ -77,8 +77,8 @@ int guest_rdmsr_viridian(const struct vcpu *v, uint32_t idx, uint64_t *val);

 int viridian_hypercall(struct cpu_user_regs *regs);

-void viridian_time_ref_count_freeze(const struct domain *d);
-void viridian_time_ref_count_thaw(const struct domain *d);
+void viridian_time_domain_freeze(const struct domain *d);
+void viridian_time_domain_thaw(const struct domain *d);

 int viridian_vcpu_init(struct vcpu *v);
 int viridian_domain_init(struct domain *d);

From patchwork Mon Mar 18 11:20:57 2019
X-Patchwork-Submitter: Paul Durrant
X-Patchwork-Id: 10857411
From: Paul Durrant
To: xen-devel@lists.xenproject.org
Date: Mon, 18 Mar 2019 11:20:57 +0000
Message-ID: <20190318112059.21910-10-paul.durrant@citrix.com>
In-Reply-To: <20190318112059.21910-1-paul.durrant@citrix.com>
References: <20190318112059.21910-1-paul.durrant@citrix.com>
Subject: [Xen-devel] [PATCH v8 09/11] viridian: add implementation of synthetic interrupt MSRs
Cc: Stefano Stabellini, Wei Liu, Konrad Rzeszutek Wilk, George Dunlap,
    Andrew Cooper, Ian Jackson, Tim Deegan, Julien Grall, Paul Durrant,
    Jan Beulich, Roger Pau Monné

This patch introduces an implementation of the SCONTROL, SVERSION, SIEFP,
SIMP, EOM and SINT0-15 SynIC MSRs. No message source is added and, as
such, nothing will yet generate a synthetic interrupt. A subsequent patch
will add an implementation of synthetic timers which will need the
infrastructure added by this patch to deliver expiry messages to the
guest.
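For illustration, here is a standalone sketch of how a guest might
program one of these SINT MSRs, using the register layout the patch
implements below (vector in bits 0-7, mask at bit 16, auto-EOI at bit
17, polling at bit 18). The wrmsr() here is only a stub, and the
0x40000090 base index is taken from the Hyper-V TLFS rather than from
this patch; a real guest would execute WRMSR against
HV_X64_MSR_SINT0 + sintx.

#include <stdint.h>
#include <stdio.h>

union sint_msr {
    uint64_t raw;
    struct {
        uint64_t vector:8;
        uint64_t reserved_preserved1:8;
        uint64_t mask:1;
        uint64_t auto_eoi:1;
        uint64_t polling:1;
        uint64_t reserved_preserved2:45;
    };
};

#define HV_X64_MSR_SINT0 0x40000090u /* base index per the TLFS */

static void wrmsr(uint32_t idx, uint64_t val) /* stub for the real WRMSR */
{
    printf("wrmsr(%#x, %#llx)\n", idx, (unsigned long long)val);
}

int main(void)
{
    union sint_msr s = { .raw = 0 };

    s.vector = 0x40;   /* must be in the range 0x10-0xff */
    s.mask = 0;        /* unmask the synthetic interrupt */
    s.auto_eoi = 1;    /* no explicit APIC EOI required */

    wrmsr(HV_X64_MSR_SINT0 + 2, s.raw); /* program SINT2 */
    return 0;
}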
NOTE: A 'synic' option is added to the toolstack viridian enlightenments
      enumeration, but it is deliberately not documented, as enabling
      these SynIC registers without a message source is only useful for
      debugging.

Signed-off-by: Paul Durrant
Acked-by: Wei Liu
Reviewed-by: Jan Beulich
---
Cc: Ian Jackson
Cc: Andrew Cooper
Cc: George Dunlap
Cc: Julien Grall
Cc: Konrad Rzeszutek Wilk
Cc: Stefano Stabellini
Cc: Tim Deegan
Cc: "Roger Pau Monné"

v8:
 - Squash in https://lists.xenproject.org/archives/html/xen-devel/2019-03/msg01332.html

v7:
 - Fix out label indentation

v6:
 - Address further comments from Jan

v4:
 - Address comments from Jan

v3:
 - Add the 'SintPollingModeAvailable' bit in CPUID leaf 3
---
 tools/libxl/libxl.h                    |   6 +
 tools/libxl/libxl_dom.c                |   3 +
 tools/libxl/libxl_types.idl            |   1 +
 xen/arch/x86/hvm/viridian/synic.c      | 241 ++++++++++++++++++++++++-
 xen/arch/x86/hvm/viridian/viridian.c   |  19 ++
 xen/arch/x86/hvm/vlapic.c              |  20 +-
 xen/include/asm-x86/hvm/hvm.h          |   3 +
 xen/include/asm-x86/hvm/viridian.h     |  26 +++
 xen/include/public/arch-x86/hvm/save.h |   2 +
 xen/include/public/hvm/params.h        |   7 +-
 10 files changed, 323 insertions(+), 5 deletions(-)

diff --git a/tools/libxl/libxl.h b/tools/libxl/libxl.h
index a38e5cdba2..a923a380d3 100644
--- a/tools/libxl/libxl.h
+++ b/tools/libxl/libxl.h
@@ -318,6 +318,12 @@
  */
 #define LIBXL_HAVE_VIRIDIAN_CRASH_CTL 1

+/*
+ * LIBXL_HAVE_VIRIDIAN_SYNIC indicates that the 'synic' value
+ * is present in the viridian enlightenment enumeration.
+ */
+#define LIBXL_HAVE_VIRIDIAN_SYNIC 1
+
 /*
  * LIBXL_HAVE_BUILDINFO_HVM_ACPI_LAPTOP_SLATE indicates that
  * libxl_domain_build_info has the u.hvm.acpi_laptop_slate field.

diff --git a/tools/libxl/libxl_dom.c b/tools/libxl/libxl_dom.c
index 6160991af3..fb758d2ac3 100644
--- a/tools/libxl/libxl_dom.c
+++ b/tools/libxl/libxl_dom.c
@@ -317,6 +317,9 @@ static int hvm_set_viridian_features(libxl__gc *gc, uint32_t domid,
     if (libxl_bitmap_test(&enlightenments, LIBXL_VIRIDIAN_ENLIGHTENMENT_CRASH_CTL))
         mask |= HVMPV_crash_ctl;

+    if (libxl_bitmap_test(&enlightenments, LIBXL_VIRIDIAN_ENLIGHTENMENT_SYNIC))
+        mask |= HVMPV_synic;
+
     if (mask != 0 &&
         xc_hvm_param_set(CTX->xch,
                          domid,

diff --git a/tools/libxl/libxl_types.idl b/tools/libxl/libxl_types.idl
index b685ac47ac..9860bcaf5f 100644
--- a/tools/libxl/libxl_types.idl
+++ b/tools/libxl/libxl_types.idl
@@ -235,6 +235,7 @@ libxl_viridian_enlightenment = Enumeration("viridian_enlightenment", [
     (4, "hcall_remote_tlb_flush"),
     (5, "apic_assist"),
     (6, "crash_ctl"),
+    (7, "synic"),
     ])

 libxl_hdtype = Enumeration("hdtype", [

diff --git a/xen/arch/x86/hvm/viridian/synic.c b/xen/arch/x86/hvm/viridian/synic.c
index fb560bc162..84ab02694f 100644
--- a/xen/arch/x86/hvm/viridian/synic.c
+++ b/xen/arch/x86/hvm/viridian/synic.c
@@ -13,6 +13,7 @@
 #include
 #include
+#include

 #include "private.h"

@@ -28,6 +29,37 @@ typedef union _HV_VP_ASSIST_PAGE
     uint8_t ReservedZBytePadding[PAGE_SIZE];
 } HV_VP_ASSIST_PAGE;

+typedef enum HV_MESSAGE_TYPE {
+    HvMessageTypeNone,
+    HvMessageTimerExpired = 0x80000010,
+} HV_MESSAGE_TYPE;
+
+typedef struct HV_MESSAGE_FLAGS {
+    uint8_t MessagePending:1;
+    uint8_t Reserved:7;
+} HV_MESSAGE_FLAGS;
+
+typedef struct HV_MESSAGE_HEADER {
+    HV_MESSAGE_TYPE MessageType;
+    uint16_t Reserved1;
+    HV_MESSAGE_FLAGS MessageFlags;
+    uint8_t PayloadSize;
+    uint64_t Reserved2;
+} HV_MESSAGE_HEADER;
+
+#define HV_MESSAGE_SIZE 256
+#define HV_MESSAGE_MAX_PAYLOAD_QWORD_COUNT 30
+
+typedef struct HV_MESSAGE {
+    HV_MESSAGE_HEADER Header;
+    uint64_t Payload[HV_MESSAGE_MAX_PAYLOAD_QWORD_COUNT];
+} HV_MESSAGE;
+
+void __init __maybe_unused build_assertions(void)
+{
+    BUILD_BUG_ON(sizeof(HV_MESSAGE) != HV_MESSAGE_SIZE);
+}
+
 void viridian_apic_assist_set(const struct vcpu *v)
 {
     struct viridian_vcpu *vv = v->arch.hvm.viridian;
@@ -83,6 +115,8 @@ int viridian_synic_wrmsr(struct vcpu *v, uint32_t idx, uint64_t val)
     struct viridian_vcpu *vv = v->arch.hvm.viridian;
     struct domain *d = v->domain;

+    ASSERT(v == current || !v->is_running);
+
     switch ( idx )
     {
     case HV_X64_MSR_EOI:
@@ -107,6 +141,76 @@ int viridian_synic_wrmsr(struct vcpu *v, uint32_t idx, uint64_t val)
             viridian_map_guest_page(d, &vv->vp_assist);
         break;

+    case HV_X64_MSR_SCONTROL:
+        if ( !(viridian_feature_mask(d) & HVMPV_synic) )
+            return X86EMUL_EXCEPTION;
+
+        vv->scontrol = val;
+        break;
+
+    case HV_X64_MSR_SVERSION:
+        return X86EMUL_EXCEPTION;
+
+    case HV_X64_MSR_SIEFP:
+        if ( !(viridian_feature_mask(d) & HVMPV_synic) )
+            return X86EMUL_EXCEPTION;
+
+        vv->siefp = val;
+        break;
+
+    case HV_X64_MSR_SIMP:
+        if ( !(viridian_feature_mask(d) & HVMPV_synic) )
+            return X86EMUL_EXCEPTION;
+
+        viridian_unmap_guest_page(&vv->simp);
+        vv->simp.msr.raw = val;
+        viridian_dump_guest_page(v, "SIMP", &vv->simp);
+        if ( vv->simp.msr.enabled )
+            viridian_map_guest_page(d, &vv->simp);
+        break;
+
+    case HV_X64_MSR_EOM:
+        if ( !(viridian_feature_mask(d) & HVMPV_synic) )
+            return X86EMUL_EXCEPTION;
+
+        vv->msg_pending = 0;
+        break;
+
+    case HV_X64_MSR_SINT0 ... HV_X64_MSR_SINT15:
+    {
+        unsigned int sintx = idx - HV_X64_MSR_SINT0;
+        union viridian_sint_msr new, *vs =
+            &array_access_nospec(vv->sint, sintx);
+        uint8_t vector;
+
+        if ( !(viridian_feature_mask(d) & HVMPV_synic) )
+            return X86EMUL_EXCEPTION;
+
+        /* Vectors must be in the range 0x10-0xff inclusive */
+        new.raw = val;
+        if ( new.vector < 0x10 )
+            return X86EMUL_EXCEPTION;
+
+        /*
+         * Invalidate any previous mapping by setting an out-of-range
+         * index before setting the new mapping.
+         */
+        vector = vs->vector;
+        vv->vector_to_sintx[vector] = ARRAY_SIZE(vv->sint);
+
+        vector = new.vector;
+        vv->vector_to_sintx[vector] = sintx;
+
+        printk(XENLOG_G_INFO "%pv: VIRIDIAN SINT%u: vector: %x\n", v, sintx,
+               vector);
+
+        if ( new.polling )
+            __clear_bit(sintx, &vv->msg_pending);
+
+        *vs = new;
+        break;
+    }
+
     default:
         gdprintk(XENLOG_INFO, "%s: unimplemented MSR %#x (%016"PRIx64")\n",
                  __func__, idx, val);
@@ -118,6 +222,9 @@ int viridian_synic_wrmsr(struct vcpu *v, uint32_t idx, uint64_t val)

 int viridian_synic_rdmsr(const struct vcpu *v, uint32_t idx, uint64_t *val)
 {
+    const struct viridian_vcpu *vv = v->arch.hvm.viridian;
+    const struct domain *d = v->domain;
+
     switch ( idx )
     {
     case HV_X64_MSR_EOI:
@@ -131,14 +238,70 @@ int viridian_synic_rdmsr(const struct vcpu *v, uint32_t idx, uint64_t *val)
         *val = ((uint64_t)icr2 << 32) | icr;
         break;
     }
+
     case HV_X64_MSR_TPR:
         *val = vlapic_get_reg(vcpu_vlapic(v), APIC_TASKPRI);
         break;

     case HV_X64_MSR_VP_ASSIST_PAGE:
-        *val = v->arch.hvm.viridian->vp_assist.msr.raw;
+        *val = vv->vp_assist.msr.raw;
+        break;
+
+    case HV_X64_MSR_SCONTROL:
+        if ( !(viridian_feature_mask(d) & HVMPV_synic) )
+            return X86EMUL_EXCEPTION;
+
+        *val = vv->scontrol;
+        break;
+
+    case HV_X64_MSR_SVERSION:
+        if ( !(viridian_feature_mask(d) & HVMPV_synic) )
+            return X86EMUL_EXCEPTION;
+
+        /*
+         * The specification says that the version number is 0x00000001
+         * and should be in the lower 32-bits of the MSR, while the
+         * upper 32-bits are reserved... but it doesn't say what they
+         * should be set to. Assume everything but the bottom bit
+         * should be zero.
+         */
+        *val = 1ul;
+        break;
+
+    case HV_X64_MSR_SIEFP:
+        if ( !(viridian_feature_mask(d) & HVMPV_synic) )
+            return X86EMUL_EXCEPTION;
+
+        *val = vv->siefp;
+        break;
+
+    case HV_X64_MSR_SIMP:
+        if ( !(viridian_feature_mask(d) & HVMPV_synic) )
+            return X86EMUL_EXCEPTION;
+
+        *val = vv->simp.msr.raw;
         break;

+    case HV_X64_MSR_EOM:
+        if ( !(viridian_feature_mask(d) & HVMPV_synic) )
+            return X86EMUL_EXCEPTION;
+
+        *val = 0;
+        break;
+
+    case HV_X64_MSR_SINT0 ... HV_X64_MSR_SINT15:
+    {
+        unsigned int sintx = idx - HV_X64_MSR_SINT0;
+        const union viridian_sint_msr *vs =
+            &array_access_nospec(vv->sint, sintx);
+
+        if ( !(viridian_feature_mask(d) & HVMPV_synic) )
+            return X86EMUL_EXCEPTION;
+
+        *val = vs->raw;
+        break;
+    }
+
     default:
         gdprintk(XENLOG_INFO, "%s: unimplemented MSR %#x\n", __func__, idx);
         return X86EMUL_EXCEPTION;
@@ -149,6 +312,20 @@ int viridian_synic_rdmsr(const struct vcpu *v, uint32_t idx, uint64_t *val)

 int viridian_synic_vcpu_init(const struct vcpu *v)
 {
+    struct viridian_vcpu *vv = v->arch.hvm.viridian;
+    unsigned int i;
+
+    /*
+     * The specification says that all synthetic interrupts must be
+     * initially masked.
+     */
+    for ( i = 0; i < ARRAY_SIZE(vv->sint); i++ )
+        vv->sint[i].mask = 1;
+
+    /* Initialize the mapping array with invalid values */
+    for ( i = 0; i < ARRAY_SIZE(vv->vector_to_sintx); i++ )
+        vv->vector_to_sintx[i] = ARRAY_SIZE(vv->sint);
+
     return 0;
 }

@@ -159,17 +336,59 @@ int viridian_synic_domain_init(const struct domain *d)

 void viridian_synic_vcpu_deinit(const struct vcpu *v)
 {
-    viridian_unmap_guest_page(&v->arch.hvm.viridian->vp_assist);
+    struct viridian_vcpu *vv = v->arch.hvm.viridian;
+
+    viridian_unmap_guest_page(&vv->vp_assist);
+    viridian_unmap_guest_page(&vv->simp);
 }

 void viridian_synic_domain_deinit(const struct domain *d)
 {
 }

+void viridian_synic_poll(const struct vcpu *v)
+{
+    /* There are currently no message sources */
+}
+
+bool viridian_synic_is_auto_eoi_sint(const struct vcpu *v,
+                                     unsigned int vector)
+{
+    const struct viridian_vcpu *vv = v->arch.hvm.viridian;
+    unsigned int sintx = vv->vector_to_sintx[vector];
+    const union viridian_sint_msr *vs =
+        &array_access_nospec(vv->sint, sintx);
+
+    if ( sintx >= ARRAY_SIZE(vv->sint) )
+        return false;
+
+    return vs->auto_eoi;
+}
+
+void viridian_synic_ack_sint(const struct vcpu *v, unsigned int vector)
+{
+    struct viridian_vcpu *vv = v->arch.hvm.viridian;
+    unsigned int sintx = vv->vector_to_sintx[vector];
+
+    ASSERT(v == current);
+
+    if ( sintx < ARRAY_SIZE(vv->sint) )
+        __clear_bit(array_index_nospec(sintx, ARRAY_SIZE(vv->sint)),
+                    &vv->msg_pending);
+}
+
 void viridian_synic_save_vcpu_ctxt(const struct vcpu *v,
                                    struct hvm_viridian_vcpu_context *ctxt)
 {
     const struct viridian_vcpu *vv = v->arch.hvm.viridian;
+    unsigned int i;
+
+    BUILD_BUG_ON(ARRAY_SIZE(vv->sint) != ARRAY_SIZE(ctxt->sint_msr));
+
+    for ( i = 0; i < ARRAY_SIZE(vv->sint); i++ )
+        ctxt->sint_msr[i] = vv->sint[i].raw;
+
+    ctxt->simp_msr = vv->simp.msr.raw;

     ctxt->apic_assist_pending = vv->apic_assist_pending;
     ctxt->vp_assist_msr = vv->vp_assist.msr.raw;
@@ -180,12 +399,30 @@ void viridian_synic_load_vcpu_ctxt(
 {
     struct viridian_vcpu *vv = v->arch.hvm.viridian;
     struct domain *d = v->domain;
+    unsigned int i;

     vv->vp_assist.msr.raw = ctxt->vp_assist_msr;
     if ( vv->vp_assist.msr.enabled )
         viridian_map_guest_page(d, &vv->vp_assist);

     vv->apic_assist_pending = ctxt->apic_assist_pending;
+
+    vv->simp.msr.raw = ctxt->simp_msr;
+    if ( vv->simp.msr.enabled )
+        viridian_map_guest_page(d, &vv->simp);
+
+    for ( i = 0; i < ARRAY_SIZE(vv->sint); i++ )
+    {
+        uint8_t vector;
+
+        vv->sint[i].raw = ctxt->sint_msr[i];
+
+        vector = vv->sint[i].vector;
+        if ( vector < 0x10 )
+            continue;
+
+        vv->vector_to_sintx[vector] = i;
+    }
 }

 void viridian_synic_save_domain_ctxt(

diff --git a/xen/arch/x86/hvm/viridian/viridian.c b/xen/arch/x86/hvm/viridian/viridian.c
index 2b045ed88f..f3166fbcd0 100644
--- a/xen/arch/x86/hvm/viridian/viridian.c
+++ b/xen/arch/x86/hvm/viridian/viridian.c
@@ -89,6 +89,7 @@ typedef union _HV_CRASH_CTL_REG_CONTENTS

 /* Viridian CPUID leaf 3, Hypervisor Feature Indication */
 #define CPUID3D_CRASH_MSRS (1 << 10)
+#define CPUID3D_SINT_POLLING (1 << 17)

 /* Viridian CPUID leaf 4: Implementation Recommendations. */
 #define CPUID4A_HCALL_REMOTE_TLB_FLUSH (1 << 2)
@@ -178,6 +179,8 @@ void cpuid_viridian_leaves(const struct vcpu *v, uint32_t leaf,
             mask.AccessPartitionReferenceCounter = 1;
         if ( viridian_feature_mask(d) & HVMPV_reference_tsc )
             mask.AccessPartitionReferenceTsc = 1;
+        if ( viridian_feature_mask(d) & HVMPV_synic )
+            mask.AccessSynicRegs = 1;

         u.mask = mask;

@@ -186,6 +189,8 @@ void cpuid_viridian_leaves(const struct vcpu *v, uint32_t leaf,

         if ( viridian_feature_mask(d) & HVMPV_crash_ctl )
             res->d = CPUID3D_CRASH_MSRS;
+        if ( viridian_feature_mask(d) & HVMPV_synic )
+            res->d |= CPUID3D_SINT_POLLING;

         break;
     }
@@ -306,8 +311,16 @@ int guest_wrmsr_viridian(struct vcpu *v, uint32_t idx, uint64_t val)
     case HV_X64_MSR_ICR:
     case HV_X64_MSR_TPR:
     case HV_X64_MSR_VP_ASSIST_PAGE:
+    case HV_X64_MSR_SCONTROL:
+    case HV_X64_MSR_SVERSION:
+    case HV_X64_MSR_SIEFP:
+    case HV_X64_MSR_SIMP:
+    case HV_X64_MSR_EOM:
+    case HV_X64_MSR_SINT0 ... HV_X64_MSR_SINT15:
         return viridian_synic_wrmsr(v, idx, val);

+    case HV_X64_MSR_TSC_FREQUENCY:
+    case HV_X64_MSR_APIC_FREQUENCY:
     case HV_X64_MSR_REFERENCE_TSC:
         return viridian_time_wrmsr(v, idx, val);

@@ -378,6 +391,12 @@ int guest_rdmsr_viridian(const struct vcpu *v, uint32_t idx, uint64_t *val)
     case HV_X64_MSR_ICR:
     case HV_X64_MSR_TPR:
     case HV_X64_MSR_VP_ASSIST_PAGE:
+    case HV_X64_MSR_SCONTROL:
+    case HV_X64_MSR_SVERSION:
+    case HV_X64_MSR_SIEFP:
+    case HV_X64_MSR_SIMP:
+    case HV_X64_MSR_EOM:
+    case HV_X64_MSR_SINT0 ... HV_X64_MSR_SINT15:
         return viridian_synic_rdmsr(v, idx, val);

     case HV_X64_MSR_TSC_FREQUENCY:

diff --git a/xen/arch/x86/hvm/vlapic.c b/xen/arch/x86/hvm/vlapic.c
index a1a43cd792..24e8e63c4f 100644
--- a/xen/arch/x86/hvm/vlapic.c
+++ b/xen/arch/x86/hvm/vlapic.c
@@ -461,10 +461,15 @@ void vlapic_EOI_set(struct vlapic *vlapic)

 void vlapic_handle_EOI(struct vlapic *vlapic, u8 vector)
 {
-    struct domain *d = vlapic_domain(vlapic);
+    struct vcpu *v = vlapic_vcpu(vlapic);
+    struct domain *d = v->domain;
+
+    /* All synic SINTx vectors are edge triggered */

     if ( vlapic_test_vector(vector, &vlapic->regs->data[APIC_TMR]) )
         vioapic_update_EOI(d, vector);
+    else if ( has_viridian_synic(d) )
+        viridian_synic_ack_sint(v, vector);

     hvm_dpci_msi_eoi(d, vector);
 }
@@ -1301,6 +1306,13 @@ int vlapic_has_pending_irq(struct vcpu *v)
     if ( !vlapic_enabled(vlapic) )
         return -1;

+    /*
+     * Poll the viridian message queues before checking the IRR since
+     * a synthetic interrupt may be asserted during the poll.
+ */ + if ( has_viridian_synic(v->domain) ) + viridian_synic_poll(v); + irr = vlapic_find_highest_irr(vlapic); if ( irr == -1 ) return -1; @@ -1360,8 +1372,12 @@ int vlapic_ack_pending_irq(struct vcpu *v, int vector, bool_t force_ack) } done: - vlapic_set_vector(vector, &vlapic->regs->data[APIC_ISR]); + if ( !has_viridian_synic(v->domain) || + !viridian_synic_is_auto_eoi_sint(v, vector) ) + vlapic_set_vector(vector, &vlapic->regs->data[APIC_ISR]); + vlapic_clear_irr(vector, vlapic); + return 1; } diff --git a/xen/include/asm-x86/hvm/hvm.h b/xen/include/asm-x86/hvm/hvm.h index 37c3567a57..f67e9dbd12 100644 --- a/xen/include/asm-x86/hvm/hvm.h +++ b/xen/include/asm-x86/hvm/hvm.h @@ -472,6 +472,9 @@ static inline bool hvm_get_guest_bndcfgs(struct vcpu *v, u64 *val) #define has_viridian_apic_assist(d) \ (is_viridian_domain(d) && (viridian_feature_mask(d) & HVMPV_apic_assist)) +#define has_viridian_synic(d) \ + (is_viridian_domain(d) && (viridian_feature_mask(d) & HVMPV_synic)) + static inline void hvm_inject_exception( unsigned int vector, unsigned int type, unsigned int insn_len, int error_code) diff --git a/xen/include/asm-x86/hvm/viridian.h b/xen/include/asm-x86/hvm/viridian.h index 8146e2fc46..03fc4c6b76 100644 --- a/xen/include/asm-x86/hvm/viridian.h +++ b/xen/include/asm-x86/hvm/viridian.h @@ -26,10 +26,31 @@ struct viridian_page void *ptr; }; +union viridian_sint_msr +{ + uint64_t raw; + struct + { + uint64_t vector:8; + uint64_t reserved_preserved1:8; + uint64_t mask:1; + uint64_t auto_eoi:1; + uint64_t polling:1; + uint64_t reserved_preserved2:45; + }; +}; + struct viridian_vcpu { struct viridian_page vp_assist; bool apic_assist_pending; + bool polled; + unsigned int msg_pending; + uint64_t scontrol; + uint64_t siefp; + struct viridian_page simp; + union viridian_sint_msr sint[16]; + uint8_t vector_to_sintx[256]; uint64_t crash_param[5]; }; @@ -90,6 +111,11 @@ void viridian_apic_assist_set(const struct vcpu *v); bool viridian_apic_assist_completed(const struct vcpu *v); void viridian_apic_assist_clear(const struct vcpu *v); +void viridian_synic_poll(const struct vcpu *v); +bool viridian_synic_is_auto_eoi_sint(const struct vcpu *v, + unsigned int vector); +void viridian_synic_ack_sint(const struct vcpu *v, unsigned int vector); + #endif /* __ASM_X86_HVM_VIRIDIAN_H__ */ /* diff --git a/xen/include/public/arch-x86/hvm/save.h b/xen/include/public/arch-x86/hvm/save.h index 40be84ecda..ec3e4df12c 100644 --- a/xen/include/public/arch-x86/hvm/save.h +++ b/xen/include/public/arch-x86/hvm/save.h @@ -602,6 +602,8 @@ struct hvm_viridian_vcpu_context { uint64_t vp_assist_msr; uint8_t apic_assist_pending; uint8_t _pad[7]; + uint64_t simp_msr; + uint64_t sint_msr[16]; }; DECLARE_HVM_SAVE_TYPE(VIRIDIAN_VCPU, 17, struct hvm_viridian_vcpu_context); diff --git a/xen/include/public/hvm/params.h b/xen/include/public/hvm/params.h index 72f633ef2d..e7e3c7c892 100644 --- a/xen/include/public/hvm/params.h +++ b/xen/include/public/hvm/params.h @@ -146,6 +146,10 @@ #define _HVMPV_crash_ctl 6 #define HVMPV_crash_ctl (1 << _HVMPV_crash_ctl) +/* Enable SYNIC MSRs */ +#define _HVMPV_synic 7 +#define HVMPV_synic (1 << _HVMPV_synic) + #define HVMPV_feature_mask \ (HVMPV_base_freq | \ HVMPV_no_freq | \ @@ -153,7 +157,8 @@ HVMPV_reference_tsc | \ HVMPV_hcall_remote_tlb_flush | \ HVMPV_apic_assist | \ - HVMPV_crash_ctl) + HVMPV_crash_ctl | \ + HVMPV_synic) #endif
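A note on the mechanism above: the EOI and auto-EOI paths rely on the per-vCPU vector_to_sintx[] reverse map, which holds ARRAY_SIZE(sint) (i.e. 16) for any vector not claimed by a SINT, so a single bounds check doubles as an ownership test. A minimal standalone sketch of that lookup, with illustrative names rather than the patch's own:

    #include <stdbool.h>
    #include <stdint.h>

    #define NUM_SINTS 16u

    struct synic_state {
        /* 16 (== NUM_SINTS) means "vector not owned by any SINT" */
        uint8_t vector_to_sintx[256];
        struct { bool auto_eoi; } sint[NUM_SINTS];
    };

    /* True iff 'vector' maps to a SINT programmed for auto-EOI. */
    static bool is_auto_eoi(const struct synic_state *s, uint8_t vector)
    {
        unsigned int sintx = s->vector_to_sintx[vector];

        return sintx < NUM_SINTS && s->sint[sintx].auto_eoi;
    }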
From patchwork Mon Mar 18 11:20:58 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 8bit X-Patchwork-Submitter: Paul Durrant X-Patchwork-Id: 10857447
From: Paul Durrant To: Date: Mon, 18 Mar 2019 11:20:58 +0000 Message-ID: <20190318112059.21910-11-paul.durrant@citrix.com> X-Mailer: git-send-email 2.20.1 In-Reply-To: <20190318112059.21910-1-paul.durrant@citrix.com> References: <20190318112059.21910-1-paul.durrant@citrix.com> MIME-Version: 1.0 Subject: [Xen-devel] [PATCH v8 10/11] viridian: add implementation of synthetic timers Cc: Stefano Stabellini , Wei Liu , Konrad Rzeszutek Wilk , George Dunlap , Andrew Cooper , Ian Jackson , Tim Deegan , Julien Grall , Paul Durrant , Jan Beulich , Roger Pau Monné This patch introduces an implementation of the STIMER0-15_CONFIG/COUNT MSRs and hence the first SynIC message source. The new (and documented) 'stimer' viridian enlightenment group may be specified to enable this feature. While in the neighbourhood, this patch adds a missing check for an attempt to write the time reference count MSR, which should result in an exception (but not be reported as an unimplemented MSR). NOTE: It is necessary for correct operation that timer expiration and message delivery time-stamping use the same time source as the guest. The specification is ambiguous, but testing with a Windows 10 1803 guest has shown that using the partition reference counter as a source, whilst the guest is using RDTSC and the reference tsc page, does not work correctly. Therefore the time_now() function is used. This implements the algorithm for acquiring partition reference time that is documented in the specification.
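For reference, the computation described in the NOTE reduces to the TLFS formula with a 64.64 fixed-point scale. A standalone sketch, assuming a compiler that provides unsigned __int128 in place of the patch's inline MULQ (names are illustrative):

    #include <stdint.h>

    /* ReferenceTime = ((VirtualTsc * TscScale) >> 64) + TscOffset */
    static uint64_t reference_time(uint64_t tsc, uint32_t tsc_khz,
                                   uint64_t offset)
    {
        /* 100ns units per TSC tick, as a 64.64 fixed-point value */
        uint64_t scale = ((10000ull << 32) / tsc_khz) << 32;

        return (uint64_t)(((unsigned __int128)tsc * scale) >> 64) + offset;
    }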
Signed-off-by: Paul Durrant Acked-by: Wei Liu --- Cc: Ian Jackson Cc: Andrew Cooper Cc: George Dunlap Cc: Jan Beulich Cc: Julien Grall Cc: Konrad Rzeszutek Wilk Cc: Stefano Stabellini Cc: Tim Deegan Cc: "Roger Pau Monné" v8: - Squash in https://lists.xenproject.org/archives/html/xen-devel/2019-03/msg01333.html v7: - Make sure missed count cannot be zero if expiration < now v6: - Stop using the reference tsc page in time_now() - Address further comments from Jan v5: - Fix time_now() to read TSC as the guest would see it v4: - Address comments from Jan v3: - Re-worked missed ticks calculation --- docs/man/xl.cfg.5.pod.in | 12 +- tools/libxl/libxl.h | 6 + tools/libxl/libxl_dom.c | 4 + tools/libxl/libxl_types.idl | 1 + xen/arch/x86/hvm/viridian/private.h | 9 +- xen/arch/x86/hvm/viridian/synic.c | 55 +++- xen/arch/x86/hvm/viridian/time.c | 386 ++++++++++++++++++++++++- xen/arch/x86/hvm/viridian/viridian.c | 5 + xen/include/asm-x86/hvm/viridian.h | 32 +- xen/include/public/arch-x86/hvm/save.h | 2 + xen/include/public/hvm/params.h | 7 +- 11 files changed, 506 insertions(+), 13 deletions(-) diff --git a/docs/man/xl.cfg.5.pod.in b/docs/man/xl.cfg.5.pod.in index ad81af1ed8..355c654693 100644 --- a/docs/man/xl.cfg.5.pod.in +++ b/docs/man/xl.cfg.5.pod.in @@ -2167,11 +2167,19 @@ This group incorporates the crash control MSRs. These enlightenments allow Windows to write crash information such that it can be logged by Xen. +=item B<stimer> + +This set incorporates the SynIC and synthetic timer MSRs. Windows will +use synthetic timers in preference to emulated HPET for a source of +ticks and hence enabling this group will ensure that ticks will be +consistent with use of an enlightened time source (B<time_ref_count> or +B<reference_tsc>). + =item B<defaults> This is a special value that enables the default set of groups, which -is currently the B<base>, B<freq>, B<time_ref_count>, B<apic_assist> -and B<crash_ctl> groups. +is currently the B<base>, B<freq>, B<time_ref_count>, B<apic_assist>, +B<crash_ctl> and B<stimer> groups. =item B<all> diff --git a/tools/libxl/libxl.h b/tools/libxl/libxl.h index a923a380d3..c8f219b0d3 100644 --- a/tools/libxl/libxl.h +++ b/tools/libxl/libxl.h @@ -324,6 +324,12 @@ */ #define LIBXL_HAVE_VIRIDIAN_SYNIC 1 +/* + * LIBXL_HAVE_VIRIDIAN_STIMER indicates that the 'stimer' value + * is present in the viridian enlightenment enumeration. + */ +#define LIBXL_HAVE_VIRIDIAN_STIMER 1 + /* * LIBXL_HAVE_BUILDINFO_HVM_ACPI_LAPTOP_SLATE indicates that * libxl_domain_build_info has the u.hvm.acpi_laptop_slate field.
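As with the existing LIBXL_HAVE_* constants, applications are expected to test for the new define at compile time before using the enumeration value. A hypothetical caller, assuming 'info' points to a libxl_domain_build_info whose viridian_enable bitmap has already been allocated:

    #ifdef LIBXL_HAVE_VIRIDIAN_STIMER
        /* Request synthetic timers; implies synic and time_ref_count. */
        libxl_bitmap_set(&info->u.hvm.viridian_enable,
                         LIBXL_VIRIDIAN_ENLIGHTENMENT_STIMER);
    #endif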
diff --git a/tools/libxl/libxl_dom.c b/tools/libxl/libxl_dom.c index fb758d2ac3..2ee0f82ee7 100644 --- a/tools/libxl/libxl_dom.c +++ b/tools/libxl/libxl_dom.c @@ -269,6 +269,7 @@ static int hvm_set_viridian_features(libxl__gc *gc, uint32_t domid, libxl_bitmap_set(&enlightenments, LIBXL_VIRIDIAN_ENLIGHTENMENT_TIME_REF_COUNT); libxl_bitmap_set(&enlightenments, LIBXL_VIRIDIAN_ENLIGHTENMENT_APIC_ASSIST); libxl_bitmap_set(&enlightenments, LIBXL_VIRIDIAN_ENLIGHTENMENT_CRASH_CTL); + libxl_bitmap_set(&enlightenments, LIBXL_VIRIDIAN_ENLIGHTENMENT_STIMER); } libxl_for_each_set_bit(v, info->u.hvm.viridian_enable) { @@ -320,6 +321,9 @@ static int hvm_set_viridian_features(libxl__gc *gc, uint32_t domid, if (libxl_bitmap_test(&enlightenments, LIBXL_VIRIDIAN_ENLIGHTENMENT_SYNIC)) mask |= HVMPV_synic; + if (libxl_bitmap_test(&enlightenments, LIBXL_VIRIDIAN_ENLIGHTENMENT_STIMER)) + mask |= HVMPV_time_ref_count | HVMPV_synic | HVMPV_stimer; + if (mask != 0 && xc_hvm_param_set(CTX->xch, domid, diff --git a/tools/libxl/libxl_types.idl b/tools/libxl/libxl_types.idl index 9860bcaf5f..1cce249de4 100644 --- a/tools/libxl/libxl_types.idl +++ b/tools/libxl/libxl_types.idl @@ -236,6 +236,7 @@ libxl_viridian_enlightenment = Enumeration("viridian_enlightenment", [ (5, "apic_assist"), (6, "crash_ctl"), (7, "synic"), + (8, "stimer"), ]) libxl_hdtype = Enumeration("hdtype", [ diff --git a/xen/arch/x86/hvm/viridian/private.h b/xen/arch/x86/hvm/viridian/private.h index 96a784b840..c272c34cda 100644 --- a/xen/arch/x86/hvm/viridian/private.h +++ b/xen/arch/x86/hvm/viridian/private.h @@ -74,6 +74,11 @@ int viridian_synic_wrmsr(struct vcpu *v, uint32_t idx, uint64_t val); int viridian_synic_rdmsr(const struct vcpu *v, uint32_t idx, uint64_t *val); +bool viridian_synic_deliver_timer_msg(struct vcpu *v, unsigned int sintx, + unsigned int index, + uint64_t expiration, + uint64_t delivery); + int viridian_synic_vcpu_init(const struct vcpu *v); int viridian_synic_domain_init(const struct domain *d); @@ -93,7 +98,9 @@ void viridian_synic_load_domain_ctxt( int viridian_time_wrmsr(struct vcpu *v, uint32_t idx, uint64_t val); int viridian_time_rdmsr(const struct vcpu *v, uint32_t idx, uint64_t *val); -int viridian_time_vcpu_init(const struct vcpu *v); +void viridian_time_poll_timers(struct vcpu *v); + +int viridian_time_vcpu_init(struct vcpu *v); int viridian_time_domain_init(const struct domain *d); void viridian_time_vcpu_deinit(const struct vcpu *v); diff --git a/xen/arch/x86/hvm/viridian/synic.c b/xen/arch/x86/hvm/viridian/synic.c index 84ab02694f..2791021bcc 100644 --- a/xen/arch/x86/hvm/viridian/synic.c +++ b/xen/arch/x86/hvm/viridian/synic.c @@ -346,9 +346,60 @@ void viridian_synic_domain_deinit(const struct domain *d) { } -void viridian_synic_poll(const struct vcpu *v) +void viridian_synic_poll(struct vcpu *v) { - /* There are currently no message sources */ + viridian_time_poll_timers(v); +} + +bool viridian_synic_deliver_timer_msg(struct vcpu *v, unsigned int sintx, + unsigned int index, + uint64_t expiration, + uint64_t delivery) +{ + struct viridian_vcpu *vv = v->arch.hvm.viridian; + const union viridian_sint_msr *vs = &vv->sint[sintx]; + HV_MESSAGE *msg = vv->simp.ptr; + struct { + uint32_t TimerIndex; + uint32_t Reserved; + uint64_t ExpirationTime; + uint64_t DeliveryTime; + } payload = { + .TimerIndex = index, + .ExpirationTime = expiration, + .DeliveryTime = delivery, + }; + + if ( test_bit(sintx, &vv->msg_pending) ) + return false; + + /* + * To avoid using an atomic test-and-set, and barrier before calling + * 
vlapic_set_irq(), this function must be called in the context of the + * vcpu receiving the message. + */ + ASSERT(v == current); + + msg += sintx; + + if ( msg->Header.MessageType != HvMessageTypeNone ) + { + msg->Header.MessageFlags.MessagePending = 1; + __set_bit(sintx, &vv->msg_pending); + return false; + } + + msg->Header.MessageType = HvMessageTimerExpired; + msg->Header.MessageFlags.MessagePending = 0; + msg->Header.PayloadSize = sizeof(payload); + + BUILD_BUG_ON(sizeof(payload) > sizeof(msg->Payload)); + memcpy(msg->Payload, &payload, sizeof(payload)); + + if ( !vs->mask ) + vlapic_set_irq(vcpu_vlapic(v), vs->vector, 0); + + return true; } bool viridian_synic_is_auto_eoi_sint(const struct vcpu *v, diff --git a/xen/arch/x86/hvm/viridian/time.c b/xen/arch/x86/hvm/viridian/time.c index 71291d921c..692f014fc4 100644 --- a/xen/arch/x86/hvm/viridian/time.c +++ b/xen/arch/x86/hvm/viridian/time.c @@ -12,6 +12,7 @@ #include #include +#include #include #include "private.h" @@ -27,8 +28,10 @@ typedef struct _HV_REFERENCE_TSC_PAGE static void update_reference_tsc(struct domain *d, bool initialize) { - const struct viridian_page *rt = &d->arch.hvm.viridian->reference_tsc; + struct viridian_domain *vd = d->arch.hvm.viridian; + const struct viridian_page *rt = &vd->reference_tsc; HV_REFERENCE_TSC_PAGE *p = rt->ptr; + uint32_t seq; if ( initialize ) clear_page(p); @@ -59,6 +62,8 @@ static void update_reference_tsc(struct domain *d, bool initialize) printk(XENLOG_G_INFO "d%d: VIRIDIAN REFERENCE_TSC: invalidated\n", d->domain_id); + + vd->reference_tsc_valid = false; return; } @@ -72,11 +77,14 @@ static void update_reference_tsc(struct domain *d, bool initialize) * ticks per 100ns shifted left by 64. */ p->TscScale = ((10000ul << 32) / d->arch.tsc_khz) << 32; + smp_wmb(); + + seq = p->TscSequence + 1; + if ( seq == 0xFFFFFFFF || seq == 0 ) /* Avoid both 'invalid' values */ + seq = 1; - p->TscSequence++; - if ( p->TscSequence == 0xFFFFFFFF || - p->TscSequence == 0 ) /* Avoid both 'invalid' values */ - p->TscSequence = 1; + p->TscSequence = seq; + vd->reference_tsc_valid = true; } static int64_t raw_trc_val(const struct domain *d) @@ -118,18 +126,250 @@ static int64_t time_ref_count(const struct domain *d) return raw_trc_val(d) + trc->off; } +/* + * The specification says: "The partition reference time is computed + * by the following formula: + * + * ReferenceTime = ((VirtualTsc * TscScale) >> 64) + TscOffset + * + * The multiplication is a 64 bit multiplication, which results in a + * 128 bit number which is then shifted 64 times to the right to obtain + * the high 64 bits." + */ +static uint64_t scale_tsc(uint64_t tsc, uint64_t scale, uint64_t offset) +{ + uint64_t result; + + /* + * Quadword MUL takes an implicit operand in RAX, and puts the result + * in RDX:RAX. Because we only want the result of the multiplication + * after shifting right by 64 bits, we therefore only need the content + * of RDX. + */ + asm ( "mulq %[scale]" + : "+a" (tsc), "=d" (result) + : [scale] "rm" (scale) ); + + return result + offset; +} + +static uint64_t time_now(struct domain *d) +{ + uint64_t tsc, scale; + + /* + * If the reference TSC page is not enabled, or has been invalidated, + * fall back to the partition reference counter. 
+ */ + if ( !d->arch.hvm.viridian->reference_tsc_valid ) + return time_ref_count(d); + + /* Otherwise compute reference time in the same way the guest would */ + tsc = hvm_get_guest_tsc(pt_global_vcpu_target(d)); + scale = ((10000ul << 32) / d->arch.tsc_khz) << 32; + + return scale_tsc(tsc, scale, 0); +} + +static void stop_stimer(struct viridian_stimer *vs) +{ + const struct vcpu *v = vs->v; + struct viridian_vcpu *vv = v->arch.hvm.viridian; + unsigned int stimerx = vs - &vv->stimer[0]; + + if ( !vs->started ) + return; + + stop_timer(&vs->timer); + clear_bit(stimerx, &vv->stimer_pending); + vs->started = false; +} + +static void stimer_expire(void *data) +{ + struct viridian_stimer *vs = data; + struct vcpu *v = vs->v; + struct viridian_vcpu *vv = v->arch.hvm.viridian; + unsigned int stimerx = vs - &vv->stimer[0]; + + if ( !vs->config.enabled ) + return; + + set_bit(stimerx, &vv->stimer_pending); + vcpu_kick(v); + + if ( !vs->config.periodic ) + vs->config.enabled = 0; +} + +static void start_stimer(struct viridian_stimer *vs) +{ + const struct vcpu *v = vs->v; + struct viridian_vcpu *vv = v->arch.hvm.viridian; + unsigned int stimerx = vs - &vv->stimer[0]; + int64_t now = time_now(v->domain); + int64_t expiration; + s_time_t timeout; + + if ( !test_and_set_bit(stimerx, &vv->stimer_enabled) ) + printk(XENLOG_G_INFO "%pv: VIRIDIAN STIMER%u: enabled\n", v, + stimerx); + + if ( vs->config.periodic ) + { + /* + * The specification says that if the timer is lazy then we + * skip over any missed expirations so we can treat this case + * as the same as if the timer is currently stopped, i.e. we + * just schedule expiration to be 'count' ticks from now. + */ + if ( !vs->started || vs->config.lazy ) + expiration = now + vs->count; + else + { + unsigned int missed = 0; + + /* + * The timer is already started, so we're re-scheduling. + * Hence advance the timer expiration by one tick. + */ + expiration = vs->expiration + vs->count; + + /* Now check to see if any expirations have been missed */ + if ( expiration - now <= 0 ) + missed = ((now - expiration) / vs->count) + 1; + + /* + * The specification says that if the timer is not lazy then + * a non-zero missed count should be used to reduce the period + * of the timer until it catches up, unless the count has + * reached a 'significant number', in which case the timer + * should be treated as lazy. Unfortunately the specification + * does not state what that number is so the choice of number + * here is a pure guess. 
+ */ + if ( missed > 3 ) + expiration = now + vs->count; + else if ( missed ) + expiration = now + (vs->count / missed); + } + } + else + { + expiration = vs->count; + if ( expiration - now <= 0 ) + { + vs->expiration = expiration; + stimer_expire(vs); + return; + } + } + ASSERT(expiration - now > 0); + + vs->expiration = expiration; + timeout = (expiration - now) * 100ull; + + vs->started = true; + migrate_timer(&vs->timer, smp_processor_id()); + set_timer(&vs->timer, timeout + NOW()); +} + +static void poll_stimer(struct vcpu *v, unsigned int stimerx) +{ + struct viridian_vcpu *vv = v->arch.hvm.viridian; + struct viridian_stimer *vs = &vv->stimer[stimerx]; + + if ( !test_bit(stimerx, &vv->stimer_pending) ) + return; + + if ( !viridian_synic_deliver_timer_msg(v, vs->config.sintx, + stimerx, vs->expiration, + time_now(v->domain)) ) + return; + + clear_bit(stimerx, &vv->stimer_pending); + + if ( vs->config.enabled ) + start_stimer(vs); +} + +void viridian_time_poll_timers(struct vcpu *v) +{ + struct viridian_vcpu *vv = v->arch.hvm.viridian; + unsigned int i; + + if ( !vv->stimer_pending ) + return; + + for ( i = 0; i < ARRAY_SIZE(vv->stimer); i++ ) + poll_stimer(v, i); +} + +void viridian_time_vcpu_freeze(struct vcpu *v) +{ + struct viridian_vcpu *vv = v->arch.hvm.viridian; + unsigned int i; + + if ( !is_viridian_vcpu(v) || + !(viridian_feature_mask(v->domain) & HVMPV_stimer) ) + return; + + for ( i = 0; i < ARRAY_SIZE(vv->stimer); i++ ) + { + struct viridian_stimer *vs = &vv->stimer[i]; + + if ( vs->started ) + stop_timer(&vs->timer); + } +} + +void viridian_time_vcpu_thaw(struct vcpu *v) +{ + struct viridian_vcpu *vv = v->arch.hvm.viridian; + unsigned int i; + + if ( !is_viridian_vcpu(v) || + !(viridian_feature_mask(v->domain) & HVMPV_stimer) ) + return; + + for ( i = 0; i < ARRAY_SIZE(vv->stimer); i++ ) + { + struct viridian_stimer *vs = &vv->stimer[i]; + + if ( vs->config.enabled ) + start_stimer(vs); + } +} + void viridian_time_domain_freeze(const struct domain *d) { + struct vcpu *v; + + if ( !is_viridian_domain(d) ) + return; + + for_each_vcpu ( d, v ) + viridian_time_vcpu_freeze(v); + time_ref_count_freeze(d); } void viridian_time_domain_thaw(const struct domain *d) { + struct vcpu *v; + + if ( !is_viridian_domain(d) ) + return; + time_ref_count_thaw(d); + + for_each_vcpu ( d, v ) + viridian_time_vcpu_thaw(v); } int viridian_time_wrmsr(struct vcpu *v, uint32_t idx, uint64_t val) { + struct viridian_vcpu *vv = v->arch.hvm.viridian; struct domain *d = v->domain; struct viridian_domain *vd = d->arch.hvm.viridian; @@ -149,6 +389,61 @@ int viridian_time_wrmsr(struct vcpu *v, uint32_t idx, uint64_t val) } break; + case HV_X64_MSR_TIME_REF_COUNT: + return X86EMUL_EXCEPTION; + + case HV_X64_MSR_STIMER0_CONFIG: + case HV_X64_MSR_STIMER1_CONFIG: + case HV_X64_MSR_STIMER2_CONFIG: + case HV_X64_MSR_STIMER3_CONFIG: + { + unsigned int stimerx = (idx - HV_X64_MSR_STIMER0_CONFIG) / 2; + struct viridian_stimer *vs = + &array_access_nospec(vv->stimer, stimerx); + + if ( !(viridian_feature_mask(d) & HVMPV_stimer) ) + return X86EMUL_EXCEPTION; + + stop_stimer(vs); + + vs->config.raw = val; + + if ( !vs->config.sintx ) + vs->config.enabled = 0; + + if ( vs->config.enabled ) + start_stimer(vs); + + break; + } + + case HV_X64_MSR_STIMER0_COUNT: + case HV_X64_MSR_STIMER1_COUNT: + case HV_X64_MSR_STIMER2_COUNT: + case HV_X64_MSR_STIMER3_COUNT: + { + unsigned int stimerx = (idx - HV_X64_MSR_STIMER0_CONFIG) / 2; + struct viridian_stimer *vs = + &array_access_nospec(vv->stimer, stimerx); + + if ( 
!(viridian_feature_mask(d) & HVMPV_stimer) ) + return X86EMUL_EXCEPTION; + + stop_stimer(vs); + + vs->count = val; + + if ( !vs->count ) + vs->config.enabled = 0; + else if ( vs->config.auto_enable ) + vs->config.enabled = 1; + + if ( vs->config.enabled ) + start_stimer(vs); + + break; + } + default: gdprintk(XENLOG_INFO, "%s: unimplemented MSR %#x (%016"PRIx64")\n", __func__, idx, val); @@ -160,6 +455,7 @@ int viridian_time_wrmsr(struct vcpu *v, uint32_t idx, uint64_t val) int viridian_time_rdmsr(const struct vcpu *v, uint32_t idx, uint64_t *val) { + const struct viridian_vcpu *vv = v->arch.hvm.viridian; const struct domain *d = v->domain; struct viridian_domain *vd = d->arch.hvm.viridian; @@ -201,6 +497,38 @@ int viridian_time_rdmsr(const struct vcpu *v, uint32_t idx, uint64_t *val) break; } + case HV_X64_MSR_STIMER0_CONFIG: + case HV_X64_MSR_STIMER1_CONFIG: + case HV_X64_MSR_STIMER2_CONFIG: + case HV_X64_MSR_STIMER3_CONFIG: + { + unsigned int stimerx = (idx - HV_X64_MSR_STIMER0_CONFIG) / 2; + const struct viridian_stimer *vs = + &array_access_nospec(vv->stimer, stimerx); + + if ( !(viridian_feature_mask(d) & HVMPV_stimer) ) + return X86EMUL_EXCEPTION; + + *val = vs->config.raw; + break; + } + + case HV_X64_MSR_STIMER0_COUNT: + case HV_X64_MSR_STIMER1_COUNT: + case HV_X64_MSR_STIMER2_COUNT: + case HV_X64_MSR_STIMER3_COUNT: + { + unsigned int stimerx = (idx - HV_X64_MSR_STIMER0_CONFIG) / 2; + const struct viridian_stimer *vs = + &array_access_nospec(vv->stimer, stimerx); + + if ( !(viridian_feature_mask(d) & HVMPV_stimer) ) + return X86EMUL_EXCEPTION; + + *val = vs->count; + break; + } + default: gdprintk(XENLOG_INFO, "%s: unimplemented MSR %#x\n", __func__, idx); return X86EMUL_EXCEPTION; @@ -209,8 +537,19 @@ int viridian_time_rdmsr(const struct vcpu *v, uint32_t idx, uint64_t *val) return X86EMUL_OKAY; } -int viridian_time_vcpu_init(const struct vcpu *v) +int viridian_time_vcpu_init(struct vcpu *v) { + struct viridian_vcpu *vv = v->arch.hvm.viridian; + unsigned int i; + + for ( i = 0; i < ARRAY_SIZE(vv->stimer); i++ ) + { + struct viridian_stimer *vs = &vv->stimer[i]; + + vs->v = v; + init_timer(&vs->timer, stimer_expire, vs, v->processor); + } + return 0; } @@ -221,6 +560,16 @@ int viridian_time_domain_init(const struct domain *d) void viridian_time_vcpu_deinit(const struct vcpu *v) { + struct viridian_vcpu *vv = v->arch.hvm.viridian; + unsigned int i; + + for ( i = 0; i < ARRAY_SIZE(vv->stimer); i++ ) + { + struct viridian_stimer *vs = &vv->stimer[i]; + + kill_timer(&vs->timer); + vs->v = NULL; + } } void viridian_time_domain_deinit(const struct domain *d) @@ -231,11 +580,36 @@ void viridian_time_domain_deinit(const struct domain *d) void viridian_time_save_vcpu_ctxt( const struct vcpu *v, struct hvm_viridian_vcpu_context *ctxt) { + const struct viridian_vcpu *vv = v->arch.hvm.viridian; + unsigned int i; + + BUILD_BUG_ON(ARRAY_SIZE(vv->stimer) != + ARRAY_SIZE(ctxt->stimer_config_msr)); + BUILD_BUG_ON(ARRAY_SIZE(vv->stimer) != + ARRAY_SIZE(ctxt->stimer_count_msr)); + + for ( i = 0; i < ARRAY_SIZE(vv->stimer); i++ ) + { + const struct viridian_stimer *vs = &vv->stimer[i]; + + ctxt->stimer_config_msr[i] = vs->config.raw; + ctxt->stimer_count_msr[i] = vs->count; + } } void viridian_time_load_vcpu_ctxt( struct vcpu *v, const struct hvm_viridian_vcpu_context *ctxt) { + struct viridian_vcpu *vv = v->arch.hvm.viridian; + unsigned int i; + + for ( i = 0; i < ARRAY_SIZE(vv->stimer); i++ ) + { + struct viridian_stimer *vs = &vv->stimer[i]; + + vs->config.raw = ctxt->stimer_config_msr[i]; + 
vs->count = ctxt->stimer_count_msr[i]; + } } void viridian_time_save_domain_ctxt( diff --git a/xen/arch/x86/hvm/viridian/viridian.c b/xen/arch/x86/hvm/viridian/viridian.c index f3166fbcd0..dce648bb4e 100644 --- a/xen/arch/x86/hvm/viridian/viridian.c +++ b/xen/arch/x86/hvm/viridian/viridian.c @@ -181,6 +181,8 @@ void cpuid_viridian_leaves(const struct vcpu *v, uint32_t leaf, mask.AccessPartitionReferenceTsc = 1; if ( viridian_feature_mask(d) & HVMPV_synic ) mask.AccessSynicRegs = 1; + if ( viridian_feature_mask(d) & HVMPV_stimer ) + mask.AccessSyntheticTimerRegs = 1; u.mask = mask; @@ -322,6 +324,8 @@ int guest_wrmsr_viridian(struct vcpu *v, uint32_t idx, uint64_t val) case HV_X64_MSR_TSC_FREQUENCY: case HV_X64_MSR_APIC_FREQUENCY: case HV_X64_MSR_REFERENCE_TSC: + case HV_X64_MSR_TIME_REF_COUNT: + case HV_X64_MSR_STIMER0_CONFIG ... HV_X64_MSR_STIMER3_COUNT: return viridian_time_wrmsr(v, idx, val); case HV_X64_MSR_CRASH_P0: @@ -403,6 +407,7 @@ int guest_rdmsr_viridian(const struct vcpu *v, uint32_t idx, uint64_t *val) case HV_X64_MSR_APIC_FREQUENCY: case HV_X64_MSR_REFERENCE_TSC: case HV_X64_MSR_TIME_REF_COUNT: + case HV_X64_MSR_STIMER0_CONFIG ... HV_X64_MSR_STIMER3_COUNT: return viridian_time_rdmsr(v, idx, val); case HV_X64_MSR_CRASH_P0: diff --git a/xen/include/asm-x86/hvm/viridian.h b/xen/include/asm-x86/hvm/viridian.h index 03fc4c6b76..54e46cc4c4 100644 --- a/xen/include/asm-x86/hvm/viridian.h +++ b/xen/include/asm-x86/hvm/viridian.h @@ -40,6 +40,32 @@ union viridian_sint_msr }; }; +union viridian_stimer_config_msr +{ + uint64_t raw; + struct + { + uint64_t enabled:1; + uint64_t periodic:1; + uint64_t lazy:1; + uint64_t auto_enable:1; + uint64_t vector:8; + uint64_t direct_mode:1; + uint64_t reserved_zero1:3; + uint64_t sintx:4; + uint64_t reserved_zero2:44; + }; +}; + +struct viridian_stimer { + struct vcpu *v; + struct timer timer; + union viridian_stimer_config_msr config; + uint64_t count; + uint64_t expiration; + bool started; +}; + struct viridian_vcpu { struct viridian_page vp_assist; @@ -51,6 +77,9 @@ struct viridian_vcpu struct viridian_page simp; union viridian_sint_msr sint[16]; uint8_t vector_to_sintx[256]; + struct viridian_stimer stimer[4]; + unsigned int stimer_enabled; + unsigned int stimer_pending; uint64_t crash_param[5]; }; @@ -87,6 +116,7 @@ struct viridian_domain union viridian_page_msr hypercall_gpa; struct viridian_time_ref_count time_ref_count; struct viridian_page reference_tsc; + bool reference_tsc_valid; }; void cpuid_viridian_leaves(const struct vcpu *v, uint32_t leaf, @@ -111,7 +141,7 @@ void viridian_apic_assist_set(const struct vcpu *v); bool viridian_apic_assist_completed(const struct vcpu *v); void viridian_apic_assist_clear(const struct vcpu *v); -void viridian_synic_poll(const struct vcpu *v); +void viridian_synic_poll(struct vcpu *v); bool viridian_synic_is_auto_eoi_sint(const struct vcpu *v, unsigned int vector); void viridian_synic_ack_sint(const struct vcpu *v, unsigned int vector); diff --git a/xen/include/public/arch-x86/hvm/save.h b/xen/include/public/arch-x86/hvm/save.h index ec3e4df12c..8344aa471f 100644 --- a/xen/include/public/arch-x86/hvm/save.h +++ b/xen/include/public/arch-x86/hvm/save.h @@ -604,6 +604,8 @@ struct hvm_viridian_vcpu_context { uint8_t _pad[7]; uint64_t simp_msr; uint64_t sint_msr[16]; + uint64_t stimer_config_msr[4]; + uint64_t stimer_count_msr[4]; }; DECLARE_HVM_SAVE_TYPE(VIRIDIAN_VCPU, 17, struct hvm_viridian_vcpu_context); diff --git a/xen/include/public/hvm/params.h b/xen/include/public/hvm/params.h index 
e7e3c7c892..e06b0942d0 100644 --- a/xen/include/public/hvm/params.h +++ b/xen/include/public/hvm/params.h @@ -150,6 +150,10 @@ #define _HVMPV_synic 7 #define HVMPV_synic (1 << _HVMPV_synic) +/* Enable STIMER MSRs */ +#define _HVMPV_stimer 8 +#define HVMPV_stimer (1 << _HVMPV_stimer) + #define HVMPV_feature_mask \ (HVMPV_base_freq | \ HVMPV_no_freq | \ @@ -158,7 +162,8 @@ HVMPV_hcall_remote_tlb_flush | \ HVMPV_apic_assist | \ HVMPV_crash_ctl | \ - HVMPV_synic) + HVMPV_synic | \ + HVMPV_stimer) #endif
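The periodic catch-up rule implemented by start_stimer() above can be summarised in isolation as follows (times in 100ns units; 'missed > 3' is the patch's guess at the specification's unspecified "significant number"). This is a sketch of the arithmetic only, not the patch's code:

    #include <stdint.h>

    /* Next expiration of a periodic, non-lazy timer with period 'count'. */
    static uint64_t next_expiration(uint64_t prev, uint64_t count,
                                    uint64_t now)
    {
        uint64_t expiration = prev + count;   /* advance by one period */

        if ( expiration > now )
            return expiration;                /* nothing was missed */

        /* Guaranteed >= 1 here, so the division below is safe. */
        uint64_t missed = ((now - expiration) / count) + 1;

        /* Shorten the period to catch up, unless hopelessly behind. */
        return missed > 3 ? now + count : now + (count / missed);
    }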
From patchwork Mon Mar 18 11:20:59 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 8bit X-Patchwork-Submitter: Paul Durrant X-Patchwork-Id: 10857445 From: Paul Durrant To: Date: Mon, 18 Mar 2019 11:20:59 +0000 Message-ID: <20190318112059.21910-12-paul.durrant@citrix.com> X-Mailer: git-send-email 2.20.1 In-Reply-To: <20190318112059.21910-1-paul.durrant@citrix.com> References: <20190318112059.21910-1-paul.durrant@citrix.com> MIME-Version: 1.0 Subject: [Xen-devel] [PATCH v8 11/11] viridian: add implementation of the HvSendSyntheticClusterIpi hypercall Cc: Stefano Stabellini , Wei Liu , Konrad Rzeszutek Wilk , George Dunlap , Andrew Cooper , Ian Jackson , Tim Deegan , Julien Grall , Paul Durrant , Jan Beulich , Roger Pau Monné This patch adds an implementation of the hypercall as documented in the specification [1], section 10.5.2. This enlightenment, as with others, is advertised by CPUID leaf 0x40000004 and is under the control of a new 'hcall_ipi' option in libxl. If used, this enlightenment should mean the guest only takes a single VMEXIT to issue IPIs to multiple vCPUs rather than the multiple VMEXITs that would result from using the emulated local APIC. [1] https://github.com/MicrosoftDocs/Virtualization-Documentation/raw/live/tlfs/Hypervisor%20Top%20Level%20Functional%20Specification%20v5.0C.pdf Signed-off-by: Paul Durrant Acked-by: Wei Liu Reviewed-by: Jan Beulich --- Cc: Ian Jackson Cc: Andrew Cooper Cc: George Dunlap Cc: Julien Grall Cc: Konrad Rzeszutek Wilk Cc: Stefano Stabellini Cc: Tim Deegan Cc: "Roger Pau Monné" v4: - Address comments from Jan v3: - New in v3 --- docs/man/xl.cfg.5.pod.in | 6 +++ tools/libxl/libxl.h | 6 +++ tools/libxl/libxl_dom.c | 3 ++ tools/libxl/libxl_types.idl | 1 + xen/arch/x86/hvm/viridian/viridian.c | 63 ++++++++++++++++++++++++++++ xen/include/public/hvm/params.h | 7 +++- 6 files changed, 85 insertions(+), 1 deletion(-) diff --git a/docs/man/xl.cfg.5.pod.in b/docs/man/xl.cfg.5.pod.in index 355c654693..c7d70e618b 100644 --- a/docs/man/xl.cfg.5.pod.in +++ b/docs/man/xl.cfg.5.pod.in @@ -2175,6 +2175,12 @@ ticks and hence enabling this group will ensure that ticks will be consistent with use of an enlightened time source (B<time_ref_count> or B<reference_tsc>). 
+=item B<hcall_ipi> + +This set incorporates use of a hypercall for interprocessor interrupts. +This enlightenment may improve performance of Windows guests with multiple +virtual CPUs. + =item B<defaults> This is a special value that enables the default set of groups, which diff --git a/tools/libxl/libxl.h b/tools/libxl/libxl.h index c8f219b0d3..482499a6c0 100644 --- a/tools/libxl/libxl.h +++ b/tools/libxl/libxl.h @@ -330,6 +330,12 @@ */ #define LIBXL_HAVE_VIRIDIAN_STIMER 1 +/* + * LIBXL_HAVE_VIRIDIAN_HCALL_IPI indicates that the 'hcall_ipi' value + * is present in the viridian enlightenment enumeration. + */ +#define LIBXL_HAVE_VIRIDIAN_HCALL_IPI 1 + /* * LIBXL_HAVE_BUILDINFO_HVM_ACPI_LAPTOP_SLATE indicates that * libxl_domain_build_info has the u.hvm.acpi_laptop_slate field. diff --git a/tools/libxl/libxl_dom.c b/tools/libxl/libxl_dom.c index 2ee0f82ee7..879c806139 100644 --- a/tools/libxl/libxl_dom.c +++ b/tools/libxl/libxl_dom.c @@ -324,6 +324,9 @@ static int hvm_set_viridian_features(libxl__gc *gc, uint32_t domid, if (libxl_bitmap_test(&enlightenments, LIBXL_VIRIDIAN_ENLIGHTENMENT_STIMER)) mask |= HVMPV_time_ref_count | HVMPV_synic | HVMPV_stimer; + if (libxl_bitmap_test(&enlightenments, LIBXL_VIRIDIAN_ENLIGHTENMENT_HCALL_IPI)) + mask |= HVMPV_hcall_ipi; + if (mask != 0 && xc_hvm_param_set(CTX->xch, domid, diff --git a/tools/libxl/libxl_types.idl b/tools/libxl/libxl_types.idl index 1cce249de4..cb4702fd7a 100644 --- a/tools/libxl/libxl_types.idl +++ b/tools/libxl/libxl_types.idl @@ -237,6 +237,7 @@ libxl_viridian_enlightenment = Enumeration("viridian_enlightenment", [ (6, "crash_ctl"), (7, "synic"), (8, "stimer"), + (9, "hcall_ipi"), ]) libxl_hdtype = Enumeration("hdtype", [ diff --git a/xen/arch/x86/hvm/viridian/viridian.c b/xen/arch/x86/hvm/viridian/viridian.c index dce648bb4e..4b06b78a27 100644 --- a/xen/arch/x86/hvm/viridian/viridian.c +++ b/xen/arch/x86/hvm/viridian/viridian.c @@ -28,6 +28,7 @@ #define HvFlushVirtualAddressSpace 0x0002 #define HvFlushVirtualAddressList 0x0003 #define HvNotifyLongSpinWait 0x0008 +#define HvSendSyntheticClusterIpi 0x000b #define HvGetPartitionId 0x0046 #define HvExtCallQueryCapabilities 0x8001 @@ -95,6 +96,7 @@ typedef union _HV_CRASH_CTL_REG_CONTENTS #define CPUID4A_HCALL_REMOTE_TLB_FLUSH (1 << 2) #define CPUID4A_MSR_BASED_APIC (1 << 3) #define CPUID4A_RELAX_TIMER_INT (1 << 5) +#define CPUID4A_SYNTHETIC_CLUSTER_IPI (1 << 10) /* Viridian CPUID leaf 6: Implementation HW features detected and in use */ #define CPUID6A_APIC_OVERLAY (1 << 0) @@ -206,6 +208,8 @@ void cpuid_viridian_leaves(const struct vcpu *v, uint32_t leaf, res->a |= CPUID4A_HCALL_REMOTE_TLB_FLUSH; if ( !cpu_has_vmx_apic_reg_virt ) res->a |= CPUID4A_MSR_BASED_APIC; + if ( viridian_feature_mask(d) & HVMPV_hcall_ipi ) + res->a |= CPUID4A_SYNTHETIC_CLUSTER_IPI; /* * This value is the recommended number of attempts to try to @@ -628,6 +632,65 @@ int viridian_hypercall(struct cpu_user_regs *regs) break; } + case HvSendSyntheticClusterIpi: + { + struct vcpu *v; + uint32_t vector; + uint64_t vcpu_mask; + + status = HV_STATUS_INVALID_PARAMETER; + + /* Get input parameters. 
*/ + if ( input.fast ) + { + if ( input_params_gpa >> 32 ) + break; + + vector = input_params_gpa; + vcpu_mask = output_params_gpa; + } + else + { + struct { + uint32_t vector; + uint8_t target_vtl; + uint8_t reserved_zero[3]; + uint64_t vcpu_mask; + } input_params; + + if ( hvm_copy_from_guest_phys(&input_params, input_params_gpa, + sizeof(input_params)) != + HVMTRANS_okay ) + break; + + if ( input_params.target_vtl || + input_params.reserved_zero[0] || + input_params.reserved_zero[1] || + input_params.reserved_zero[2] ) + break; + + vector = input_params.vector; + vcpu_mask = input_params.vcpu_mask; + } + + if ( vector < 0x10 || vector > 0xff ) + break; + + for_each_vcpu ( currd, v ) + { + if ( v->vcpu_id >= (sizeof(vcpu_mask) * 8) ) + break; + + if ( !(vcpu_mask & (1ul << v->vcpu_id)) ) + continue; + + vlapic_set_irq(vcpu_vlapic(v), vector, 0); + } + + status = HV_STATUS_SUCCESS; + break; + } + default: gprintk(XENLOG_WARNING, "unimplemented hypercall %04x\n", input.call_code); diff --git a/xen/include/public/hvm/params.h b/xen/include/public/hvm/params.h index e06b0942d0..36832e4b94 100644 --- a/xen/include/public/hvm/params.h +++ b/xen/include/public/hvm/params.h @@ -154,6 +154,10 @@ #define _HVMPV_stimer 8 #define HVMPV_stimer (1 << _HVMPV_stimer) +/* Use Synthetic Cluster IPI Hypercall */ +#define _HVMPV_hcall_ipi 9 +#define HVMPV_hcall_ipi (1 << _HVMPV_hcall_ipi) + #define HVMPV_feature_mask \ (HVMPV_base_freq | \ HVMPV_no_freq | \ @@ -163,7 +167,8 @@ HVMPV_apic_assist | \ HVMPV_crash_ctl | \ HVMPV_synic | \ - HVMPV_stimer) + HVMPV_stimer | \ + HVMPV_hcall_ipi) #endif
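For reference, the 16-byte input block read by the HvSendSyntheticClusterIpi handler above has the following shape (a sketch mirroring the on-stack struct in the patch; in the 'fast' calling convention the vector and mask arrive in the input and output parameter registers instead of guest memory):

    #include <stdint.h>

    struct hv_ipi_input {
        uint32_t vector;           /* must lie in the range 0x10 - 0xff */
        uint8_t  target_vtl;       /* must be zero here */
        uint8_t  reserved_zero[3]; /* must be zero */
        uint64_t vcpu_mask;        /* bit n targets vCPU n (n < 64 only) */
    };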