From patchwork Mon Mar 18 11:20:51 2019
From: Paul Durrant
Date: Mon, 18 Mar 2019 11:20:51 +0000
Message-ID:
<20190318112059.21910-4-paul.durrant@citrix.com>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20190318112059.21910-1-paul.durrant@citrix.com>
References: <20190318112059.21910-1-paul.durrant@citrix.com>
Subject: [Xen-devel] [PATCH v8 03/11] viridian: use stack variables for viridian_vcpu and viridian_domain...

...where there is more than one dereference inside a function. This
shortens the code and makes it more readable. No functional change.

Signed-off-by: Paul Durrant
Reviewed-by: Jan Beulich
---
Cc: Andrew Cooper
Cc: Wei Liu
Cc: "Roger Pau Monné"

v4:
 - New in v4
---
 xen/arch/x86/hvm/viridian/synic.c    | 49 ++++++++++++++++------------
 xen/arch/x86/hvm/viridian/time.c     | 27 ++++++++-------
 xen/arch/x86/hvm/viridian/viridian.c | 47 +++++++++++++-------------
 3 files changed, 69 insertions(+), 54 deletions(-)

diff --git a/xen/arch/x86/hvm/viridian/synic.c b/xen/arch/x86/hvm/viridian/synic.c
index 28eda7798c..f3d9f7ae74 100644
--- a/xen/arch/x86/hvm/viridian/synic.c
+++ b/xen/arch/x86/hvm/viridian/synic.c
@@ -30,7 +30,8 @@ typedef union _HV_VP_ASSIST_PAGE
 
 void viridian_apic_assist_set(const struct vcpu *v)
 {
-    HV_VP_ASSIST_PAGE *ptr = v->arch.hvm.viridian->vp_assist.ptr;
+    struct viridian_vcpu *vv = v->arch.hvm.viridian;
+    HV_VP_ASSIST_PAGE *ptr = vv->vp_assist.ptr;
 
     if ( !ptr )
         return;
@@ -40,25 +41,25 @@ void viridian_apic_assist_set(const struct vcpu *v)
      * wrong and the VM will most likely hang so force a crash now
      * to make the problem clear.
      */
-    if ( v->arch.hvm.viridian->apic_assist_pending )
+    if ( vv->apic_assist_pending )
         domain_crash(v->domain);
 
-    v->arch.hvm.viridian->apic_assist_pending = true;
+    vv->apic_assist_pending = true;
     ptr->ApicAssist.no_eoi = 1;
 }
 
 bool viridian_apic_assist_completed(const struct vcpu *v)
 {
-    HV_VP_ASSIST_PAGE *ptr = v->arch.hvm.viridian->vp_assist.ptr;
+    struct viridian_vcpu *vv = v->arch.hvm.viridian;
+    HV_VP_ASSIST_PAGE *ptr = vv->vp_assist.ptr;
 
     if ( !ptr )
         return false;
 
-    if ( v->arch.hvm.viridian->apic_assist_pending &&
-         !ptr->ApicAssist.no_eoi )
+    if ( vv->apic_assist_pending && !ptr->ApicAssist.no_eoi )
     {
         /* An EOI has been avoided */
-        v->arch.hvm.viridian->apic_assist_pending = false;
+        vv->apic_assist_pending = false;
         return true;
     }
 
@@ -67,17 +68,20 @@ bool viridian_apic_assist_completed(const struct vcpu *v)
 
 void viridian_apic_assist_clear(const struct vcpu *v)
 {
-    HV_VP_ASSIST_PAGE *ptr = v->arch.hvm.viridian->vp_assist.ptr;
+    struct viridian_vcpu *vv = v->arch.hvm.viridian;
+    HV_VP_ASSIST_PAGE *ptr = vv->vp_assist.ptr;
 
     if ( !ptr )
         return;
 
     ptr->ApicAssist.no_eoi = 0;
-    v->arch.hvm.viridian->apic_assist_pending = false;
+    vv->apic_assist_pending = false;
 }
 
 int viridian_synic_wrmsr(struct vcpu *v, uint32_t idx, uint64_t val)
 {
+    struct viridian_vcpu *vv = v->arch.hvm.viridian;
+
     switch ( idx )
     {
     case HV_X64_MSR_EOI:
@@ -95,12 +99,11 @@ int viridian_synic_wrmsr(struct vcpu *v, uint32_t idx, uint64_t val)
 
     case HV_X64_MSR_VP_ASSIST_PAGE:
         /* release any previous mapping */
-        viridian_unmap_guest_page(&v->arch.hvm.viridian->vp_assist);
-        v->arch.hvm.viridian->vp_assist.msr.raw = val;
-        viridian_dump_guest_page(v, "VP_ASSIST",
-                                 &v->arch.hvm.viridian->vp_assist);
-        if ( v->arch.hvm.viridian->vp_assist.msr.fields.enabled )
-            viridian_map_guest_page(v, &v->arch.hvm.viridian->vp_assist);
+        viridian_unmap_guest_page(&vv->vp_assist);
+        vv->vp_assist.msr.raw = val;
+        viridian_dump_guest_page(v, "VP_ASSIST", &vv->vp_assist);
+        if ( vv->vp_assist.msr.fields.enabled )
+            viridian_map_guest_page(v, &vv->vp_assist);
         break;
 
     default:
@@ -146,18 +149,22 @@ int viridian_synic_rdmsr(const struct vcpu *v, uint32_t idx, uint64_t *val)
 void viridian_synic_save_vcpu_ctxt(const struct vcpu *v,
                                    struct hvm_viridian_vcpu_context *ctxt)
 {
-    ctxt->apic_assist_pending = v->arch.hvm.viridian->apic_assist_pending;
-    ctxt->vp_assist_msr = v->arch.hvm.viridian->vp_assist.msr.raw;
+    const struct viridian_vcpu *vv = v->arch.hvm.viridian;
+
+    ctxt->apic_assist_pending = vv->apic_assist_pending;
+    ctxt->vp_assist_msr = vv->vp_assist.msr.raw;
 }
 
 void viridian_synic_load_vcpu_ctxt(
     struct vcpu *v, const struct hvm_viridian_vcpu_context *ctxt)
 {
-    v->arch.hvm.viridian->vp_assist.msr.raw = ctxt->vp_assist_msr;
-    if ( v->arch.hvm.viridian->vp_assist.msr.fields.enabled )
-        viridian_map_guest_page(v, &v->arch.hvm.viridian->vp_assist);
+    struct viridian_vcpu *vv = v->arch.hvm.viridian;
+
+    vv->vp_assist.msr.raw = ctxt->vp_assist_msr;
+    if ( vv->vp_assist.msr.fields.enabled )
+        viridian_map_guest_page(v, &vv->vp_assist);
 
-    v->arch.hvm.viridian->apic_assist_pending = ctxt->apic_assist_pending;
+    vv->apic_assist_pending = ctxt->apic_assist_pending;
 }
 
 /*
diff --git a/xen/arch/x86/hvm/viridian/time.c b/xen/arch/x86/hvm/viridian/time.c
index a7e94aadf0..76f9612001 100644
--- a/xen/arch/x86/hvm/viridian/time.c
+++ b/xen/arch/x86/hvm/viridian/time.c
@@ -141,6 +141,7 @@ void viridian_time_ref_count_thaw(const struct domain *d)
 int viridian_time_wrmsr(struct vcpu *v, uint32_t idx, uint64_t val)
 {
     struct domain *d = v->domain;
+    struct viridian_domain *vd = d->arch.hvm.viridian;
 
     switch ( idx )
     {
@@ -148,9 +149,9 @@ int viridian_time_wrmsr(struct vcpu *v, uint32_t idx, uint64_t val)
         if ( !(viridian_feature_mask(d) & HVMPV_reference_tsc) )
             return X86EMUL_EXCEPTION;
 
-        d->arch.hvm.viridian->reference_tsc.raw = val;
+        vd->reference_tsc.raw = val;
         dump_reference_tsc(d);
-        if ( d->arch.hvm.viridian->reference_tsc.fields.enabled )
+        if ( vd->reference_tsc.fields.enabled )
             update_reference_tsc(d, true);
         break;
 
@@ -165,7 +166,8 @@ int viridian_time_wrmsr(struct vcpu *v, uint32_t idx, uint64_t val)
 
 int viridian_time_rdmsr(const struct vcpu *v, uint32_t idx, uint64_t *val)
 {
-    struct domain *d = v->domain;
+    const struct domain *d = v->domain;
+    struct viridian_domain *vd = d->arch.hvm.viridian;
 
     switch ( idx )
     {
@@ -187,13 +189,12 @@ int viridian_time_rdmsr(const struct vcpu *v, uint32_t idx, uint64_t *val)
         if ( !(viridian_feature_mask(d) & HVMPV_reference_tsc) )
             return X86EMUL_EXCEPTION;
 
-        *val = d->arch.hvm.viridian->reference_tsc.raw;
+        *val = vd->reference_tsc.raw;
         break;
 
     case HV_X64_MSR_TIME_REF_COUNT:
     {
-        struct viridian_time_ref_count *trc =
-            &d->arch.hvm.viridian->time_ref_count;
+        struct viridian_time_ref_count *trc = &vd->time_ref_count;
 
         if ( !(viridian_feature_mask(d) & HVMPV_time_ref_count) )
             return X86EMUL_EXCEPTION;
@@ -217,17 +218,21 @@ int viridian_time_rdmsr(const struct vcpu *v, uint32_t idx, uint64_t *val)
 void viridian_time_save_domain_ctxt(
     const struct domain *d, struct hvm_viridian_domain_context *ctxt)
 {
-    ctxt->time_ref_count = d->arch.hvm.viridian->time_ref_count.val;
-    ctxt->reference_tsc = d->arch.hvm.viridian->reference_tsc.raw;
+    const struct viridian_domain *vd = d->arch.hvm.viridian;
+
+    ctxt->time_ref_count = vd->time_ref_count.val;
+    ctxt->reference_tsc = vd->reference_tsc.raw;
 }
 
 void viridian_time_load_domain_ctxt(
     struct domain *d, const struct hvm_viridian_domain_context *ctxt)
 {
-    d->arch.hvm.viridian->time_ref_count.val = ctxt->time_ref_count;
-    d->arch.hvm.viridian->reference_tsc.raw = ctxt->reference_tsc;
+    struct viridian_domain *vd = d->arch.hvm.viridian;
+
+    vd->time_ref_count.val = ctxt->time_ref_count;
+    vd->reference_tsc.raw = ctxt->reference_tsc;
 
-    if ( d->arch.hvm.viridian->reference_tsc.fields.enabled )
+    if ( vd->reference_tsc.fields.enabled )
         update_reference_tsc(d, false);
 }
diff --git a/xen/arch/x86/hvm/viridian/viridian.c b/xen/arch/x86/hvm/viridian/viridian.c
index 7839718ef4..710470fed7 100644
--- a/xen/arch/x86/hvm/viridian/viridian.c
+++ b/xen/arch/x86/hvm/viridian/viridian.c
@@ -122,6 +122,7 @@ void cpuid_viridian_leaves(const struct vcpu *v, uint32_t leaf,
                            uint32_t subleaf, struct cpuid_leaf *res)
 {
     const struct domain *d = v->domain;
+    const struct viridian_domain *vd = d->arch.hvm.viridian;
 
     ASSERT(is_viridian_domain(d));
     ASSERT(leaf >= 0x40000000 && leaf < 0x40000100);
@@ -146,7 +147,7 @@ void cpuid_viridian_leaves(const struct vcpu *v, uint32_t leaf,
          * Hypervisor information, but only if the guest has set its
          * own version number.
          */
-        if ( d->arch.hvm.viridian->guest_os_id.raw == 0 )
+        if ( vd->guest_os_id.raw == 0 )
             break;
         res->a = viridian_build;
         res->b = ((uint32_t)viridian_major << 16) | viridian_minor;
@@ -191,8 +192,7 @@ void cpuid_viridian_leaves(const struct vcpu *v, uint32_t leaf,
 
     case 4:
         /* Recommended hypercall usage. */
-        if ( (d->arch.hvm.viridian->guest_os_id.raw == 0) ||
-             (d->arch.hvm.viridian->guest_os_id.fields.os < 4) )
+        if ( vd->guest_os_id.raw == 0 || vd->guest_os_id.fields.os < 4 )
             break;
         res->a = CPUID4A_RELAX_TIMER_INT;
         if ( viridian_feature_mask(d) & HVMPV_hcall_remote_tlb_flush )
@@ -281,21 +281,23 @@ static void enable_hypercall_page(struct domain *d)
 
 int guest_wrmsr_viridian(struct vcpu *v, uint32_t idx, uint64_t val)
 {
+    struct viridian_vcpu *vv = v->arch.hvm.viridian;
     struct domain *d = v->domain;
+    struct viridian_domain *vd = d->arch.hvm.viridian;
 
     ASSERT(is_viridian_domain(d));
 
     switch ( idx )
     {
     case HV_X64_MSR_GUEST_OS_ID:
-        d->arch.hvm.viridian->guest_os_id.raw = val;
+        vd->guest_os_id.raw = val;
         dump_guest_os_id(d);
         break;
 
     case HV_X64_MSR_HYPERCALL:
-        d->arch.hvm.viridian->hypercall_gpa.raw = val;
+        vd->hypercall_gpa.raw = val;
         dump_hypercall(d);
-        if ( d->arch.hvm.viridian->hypercall_gpa.fields.enabled )
+        if ( vd->hypercall_gpa.fields.enabled )
             enable_hypercall_page(d);
         break;
 
@@ -317,10 +319,10 @@ int guest_wrmsr_viridian(struct vcpu *v, uint32_t idx, uint64_t val)
     case HV_X64_MSR_CRASH_P3:
     case HV_X64_MSR_CRASH_P4:
         BUILD_BUG_ON(HV_X64_MSR_CRASH_P4 - HV_X64_MSR_CRASH_P0 >=
-                     ARRAY_SIZE(v->arch.hvm.viridian->crash_param));
+                     ARRAY_SIZE(vv->crash_param));
 
         idx -= HV_X64_MSR_CRASH_P0;
-        v->arch.hvm.viridian->crash_param[idx] = val;
+        vv->crash_param[idx] = val;
         break;
 
     case HV_X64_MSR_CRASH_CTL:
@@ -337,11 +339,8 @@ int guest_wrmsr_viridian(struct vcpu *v, uint32_t idx, uint64_t val)
         spin_unlock(&d->shutdown_lock);
 
         gprintk(XENLOG_WARNING, "VIRIDIAN CRASH: %lx %lx %lx %lx %lx\n",
-                v->arch.hvm.viridian->crash_param[0],
-                v->arch.hvm.viridian->crash_param[1],
-                v->arch.hvm.viridian->crash_param[2],
-                v->arch.hvm.viridian->crash_param[3],
-                v->arch.hvm.viridian->crash_param[4]);
+                vv->crash_param[0], vv->crash_param[1], vv->crash_param[2],
+                vv->crash_param[3], vv->crash_param[4]);
 
         break;
     }
@@ -357,18 +356,20 @@ int guest_wrmsr_viridian(struct vcpu *v, uint32_t idx, uint64_t val)
 
 int guest_rdmsr_viridian(const struct vcpu *v, uint32_t idx, uint64_t *val)
 {
-    struct domain *d = v->domain;
+    const struct viridian_vcpu *vv = v->arch.hvm.viridian;
+    const struct domain *d = v->domain;
+    const struct viridian_domain *vd = d->arch.hvm.viridian;
 
     ASSERT(is_viridian_domain(d));
 
     switch ( idx )
     {
     case HV_X64_MSR_GUEST_OS_ID:
-        *val = d->arch.hvm.viridian->guest_os_id.raw;
+        *val = vd->guest_os_id.raw;
         break;
 
     case HV_X64_MSR_HYPERCALL:
-        *val = d->arch.hvm.viridian->hypercall_gpa.raw;
+        *val = vd->hypercall_gpa.raw;
         break;
 
     case HV_X64_MSR_VP_INDEX:
@@ -393,10 +394,10 @@ int guest_rdmsr_viridian(const struct vcpu *v, uint32_t idx, uint64_t *val)
     case HV_X64_MSR_CRASH_P3:
     case HV_X64_MSR_CRASH_P4:
         BUILD_BUG_ON(HV_X64_MSR_CRASH_P4 - HV_X64_MSR_CRASH_P0 >=
-                     ARRAY_SIZE(v->arch.hvm.viridian->crash_param));
+                     ARRAY_SIZE(vv->crash_param));
 
         idx -= HV_X64_MSR_CRASH_P0;
-        *val = v->arch.hvm.viridian->crash_param[idx];
+        *val = vv->crash_param[idx];
         break;
 
     case HV_X64_MSR_CRASH_CTL:
@@ -665,9 +666,10 @@ static int viridian_save_domain_ctxt(struct vcpu *v,
                                      hvm_domain_context_t *h)
 {
     const struct domain *d = v->domain;
+    const struct viridian_domain *vd = d->arch.hvm.viridian;
     struct hvm_viridian_domain_context ctxt = {
-        .hypercall_gpa = d->arch.hvm.viridian->hypercall_gpa.raw,
-        .guest_os_id = d->arch.hvm.viridian->guest_os_id.raw,
+        .hypercall_gpa = vd->hypercall_gpa.raw,
+        .guest_os_id = vd->guest_os_id.raw,
     };
 
     if ( !is_viridian_domain(d) )
@@ -681,13 +683,14 @@ static int viridian_save_domain_ctxt(struct vcpu *v,
 
 static int viridian_load_domain_ctxt(struct domain *d, hvm_domain_context_t *h)
 {
+    struct viridian_domain *vd = d->arch.hvm.viridian;
    struct hvm_viridian_domain_context ctxt;
 
     if ( hvm_load_entry_zeroextend(VIRIDIAN_DOMAIN, h, &ctxt) != 0 )
         return -EINVAL;
 
-    d->arch.hvm.viridian->hypercall_gpa.raw = ctxt.hypercall_gpa;
-    d->arch.hvm.viridian->guest_os_id.raw = ctxt.guest_os_id;
+    vd->hypercall_gpa.raw = ctxt.hypercall_gpa;
+    vd->guest_os_id.raw = ctxt.guest_os_id;
 
     viridian_time_load_domain_ctxt(d, &ctxt);
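The transformation the patch applies throughout can be sketched in isolation with mock structures (the type and field names below are illustrative stand-ins, not Xen's actual definitions): hoist the repeated pointer chain into one stack-local pointer, then use the short name for every subsequent access.

```c
#include <stdbool.h>

/* Illustrative stand-ins for Xen's vcpu / viridian_vcpu; the real
 * structures are reached via v->arch.hvm.viridian and are far larger. */
struct viridian_vcpu_like {
    bool apic_assist_pending;
    unsigned long crash_param[5];
};

struct vcpu_like {
    struct viridian_vcpu_like *viridian;
};

/* Before: the full pointer chain is repeated on every access,
 * lengthening each line. */
void set_pending_verbose(struct vcpu_like *v)
{
    v->viridian->apic_assist_pending = true;
    v->viridian->crash_param[0] = 1;
}

/* After: one dereference into a stack variable, mirroring the patch's
 * 'struct viridian_vcpu *vv = v->arch.hvm.viridian;' idiom. */
void set_pending_short(struct vcpu_like *v)
{
    struct viridian_vcpu_like *vv = v->viridian;

    vv->apic_assist_pending = true;
    vv->crash_param[0] = 1;
}
```

Both forms behave identically (a compiler will typically generate the same code for either), which is why the commit message can claim "no functional change"; the benefit is purely shorter, more readable source lines.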