From patchwork Fri Aug 7 11:34:12 2020
X-Patchwork-Submitter: Jan Beulich
X-Patchwork-Id: 11705587
Subject: [PATCH v2 5/7] x86: move domain_cpu_policy_changed()
From: Jan Beulich
To: "xen-devel@lists.xenproject.org"
Cc: Andrew Cooper, Wei Liu, Roger Pau Monné
Date: Fri, 7 Aug 2020 13:34:12 +0200
In-Reply-To: <3a8356a9-313c-6de8-f409-977eae1fbfa5@suse.com>
References: <3a8356a9-313c-6de8-f409-977eae1fbfa5@suse.com>

This is in preparation of making the building of domctl.c conditional.

Signed-off-by: Jan Beulich
Acked-by: Andrew Cooper

--- a/xen/arch/x86/domain.c
+++ b/xen/arch/x86/domain.c
@@ -294,6 +294,173 @@ void update_guest_memory_policy(struct v
     }
 }
 
+void domain_cpu_policy_changed(struct domain *d)
+{
+    const struct cpuid_policy *p = d->arch.cpuid;
+    struct vcpu *v;
+
+    if ( is_pv_domain(d) )
+    {
+        if ( ((levelling_caps & LCAP_1cd) == LCAP_1cd) )
+        {
+            uint64_t mask = cpuidmask_defaults._1cd;
+            uint32_t ecx = p->basic._1c;
+            uint32_t edx = p->basic._1d;
+
+            /*
+             * Must expose hosts HTT and X2APIC value so a guest using native
+             * CPUID can correctly interpret other leaves which cannot be
+             * masked.
+             */
+            if ( cpu_has_x2apic )
+                ecx |= cpufeat_mask(X86_FEATURE_X2APIC);
+            if ( cpu_has_htt )
+                edx |= cpufeat_mask(X86_FEATURE_HTT);
+
+            switch ( boot_cpu_data.x86_vendor )
+            {
+            case X86_VENDOR_INTEL:
+                /*
+                 * Intel masking MSRs are documented as AND masks.
+                 * Experimentally, they are applied after OSXSAVE and APIC
+                 * are fast-forwarded from real hardware state.
+                 */
+                mask &= ((uint64_t)edx << 32) | ecx;
+
+                if ( ecx & cpufeat_mask(X86_FEATURE_XSAVE) )
+                    ecx = cpufeat_mask(X86_FEATURE_OSXSAVE);
+                else
+                    ecx = 0;
+                edx = cpufeat_mask(X86_FEATURE_APIC);
+
+                mask |= ((uint64_t)edx << 32) | ecx;
+                break;
+
+            case X86_VENDOR_AMD:
+            case X86_VENDOR_HYGON:
+                mask &= ((uint64_t)ecx << 32) | edx;
+
+                /*
+                 * AMD masking MSRs are documented as overrides.
+                 * Experimentally, fast-forwarding of the OSXSAVE and APIC
+                 * bits from real hardware state only occurs if the MSR has
+                 * the respective bits set.
+                 */
+                if ( ecx & cpufeat_mask(X86_FEATURE_XSAVE) )
+                    ecx = cpufeat_mask(X86_FEATURE_OSXSAVE);
+                else
+                    ecx = 0;
+                edx = cpufeat_mask(X86_FEATURE_APIC);
+
+                /*
+                 * If the Hypervisor bit is set in the policy, we can also
+                 * forward it into real CPUID.
+                 */
+                if ( p->basic.hypervisor )
+                    ecx |= cpufeat_mask(X86_FEATURE_HYPERVISOR);
+
+                mask |= ((uint64_t)ecx << 32) | edx;
+                break;
+            }
+
+            d->arch.pv.cpuidmasks->_1cd = mask;
+        }
+
+        if ( ((levelling_caps & LCAP_6c) == LCAP_6c) )
+        {
+            uint64_t mask = cpuidmask_defaults._6c;
+
+            if ( boot_cpu_data.x86_vendor == X86_VENDOR_AMD )
+                mask &= (~0ULL << 32) | p->basic.raw[6].c;
+
+            d->arch.pv.cpuidmasks->_6c = mask;
+        }
+
+        if ( ((levelling_caps & LCAP_7ab0) == LCAP_7ab0) )
+        {
+            uint64_t mask = cpuidmask_defaults._7ab0;
+
+            /*
+             * Leaf 7[0].eax is max_subleaf, not a feature mask. Take it
+             * wholesale from the policy, but clamp the features in 7[0].ebx
+             * per usual.
+             */
+            if ( boot_cpu_data.x86_vendor &
+                 (X86_VENDOR_AMD | X86_VENDOR_HYGON) )
+                mask = (((uint64_t)p->feat.max_subleaf << 32) |
+                        ((uint32_t)mask & p->feat._7b0));
+
+            d->arch.pv.cpuidmasks->_7ab0 = mask;
+        }
+
+        if ( ((levelling_caps & LCAP_Da1) == LCAP_Da1) )
+        {
+            uint64_t mask = cpuidmask_defaults.Da1;
+            uint32_t eax = p->xstate.Da1;
+
+            if ( boot_cpu_data.x86_vendor == X86_VENDOR_INTEL )
+                mask &= (~0ULL << 32) | eax;
+
+            d->arch.pv.cpuidmasks->Da1 = mask;
+        }
+
+        if ( ((levelling_caps & LCAP_e1cd) == LCAP_e1cd) )
+        {
+            uint64_t mask = cpuidmask_defaults.e1cd;
+            uint32_t ecx = p->extd.e1c;
+            uint32_t edx = p->extd.e1d;
+
+            /*
+             * Must expose hosts CMP_LEGACY value so a guest using native
+             * CPUID can correctly interpret other leaves which cannot be
+             * masked.
+             */
+            if ( cpu_has_cmp_legacy )
+                ecx |= cpufeat_mask(X86_FEATURE_CMP_LEGACY);
+
+            /*
+             * If not emulating AMD or Hygon, clear the duplicated features
+             * in e1d.
+             */
+            if ( !(p->x86_vendor & (X86_VENDOR_AMD | X86_VENDOR_HYGON)) )
+                edx &= ~CPUID_COMMON_1D_FEATURES;
+
+            switch ( boot_cpu_data.x86_vendor )
+            {
+            case X86_VENDOR_INTEL:
+                mask &= ((uint64_t)edx << 32) | ecx;
+                break;
+
+            case X86_VENDOR_AMD:
+            case X86_VENDOR_HYGON:
+                mask &= ((uint64_t)ecx << 32) | edx;
+
+                /*
+                 * Fast-forward bits - Must be set in the masking MSR for
+                 * fast-forwarding to occur in hardware.
+                 */
+                ecx = 0;
+                edx = cpufeat_mask(X86_FEATURE_APIC);
+
+                mask |= ((uint64_t)ecx << 32) | edx;
+                break;
+            }
+
+            d->arch.pv.cpuidmasks->e1cd = mask;
+        }
+    }
+
+    for_each_vcpu ( d, v )
+    {
+        cpuid_policy_updated(v);
+
+        /* If PMU version is zero then the guest doesn't have VPMU */
+        if ( boot_cpu_data.x86_vendor == X86_VENDOR_INTEL &&
+             p->basic.pmu_version == 0 )
+            vpmu_destroy(v);
+    }
+}
+
 #ifndef CONFIG_BIGMEM
 /*
  * The hole may be at or above the 44-bit boundary, so we need to determine
--- a/xen/arch/x86/domctl.c
+++ b/xen/arch/x86/domctl.c
@@ -49,173 +49,6 @@ static int gdbsx_guest_mem_io(domid_t do
 }
 #endif
 
-void domain_cpu_policy_changed(struct domain *d)
-{
-    const struct cpuid_policy *p = d->arch.cpuid;
-    struct vcpu *v;
-
-    if ( is_pv_domain(d) )
-    {
-        if ( ((levelling_caps & LCAP_1cd) == LCAP_1cd) )
-        {
-            uint64_t mask = cpuidmask_defaults._1cd;
-            uint32_t ecx = p->basic._1c;
-            uint32_t edx = p->basic._1d;
-
-            /*
-             * Must expose hosts HTT and X2APIC value so a guest using native
-             * CPUID can correctly interpret other leaves which cannot be
-             * masked.
-             */
-            if ( cpu_has_x2apic )
-                ecx |= cpufeat_mask(X86_FEATURE_X2APIC);
-            if ( cpu_has_htt )
-                edx |= cpufeat_mask(X86_FEATURE_HTT);
-
-            switch ( boot_cpu_data.x86_vendor )
-            {
-            case X86_VENDOR_INTEL:
-                /*
-                 * Intel masking MSRs are documented as AND masks.
-                 * Experimentally, they are applied after OSXSAVE and APIC
-                 * are fast-forwarded from real hardware state.
-                 */
-                mask &= ((uint64_t)edx << 32) | ecx;
-
-                if ( ecx & cpufeat_mask(X86_FEATURE_XSAVE) )
-                    ecx = cpufeat_mask(X86_FEATURE_OSXSAVE);
-                else
-                    ecx = 0;
-                edx = cpufeat_mask(X86_FEATURE_APIC);
-
-                mask |= ((uint64_t)edx << 32) | ecx;
-                break;
-
-            case X86_VENDOR_AMD:
-            case X86_VENDOR_HYGON:
-                mask &= ((uint64_t)ecx << 32) | edx;
-
-                /*
-                 * AMD masking MSRs are documented as overrides.
-                 * Experimentally, fast-forwarding of the OSXSAVE and APIC
-                 * bits from real hardware state only occurs if the MSR has
-                 * the respective bits set.
-                 */
-                if ( ecx & cpufeat_mask(X86_FEATURE_XSAVE) )
-                    ecx = cpufeat_mask(X86_FEATURE_OSXSAVE);
-                else
-                    ecx = 0;
-                edx = cpufeat_mask(X86_FEATURE_APIC);
-
-                /*
-                 * If the Hypervisor bit is set in the policy, we can also
-                 * forward it into real CPUID.
-                 */
-                if ( p->basic.hypervisor )
-                    ecx |= cpufeat_mask(X86_FEATURE_HYPERVISOR);
-
-                mask |= ((uint64_t)ecx << 32) | edx;
-                break;
-            }
-
-            d->arch.pv.cpuidmasks->_1cd = mask;
-        }
-
-        if ( ((levelling_caps & LCAP_6c) == LCAP_6c) )
-        {
-            uint64_t mask = cpuidmask_defaults._6c;
-
-            if ( boot_cpu_data.x86_vendor == X86_VENDOR_AMD )
-                mask &= (~0ULL << 32) | p->basic.raw[6].c;
-
-            d->arch.pv.cpuidmasks->_6c = mask;
-        }
-
-        if ( ((levelling_caps & LCAP_7ab0) == LCAP_7ab0) )
-        {
-            uint64_t mask = cpuidmask_defaults._7ab0;
-
-            /*
-             * Leaf 7[0].eax is max_subleaf, not a feature mask. Take it
-             * wholesale from the policy, but clamp the features in 7[0].ebx
-             * per usual.
-             */
-            if ( boot_cpu_data.x86_vendor &
-                 (X86_VENDOR_AMD | X86_VENDOR_HYGON) )
-                mask = (((uint64_t)p->feat.max_subleaf << 32) |
-                        ((uint32_t)mask & p->feat._7b0));
-
-            d->arch.pv.cpuidmasks->_7ab0 = mask;
-        }
-
-        if ( ((levelling_caps & LCAP_Da1) == LCAP_Da1) )
-        {
-            uint64_t mask = cpuidmask_defaults.Da1;
-            uint32_t eax = p->xstate.Da1;
-
-            if ( boot_cpu_data.x86_vendor == X86_VENDOR_INTEL )
-                mask &= (~0ULL << 32) | eax;
-
-            d->arch.pv.cpuidmasks->Da1 = mask;
-        }
-
-        if ( ((levelling_caps & LCAP_e1cd) == LCAP_e1cd) )
-        {
-            uint64_t mask = cpuidmask_defaults.e1cd;
-            uint32_t ecx = p->extd.e1c;
-            uint32_t edx = p->extd.e1d;
-
-            /*
-             * Must expose hosts CMP_LEGACY value so a guest using native
-             * CPUID can correctly interpret other leaves which cannot be
-             * masked.
-             */
-            if ( cpu_has_cmp_legacy )
-                ecx |= cpufeat_mask(X86_FEATURE_CMP_LEGACY);
-
-            /*
-             * If not emulating AMD or Hygon, clear the duplicated features
-             * in e1d.
-             */
-            if ( !(p->x86_vendor & (X86_VENDOR_AMD | X86_VENDOR_HYGON)) )
-                edx &= ~CPUID_COMMON_1D_FEATURES;
-
-            switch ( boot_cpu_data.x86_vendor )
-            {
-            case X86_VENDOR_INTEL:
-                mask &= ((uint64_t)edx << 32) | ecx;
-                break;
-
-            case X86_VENDOR_AMD:
-            case X86_VENDOR_HYGON:
-                mask &= ((uint64_t)ecx << 32) | edx;
-
-                /*
-                 * Fast-forward bits - Must be set in the masking MSR for
-                 * fast-forwarding to occur in hardware.
-                 */
-                ecx = 0;
-                edx = cpufeat_mask(X86_FEATURE_APIC);
-
-                mask |= ((uint64_t)ecx << 32) | edx;
-                break;
-            }
-
-            d->arch.pv.cpuidmasks->e1cd = mask;
-        }
-    }
-
-    for_each_vcpu ( d, v )
-    {
-        cpuid_policy_updated(v);
-
-        /* If PMU version is zero then the guest doesn't have VPMU */
-        if ( boot_cpu_data.x86_vendor == X86_VENDOR_INTEL &&
-             p->basic.pmu_version == 0 )
-            vpmu_destroy(v);
-    }
-}
-
 static int update_domain_cpu_policy(struct domain *d,
                                     xen_domctl_cpu_policy_t *xdpc)
 {