From patchwork Tue Feb 15 19:52:51 2022
X-Patchwork-Submitter: "Yang, Weijiang"
X-Patchwork-Id: 12748214
From: Yang Weijiang
To: pbonzini@redhat.com, ehabkost@redhat.com, mtosatti@redhat.com,
    seanjc@google.com, richard.henderson@linaro.org, like.xu.linux@gmail.com,
    wei.w.wang@intel.com, qemu-devel@nongnu.org, kvm@vger.kernel.org
Cc: Yang Weijiang, Like Xu
Subject: [PATCH 1/8] qdev-properties: Add a new macro with bitmask check for uint64_t property
Date: Tue, 15 Feb 2022 14:52:51 -0500
Message-Id: <20220215195258.29149-2-weijiang.yang@intel.com>
In-Reply-To: <20220215195258.29149-1-weijiang.yang@intel.com>
References: <20220215195258.29149-1-weijiang.yang@intel.com>
X-Mailing-List: kvm@vger.kernel.org

The DEFINE_PROP_UINT64_CHECKMASK macro applies a bitmask check to the
user-supplied property value and rejects the value if it violates the bitmask.
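For illustration, this is the intended usage pattern; it mirrors the lbr-fmt
property that patch 2 of this series adds to x86_cpu_properties[], so the
names below come from that patch rather than from this one:

    static Property x86_cpu_properties[] = {
        ...
        /* Only bits inside PERF_CAP_LBR_FMT (0x3f) may be set by the user. */
        DEFINE_PROP_UINT64_CHECKMASK("lbr-fmt", X86CPU, lbr_fmt, PERF_CAP_LBR_FMT),
        DEFINE_PROP_END_OF_LIST(),
    };

With that in place, something like "-cpu host,lbr-fmt=0x3f" is accepted, while
a value with bits outside the mask (e.g. 0x40) fails with the bitmask error
reported by set_uint64_checkmask().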
Co-developed-by: Like Xu Signed-off-by: Like Xu Signed-off-by: Yang Weijiang --- hw/core/qdev-properties.c | 19 +++++++++++++++++++ include/hw/qdev-properties.h | 12 ++++++++++++ 2 files changed, 31 insertions(+) diff --git a/hw/core/qdev-properties.c b/hw/core/qdev-properties.c index c34aac6ebc..27566e5ef7 100644 --- a/hw/core/qdev-properties.c +++ b/hw/core/qdev-properties.c @@ -428,6 +428,25 @@ const PropertyInfo qdev_prop_int64 = { .set_default_value = qdev_propinfo_set_default_value_int, }; +static void set_uint64_checkmask(Object *obj, Visitor *v, const char *name, + void *opaque, Error **errp) +{ + Property *prop = opaque; + uint64_t *ptr = object_field_prop_ptr(obj, prop); + + visit_type_uint64(v, name, ptr, errp); + if (*ptr & ~prop->bitmask) { + error_setg(errp, "Property value for '%s' violates bitmask '0x%lx'", + name, prop->bitmask); + } +} + +const PropertyInfo qdev_prop_uint64_checkmask = { + .name = "uint64", + .get = get_uint64, + .set = set_uint64_checkmask, +}; + /* --- string --- */ static void release_string(Object *obj, const char *name, void *opaque) diff --git a/include/hw/qdev-properties.h b/include/hw/qdev-properties.h index f7925f67d0..e1df08876c 100644 --- a/include/hw/qdev-properties.h +++ b/include/hw/qdev-properties.h @@ -17,6 +17,7 @@ struct Property { const PropertyInfo *info; ptrdiff_t offset; uint8_t bitnr; + uint64_t bitmask; bool set_default; union { int64_t i; @@ -54,6 +55,7 @@ extern const PropertyInfo qdev_prop_uint16; extern const PropertyInfo qdev_prop_uint32; extern const PropertyInfo qdev_prop_int32; extern const PropertyInfo qdev_prop_uint64; +extern const PropertyInfo qdev_prop_uint64_checkmask; extern const PropertyInfo qdev_prop_int64; extern const PropertyInfo qdev_prop_size; extern const PropertyInfo qdev_prop_string; @@ -103,6 +105,16 @@ extern const PropertyInfo qdev_prop_link; .set_default = true, \ .defval.u = (bool)_defval) +/** + * The DEFINE_PROP_UINT64_CHECKMASK macro checks a user-supplied value + * against corresponding bitmask, rejects the value if it violates. + * The default value is set in instance_init(). 
+ */
+#define DEFINE_PROP_UINT64_CHECKMASK(_name, _state, _field, _bitmask) \
+    DEFINE_PROP(_name, _state, _field, qdev_prop_uint64_checkmask, uint64_t, \
+                .bitmask = (_bitmask), \
+                .set_default = false)
+
 #define PROP_ARRAY_LEN_PREFIX "len-"

 /**

From patchwork Tue Feb 15 19:52:52 2022
X-Patchwork-Submitter: "Yang, Weijiang"
X-Patchwork-Id: 12748215
From: Yang Weijiang
To: pbonzini@redhat.com, ehabkost@redhat.com, mtosatti@redhat.com,
    seanjc@google.com, richard.henderson@linaro.org, like.xu.linux@gmail.com,
    wei.w.wang@intel.com, qemu-devel@nongnu.org, kvm@vger.kernel.org
Cc: Yang Weijiang, Like Xu
Subject: [PATCH 2/8] target/i386: Add lbr-fmt vPMU option to support guest LBR
Date: Tue, 15 Feb 2022 14:52:52 -0500
Message-Id: <20220215195258.29149-3-weijiang.yang@intel.com>
In-Reply-To: <20220215195258.29149-1-weijiang.yang@intel.com>
References: <20220215195258.29149-1-weijiang.yang@intel.com>
X-Mailing-List: kvm@vger.kernel.org

Last Branch Recording (LBR) is a performance monitoring unit (PMU) feature
on Intel processors that records a running trace of the most recent branches
taken by the processor in the LBR stack. This option indicates the LBR format
to enable for guest perf. The LBR feature is enabled if the conditions below
are met: 1) KVM is enabled and the PMU is enabled. 2) The msr-based feature
IA32_PERF_CAPABILITIES is supported by KVM.
3) Supported returned value for lbr_fmt from above msr is non-zero. 4) Guest vcpu model does support FEAT_1_ECX.CPUID_EXT_PDCM. 5) User-provided lbr-fmt value doesn't violate its bitmask (0x3f). 6) Target guest LBR format matches that of host. Co-developed-by: Like Xu Signed-off-by: Like Xu Signed-off-by: Yang Weijiang --- target/i386/cpu.c | 40 ++++++++++++++++++++++++++++++++++++++++ target/i386/cpu.h | 10 ++++++++++ 2 files changed, 50 insertions(+) diff --git a/target/i386/cpu.c b/target/i386/cpu.c index 9543762e7e..a037bba387 100644 --- a/target/i386/cpu.c +++ b/target/i386/cpu.c @@ -6370,6 +6370,7 @@ static void x86_cpu_realizefn(DeviceState *dev, Error **errp) CPUX86State *env = &cpu->env; Error *local_err = NULL; static bool ht_warned; + uint64_t requested_lbr_fmt; if (cpu->apic_id == UNASSIGNED_APIC_ID) { error_setg(errp, "apic-id property was not initialized properly"); @@ -6387,6 +6388,42 @@ static void x86_cpu_realizefn(DeviceState *dev, Error **errp) goto out; } + /* + * Override env->features[FEAT_PERF_CAPABILITIES].LBR_FMT + * with user-provided setting. + */ + if (cpu->lbr_fmt != ~PERF_CAP_LBR_FMT) { + if ((cpu->lbr_fmt & PERF_CAP_LBR_FMT) != cpu->lbr_fmt) { + error_setg(errp, "invalid lbr-fmt"); + return; + } + env->features[FEAT_PERF_CAPABILITIES] &= ~PERF_CAP_LBR_FMT; + env->features[FEAT_PERF_CAPABILITIES] |= cpu->lbr_fmt; + } + + /* + * vPMU LBR is supported when 1) KVM is enabled 2) Option pmu=on and + * 3)vPMU LBR format matches that of host setting. + */ + requested_lbr_fmt = + env->features[FEAT_PERF_CAPABILITIES] & PERF_CAP_LBR_FMT; + if (requested_lbr_fmt && kvm_enabled()) { + uint64_t host_perf_cap = + x86_cpu_get_supported_feature_word(FEAT_PERF_CAPABILITIES, false); + uint64_t host_lbr_fmt = host_perf_cap & PERF_CAP_LBR_FMT; + + if (!cpu->enable_pmu) { + error_setg(errp, "vPMU: LBR is unsupported without pmu=on"); + return; + } + if (requested_lbr_fmt != host_lbr_fmt) { + error_setg(errp, "vPMU: the lbr-fmt value (0x%lx) mismatches " + "the host supported value (0x%lx).", + requested_lbr_fmt, host_lbr_fmt); + return; + } + } + x86_cpu_filter_features(cpu, cpu->check_cpuid || cpu->enforce_cpuid); if (cpu->enforce_cpuid && x86_cpu_have_filtered_features(cpu)) { @@ -6739,6 +6776,8 @@ static void x86_cpu_initfn(Object *obj) object_property_add_alias(obj, "sse4_2", obj, "sse4.2"); object_property_add_alias(obj, "hv-apicv", obj, "hv-avic"); + cpu->lbr_fmt = ~PERF_CAP_LBR_FMT; + object_property_add_alias(obj, "lbr_fmt", obj, "lbr-fmt"); if (xcc->model) { x86_cpu_load_model(cpu, xcc->model); @@ -6894,6 +6933,7 @@ static Property x86_cpu_properties[] = { #endif DEFINE_PROP_INT32("node-id", X86CPU, node_id, CPU_UNSET_NUMA_NODE_ID), DEFINE_PROP_BOOL("pmu", X86CPU, enable_pmu, false), + DEFINE_PROP_UINT64_CHECKMASK("lbr-fmt", X86CPU, lbr_fmt, PERF_CAP_LBR_FMT), DEFINE_PROP_UINT32("hv-spinlocks", X86CPU, hyperv_spinlock_attempts, HYPERV_SPINLOCK_NEVER_NOTIFY), diff --git a/target/i386/cpu.h b/target/i386/cpu.h index 509c16323a..852afabe0b 100644 --- a/target/i386/cpu.h +++ b/target/i386/cpu.h @@ -383,6 +383,7 @@ typedef enum X86Seg { #define ARCH_CAP_TSX_CTRL_MSR (1<<7) #define MSR_IA32_PERF_CAPABILITIES 0x345 +#define PERF_CAP_LBR_FMT 0x3f #define MSR_IA32_TSX_CTRL 0x122 #define MSR_IA32_TSCDEADLINE 0x6e0 @@ -1819,6 +1820,15 @@ struct X86CPU { */ bool enable_pmu; + /* + * Enable LBR_FMT bits of IA32_PERF_CAPABILITIES MSR. + * This can't be initialized with a default because it doesn't have + * stable ABI support yet. 
It is only allowed to pass all LBR_FMT bits
+     * returned by kvm_arch_get_supported_msr_feature()(which depends on both
+     * host CPU and kernel capabilities) to the guest.
+     */
+    uint64_t lbr_fmt;
+
     /* LMCE support can be enabled/disabled via cpu option 'lmce=on/off'. It is
      * disabled by default to avoid breaking migration between QEMU with
      * different LMCE configurations.

From patchwork Tue Feb 15 19:52:53 2022
X-Patchwork-Submitter: "Yang, Weijiang"
X-Patchwork-Id: 12748216
From: Yang Weijiang
To: pbonzini@redhat.com, ehabkost@redhat.com, mtosatti@redhat.com,
    seanjc@google.com, richard.henderson@linaro.org, like.xu.linux@gmail.com,
    wei.w.wang@intel.com, qemu-devel@nongnu.org, kvm@vger.kernel.org
Cc: Yang Weijiang
Subject: [PATCH 3/8] target/i386: Add kvm_get_one_msr helper
Date: Tue, 15 Feb 2022 14:52:53 -0500
Message-Id: <20220215195258.29149-4-weijiang.yang@intel.com>
In-Reply-To: <20220215195258.29149-1-weijiang.yang@intel.com>
References: <20220215195258.29149-1-weijiang.yang@intel.com>
X-Mailing-List: kvm@vger.kernel.org

When trying to get one MSR from KVM, I found there is no such existing
interface, while kvm_put_one_msr() is already there. So here comes this patch.
It removes the redundant preparation code that each caller needed before the
final KVM_GET_MSRS ioctl. No functional change intended.
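As a quick illustration of the resulting call pattern (a sketch that mirrors
the kvm_get_tsc() conversion in the diff below; error handling kept minimal):

    uint64_t value;
    int ret;

    /* Read one MSR (here MSR_IA32_TSC) for this vCPU via the new helper;
     * the helper itself asserts that exactly one entry came back.
     */
    ret = kvm_get_one_msr(cpu, MSR_IA32_TSC, &value);
    if (ret < 0) {
        return ret;    /* the KVM_GET_MSRS ioctl failed */
    }
    env->tsc = value;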
Signed-off-by: Yang Weijiang --- target/i386/kvm/kvm.c | 48 ++++++++++++++++++++++++------------------- 1 file changed, 27 insertions(+), 21 deletions(-) diff --git a/target/i386/kvm/kvm.c b/target/i386/kvm/kvm.c index 8dbda2420d..764d110e0f 100644 --- a/target/i386/kvm/kvm.c +++ b/target/i386/kvm/kvm.c @@ -136,6 +136,7 @@ static struct kvm_msr_list *kvm_feature_msrs; #define BUS_LOCK_SLICE_TIME 1000000000ULL /* ns */ static RateLimit bus_lock_ratelimit_ctrl; +static int kvm_get_one_msr(X86CPU *cpu, int index, uint64_t *value); int kvm_has_pit_state2(void) { @@ -206,28 +207,21 @@ static int kvm_get_tsc(CPUState *cs) { X86CPU *cpu = X86_CPU(cs); CPUX86State *env = &cpu->env; - struct { - struct kvm_msrs info; - struct kvm_msr_entry entries[1]; - } msr_data = {}; + uint64_t value; int ret; if (env->tsc_valid) { return 0; } - memset(&msr_data, 0, sizeof(msr_data)); - msr_data.info.nmsrs = 1; - msr_data.entries[0].index = MSR_IA32_TSC; env->tsc_valid = !runstate_is_running(); - ret = kvm_vcpu_ioctl(CPU(cpu), KVM_GET_MSRS, &msr_data); + ret = kvm_get_one_msr(cpu, MSR_IA32_TSC, &value); if (ret < 0) { return ret; } - assert(ret == 1); - env->tsc = msr_data.entries[0].data; + env->tsc = value; return 0; } @@ -1485,21 +1479,14 @@ static int hyperv_init_vcpu(X86CPU *cpu) * the kernel doesn't support setting vp_index; assert that its value * is in sync */ - struct { - struct kvm_msrs info; - struct kvm_msr_entry entries[1]; - } msr_data = { - .info.nmsrs = 1, - .entries[0].index = HV_X64_MSR_VP_INDEX, - }; - - ret = kvm_vcpu_ioctl(cs, KVM_GET_MSRS, &msr_data); + uint64_t value; + + ret = kvm_get_one_msr(cpu, HV_X64_MSR_VP_INDEX, &value); if (ret < 0) { return ret; } - assert(ret == 1); - if (msr_data.entries[0].data != hyperv_vp_index(CPU(cpu))) { + if (value != hyperv_vp_index(CPU(cpu))) { error_report("kernel's vp_index != QEMU's vp_index"); return -ENXIO; } @@ -2752,6 +2739,25 @@ static int kvm_put_one_msr(X86CPU *cpu, int index, uint64_t value) return kvm_vcpu_ioctl(CPU(cpu), KVM_SET_MSRS, cpu->kvm_msr_buf); } +static int kvm_get_one_msr(X86CPU *cpu, int index, uint64_t *value) +{ + int ret; + struct { + struct kvm_msrs info; + struct kvm_msr_entry entries[1]; + } msr_data = { + .info.nmsrs = 1, + .entries[0].index = index, + }; + + ret = kvm_vcpu_ioctl(CPU(cpu), KVM_GET_MSRS, &msr_data); + if (ret < 0) { + return ret; + } + assert(ret == 1); + *value = msr_data.entries[0].data; + return ret; +} void kvm_put_apicbase(X86CPU *cpu, uint64_t value) { int ret; From patchwork Tue Feb 15 19:52:54 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: "Yang, Weijiang" X-Patchwork-Id: 12748218 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 1816AC4332F for ; Wed, 16 Feb 2022 08:54:13 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S231808AbiBPIyX (ORCPT ); Wed, 16 Feb 2022 03:54:23 -0500 Received: from gmail-smtp-in.l.google.com ([23.128.96.19]:48626 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S231786AbiBPIyT (ORCPT ); Wed, 16 Feb 2022 03:54:19 -0500 Received: from mga01.intel.com (mga01.intel.com [192.55.52.88]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 45134189A8D for ; Wed, 16 Feb 2022 00:54:07 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; 
From: Yang Weijiang
To: pbonzini@redhat.com, ehabkost@redhat.com, mtosatti@redhat.com,
    seanjc@google.com, richard.henderson@linaro.org, like.xu.linux@gmail.com,
    wei.w.wang@intel.com, qemu-devel@nongnu.org, kvm@vger.kernel.org
Cc: Yang Weijiang
Subject: [PATCH 4/8] target/i386: Enable support for XSAVES based features
Date: Tue, 15 Feb 2022 14:52:54 -0500
Message-Id: <20220215195258.29149-5-weijiang.yang@intel.com>
In-Reply-To: <20220215195258.29149-1-weijiang.yang@intel.com>
References: <20220215195258.29149-1-weijiang.yang@intel.com>
X-Mailing-List: kvm@vger.kernel.org

There are new features, including Arch LBR, that depend on XSAVES/XRSTORS
support: the new instructions save/restore data based on the feature bits
enabled in XCR0 | XSS. This patch adds the basic support for the related
CPUID enumeration and, at the same time, renames FEAT_XSAVE_COMP_{LO|HI} to
FEAT_XSAVE_XCR0_{LO|HI} to clearly differentiate the feature bits in XCR0
from those in XSS.
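To make the XCR0/XSS split concrete, here is a small standalone sketch. The
masks are toy values, not QEMU's CPUID_XSTATE_XCR0_MASK/CPUID_XSTATE_XSS_MASK;
only the bit numbers follow the XSTATE_*_BIT definitions in target/i386/cpu.h:

    #include <inttypes.h>
    #include <stdint.h>
    #include <stdio.h>

    #define XSTATE_FP_BIT        0
    #define XSTATE_SSE_BIT       1
    #define XSTATE_YMM_BIT       2
    #define XSTATE_ARCH_LBR_BIT  15  /* supervisor-state component, lives in XSS */

    #define XCR0_MASK ((1ULL << XSTATE_FP_BIT) | (1ULL << XSTATE_SSE_BIT) | \
                       (1ULL << XSTATE_YMM_BIT))
    #define XSS_MASK  (1ULL << XSTATE_ARCH_LBR_BIT)

    int main(void)
    {
        /* Pretend the platform supports all of the components above. */
        uint64_t supported = XCR0_MASK | XSS_MASK;

        /* Split the combined mask the way the XCR0/XSS feature words do. */
        printf("XCR0 (user) components:      0x%" PRIx64 "\n", supported & XCR0_MASK);
        printf("XSS (supervisor) components: 0x%" PRIx64 "\n", supported & XSS_MASK);
        return 0;
    }

This prints 0x7 for XCR0 and 0x8000 for XSS, which is the distinction that
CPUID.(EAX=0xD,ECX=0) vs. CPUID.(EAX=0xD,ECX=1) ends up reporting.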
Signed-off-by: Yang Weijiang --- target/i386/cpu.c | 104 +++++++++++++++++++++++++++++++++++----------- target/i386/cpu.h | 13 +++++- 2 files changed, 91 insertions(+), 26 deletions(-) diff --git a/target/i386/cpu.c b/target/i386/cpu.c index a037bba387..496e906233 100644 --- a/target/i386/cpu.c +++ b/target/i386/cpu.c @@ -940,6 +940,34 @@ FeatureWordInfo feature_word_info[FEATURE_WORDS] = { }, .tcg_features = TCG_XSAVE_FEATURES, }, + [FEAT_XSAVE_XSS_LO] = { + .type = CPUID_FEATURE_WORD, + .feat_names = { + NULL, NULL, NULL, NULL, + NULL, NULL, NULL, NULL, + NULL, NULL, NULL, NULL, + NULL, NULL, NULL, NULL, + NULL, NULL, NULL, NULL, + NULL, NULL, NULL, NULL, + NULL, NULL, NULL, NULL, + NULL, NULL, NULL, NULL, + }, + .cpuid = { + .eax = 0xD, + .needs_ecx = true, + .ecx = 1, + .reg = R_ECX, + }, + }, + [FEAT_XSAVE_XSS_HI] = { + .type = CPUID_FEATURE_WORD, + .cpuid = { + .eax = 0xD, + .needs_ecx = true, + .ecx = 1, + .reg = R_EDX + }, + }, [FEAT_6_EAX] = { .type = CPUID_FEATURE_WORD, .feat_names = { @@ -955,7 +983,7 @@ FeatureWordInfo feature_word_info[FEATURE_WORDS] = { .cpuid = { .eax = 6, .reg = R_EAX, }, .tcg_features = TCG_6_EAX_FEATURES, }, - [FEAT_XSAVE_COMP_LO] = { + [FEAT_XSAVE_XCR0_LO] = { .type = CPUID_FEATURE_WORD, .cpuid = { .eax = 0xD, @@ -968,7 +996,7 @@ FeatureWordInfo feature_word_info[FEATURE_WORDS] = { XSTATE_OPMASK_MASK | XSTATE_ZMM_Hi256_MASK | XSTATE_Hi16_ZMM_MASK | XSTATE_PKRU_MASK, }, - [FEAT_XSAVE_COMP_HI] = { + [FEAT_XSAVE_XCR0_HI] = { .type = CPUID_FEATURE_WORD, .cpuid = { .eax = 0xD, @@ -1385,6 +1413,9 @@ static const X86RegisterInfo32 x86_reg_info_32[CPU_NB_REGS32] = { }; #undef REGISTER +/* CPUID feature bits available in XSS */ +#define CPUID_XSTATE_XSS_MASK (0) + ExtSaveArea x86_ext_save_areas[XSAVE_STATE_AREA_COUNT] = { [XSTATE_FP_BIT] = { /* x87 FP state component is always enabled if XSAVE is supported */ @@ -1427,15 +1458,18 @@ ExtSaveArea x86_ext_save_areas[XSAVE_STATE_AREA_COUNT] = { }, }; -static uint32_t xsave_area_size(uint64_t mask) +static uint32_t xsave_area_size(uint64_t mask, bool compacted) { + uint64_t ret = x86_ext_save_areas[0].size; + const ExtSaveArea *esa; + uint32_t offset = 0; int i; - uint64_t ret = 0; - for (i = 0; i < ARRAY_SIZE(x86_ext_save_areas); i++) { - const ExtSaveArea *esa = &x86_ext_save_areas[i]; + for (i = 2; i < ARRAY_SIZE(x86_ext_save_areas); i++) { + esa = &x86_ext_save_areas[i]; if ((mask >> i) & 1) { - ret = MAX(ret, esa->offset + esa->size); + offset = compacted ? ret : esa->offset; + ret = MAX(ret, offset + esa->size); } } return ret; @@ -1446,10 +1480,10 @@ static inline bool accel_uses_host_cpuid(void) return kvm_enabled() || hvf_enabled(); } -static inline uint64_t x86_cpu_xsave_components(X86CPU *cpu) +static inline uint64_t x86_cpu_xsave_xcr0_components(X86CPU *cpu) { - return ((uint64_t)cpu->env.features[FEAT_XSAVE_COMP_HI]) << 32 | - cpu->env.features[FEAT_XSAVE_COMP_LO]; + return ((uint64_t)cpu->env.features[FEAT_XSAVE_XCR0_HI]) << 32 | + cpu->env.features[FEAT_XSAVE_XCR0_LO]; } /* Return name of 32-bit register, from a R_* constant */ @@ -1461,6 +1495,12 @@ static const char *get_register_name_32(unsigned int reg) return x86_reg_info_32[reg].name; } +static inline uint64_t x86_cpu_xsave_xss_components(X86CPU *cpu) +{ + return ((uint64_t)cpu->env.features[FEAT_XSAVE_XSS_HI]) << 32 | + cpu->env.features[FEAT_XSAVE_XSS_LO]; +} + /* * Returns the set of feature flags that are supported and migratable by * QEMU, for a given FeatureWord. 
@@ -4628,8 +4668,8 @@ static const char *x86_cpu_feature_name(FeatureWord w, int bitnr) /* XSAVE components are automatically enabled by other features, * so return the original feature name instead */ - if (w == FEAT_XSAVE_COMP_LO || w == FEAT_XSAVE_COMP_HI) { - int comp = (w == FEAT_XSAVE_COMP_HI) ? bitnr + 32 : bitnr; + if (w == FEAT_XSAVE_XCR0_LO || w == FEAT_XSAVE_XCR0_HI) { + int comp = (w == FEAT_XSAVE_XCR0_HI) ? bitnr + 32 : bitnr; if (comp < ARRAY_SIZE(x86_ext_save_areas) && x86_ext_save_areas[comp].bits) { @@ -5494,25 +5534,36 @@ void cpu_x86_cpuid(CPUX86State *env, uint32_t index, uint32_t count, } if (count == 0) { - *ecx = xsave_area_size(x86_cpu_xsave_components(cpu)); - *eax = env->features[FEAT_XSAVE_COMP_LO]; - *edx = env->features[FEAT_XSAVE_COMP_HI]; + *ecx = xsave_area_size(x86_cpu_xsave_xcr0_components(cpu), false); + *eax = env->features[FEAT_XSAVE_XCR0_LO]; + *edx = env->features[FEAT_XSAVE_XCR0_HI]; /* * The initial value of xcr0 and ebx == 0, On host without kvm * commit 412a3c41(e.g., CentOS 6), the ebx's value always == 0 * even through guest update xcr0, this will crash some legacy guest * (e.g., CentOS 6), So set ebx == ecx to workaroud it. */ - *ebx = kvm_enabled() ? *ecx : xsave_area_size(env->xcr0); + *ebx = kvm_enabled() ? *ecx : xsave_area_size(env->xcr0, false); } else if (count == 1) { + uint64_t xstate = x86_cpu_xsave_xcr0_components(cpu) | + x86_cpu_xsave_xss_components(cpu); + *eax = env->features[FEAT_XSAVE]; + *ebx = xsave_area_size(xstate, true); + *ecx = env->features[FEAT_XSAVE_XSS_LO]; + *edx = env->features[FEAT_XSAVE_XSS_HI]; } else if (count < ARRAY_SIZE(x86_ext_save_areas)) { - if ((x86_cpu_xsave_components(cpu) >> count) & 1) { - const ExtSaveArea *esa = &x86_ext_save_areas[count]; + const ExtSaveArea *esa = &x86_ext_save_areas[count]; + + if ((x86_cpu_xsave_xcr0_components(cpu) >> count) & 1) { *eax = esa->size; *ebx = esa->offset; *ecx = (esa->ecx & ESA_FEATURE_ALIGN64_MASK) | (esa->ecx & ESA_FEATURE_XFD_MASK); + } else if ((x86_cpu_xsave_xss_components(cpu) >> count) & 1) { + *eax = esa->size; + *ebx = 0; + *ecx = 1; } } break; @@ -5563,8 +5614,8 @@ void cpu_x86_cpuid(CPUX86State *env, uint32_t index, uint32_t count, } else { *eax &= env->features[FEAT_SGX_12_1_EAX]; *ebx &= 0; /* ebx reserve */ - *ecx &= env->features[FEAT_XSAVE_COMP_LO]; - *edx &= env->features[FEAT_XSAVE_COMP_HI]; + *ecx &= env->features[FEAT_XSAVE_XSS_LO]; + *edx &= env->features[FEAT_XSAVE_XSS_HI]; /* FP and SSE are always allowed regardless of XSAVE/XCR0. 
*/ *ecx |= XSTATE_FP_MASK | XSTATE_SSE_MASK; @@ -5947,6 +5998,9 @@ static void x86_cpu_reset(DeviceState *dev) } for (i = 2; i < ARRAY_SIZE(x86_ext_save_areas); i++) { const ExtSaveArea *esa = &x86_ext_save_areas[i]; + if (!((1 << i) & CPUID_XSTATE_XCR0_MASK)) { + continue; + } if (env->features[esa->feature] & esa->bits) { xcr0 |= 1ull << i; } @@ -6083,8 +6137,8 @@ static void x86_cpu_enable_xsave_components(X86CPU *cpu) uint64_t mask; if (!(env->features[FEAT_1_ECX] & CPUID_EXT_XSAVE)) { - env->features[FEAT_XSAVE_COMP_LO] = 0; - env->features[FEAT_XSAVE_COMP_HI] = 0; + env->features[FEAT_XSAVE_XCR0_LO] = 0; + env->features[FEAT_XSAVE_XCR0_HI] = 0; return; } @@ -6102,8 +6156,10 @@ static void x86_cpu_enable_xsave_components(X86CPU *cpu) request_perm = true; } - env->features[FEAT_XSAVE_COMP_LO] = mask; - env->features[FEAT_XSAVE_COMP_HI] = mask >> 32; + env->features[FEAT_XSAVE_XCR0_LO] = mask & CPUID_XSTATE_XCR0_MASK; + env->features[FEAT_XSAVE_XCR0_HI] = mask >> 32; + env->features[FEAT_XSAVE_XSS_LO] = mask & CPUID_XSTATE_XSS_MASK; + env->features[FEAT_XSAVE_XSS_HI] = mask >> 32; } /***** Steps involved on loading and filtering CPUID data diff --git a/target/i386/cpu.h b/target/i386/cpu.h index 852afabe0b..1d17196a0b 100644 --- a/target/i386/cpu.h +++ b/target/i386/cpu.h @@ -568,6 +568,13 @@ typedef enum X86Seg { #define ESA_FEATURE_XFD_MASK (1U << ESA_FEATURE_XFD_BIT) +/* CPUID feature bits available in XCR0 */ +#define CPUID_XSTATE_XCR0_MASK (XSTATE_FP_MASK | XSTATE_SSE_MASK | \ + XSTATE_YMM_MASK | XSTATE_BNDREGS_MASK | \ + XSTATE_BNDCSR_MASK | XSTATE_OPMASK_MASK | \ + XSTATE_ZMM_Hi256_MASK | \ + XSTATE_Hi16_ZMM_MASK | XSTATE_PKRU_MASK) + /* CPUID feature words */ typedef enum FeatureWord { FEAT_1_EDX, /* CPUID[1].EDX */ @@ -586,8 +593,8 @@ typedef enum FeatureWord { FEAT_SVM, /* CPUID[8000_000A].EDX */ FEAT_XSAVE, /* CPUID[EAX=0xd,ECX=1].EAX */ FEAT_6_EAX, /* CPUID[6].EAX */ - FEAT_XSAVE_COMP_LO, /* CPUID[EAX=0xd,ECX=0].EAX */ - FEAT_XSAVE_COMP_HI, /* CPUID[EAX=0xd,ECX=0].EDX */ + FEAT_XSAVE_XCR0_LO, /* CPUID[EAX=0xd,ECX=0].EAX */ + FEAT_XSAVE_XCR0_HI, /* CPUID[EAX=0xd,ECX=0].EDX */ FEAT_ARCH_CAPABILITIES, FEAT_CORE_CAPABILITY, FEAT_PERF_CAPABILITIES, @@ -604,6 +611,8 @@ typedef enum FeatureWord { FEAT_SGX_12_0_EAX, /* CPUID[EAX=0x12,ECX=0].EAX (SGX) */ FEAT_SGX_12_0_EBX, /* CPUID[EAX=0x12,ECX=0].EBX (SGX MISCSELECT[31:0]) */ FEAT_SGX_12_1_EAX, /* CPUID[EAX=0x12,ECX=1].EAX (SGX ATTRIBUTES[31:0]) */ + FEAT_XSAVE_XSS_LO, /* CPUID[EAX=0xd,ECX=1].ECX */ + FEAT_XSAVE_XSS_HI, /* CPUID[EAX=0xd,ECX=1].EDX */ FEATURE_WORDS, } FeatureWord; From patchwork Tue Feb 15 19:52:55 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: "Yang, Weijiang" X-Patchwork-Id: 12748217 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 54E3DC433F5 for ; Wed, 16 Feb 2022 08:54:12 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S231803AbiBPIyW (ORCPT ); Wed, 16 Feb 2022 03:54:22 -0500 Received: from gmail-smtp-in.l.google.com ([23.128.96.19]:48608 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S231783AbiBPIyS (ORCPT ); Wed, 16 Feb 2022 03:54:18 -0500 Received: from mga01.intel.com (mga01.intel.com [192.55.52.88]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 7303918B17C for ; Wed, 16 
From: Yang Weijiang
To: pbonzini@redhat.com, ehabkost@redhat.com, mtosatti@redhat.com,
    seanjc@google.com, richard.henderson@linaro.org, like.xu.linux@gmail.com,
    wei.w.wang@intel.com, qemu-devel@nongnu.org, kvm@vger.kernel.org
Cc: Yang Weijiang
Subject: [PATCH 5/8] target/i386: Add XSAVES support for Arch LBR
Date: Tue, 15 Feb 2022 14:52:55 -0500
Message-Id: <20220215195258.29149-6-weijiang.yang@intel.com>
In-Reply-To: <20220215195258.29149-1-weijiang.yang@intel.com>
References: <20220215195258.29149-1-weijiang.yang@intel.com>
X-Mailing-List: kvm@vger.kernel.org

Define the Arch LBR bit in XSS and the save/restore structure used for the
XSAVE area size calculation.
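As a quick cross-check of the save-area size asserted below with
QEMU_BUILD_BUG_ON(sizeof(XSavesArchLBR) != 0x328): the supervisor-state
component is five 8-byte header fields (lbr_ctl, lbr_depth, ler_from, ler_to,
ler_info) plus 32 records of three 8-byte words each, so

    5 * 8 + 32 * (3 * 8) = 40 + 768 = 808 = 0x328 bytes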
Signed-off-by: Yang Weijiang --- target/i386/cpu.c | 6 +++++- target/i386/cpu.h | 23 +++++++++++++++++++++++ 2 files changed, 28 insertions(+), 1 deletion(-) diff --git a/target/i386/cpu.c b/target/i386/cpu.c index 496e906233..e505c926b2 100644 --- a/target/i386/cpu.c +++ b/target/i386/cpu.c @@ -1414,7 +1414,7 @@ static const X86RegisterInfo32 x86_reg_info_32[CPU_NB_REGS32] = { #undef REGISTER /* CPUID feature bits available in XSS */ -#define CPUID_XSTATE_XSS_MASK (0) +#define CPUID_XSTATE_XSS_MASK (XSTATE_ARCH_LBR_MASK) ExtSaveArea x86_ext_save_areas[XSAVE_STATE_AREA_COUNT] = { [XSTATE_FP_BIT] = { @@ -1448,6 +1448,10 @@ ExtSaveArea x86_ext_save_areas[XSAVE_STATE_AREA_COUNT] = { [XSTATE_PKRU_BIT] = { .feature = FEAT_7_0_ECX, .bits = CPUID_7_0_ECX_PKU, .size = sizeof(XSavePKRU) }, + [XSTATE_ARCH_LBR_BIT] = { + .feature = FEAT_7_0_EDX, .bits = CPUID_7_0_EDX_ARCH_LBR, + .offset = 0 /*supervisor mode component, offset = 0 */, + .size = sizeof(XSavesArchLBR) }, [XSTATE_XTILE_CFG_BIT] = { .feature = FEAT_7_0_EDX, .bits = CPUID_7_0_EDX_AMX_TILE, .size = sizeof(XSaveXTILECFG), diff --git a/target/i386/cpu.h b/target/i386/cpu.h index 1d17196a0b..07b198539b 100644 --- a/target/i386/cpu.h +++ b/target/i386/cpu.h @@ -541,6 +541,7 @@ typedef enum X86Seg { #define XSTATE_ZMM_Hi256_BIT 6 #define XSTATE_Hi16_ZMM_BIT 7 #define XSTATE_PKRU_BIT 9 +#define XSTATE_ARCH_LBR_BIT 15 #define XSTATE_XTILE_CFG_BIT 17 #define XSTATE_XTILE_DATA_BIT 18 @@ -553,6 +554,7 @@ typedef enum X86Seg { #define XSTATE_ZMM_Hi256_MASK (1ULL << XSTATE_ZMM_Hi256_BIT) #define XSTATE_Hi16_ZMM_MASK (1ULL << XSTATE_Hi16_ZMM_BIT) #define XSTATE_PKRU_MASK (1ULL << XSTATE_PKRU_BIT) +#define XSTATE_ARCH_LBR_MASK (1ULL << XSTATE_ARCH_LBR_BIT) #define XSTATE_XTILE_CFG_MASK (1ULL << XSTATE_XTILE_CFG_BIT) #define XSTATE_XTILE_DATA_MASK (1ULL << XSTATE_XTILE_DATA_BIT) #define XFEATURE_XTILE_MASK (XSTATE_XTILE_CFG_MASK \ @@ -867,6 +869,8 @@ typedef uint64_t FeatureWordArray[FEATURE_WORDS]; #define CPUID_7_0_EDX_SERIALIZE (1U << 14) /* TSX Suspend Load Address Tracking instruction */ #define CPUID_7_0_EDX_TSX_LDTRK (1U << 16) +/* Architectural LBRs */ +#define CPUID_7_0_EDX_ARCH_LBR (1U << 19) /* AVX512_FP16 instruction */ #define CPUID_7_0_EDX_AVX512_FP16 (1U << 23) /* AMX tile (two-dimensional register) */ @@ -1386,6 +1390,24 @@ typedef struct XSaveXTILEDATA { uint8_t xtiledata[8][1024]; } XSaveXTILEDATA; +typedef struct { + uint64_t from; + uint64_t to; + uint64_t info; +} LBR_ENTRY; + +#define ARCH_LBR_NR_ENTRIES 32 + +/* Ext. 
save area 19: Supervisor mode Arch LBR state */
+typedef struct XSavesArchLBR {
+    uint64_t lbr_ctl;
+    uint64_t lbr_depth;
+    uint64_t ler_from;
+    uint64_t ler_to;
+    uint64_t ler_info;
+    LBR_ENTRY lbr_records[ARCH_LBR_NR_ENTRIES];
+} XSavesArchLBR;
+
 QEMU_BUILD_BUG_ON(sizeof(XSaveAVX) != 0x100);
 QEMU_BUILD_BUG_ON(sizeof(XSaveBNDREG) != 0x40);
 QEMU_BUILD_BUG_ON(sizeof(XSaveBNDCSR) != 0x40);
@@ -1395,6 +1417,7 @@ QEMU_BUILD_BUG_ON(sizeof(XSaveHi16_ZMM) != 0x400);
 QEMU_BUILD_BUG_ON(sizeof(XSavePKRU) != 0x8);
 QEMU_BUILD_BUG_ON(sizeof(XSaveXTILECFG) != 0x40);
 QEMU_BUILD_BUG_ON(sizeof(XSaveXTILEDATA) != 0x2000);
+QEMU_BUILD_BUG_ON(sizeof(XSavesArchLBR) != 0x328);

 typedef struct ExtSaveArea {
     uint32_t feature, bits;

From patchwork Tue Feb 15 19:52:56 2022
X-Patchwork-Submitter: "Yang, Weijiang"
X-Patchwork-Id: 12748219
From: Yang Weijiang
To: pbonzini@redhat.com, ehabkost@redhat.com, mtosatti@redhat.com,
    seanjc@google.com, richard.henderson@linaro.org, like.xu.linux@gmail.com,
    wei.w.wang@intel.com, qemu-devel@nongnu.org, kvm@vger.kernel.org
Cc: Yang Weijiang
Subject: [PATCH 6/8] target/i386: Add MSR access interface for Arch LBR
Date: Tue, 15 Feb 2022 14:52:56 -0500
Message-Id: <20220215195258.29149-7-weijiang.yang@intel.com>
In-Reply-To: <20220215195258.29149-1-weijiang.yang@intel.com>
References: <20220215195258.29149-1-weijiang.yang@intel.com>
X-Mailing-List: kvm@vger.kernel.org

In the first generation of Arch
LBR, the max support Arch LBR depth is 32, both host and guest use the value to set depth MSR. This can simplify the implementation of patch given the side-effect of mismatch of host/guest depth MSR: XRSTORS will reset all recording MSRs to 0s if the saved depth mismatches MSR_ARCH_LBR_DEPTH. In most of the cases Arch LBR is not in active status, so check the control bit before save/restore the big chunck of Arch LBR MSRs. Signed-off-by: Yang Weijiang --- target/i386/cpu.h | 10 +++++++ target/i386/kvm/kvm.c | 67 +++++++++++++++++++++++++++++++++++++++++++ 2 files changed, 77 insertions(+) diff --git a/target/i386/cpu.h b/target/i386/cpu.h index 07b198539b..0cadd37c47 100644 --- a/target/i386/cpu.h +++ b/target/i386/cpu.h @@ -388,6 +388,11 @@ typedef enum X86Seg { #define MSR_IA32_TSX_CTRL 0x122 #define MSR_IA32_TSCDEADLINE 0x6e0 #define MSR_IA32_PKRS 0x6e1 +#define MSR_ARCH_LBR_CTL 0x000014ce +#define MSR_ARCH_LBR_DEPTH 0x000014cf +#define MSR_ARCH_LBR_FROM_0 0x00001500 +#define MSR_ARCH_LBR_TO_0 0x00001600 +#define MSR_ARCH_LBR_INFO_0 0x00001200 #define FEATURE_CONTROL_LOCKED (1<<0) #define FEATURE_CONTROL_VMXON_ENABLED_INSIDE_SMX (1ULL << 1) @@ -1659,6 +1664,11 @@ typedef struct CPUX86State { uint64_t msr_xfd; uint64_t msr_xfd_err; + /* Per-VCPU Arch LBR MSRs */ + uint64_t msr_lbr_ctl; + uint64_t msr_lbr_depth; + LBR_ENTRY lbr_records[ARCH_LBR_NR_ENTRIES]; + /* exception/interrupt handling */ int error_code; int exception_is_int; diff --git a/target/i386/kvm/kvm.c b/target/i386/kvm/kvm.c index 764d110e0f..974ff3c0a5 100644 --- a/target/i386/kvm/kvm.c +++ b/target/i386/kvm/kvm.c @@ -3273,6 +3273,38 @@ static int kvm_put_msrs(X86CPU *cpu, int level) env->msr_xfd_err); } + if (kvm_enabled() && cpu->enable_pmu && + (env->features[FEAT_7_0_EDX] & CPUID_7_0_EDX_ARCH_LBR)) { + uint64_t depth; + int i, ret; + + /* + * Only migrate Arch LBR states when: 1) Arch LBR is enabled + * for migrated vcpu. 2) the host Arch LBR depth equals that + * of source guest's, this is to avoid mismatch of guest/host + * config for the msr hence avoid unexpected misbehavior. + */ + ret = kvm_get_one_msr(cpu, MSR_ARCH_LBR_DEPTH, &depth); + + if (ret == 1 && (env->msr_lbr_ctl & 0x1) && !!depth && + depth == env->msr_lbr_depth) { + kvm_msr_entry_add(cpu, MSR_ARCH_LBR_CTL, env->msr_lbr_ctl); + kvm_msr_entry_add(cpu, MSR_ARCH_LBR_DEPTH, env->msr_lbr_depth); + + for (i = 0; i < ARCH_LBR_NR_ENTRIES; i++) { + if (!env->lbr_records[i].from) { + continue; + } + kvm_msr_entry_add(cpu, MSR_ARCH_LBR_FROM_0 + i, + env->lbr_records[i].from); + kvm_msr_entry_add(cpu, MSR_ARCH_LBR_TO_0 + i, + env->lbr_records[i].to); + kvm_msr_entry_add(cpu, MSR_ARCH_LBR_INFO_0 + i, + env->lbr_records[i].info); + } + } + } + /* Note: MSR_IA32_FEATURE_CONTROL is written separately, see * kvm_put_msr_feature_control. 
*/ } @@ -3670,6 +3702,26 @@ static int kvm_get_msrs(X86CPU *cpu) kvm_msr_entry_add(cpu, MSR_IA32_XFD_ERR, 0); } + if (kvm_enabled() && cpu->enable_pmu && + (env->features[FEAT_7_0_EDX] & CPUID_7_0_EDX_ARCH_LBR)) { + uint64_t ctl, depth; + int i, ret2; + + ret = kvm_get_one_msr(cpu, MSR_ARCH_LBR_CTL, &ctl); + ret2 = kvm_get_one_msr(cpu, MSR_ARCH_LBR_DEPTH, &depth); + if (ret == 1 && ret2 == 1 && (ctl & 0x1) && + depth == ARCH_LBR_NR_ENTRIES) { + kvm_msr_entry_add(cpu, MSR_ARCH_LBR_CTL, 0); + kvm_msr_entry_add(cpu, MSR_ARCH_LBR_DEPTH, 0); + + for (i = 0; i < ARCH_LBR_NR_ENTRIES; i++) { + kvm_msr_entry_add(cpu, MSR_ARCH_LBR_FROM_0 + i, 0); + kvm_msr_entry_add(cpu, MSR_ARCH_LBR_TO_0 + i, 0); + kvm_msr_entry_add(cpu, MSR_ARCH_LBR_INFO_0 + i, 0); + } + } + } + ret = kvm_vcpu_ioctl(CPU(cpu), KVM_GET_MSRS, cpu->kvm_msr_buf); if (ret < 0) { return ret; @@ -3972,6 +4024,21 @@ static int kvm_get_msrs(X86CPU *cpu) case MSR_IA32_XFD_ERR: env->msr_xfd_err = msrs[i].data; break; + case MSR_ARCH_LBR_CTL: + env->msr_lbr_ctl = msrs[i].data; + break; + case MSR_ARCH_LBR_DEPTH: + env->msr_lbr_depth = msrs[i].data; + break; + case MSR_ARCH_LBR_FROM_0 ... MSR_ARCH_LBR_FROM_0 + 31: + env->lbr_records[index - MSR_ARCH_LBR_FROM_0].from = msrs[i].data; + break; + case MSR_ARCH_LBR_TO_0 ... MSR_ARCH_LBR_TO_0 + 31: + env->lbr_records[index - MSR_ARCH_LBR_TO_0].to = msrs[i].data; + break; + case MSR_ARCH_LBR_INFO_0 ... MSR_ARCH_LBR_INFO_0 + 31: + env->lbr_records[index - MSR_ARCH_LBR_INFO_0].info = msrs[i].data; + break; } } From patchwork Tue Feb 15 19:52:57 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: "Yang, Weijiang" X-Patchwork-Id: 12748220 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id C8DF9C433FE for ; Wed, 16 Feb 2022 08:54:13 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S231812AbiBPIyY (ORCPT ); Wed, 16 Feb 2022 03:54:24 -0500 Received: from gmail-smtp-in.l.google.com ([23.128.96.19]:48790 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S231797AbiBPIyU (ORCPT ); Wed, 16 Feb 2022 03:54:20 -0500 Received: from mga01.intel.com (mga01.intel.com [192.55.52.88]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 3DB6F2ABD10 for ; Wed, 16 Feb 2022 00:54:08 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1645001648; x=1676537648; h=from:to:cc:subject:date:message-id:in-reply-to: references:mime-version:content-transfer-encoding; bh=o84uKWKMQfEn4ExBF+qBe2pAQYKLeDqkmI267vu6PGw=; b=hSRiKkEuQ7qiZveoMr7DOZybY3hCUwvtTV4YDzC2dTWYPdEZltUqJmeX NCzb/VgpOegsCf9z92yhVSKNwQmQMQUU9mOb2XMRGU8fMr6D4GNonq1uc Na9bO38i2w27jBxMlRdbOEjYos/giy7gCwrAUA70AbirIUtnt1r89AXjp o24QPWh8MUPincwq5u2YSyobXlBsyyTymHi6IKaAW1bYqyDW/WpDKmk1y ImauSSX0JJVm9uh7wQ/OpK1UIqqm6vCS5OoK8tOmP1TXGR532xqekV3nI EeQm207jSnA6+KOX8tcNfifVkkfwJP+Fvf/PcNqg2xIg3+k2gGhpQG9F/ Q==; X-IronPort-AV: E=McAfee;i="6200,9189,10259"; a="275135812" X-IronPort-AV: E=Sophos;i="5.88,373,1635231600"; d="scan'208";a="275135812" Received: from fmsmga002.fm.intel.com ([10.253.24.26]) by fmsmga101.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 16 Feb 2022 00:54:07 -0800 X-IronPort-AV: E=Sophos;i="5.88,373,1635231600"; d="scan'208";a="633418293" Received: from embargo.jf.intel.com 
([10.165.9.183]) by fmsmga002-auth.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 16 Feb 2022 00:54:07 -0800 From: Yang Weijiang To: pbonzini@redhat.com, ehabkost@redhat.com, mtosatti@redhat.com, seanjc@google.com, richard.henderson@linaro.org, like.xu.linux@gmail.com, wei.w.wang@intel.com, qemu-devel@nongnu.org, kvm@vger.kernel.org Cc: Yang Weijiang Subject: [PATCH 7/8] target/i386: Enable Arch LBR migration states in vmstate Date: Tue, 15 Feb 2022 14:52:57 -0500 Message-Id: <20220215195258.29149-8-weijiang.yang@intel.com> X-Mailer: git-send-email 2.27.0 In-Reply-To: <20220215195258.29149-1-weijiang.yang@intel.com> References: <20220215195258.29149-1-weijiang.yang@intel.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: kvm@vger.kernel.org The Arch LBR record MSRs and control MSRs will be migrated to destination guest if the vcpus were running with Arch LBR active. Signed-off-by: Yang Weijiang --- target/i386/machine.c | 38 ++++++++++++++++++++++++++++++++++++++ 1 file changed, 38 insertions(+) diff --git a/target/i386/machine.c b/target/i386/machine.c index 1f9d0c46f1..08db7d3629 100644 --- a/target/i386/machine.c +++ b/target/i386/machine.c @@ -136,6 +136,22 @@ static const VMStateDescription vmstate_mtrr_var = { #define VMSTATE_MTRR_VARS(_field, _state, _n, _v) \ VMSTATE_STRUCT_ARRAY(_field, _state, _n, _v, vmstate_mtrr_var, MTRRVar) +static const VMStateDescription vmstate_lbr_records_var = { + .name = "lbr_records_var", + .version_id = 1, + .minimum_version_id = 1, + .fields = (VMStateField[]) { + VMSTATE_UINT64(from, LBR_ENTRY), + VMSTATE_UINT64(to, LBR_ENTRY), + VMSTATE_UINT64(info, LBR_ENTRY), + VMSTATE_END_OF_LIST() + } +}; + +#define VMSTATE_LBR_VARS(_field, _state, _n, _v) \ + VMSTATE_STRUCT_ARRAY(_field, _state, _n, _v, vmstate_lbr_records_var, \ + LBR_ENTRY) + typedef struct x86_FPReg_tmp { FPReg *parent; uint64_t tmp_mant; @@ -1523,6 +1539,27 @@ static const VMStateDescription vmstate_amx_xtile = { } }; +static bool arch_lbr_needed(void *opaque) +{ + X86CPU *cpu = opaque; + CPUX86State *env = &cpu->env; + + return !!(env->features[FEAT_7_0_EDX] & CPUID_7_0_EDX_ARCH_LBR); +} + +static const VMStateDescription vmstate_arch_lbr = { + .name = "cpu/arch_lbr", + .version_id = 1, + .minimum_version_id = 1, + .needed = arch_lbr_needed, + .fields = (VMStateField[]) { + VMSTATE_UINT64(env.msr_lbr_ctl, X86CPU), + VMSTATE_UINT64(env.msr_lbr_depth, X86CPU), + VMSTATE_LBR_VARS(env.lbr_records, X86CPU, ARCH_LBR_NR_ENTRIES, 1), + VMSTATE_END_OF_LIST() + } +}; + const VMStateDescription vmstate_x86_cpu = { .name = "cpu", .version_id = 12, @@ -1664,6 +1701,7 @@ const VMStateDescription vmstate_x86_cpu = { &vmstate_pdptrs, &vmstate_msr_xfd, &vmstate_amx_xtile, + &vmstate_arch_lbr, NULL } }; From patchwork Tue Feb 15 19:52:58 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: "Yang, Weijiang" X-Patchwork-Id: 12748221 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 6DF15C433F5 for ; Wed, 16 Feb 2022 08:54:14 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S231817AbiBPIyY (ORCPT ); Wed, 16 Feb 2022 03:54:24 -0500 Received: from gmail-smtp-in.l.google.com ([23.128.96.19]:48874 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S231791AbiBPIyV (ORCPT 
From: Yang Weijiang
To: pbonzini@redhat.com, ehabkost@redhat.com, mtosatti@redhat.com,
    seanjc@google.com, richard.henderson@linaro.org, like.xu.linux@gmail.com,
    wei.w.wang@intel.com, qemu-devel@nongnu.org, kvm@vger.kernel.org
Cc: Yang Weijiang
Subject: [PATCH 8/8] target/i386: Support Arch LBR in CPUID enumeration
Date: Tue, 15 Feb 2022 14:52:58 -0500
Message-Id: <20220215195258.29149-9-weijiang.yang@intel.com>
In-Reply-To: <20220215195258.29149-1-weijiang.yang@intel.com>
References: <20220215195258.29149-1-weijiang.yang@intel.com>
X-Mailing-List: kvm@vger.kernel.org

If CPUID.(EAX=07H, ECX=0):EDX[19] is set to 1, the processor supports
Architectural LBRs. In this case, CPUID leaf 01CH indicates details of the
Architectural LBR capabilities. XSAVE support for Architectural LBRs is
enumerated in CPUID.(EAX=0DH, ECX=0FH).

Signed-off-by: Yang Weijiang
---
 target/i386/cpu.c | 21 ++++++++++++++++++++-
 1 file changed, 20 insertions(+), 1 deletion(-)

diff --git a/target/i386/cpu.c b/target/i386/cpu.c
index e505c926b2..1092618683 100644
--- a/target/i386/cpu.c
+++ b/target/i386/cpu.c
@@ -858,7 +858,7 @@ FeatureWordInfo feature_word_info[FEATURE_WORDS] = {
             "fsrm", NULL, NULL, NULL,
             "avx512-vp2intersect", NULL, "md-clear", NULL,
             NULL, NULL, "serialize", NULL,
-            "tsx-ldtrk", NULL, NULL /* pconfig */, NULL,
+            "tsx-ldtrk", NULL, NULL /* pconfig */, "arch-lbr",
             NULL, NULL, "amx-bf16", "avx512-fp16",
             "amx-tile", "amx-int8", "spec-ctrl", "stibp",
             NULL, "arch-capabilities", "core-capability", "ssbd",
@@ -5494,6 +5494,12 @@ void cpu_x86_cpuid(CPUX86State *env, uint32_t index, uint32_t count,
         assert(!(*eax & ~0x1f));
         *ebx &= 0xffff; /* The count doesn't need to be reliable.
*/ break; + case 0x1C: + *eax = kvm_arch_get_supported_cpuid(cs->kvm_state, 0x1C, 0, R_EAX); + *ebx = kvm_arch_get_supported_cpuid(cs->kvm_state, 0x1C, 0, R_EBX); + *ecx = kvm_arch_get_supported_cpuid(cs->kvm_state, 0x1C, 0, R_ECX); + *edx = 0; + break; case 0x1F: /* V2 Extended Topology Enumeration Leaf */ if (env->nr_dies < 2) { @@ -5556,6 +5562,19 @@ void cpu_x86_cpuid(CPUX86State *env, uint32_t index, uint32_t count, *ebx = xsave_area_size(xstate, true); *ecx = env->features[FEAT_XSAVE_XSS_LO]; *edx = env->features[FEAT_XSAVE_XSS_HI]; + if (kvm_enabled() && cpu->enable_pmu && + (env->features[FEAT_7_0_EDX] & CPUID_7_0_EDX_ARCH_LBR) && + (*eax & CPUID_XSAVE_XSAVES)) { + *ecx |= XSTATE_ARCH_LBR_MASK; + } else { + *ecx &= ~XSTATE_ARCH_LBR_MASK; + } + } else if (count == 0xf && kvm_enabled() && cpu->enable_pmu && + (env->features[FEAT_7_0_EDX] & CPUID_7_0_EDX_ARCH_LBR)) { + *eax = kvm_arch_get_supported_cpuid(cs->kvm_state, 0xD, 0xf, R_EAX); + *ebx = kvm_arch_get_supported_cpuid(cs->kvm_state, 0xD, 0xf, R_EBX); + *ecx = kvm_arch_get_supported_cpuid(cs->kvm_state, 0xD, 0xf, R_ECX); + *edx = kvm_arch_get_supported_cpuid(cs->kvm_state, 0xD, 0xf, R_EDX); } else if (count < ARRAY_SIZE(x86_ext_save_areas)) { const ExtSaveArea *esa = &x86_ext_save_areas[count];