[kvm-unit-tests,4/4] x86/pmu: Add a PEBS test to verify the host LBRs aren't leaked to the guest

Message ID: 20240306230153.786365-5-seanjc@google.com (mailing list archive)
State: New, archived
Series: x86/pmu: PEBS fixes and new testcases

Commit Message

Sean Christopherson March 6, 2024, 11:01 p.m. UTC
When using adaptive PEBS with LBR entries, verify that the LBR entries are
all '0'.  If KVM fails to context switch LBRs, e.g. when the guest isn't
using LBRs, as is the case in the pmu_pebs test, then adaptive PEBS can be
used to read the *host* LBRs as the CPU doesn't enforce the VMX MSR bitmaps
when generating PEBS records, i.e. ignores KVM's interception of reads to
LBR MSRs.

This testcase is best exercised by simultaneously utilizing LBRs in the
host, e.g. by running "perf record -b -e instructions", so that there is
non-zero data in the LBR MSRs.

Cc: Like Xu <like.xu.linux@gmail.com>
Cc: Mingwei Zhang <mizhang@google.com>
Cc: Zhenyu Wang <zhenyuw@linux.intel.com>
Cc: Zhang Xiong <xiong.y.zhang@intel.com>
Cc: Lv Zhiyuan <zhiyuan.lv@intel.com>
Cc: Dapeng Mi <dapeng1.mi@intel.com>
Signed-off-by: Sean Christopherson <seanjc@google.com>
---
 x86/pmu_pebs.c | 13 +++++++++++++
 1 file changed, 13 insertions(+)
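
For context, the leak scenario described above depends on the guest requesting the
LBR group in its adaptive PEBS data configuration even though it never reads the LBR
MSRs directly. Below is a minimal, illustrative sketch (not part of this patch) of how
a guest-side test might do that; the MSR indices and bit layout follow the SDM/Linux
perf definitions, wrmsr()/rdmsr() are assumed from the kvm-unit-tests library, and the
helper name is made up for illustration:

	/*
	 * Illustrative sketch only -- not from the patch.  MSR indices and the
	 * PEBS_DATA_CFG bit layout are taken from the SDM/Linux perf headers;
	 * wrmsr()/rdmsr() come from the kvm-unit-tests library.
	 */
	#define MSR_PEBS_DATA_CFG	0x000003f2
	#define MSR_IA32_PEBS_ENABLE	0x000003f1
	#define PEBS_DATACFG_LBRS	(1ull << 3)
	#define PEBS_DATACFG_LBR_SHIFT	24

	static void request_lbrs_in_pebs_records(unsigned int nr_lbr_entries)
	{
		/*
		 * Ask the CPU to append nr_lbr_entries LBR records to every PEBS
		 * record; the field encodes "number of entries minus one".
		 */
		u64 data_cfg = PEBS_DATACFG_LBRS |
			       ((u64)(nr_lbr_entries - 1) << PEBS_DATACFG_LBR_SHIFT);

		wrmsr(MSR_PEBS_DATA_CFG, data_cfg);

		/* Enable PEBS on GP counter 0, as the existing test already does. */
		wrmsr(MSR_IA32_PEBS_ENABLE, rdmsr(MSR_IA32_PEBS_ENABLE) | 1ull);
	}

If KVM has not context switched the LBR MSRs, the LBR block of every record generated
after this point is filled from whatever the hardware LBRs currently hold, i.e.
potentially host data, which is exactly what the new check looks for.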

Comments

Like Xu March 7, 2024, 9:23 a.m. UTC | #1
On 7/3/2024 7:01 am, Sean Christopherson wrote:
> When using adaptive PEBS with LBR entries, verify that the LBR entries are
> all '0'.  If KVM fails to context switch LBRs, e.g. when the guest isn't
> using LBRs, as is the case in the pmu_pebs test, then adaptive PEBS can be
> used to read the *host* LBRs as the CPU doesn't enforce the VMX MSR bitmaps
> when generating PEBS records, i.e. ignores KVM's interception of reads to
> LBR MSRs.
> 
> This testcase is best exercised by simultaneously utilizing LBRs in the
> host, e.g. by running "perf record -b -e instructions", so that there is
> non-zero data in the LBR MSRs.
> 
> Cc: Like Xu <like.xu.linux@gmail.com>
> Cc: Mingwei Zhang <mizhang@google.com>
> Cc: Zhenyu Wang <zhenyuw@linux.intel.com>
> Cc: Zhang Xiong <xiong.y.zhang@intel.com>
> Cc: Lv Zhiyuan <zhiyuan.lv@intel.com>
> Cc: Dapeng Mi <dapeng1.mi@intel.com>
> Signed-off-by: Sean Christopherson <seanjc@google.com>

As I recall, the conclusion of the internal discussions was "low priority."

This issue has been sitting on my to-do list for a long time, and finally
we have a test case that hits it squarely.

Reviewed-by: Like Xu <likexu@tencent.com>

Mi, Dapeng March 7, 2024, 9:31 a.m. UTC | #2
On 3/7/2024 7:01 AM, Sean Christopherson wrote:
> When using adaptive PEBS with LBR entries, verify that the LBR entries are
> all '0'.  If KVM fails to context switch LBRs, e.g. when the guest isn't
> using LBRs, as is the case in the pmu_pebs test, then adaptive PEBS can be
> used to read the *host* LBRs as the CPU doesn't enforce the VMX MSR bitmaps
> when generating PEBS records, i.e. ignores KVM's interception of reads to
> LBR MSRs.
>
> This testcase is best exercised by simultaneously utilizing LBRs in the
> host, e.g. by running "perf record -b -e instructions", so that there is
> non-zero data in the LBR MSRs.
>
> Cc: Like Xu <like.xu.linux@gmail.com>
> Cc: Mingwei Zhang <mizhang@google.com>
> Cc: Zhenyu Wang <zhenyuw@linux.intel.com>
> Cc: Zhang Xiong <xiong.y.zhang@intel.com>
> Cc: Lv Zhiyuan <zhiyuan.lv@intel.com>
> Cc: Dapeng Mi <dapeng1.mi@intel.com>
> Signed-off-by: Sean Christopherson <seanjc@google.com>
Reviewed-by: Dapeng Mi <dapeng1.mi@linux.intel.com>

Patch

diff --git a/x86/pmu_pebs.c b/x86/pmu_pebs.c
index 0e8d60c3..df8e736f 100644
--- a/x86/pmu_pebs.c
+++ b/x86/pmu_pebs.c
@@ -299,6 +299,19 @@  static void check_pebs_records(u64 bitmask, u64 pebs_data_cfg, bool use_adaptive
 		expected = pebs_idx_match && pebs_size_match && data_cfg_match;
 		report(expected,
 		       "PEBS record (written seq %d) is verified (including size, counters and cfg).", count);
+		if (use_adaptive && (pebs_data_cfg & PEBS_DATACFG_LBRS)) {
+			unsigned int lbrs_offset = get_pebs_record_size(pebs_data_cfg & ~PEBS_DATACFG_LBRS, true);
+			struct lbr_entry *pebs_lbrs = cur_record + lbrs_offset;
+			int i;
+
+			for (i = 0; i < MAX_NUM_LBR_ENTRY; i++) {
+				if (!pebs_lbrs[i].from && !pebs_lbrs[i].to)
+					continue;
+
+				report_fail("PEBS LBR record %u isn't empty, got from = '%lx', to = '%lx', info = '%lx'",
+					    i, pebs_lbrs[i].from, pebs_lbrs[i].to, pebs_lbrs[i].info);
+			}
+		}
 		cur_record = cur_record + pebs_record_size;
 		count++;
 	} while (expected && (void *)cur_record < (void *)ds->pebs_index);
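
Note on the lbrs_offset computation above: adaptive PEBS lays out the optional groups
in a fixed order (basic info, memory info, GPRs, XMMs, then LBR entries), so the size
of a record computed *without* the LBR group is exactly the offset at which the LBR
block begins. A rough sketch of that arithmetic, using the group struct names from the
Linux perf headers as an assumption about the test's local definitions:

	/*
	 * Illustrative only; the real get_pebs_record_size() in pmu_pebs.c may
	 * differ in detail.  Struct and bit names mirror the Linux perf headers.
	 */
	static unsigned int lbr_offset_in_record(u64 pebs_data_cfg)
	{
		unsigned int size = sizeof(struct pebs_basic);	/* always present */

		if (pebs_data_cfg & PEBS_DATACFG_MEMINFO)
			size += sizeof(struct pebs_meminfo);
		if (pebs_data_cfg & PEBS_DATACFG_GP)
			size += sizeof(struct pebs_gprs);
		if (pebs_data_cfg & PEBS_DATACFG_XMMS)
			size += sizeof(struct pebs_xmm);

		/* LBR entries, when requested, follow all of the groups above. */
		return size;
	}

This is equivalent to the patch's get_pebs_record_size(pebs_data_cfg & ~PEBS_DATACFG_LBRS, true)
call, which reuses the existing size helper instead of duplicating the layout knowledge.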