From patchwork Sat Sep 14 10:17:11 2024
X-Patchwork-Id: 13804280
From: Dapeng Mi
To: Sean Christopherson, Paolo Bonzini
Cc: kvm@vger.kernel.org, linux-kernel@vger.kernel.org, Jim Mattson,
    Mingwei Zhang, Xiong Zhang, Zhenyu Wang, Like Xu, Jinrong Liang,
    Yongwei Ma, Dapeng Mi, Dapeng Mi
Subject: [kvm-unit-tests patch v6 01/18] x86: pmu: Remove duplicate code in pmu_init()
Date: Sat, 14 Sep 2024 10:17:11 +0000
Message-Id: <20240914101728.33148-2-dapeng1.mi@linux.intel.com>
In-Reply-To: <20240914101728.33148-1-dapeng1.mi@linux.intel.com>
References: <20240914101728.33148-1-dapeng1.mi@linux.intel.com>

From: Xiong Zhang

There are two identical code blocks in the pmu_init() helper. Remove the
duplicated one.

Reviewed-by: Jim Mattson
Signed-off-by: Xiong Zhang
Signed-off-by: Dapeng Mi
---
 lib/x86/pmu.c | 5 -----
 1 file changed, 5 deletions(-)

diff --git a/lib/x86/pmu.c b/lib/x86/pmu.c
index 0f2afd65..d06e9455 100644
--- a/lib/x86/pmu.c
+++ b/lib/x86/pmu.c
@@ -16,11 +16,6 @@ void pmu_init(void)
 		pmu.fixed_counter_width = (cpuid_10.d >> 5) & 0xff;
 	}
 
-	if (pmu.version > 1) {
-		pmu.nr_fixed_counters = cpuid_10.d & 0x1f;
-		pmu.fixed_counter_width = (cpuid_10.d >> 5) & 0xff;
-	}
-
 	pmu.nr_gp_counters = (cpuid_10.a >> 8) & 0xff;
 	pmu.gp_counter_width = (cpuid_10.a >> 16) & 0xff;
 	pmu.gp_counter_mask_length = (cpuid_10.a >> 24) & 0xff;

From patchwork Sat Sep 14 10:17:12 2024
X-Patchwork-Id: 13804281
From: Dapeng Mi
To: Sean Christopherson, Paolo Bonzini
Cc: kvm@vger.kernel.org, linux-kernel@vger.kernel.org, Jim Mattson,
    Mingwei Zhang, Xiong Zhang, Zhenyu Wang, Like Xu, Jinrong Liang,
    Yongwei Ma, Dapeng Mi, Dapeng Mi
Subject: [kvm-unit-tests patch v6 02/18] x86: pmu: Remove blank line and redundant space
Date: Sat, 14 Sep 2024 10:17:12 +0000
Message-Id: <20240914101728.33148-3-dapeng1.mi@linux.intel.com>
In-Reply-To: <20240914101728.33148-1-dapeng1.mi@linux.intel.com>
References: <20240914101728.33148-1-dapeng1.mi@linux.intel.com>

Code style changes only: remove a redundant space and a blank line in
verify_event(). No functional change.

Reviewed-by: Mingwei Zhang
Signed-off-by: Dapeng Mi
---
 x86/pmu.c | 3 +--
 1 file changed, 1 insertion(+), 2 deletions(-)

diff --git a/x86/pmu.c b/x86/pmu.c
index ce9abbe1..865dbe67 100644
--- a/x86/pmu.c
+++ b/x86/pmu.c
@@ -205,8 +205,7 @@ static noinline void __measure(pmu_counter_t *evt, uint64_t count)
 static bool verify_event(uint64_t count, struct pmu_event *e)
 {
 	// printf("%d <= %ld <= %d\n", e->min, count, e->max);
-	return count >= e->min  && count <= e->max;
-
+	return count >= e->min && count <= e->max;
 }
 
 static bool verify_counter(pmu_counter_t *cnt)

From patchwork Sat Sep 14 10:17:13 2024
X-Patchwork-Id: 13804282
From: Dapeng Mi
To: Sean Christopherson, Paolo Bonzini
Cc: kvm@vger.kernel.org, linux-kernel@vger.kernel.org, Jim Mattson,
    Mingwei Zhang, Xiong Zhang, Zhenyu Wang, Like Xu, Jinrong Liang,
    Yongwei Ma, Dapeng Mi, Dapeng Mi
Subject: [kvm-unit-tests patch v6 03/18] x86: pmu: Refine fixed_events[] names
Date: Sat, 14 Sep 2024 10:17:13 +0000
Message-Id: <20240914101728.33148-4-dapeng1.mi@linux.intel.com>
In-Reply-To: <20240914101728.33148-1-dapeng1.mi@linux.intel.com>
References: <20240914101728.33148-1-dapeng1.mi@linux.intel.com>

In the SDM the fixed counters are numbered from 0, but the fixed_events[]
names are currently numbered from 1, which can confuse users. Change the
fixed_events[] names so that they are numbered from 0 as well, keeping
them identical to the SDM.
Reviewed-by: Mingwei Zhang
Signed-off-by: Dapeng Mi
---
 x86/pmu.c | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/x86/pmu.c b/x86/pmu.c
index 865dbe67..60db8bdf 100644
--- a/x86/pmu.c
+++ b/x86/pmu.c
@@ -45,9 +45,9 @@ struct pmu_event {
 	{"branches", 0x00c2, 1*N, 1.1*N},
 	{"branch misses", 0x00c3, 0, 0.1*N},
 }, fixed_events[] = {
-	{"fixed 1", MSR_CORE_PERF_FIXED_CTR0, 10*N, 10.2*N},
-	{"fixed 2", MSR_CORE_PERF_FIXED_CTR0 + 1, 1*N, 30*N},
-	{"fixed 3", MSR_CORE_PERF_FIXED_CTR0 + 2, 0.1*N, 30*N}
+	{"fixed 0", MSR_CORE_PERF_FIXED_CTR0, 10*N, 10.2*N},
+	{"fixed 1", MSR_CORE_PERF_FIXED_CTR0 + 1, 1*N, 30*N},
+	{"fixed 2", MSR_CORE_PERF_FIXED_CTR0 + 2, 0.1*N, 30*N}
 };
 
 char *buf;

From patchwork Sat Sep 14 10:17:14 2024
X-Patchwork-Id: 13804283
From: Dapeng Mi
To: Sean Christopherson, Paolo Bonzini
Cc: kvm@vger.kernel.org, linux-kernel@vger.kernel.org, Jim Mattson,
    Mingwei Zhang, Xiong Zhang, Zhenyu Wang, Like Xu, Jinrong Liang,
    Yongwei Ma, Dapeng Mi, Dapeng Mi
Subject: [kvm-unit-tests patch v6 04/18] x86: pmu: Fix the issue that pmu_counter_t.config crosses cache line
Date: Sat, 14 Sep 2024 10:17:14 +0000
Message-Id: <20240914101728.33148-5-dapeng1.mi@linux.intel.com>
In-Reply-To: <20240914101728.33148-1-dapeng1.mi@linux.intel.com>
References: <20240914101728.33148-1-dapeng1.mi@linux.intel.com>

When running the pmu test on SPR, the following #GP fault is reported.

  Unhandled exception 13 #GP at ip 000000000040771f
  error_code=0000 rflags=00010046 cs=00000008
  rax=00000000004031ad rcx=0000000000000186 rdx=0000000000000000 rbx=00000000005142f0
  rbp=0000000000514260 rsi=0000000000000020 rdi=0000000000000340
  r8=0000000000513a65 r9=00000000000003f8 r10=000000000000000d r11=00000000ffffffff
  r12=000000000043003c r13=0000000000514450 r14=000000000000000b r15=0000000000000001
  cr0=0000000080010011 cr2=0000000000000000 cr3=0000000001007000 cr4=0000000000000020
  cr8=0000000000000000
  STACK: @40771f 40040e 400976 400aef 40148d 401da9 4001ad
  FAIL pmu

It looks like the EVENTSEL0 MSR (0x186) is written with an invalid value
(0x4031ad), which causes the #GP. Further investigation shows that the #GP
is caused by the following code in __start_event():

  wrmsr(MSR_GP_EVENT_SELECTx(event_to_global_idx(evt)),
        evt->config | EVNTSEL_EN);

evt->config is correctly initialized but appears to be corrupted before it
is written to the MSR. The original pmu_counter_t layout looks as below.

  typedef struct {
  	uint32_t ctr;
  	uint64_t config;
  	uint64_t count;
  	int idx;
  } pmu_counter_t;

The config field crosses two cache lines. When the two cache lines are not
updated simultaneously, the config value is corrupted. Adjust the order of
the pmu_counter_t fields so that the config field is cache-line aligned
and never straddles a cache line.
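Purely as an illustration (not part of the patch), the key property of the
reordered layout can be stated as a build-time check; offsetof() from
<stddef.h> and the 64-byte cache-line size are the only assumptions here:

  #include <stddef.h>	/* offsetof() */
  #include <stdint.h>

  typedef struct {
  	uint32_t ctr;
  	uint32_t idx;
  	uint64_t config;
  	uint64_t count;
  } pmu_counter_t;

  /*
   * config now sits in a naturally aligned 8-byte slot, so it cannot span
   * two 64-byte cache lines as long as the structure itself is at least
   * 8-byte aligned.
   */
  _Static_assert(offsetof(pmu_counter_t, config) % sizeof(uint64_t) == 0,
		 "config must be naturally aligned");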
Signed-off-by: Dapeng Mi
---
 x86/pmu.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/x86/pmu.c b/x86/pmu.c
index 60db8bdf..a0268db8 100644
--- a/x86/pmu.c
+++ b/x86/pmu.c
@@ -21,9 +21,9 @@
 typedef struct {
 	uint32_t ctr;
+	uint32_t idx;
 	uint64_t config;
 	uint64_t count;
-	int idx;
 } pmu_counter_t;
 
 struct pmu_event {

From patchwork Sat Sep 14 10:17:15 2024
X-Patchwork-Id: 13804284
From: Dapeng Mi
To: Sean Christopherson, Paolo Bonzini
Cc: kvm@vger.kernel.org, linux-kernel@vger.kernel.org, Jim Mattson,
    Mingwei Zhang, Xiong Zhang, Zhenyu Wang, Like Xu, Jinrong Liang,
    Yongwei Ma, Dapeng Mi, Dapeng Mi
Subject: [kvm-unit-tests patch v6 05/18] x86: pmu: Enlarge cnt[] length to 48 in check_counters_many()
Date: Sat, 14 Sep 2024 10:17:15 +0000
Message-Id: <20240914101728.33148-6-dapeng1.mi@linux.intel.com>
In-Reply-To: <20240914101728.33148-1-dapeng1.mi@linux.intel.com>
References: <20240914101728.33148-1-dapeng1.mi@linux.intel.com>

The latest Intel processors, such as Sapphire Rapids, already have 8 GP
counters and 4 fixed counters, so the original cnt[] array length of 10 is
not enough to cover all supported PMU counters on these new processors,
even though KVM currently supports at most 3 fixed counters. This can
cause out-of-bounds memory accesses and may trigger false alarms in the
PMU counter validation.

More GP and fixed counters will likely be introduced in the future, so
extend the cnt[] array length to 48 once and for all. Based on the layout
of IA32_PERF_GLOBAL_CTRL and IA32_PERF_GLOBAL_STATUS, 48 looks sufficient
for the near future.

Reviewed-by: Jim Mattson
Signed-off-by: Dapeng Mi
---
 x86/pmu.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/x86/pmu.c b/x86/pmu.c
index a0268db8..b4de2680 100644
--- a/x86/pmu.c
+++ b/x86/pmu.c
@@ -255,7 +255,7 @@ static void check_fixed_counters(void)
 
 static void check_counters_many(void)
 {
-	pmu_counter_t cnt[10];
+	pmu_counter_t cnt[48];
 	int i, n;
 
 	for (i = 0, n = 0; n < pmu.nr_gp_counters; i++) {

From patchwork Sat Sep 14 10:17:16 2024
X-Patchwork-Id: 13804285
From: Dapeng Mi
To: Sean Christopherson, Paolo Bonzini
Cc: kvm@vger.kernel.org, linux-kernel@vger.kernel.org, Jim Mattson,
    Mingwei Zhang, Xiong Zhang, Zhenyu Wang, Like Xu, Jinrong Liang,
    Yongwei Ma, Dapeng Mi, Dapeng Mi
Subject: [kvm-unit-tests patch v6 06/18] x86: pmu: Print measured event count if test fails
Date: Sat, 14 Sep 2024 10:17:16 +0000
Message-Id: <20240914101728.33148-7-dapeng1.mi@linux.intel.com>
In-Reply-To: <20240914101728.33148-1-dapeng1.mi@linux.intel.com>
References: <20240914101728.33148-1-dapeng1.mi@linux.intel.com>

Print the measured event count if the test case fails. This helps users
quickly see why the test case failed.
Signed-off-by: Dapeng Mi
---
 x86/pmu.c | 8 ++++++--
 1 file changed, 6 insertions(+), 2 deletions(-)

diff --git a/x86/pmu.c b/x86/pmu.c
index b4de2680..53bb7ec0 100644
--- a/x86/pmu.c
+++ b/x86/pmu.c
@@ -204,8 +204,12 @@ static noinline void __measure(pmu_counter_t *evt, uint64_t count)
 
 static bool verify_event(uint64_t count, struct pmu_event *e)
 {
-	// printf("%d <= %ld <= %d\n", e->min, count, e->max);
-	return count >= e->min && count <= e->max;
+	bool pass = count >= e->min && count <= e->max;
+
+	if (!pass)
+		printf("FAIL: %d <= %"PRId64" <= %d\n", e->min, count, e->max);
+
+	return pass;
 }
 
 static bool verify_counter(pmu_counter_t *cnt)

From patchwork Sat Sep 14 10:17:17 2024
X-Patchwork-Id: 13804286
From: Dapeng Mi
To: Sean Christopherson, Paolo Bonzini
Cc: kvm@vger.kernel.org, linux-kernel@vger.kernel.org, Jim Mattson,
    Mingwei Zhang, Xiong Zhang, Zhenyu Wang, Like Xu, Jinrong Liang,
    Yongwei Ma, Dapeng Mi, Dapeng Mi
Subject: [kvm-unit-tests patch v6 07/18] x86: pmu: Fix potential out of bound access for fixed events
Date: Sat, 14 Sep 2024 10:17:17 +0000
Message-Id: <20240914101728.33148-8-dapeng1.mi@linux.intel.com>
In-Reply-To: <20240914101728.33148-1-dapeng1.mi@linux.intel.com>
References: <20240914101728.33148-1-dapeng1.mi@linux.intel.com>

The current PMU code doesn't check whether the number of PMU fixed
counters is larger than the number of pre-defined fixed events. If it is,
memory is accessed out of range. So limit the number of validated fixed
counters to MIN(pmu.nr_fixed_counters, ARRAY_SIZE(fixed_events)) and print
a message warning that the KUT pmu test needs to be updated if the number
of fixed counters exceeds the number of defined fixed events.

Signed-off-by: Dapeng Mi
---
 x86/pmu.c | 27 +++++++++++++++++++++------
 1 file changed, 21 insertions(+), 6 deletions(-)

diff --git a/x86/pmu.c b/x86/pmu.c
index 53bb7ec0..cc940a61 100644
--- a/x86/pmu.c
+++ b/x86/pmu.c
@@ -54,6 +54,7 @@ char *buf;
 
 static struct pmu_event *gp_events;
 static unsigned int gp_events_size;
+static unsigned int fixed_counters_num;
 
 static inline void loop(void)
@@ -113,8 +114,12 @@ static struct pmu_event* get_counter_event(pmu_counter_t *cnt)
 		for (i = 0; i < gp_events_size; i++)
 			if (gp_events[i].unit_sel == (cnt->config & 0xffff))
 				return &gp_events[i];
-	} else
-		return &fixed_events[cnt->ctr - MSR_CORE_PERF_FIXED_CTR0];
+	} else {
+		unsigned int idx = cnt->ctr - MSR_CORE_PERF_FIXED_CTR0;
+
+		if (idx < ARRAY_SIZE(fixed_events))
+			return &fixed_events[idx];
+	}
 
 	return (void*)0;
 }
@@ -204,8 +209,12 @@ static noinline void __measure(pmu_counter_t *evt, uint64_t count)
 
 static bool verify_event(uint64_t count, struct pmu_event *e)
 {
-	bool pass = count >= e->min && count <= e->max;
+	bool pass;
 
+	if (!e)
+		return false;
+
+	pass = count >= e->min && count <= e->max;
 	if (!pass)
 		printf("FAIL: %d <= %"PRId64" <= %d\n", e->min, count, e->max);
@@ -250,7 +259,7 @@ static void check_fixed_counters(void)
 	};
 	int i;
 
-	for (i = 0; i < pmu.nr_fixed_counters; i++) {
+	for (i = 0; i < fixed_counters_num; i++) {
 		cnt.ctr = fixed_events[i].unit_sel;
 		measure_one(&cnt);
 		report(verify_event(cnt.count, &fixed_events[i]), "fixed-%d", i);
@@ -271,7 +280,7 @@ static void check_counters_many(void)
 			gp_events[i % gp_events_size].unit_sel;
 		n++;
 	}
-	for (i = 0; i < pmu.nr_fixed_counters; i++) {
+	for (i = 0; i < fixed_counters_num; i++) {
 		cnt[n].ctr = fixed_events[i].unit_sel;
 		cnt[n].config = EVNTSEL_OS | EVNTSEL_USR;
 		n++;
@@ -419,7 +428,7 @@ static void check_rdpmc(void)
 		else
 			report(cnt.count == (u32)val, "fast-%d", i);
 	}
-	for (i = 0; i < pmu.nr_fixed_counters; i++) {
+	for (i = 0; i < fixed_counters_num; i++) {
 		uint64_t x = val & ((1ull << pmu.fixed_counter_width) - 1);
 		pmu_counter_t cnt = {
 			.ctr = MSR_CORE_PERF_FIXED_CTR0 + i,
@@ -744,6 +753,12 @@ int main(int ac, char **av)
 	printf("Fixed counters: %d\n", pmu.nr_fixed_counters);
 	printf("Fixed counter width: %d\n", pmu.fixed_counter_width);
 
+	fixed_counters_num = MIN(pmu.nr_fixed_counters, ARRAY_SIZE(fixed_events));
+	if (pmu.nr_fixed_counters > ARRAY_SIZE(fixed_events))
+		report_info("Fixed counters number %d > defined fixed events %ld. "
+			    "Please update test case.", pmu.nr_fixed_counters,
+			    ARRAY_SIZE(fixed_events));
+
 	apic_write(APIC_LVTPC, PMI_VECTOR);
 
 	check_counters();

From patchwork Sat Sep 14 10:17:18 2024
X-Patchwork-Id: 13804287
From: Dapeng Mi
To: Sean Christopherson, Paolo Bonzini
Cc: kvm@vger.kernel.org, linux-kernel@vger.kernel.org, Jim Mattson,
    Mingwei Zhang, Xiong Zhang, Zhenyu Wang, Like Xu, Jinrong Liang,
    Yongwei Ma, Dapeng Mi, Dapeng Mi
Subject: [kvm-unit-tests patch v6 08/18] x86: pmu: Fix cycles event validation failure
Date: Sat, 14 Sep 2024 10:17:18 +0000
Message-Id: <20240914101728.33148-9-dapeng1.mi@linux.intel.com>
In-Reply-To: <20240914101728.33148-1-dapeng1.mi@linux.intel.com>
References: <20240914101728.33148-1-dapeng1.mi@linux.intel.com>

When running the pmu test on SPR, the following failure is sometimes
reported.

  PMU version: 2
  GP counters: 8
  GP counter width: 48
  Mask length: 8
  Fixed counters: 3
  Fixed counter width: 48
  1000000 <= 55109398 <= 50000000
  FAIL: Intel: core cycles-0
  1000000 <= 18279571 <= 50000000
  PASS: Intel: core cycles-1
  1000000 <= 12238092 <= 50000000
  PASS: Intel: core cycles-2
  1000000 <= 7981727 <= 50000000
  PASS: Intel: core cycles-3
  1000000 <= 6984711 <= 50000000
  PASS: Intel: core cycles-4
  1000000 <= 6773673 <= 50000000
  PASS: Intel: core cycles-5
  1000000 <= 6697842 <= 50000000
  PASS: Intel: core cycles-6
  1000000 <= 6747947 <= 50000000
  PASS: Intel: core cycles-7

The "core cycles" count on the first counter exceeds the upper boundary
and causes a failure, and then the "core cycles" count drops gradually and
reaches a stable state. That looks reasonable. The "core cycles" event is
defined as the first event in the xxx_gp_events[] arrays and is always
verified first. When the program loop() is executed for the first time, it
needs to warm up the pipeline and caches, e.g. it has to wait for the
caches to be filled. All this warm-up work leads to a quite large core
cycles count which may exceed the verification range.

To avoid false positives of the cycles event caused by warm-up, explicitly
introduce a warm-up phase before the real verification starts.

Signed-off-by: Dapeng Mi
---
 x86/pmu.c | 16 ++++++++++++++++
 1 file changed, 16 insertions(+)

diff --git a/x86/pmu.c b/x86/pmu.c
index cc940a61..e864ebc4 100644
--- a/x86/pmu.c
+++ b/x86/pmu.c
@@ -602,11 +602,27 @@ static void check_tsx_cycles(void)
 	report_prefix_pop();
 }
 
+static void warm_up(void)
+{
+	int i = 8;
+
+	/*
+	 * Since cycles event is always run as the first event, there would be
+	 * a warm-up state to warm up the cache, it leads to the measured cycles
+	 * value may exceed the pre-defined cycles upper boundary and cause
+	 * false positive. To avoid this, introduce an warm-up state before
+	 * the real verification.
+	 */
+	while (i--)
+		loop();
+}
+
 static void check_counters(void)
 {
 	if (is_fep_available())
 		check_emulated_instr();
 
+	warm_up();
 	check_gp_counters();
 	check_fixed_counters();
 	check_rdpmc();

From patchwork Sat Sep 14 10:17:19 2024
X-Patchwork-Id: 13804288
From: Dapeng Mi
To: Sean Christopherson, Paolo Bonzini
Cc: kvm@vger.kernel.org, linux-kernel@vger.kernel.org, Jim Mattson,
    Mingwei Zhang, Xiong Zhang, Zhenyu Wang, Like Xu, Jinrong Liang,
    Yongwei Ma, Dapeng Mi, Dapeng Mi
Subject: [kvm-unit-tests patch v6 09/18] x86: pmu: Use macro to replace hard-coded branches event index
Date: Sat, 14 Sep 2024 10:17:19 +0000
Message-Id: <20240914101728.33148-10-dapeng1.mi@linux.intel.com>
In-Reply-To: <20240914101728.33148-1-dapeng1.mi@linux.intel.com>
References: <20240914101728.33148-1-dapeng1.mi@linux.intel.com>

Currently the branches event index is a hard-coded number. If new events
are added in the future, the branches event index may change, and it is
easy to miss the hard-coded index and forget to update it accordingly.
Thus, replace the hard-coded index with a macro.

Signed-off-by: Dapeng Mi
---
 x86/pmu.c | 19 ++++++++++++++++++-
 1 file changed, 18 insertions(+), 1 deletion(-)

diff --git a/x86/pmu.c b/x86/pmu.c
index e864ebc4..496ee877 100644
--- a/x86/pmu.c
+++ b/x86/pmu.c
@@ -50,6 +50,22 @@ struct pmu_event {
 	{"fixed 2", MSR_CORE_PERF_FIXED_CTR0 + 2, 0.1*N, 30*N}
 };
 
+/*
+ * Events index in intel_gp_events[], ensure consistent with
+ * intel_gp_events[].
+ */
+enum {
+	INTEL_BRANCHES_IDX = 5,
+};
+
+/*
+ * Events index in amd_gp_events[], ensure consistent with
+ * amd_gp_events[].
+ */
+enum {
+	AMD_BRANCHES_IDX = 2,
+};
+
 char *buf;
 
 static struct pmu_event *gp_events;
@@ -492,7 +508,8 @@ static void check_emulated_instr(void)
 {
 	uint64_t status, instr_start, brnch_start;
 	uint64_t gp_counter_width = (1ull << pmu.gp_counter_width) - 1;
-	unsigned int branch_idx = pmu.is_intel ? 5 : 2;
+	unsigned int branch_idx = pmu.is_intel ?
+				  INTEL_BRANCHES_IDX : AMD_BRANCHES_IDX;
 
 	pmu_counter_t brnch_cnt = {
 		.ctr = MSR_GP_COUNTERx(0),
 		/* branch instructions */

From patchwork Sat Sep 14 10:17:20 2024
X-Patchwork-Id: 13804289
smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=intel.com header.i=@intel.com header.b="A8O3ZyLj" DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1726297297; x=1757833297; h=from:to:cc:subject:date:message-id:in-reply-to: references:mime-version:content-transfer-encoding; bh=CLw+sFB+mUF6gzDPRFjmd//aq+LgNnbUGCAlWCZahBA=; b=A8O3ZyLjEEaSUaUrJtUcAezf5IBDyVhJxfZvQRE96ZOkRTx25Jf2XWU0 395gVLwu+xU1kHqd0OJ1Ojxp/7cGdGtheU2V/UzXBFPA3YzVeTUtGRhJx IKggtuXfd8+GL98Pb8xJhM2U7qXUl6fmqe4z9oHY5cbGvdd5Iw7teB6Bs ahzhaO00/YqWb5gwhtIIRvO2iolaDZsmZGneUq7qb2PVrP5FTxKEkuthS +9sIVTisWEYRZV7OFnLQx1hg1cvoNlN/FPeZwpVz4HtGKm6cSeohYaOKr Xy/CItYyR/8YPPaA4YYmnkyA9eVc+AaIDkgZAk97UPDRjlMIHrxd/gt81 w==; X-CSE-ConnectionGUID: QH27+pHNTMu84KhbELDVPg== X-CSE-MsgGUID: GTiORYDYQZqtdcKP6wiTaQ== X-IronPort-AV: E=McAfee;i="6700,10204,11194"; a="35778809" X-IronPort-AV: E=Sophos;i="6.10,228,1719903600"; d="scan'208";a="35778809" Received: from fmviesa006.fm.intel.com ([10.60.135.146]) by orvoesa103.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 14 Sep 2024 00:01:37 -0700 X-CSE-ConnectionGUID: P+hXxCiZRwaNuP8x5jfJjg== X-CSE-MsgGUID: yj00ZIYKT8KJdLSijc0e8g== X-ExtLoop1: 1 X-IronPort-AV: E=Sophos;i="6.10,228,1719903600"; d="scan'208";a="67950959" Received: from emr.sh.intel.com ([10.112.229.56]) by fmviesa006.fm.intel.com with ESMTP; 14 Sep 2024 00:01:33 -0700 From: Dapeng Mi To: Sean Christopherson , Paolo Bonzini Cc: kvm@vger.kernel.org, linux-kernel@vger.kernel.org, Jim Mattson , Mingwei Zhang , Xiong Zhang , Zhenyu Wang , Like Xu , Jinrong Liang , Yongwei Ma , Dapeng Mi , Dapeng Mi Subject: [kvm-unit-tests patch v6 10/18] x86: pmu: Use macro to replace hard-coded ref-cycles event index Date: Sat, 14 Sep 2024 10:17:20 +0000 Message-Id: <20240914101728.33148-11-dapeng1.mi@linux.intel.com> X-Mailer: git-send-email 2.40.1 In-Reply-To: <20240914101728.33148-1-dapeng1.mi@linux.intel.com> References: <20240914101728.33148-1-dapeng1.mi@linux.intel.com> Precedence: bulk X-Mailing-List: kvm@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 Replace hard-coded ref-cycles event index with macro to avoid possible mismatch issue if new event is added in the future and cause ref-cycles event index changed, but forget to update the hard-coded ref-cycles event index. Signed-off-by: Dapeng Mi --- x86/pmu.c | 10 +++++++--- 1 file changed, 7 insertions(+), 3 deletions(-) diff --git a/x86/pmu.c b/x86/pmu.c index 496ee877..523369b2 100644 --- a/x86/pmu.c +++ b/x86/pmu.c @@ -55,6 +55,7 @@ struct pmu_event { * intel_gp_events[]. 
  */
 enum {
+	INTEL_REF_CYCLES_IDX = 2,
 	INTEL_BRANCHES_IDX = 5,
 };
 
@@ -708,7 +709,8 @@ static void set_ref_cycle_expectations(void)
 {
 	pmu_counter_t cnt = {
 		.ctr = MSR_IA32_PERFCTR0,
-		.config = EVNTSEL_OS | EVNTSEL_USR | intel_gp_events[2].unit_sel,
+		.config = EVNTSEL_OS | EVNTSEL_USR |
+			  intel_gp_events[INTEL_REF_CYCLES_IDX].unit_sel,
 	};
 	uint64_t tsc_delta;
 	uint64_t t0, t1, t2, t3;
@@ -744,8 +746,10 @@ static void set_ref_cycle_expectations(void)
 	if (!tsc_delta)
 		return;
 
-	intel_gp_events[2].min = (intel_gp_events[2].min * cnt.count) / tsc_delta;
-	intel_gp_events[2].max = (intel_gp_events[2].max * cnt.count) / tsc_delta;
+	intel_gp_events[INTEL_REF_CYCLES_IDX].min =
+		(intel_gp_events[INTEL_REF_CYCLES_IDX].min * cnt.count) / tsc_delta;
+	intel_gp_events[INTEL_REF_CYCLES_IDX].max =
+		(intel_gp_events[INTEL_REF_CYCLES_IDX].max * cnt.count) / tsc_delta;
 }
 
 static void check_invalid_rdpmc_gp(void)

From patchwork Sat Sep 14 10:17:21 2024
X-Patchwork-Id: 13804290
E=Sophos;i="6.10,228,1719903600"; d="scan'208";a="35778822" Received: from fmviesa006.fm.intel.com ([10.60.135.146]) by orvoesa103.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 14 Sep 2024 00:01:40 -0700 X-CSE-ConnectionGUID: ermrybJ2SUiZh1Yp1FfbHQ== X-CSE-MsgGUID: 44PPAOhASdWQwpJS1QEo0g== X-ExtLoop1: 1 X-IronPort-AV: E=Sophos;i="6.10,228,1719903600"; d="scan'208";a="67950969" Received: from emr.sh.intel.com ([10.112.229.56]) by fmviesa006.fm.intel.com with ESMTP; 14 Sep 2024 00:01:37 -0700 From: Dapeng Mi To: Sean Christopherson , Paolo Bonzini Cc: kvm@vger.kernel.org, linux-kernel@vger.kernel.org, Jim Mattson , Mingwei Zhang , Xiong Zhang , Zhenyu Wang , Like Xu , Jinrong Liang , Yongwei Ma , Dapeng Mi , Dapeng Mi Subject: [kvm-unit-tests patch v6 11/18] x86: pmu: Use macro to replace hard-coded instructions event index Date: Sat, 14 Sep 2024 10:17:21 +0000 Message-Id: <20240914101728.33148-12-dapeng1.mi@linux.intel.com> X-Mailer: git-send-email 2.40.1 In-Reply-To: <20240914101728.33148-1-dapeng1.mi@linux.intel.com> References: <20240914101728.33148-1-dapeng1.mi@linux.intel.com> Precedence: bulk X-Mailing-List: kvm@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 Replace hard-coded instruction event index with macro to avoid possible mismatch issue if new event is added in the future and cause instructions event index changed, but forget to update the hard-coded event index. Signed-off-by: Dapeng Mi --- x86/pmu.c | 34 +++++++++++++++++++++++++++------- 1 file changed, 27 insertions(+), 7 deletions(-) diff --git a/x86/pmu.c b/x86/pmu.c index 523369b2..91484c77 100644 --- a/x86/pmu.c +++ b/x86/pmu.c @@ -55,6 +55,7 @@ struct pmu_event { * intel_gp_events[]. */ enum { + INTEL_INSTRUCTIONS_IDX = 1, INTEL_REF_CYCLES_IDX = 2, INTEL_BRANCHES_IDX = 5, }; @@ -64,6 +65,7 @@ enum { * amd_gp_events[]. */ enum { + AMD_INSTRUCTIONS_IDX = 1, AMD_BRANCHES_IDX = 2, }; @@ -328,11 +330,16 @@ static uint64_t measure_for_overflow(pmu_counter_t *cnt) static void check_counter_overflow(void) { - uint64_t overflow_preset; int i; + uint64_t overflow_preset; + int instruction_idx = pmu.is_intel ? + INTEL_INSTRUCTIONS_IDX : + AMD_INSTRUCTIONS_IDX; + pmu_counter_t cnt = { .ctr = MSR_GP_COUNTERx(0), - .config = EVNTSEL_OS | EVNTSEL_USR | gp_events[1].unit_sel /* instructions */, + .config = EVNTSEL_OS | EVNTSEL_USR | + gp_events[instruction_idx].unit_sel /* instructions */, }; overflow_preset = measure_for_overflow(&cnt); @@ -388,13 +395,18 @@ static void check_counter_overflow(void) static void check_gp_counter_cmask(void) { + int instruction_idx = pmu.is_intel ? + INTEL_INSTRUCTIONS_IDX : + AMD_INSTRUCTIONS_IDX; + pmu_counter_t cnt = { .ctr = MSR_GP_COUNTERx(0), - .config = EVNTSEL_OS | EVNTSEL_USR | gp_events[1].unit_sel /* instructions */, + .config = EVNTSEL_OS | EVNTSEL_USR | + gp_events[instruction_idx].unit_sel /* instructions */, }; cnt.config |= (0x2 << EVNTSEL_CMASK_SHIFT); measure_one(&cnt); - report(cnt.count < gp_events[1].min, "cmask"); + report(cnt.count < gp_events[instruction_idx].min, "cmask"); } static void do_rdpmc_fast(void *ptr) @@ -469,9 +481,14 @@ static void check_running_counter_wrmsr(void) { uint64_t status; uint64_t count; + unsigned int instruction_idx = pmu.is_intel ? 
+ INTEL_INSTRUCTIONS_IDX : + AMD_INSTRUCTIONS_IDX; + pmu_counter_t evt = { .ctr = MSR_GP_COUNTERx(0), - .config = EVNTSEL_OS | EVNTSEL_USR | gp_events[1].unit_sel, + .config = EVNTSEL_OS | EVNTSEL_USR | + gp_events[instruction_idx].unit_sel, }; report_prefix_push("running counter wrmsr"); @@ -480,7 +497,7 @@ static void check_running_counter_wrmsr(void) loop(); wrmsr(MSR_GP_COUNTERx(0), 0); stop_event(&evt); - report(evt.count < gp_events[1].min, "cntr"); + report(evt.count < gp_events[instruction_idx].min, "cntr"); /* clear status before overflow test */ if (this_cpu_has_perf_global_status()) @@ -511,6 +528,9 @@ static void check_emulated_instr(void) uint64_t gp_counter_width = (1ull << pmu.gp_counter_width) - 1; unsigned int branch_idx = pmu.is_intel ? INTEL_BRANCHES_IDX : AMD_BRANCHES_IDX; + unsigned int instruction_idx = pmu.is_intel ? + INTEL_INSTRUCTIONS_IDX : + AMD_INSTRUCTIONS_IDX; pmu_counter_t brnch_cnt = { .ctr = MSR_GP_COUNTERx(0), /* branch instructions */ @@ -519,7 +539,7 @@ static void check_emulated_instr(void) pmu_counter_t instr_cnt = { .ctr = MSR_GP_COUNTERx(1), /* instructions */ - .config = EVNTSEL_OS | EVNTSEL_USR | gp_events[1].unit_sel, + .config = EVNTSEL_OS | EVNTSEL_USR | gp_events[instruction_idx].unit_sel, }; report_prefix_push("emulated instruction"); From patchwork Sat Sep 14 10:17:22 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: "Mi, Dapeng" X-Patchwork-Id: 13804291 Received: from mgamail.intel.com (mgamail.intel.com [198.175.65.11]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 23B761D016C; Sat, 14 Sep 2024 07:01:43 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=198.175.65.11 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1726297304; cv=none; b=MfvRwtckpbcrxAKTrF7KgtXUhVMghou7iejnWGDri2/PLiqnSlcpjq5qKeouEJmqgJNaIKp9dpNoNOn836MeYeBr6f7nTodp1ANjnE9cKWpIcGv1oA7INm+ujsk7hppXd4/QDuIxwVNtRmPXHOwjlvraRqynBuLqm0Fc/yF0O+E= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1726297304; c=relaxed/simple; bh=It/DXjNoq2+jPSGksnuMJgR7XrXtesJ/3Oto5mBW+T0=; h=From:To:Cc:Subject:Date:Message-Id:In-Reply-To:References: MIME-Version; b=B8KZNCpMA+FozemhfEbBDMzDwyUAOx23j28XfIBoW/CvpaV4SgE5EMXwkG/E+nroqY91g9tHGxzD3B8ZO9V3crTwzC6zxW57DdfLQ1rEr9e79GvnfS2yvkvXL3oGMVmoRgqFA2in6OVsVzqeJoSO/g1JX8yDILzBwzjpfP1V5uE= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=linux.intel.com; spf=none smtp.mailfrom=linux.intel.com; dkim=pass (2048-bit key) header.d=intel.com header.i=@intel.com header.b=eMopaxQI; arc=none smtp.client-ip=198.175.65.11 Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=linux.intel.com Authentication-Results: smtp.subspace.kernel.org; spf=none smtp.mailfrom=linux.intel.com Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=intel.com header.i=@intel.com header.b="eMopaxQI" DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1726297303; x=1757833303; h=from:to:cc:subject:date:message-id:in-reply-to: references:mime-version:content-transfer-encoding; bh=It/DXjNoq2+jPSGksnuMJgR7XrXtesJ/3Oto5mBW+T0=; b=eMopaxQI9JeYP2Cje7SlMroUYeBDwVMechn94MI3VdLUi3DPi2Ola2pt IcywsEaCYW3nyvialO5HyCLoMO9l1NhcBetAKZbsUHUEvkB+ZELZzZuu6 
l0WfWvAkyfG9mefNGLwSxdNX8s4NmNGA/GZ/uVH1YS36xf2SoWeFCkeK2 Mzs4NPf7W1CA6nfzj5qcEmzqU+q/c7qOxvTlm/9/5gK0MsqvvQKMlTrqv d2m7WsLF4rqSrWcN0yUFCdjdyxp23Yk3U7ZjlrxGX9xToseFUtGJHkyz4 nJlx01zabzmoTyhkhrXbsiBK1kIAgF2XfoKSz3nqvOlqgWLHy/IwaBNBg w==; X-CSE-ConnectionGUID: ql2ermTVQ02TEqGblJyIsw== X-CSE-MsgGUID: 3AK6TpaZS7OosF7QBcYgcw== X-IronPort-AV: E=McAfee;i="6700,10204,11194"; a="35778831" X-IronPort-AV: E=Sophos;i="6.10,228,1719903600"; d="scan'208";a="35778831" Received: from fmviesa006.fm.intel.com ([10.60.135.146]) by orvoesa103.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 14 Sep 2024 00:01:43 -0700 X-CSE-ConnectionGUID: o3vkwpLMQhakbp7oz1XUew== X-CSE-MsgGUID: zUxbij7pRuyW3yaID0oeYQ== X-ExtLoop1: 1 X-IronPort-AV: E=Sophos;i="6.10,228,1719903600"; d="scan'208";a="67950974" Received: from emr.sh.intel.com ([10.112.229.56]) by fmviesa006.fm.intel.com with ESMTP; 14 Sep 2024 00:01:40 -0700 From: Dapeng Mi To: Sean Christopherson , Paolo Bonzini Cc: kvm@vger.kernel.org, linux-kernel@vger.kernel.org, Jim Mattson , Mingwei Zhang , Xiong Zhang , Zhenyu Wang , Like Xu , Jinrong Liang , Yongwei Ma , Dapeng Mi , Dapeng Mi Subject: [kvm-unit-tests patch v6 12/18] x86: pmu: Enable and disable PMCs in loop() asm blob Date: Sat, 14 Sep 2024 10:17:22 +0000 Message-Id: <20240914101728.33148-13-dapeng1.mi@linux.intel.com> X-Mailer: git-send-email 2.40.1 In-Reply-To: <20240914101728.33148-1-dapeng1.mi@linux.intel.com> References: <20240914101728.33148-1-dapeng1.mi@linux.intel.com> Precedence: bulk X-Mailing-List: kvm@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 Currently, enabling the PMCs, executing loop() and disabling the PMCs are split across three separate functions, so other instructions can be executed between enabling the PMCs and running loop(), or between running loop() and disabling the PMCs. For example, if multiple counters are enabled in measure_many(), the instructions that enable the 2nd and subsequent counters are counted by the 1st counter. As a result, the current implementation can only verify counts against a rough range rather than a precise count, even for the instructions and branches events. Strictly speaking, such verification is almost meaningless: the test could still pass even if the KVM vPMU misbehaves and reports an incorrect instructions or branches count that happens to fall within the rough range. Thus, move the PMC enabling and disabling into the loop() asm blob so that only the loop asm instructions are counted; the instructions and branches events can then be verified against a precise count instead of a rough range.
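As a minimal sketch of the idea, separate from the actual diff below (the function name, the dec/jnz loop and the hard-coded IA32_PERF_GLOBAL_CTRL index are illustrative stand-ins; the real test keeps its ecx-based loop and uses pmu.msr_global_ctl so AMD PerfMonV2 is covered too): both wrmsr boundaries live inside one asm statement, so the compiler cannot schedule anything else into the counting window.

#include <stdint.h>

#define MSR_PERF_GLOBAL_CTRL	0x38f	/* Intel IA32_PERF_GLOBAL_CTRL (illustrative) */
#define ITERATIONS		1000000

/* buf must cover at least ITERATIONS * 64 bytes; runs at CPL0 only. */
static inline void counted_loop(uint64_t enable_mask, char *buf)
{
	uint32_t lo = (uint32_t)enable_mask, hi = (uint32_t)(enable_mask >> 32);
	unsigned long iters = ITERATIONS, scratch;

	asm volatile("wrmsr\n\t"			/* counters on: ECX = MSR, EDX:EAX = mask */
		     "1: mov (%[p]), %[tmp]\n\t"	/* measured body: touch one cache line */
		     "add $64, %[p]\n\t"
		     "dec %[n]\n\t"
		     "jnz 1b\n\t"
		     "xor %%eax, %%eax\n\t"		/* zero the enable mask ... */
		     "xor %%edx, %%edx\n\t"
		     "wrmsr\n\t"			/* ... and switch the counters off, still inside the same blob */
		     : [p] "+r"(buf), [n] "+r"(iters), [tmp] "=r"(scratch),
		       "+a"(lo), "+d"(hi)
		     : "c"(MSR_PERF_GLOBAL_CTRL)
		     : "cc", "memory");
}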
Signed-off-by: Dapeng Mi --- x86/pmu.c | 80 ++++++++++++++++++++++++++++++++++++++++++++----------- 1 file changed, 65 insertions(+), 15 deletions(-) diff --git a/x86/pmu.c b/x86/pmu.c index 91484c77..270f11b9 100644 --- a/x86/pmu.c +++ b/x86/pmu.c @@ -19,6 +19,15 @@ #define EXPECTED_INSTR 17 #define EXPECTED_BRNCH 5 +#define LOOP_ASM(_wrmsr) \ + _wrmsr "\n\t" \ + "mov %%ecx, %%edi; mov %%ebx, %%ecx;\n\t" \ + "1: mov (%1), %2; add $64, %1;\n\t" \ + "nop; nop; nop; nop; nop; nop; nop;\n\t" \ + "loop 1b;\n\t" \ + "mov %%edi, %%ecx; xor %%eax, %%eax; xor %%edx, %%edx;\n\t" \ + _wrmsr "\n\t" + typedef struct { uint32_t ctr; uint32_t idx; @@ -75,13 +84,43 @@ static struct pmu_event *gp_events; static unsigned int gp_events_size; static unsigned int fixed_counters_num; -static inline void loop(void) + +static inline void __loop(void) +{ + unsigned long tmp, tmp2, tmp3; + + asm volatile(LOOP_ASM("nop") + : "=c"(tmp), "=r"(tmp2), "=r"(tmp3) + : "0"(N), "1"(buf)); +} + +/* + * Enable and disable counters in a whole asm blob to ensure + * no other instructions are counted in the window between + * counters enabling and really LOOP_ASM code executing. + * Thus counters can verify instructions and branches events + * against precise counts instead of a rough valid count range. + */ +static inline void __precise_loop(u64 cntrs) { unsigned long tmp, tmp2, tmp3; + unsigned int global_ctl = pmu.msr_global_ctl; + u32 eax = cntrs & (BIT_ULL(32) - 1); + u32 edx = cntrs >> 32; - asm volatile("1: mov (%1), %2; add $64, %1; nop; nop; nop; nop; nop; nop; nop; loop 1b" - : "=c"(tmp), "=r"(tmp2), "=r"(tmp3): "0"(N), "1"(buf)); + asm volatile(LOOP_ASM("wrmsr") + : "=b"(tmp), "=r"(tmp2), "=r"(tmp3) + : "a"(eax), "d"(edx), "c"(global_ctl), + "0"(N), "1"(buf) + : "edi"); +} +static inline void loop(u64 cntrs) +{ + if (!this_cpu_has_perf_global_ctrl()) + __loop(); + else + __precise_loop(cntrs); } volatile uint64_t irq_received; @@ -181,18 +220,17 @@ static void __start_event(pmu_counter_t *evt, uint64_t count) ctrl = (ctrl & ~(0xf << shift)) | (usrospmi << shift); wrmsr(MSR_CORE_PERF_FIXED_CTR_CTRL, ctrl); } - global_enable(evt); apic_write(APIC_LVTPC, PMI_VECTOR); } static void start_event(pmu_counter_t *evt) { __start_event(evt, 0); + global_enable(evt); } -static void stop_event(pmu_counter_t *evt) +static void __stop_event(pmu_counter_t *evt) { - global_disable(evt); if (is_gp(evt)) { wrmsr(MSR_GP_EVENT_SELECTx(event_to_global_idx(evt)), evt->config & ~EVNTSEL_EN); @@ -204,14 +242,24 @@ static void stop_event(pmu_counter_t *evt) evt->count = rdmsr(evt->ctr); } +static void stop_event(pmu_counter_t *evt) +{ + global_disable(evt); + __stop_event(evt); +} + static noinline void measure_many(pmu_counter_t *evt, int count) { int i; + u64 cntrs = 0; + + for (i = 0; i < count; i++) { + __start_event(&evt[i], 0); + cntrs |= BIT_ULL(event_to_global_idx(&evt[i])); + } + loop(cntrs); for (i = 0; i < count; i++) - start_event(&evt[i]); - loop(); - for (i = 0; i < count; i++) - stop_event(&evt[i]); + __stop_event(&evt[i]); } static void measure_one(pmu_counter_t *evt) @@ -221,9 +269,11 @@ static void measure_one(pmu_counter_t *evt) static noinline void __measure(pmu_counter_t *evt, uint64_t count) { + u64 cntrs = BIT_ULL(event_to_global_idx(evt)); + __start_event(evt, count); - loop(); - stop_event(evt); + loop(cntrs); + __stop_event(evt); } static bool verify_event(uint64_t count, struct pmu_event *e) @@ -494,7 +544,7 @@ static void check_running_counter_wrmsr(void) report_prefix_push("running counter wrmsr"); 
start_event(&evt); - loop(); + __loop(); wrmsr(MSR_GP_COUNTERx(0), 0); stop_event(&evt); report(evt.count < gp_events[instruction_idx].min, "cntr"); @@ -511,7 +561,7 @@ static void check_running_counter_wrmsr(void) wrmsr(MSR_GP_COUNTERx(0), count); - loop(); + __loop(); stop_event(&evt); if (this_cpu_has_perf_global_status()) { @@ -652,7 +702,7 @@ static void warm_up(void) * the real verification. */ while (i--) - loop(); + loop(0); } static void check_counters(void) From patchwork Sat Sep 14 10:17:23 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: "Mi, Dapeng" X-Patchwork-Id: 13804292 Received: from mgamail.intel.com (mgamail.intel.com [198.175.65.11]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id A0CE21D049D; Sat, 14 Sep 2024 07:01:46 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=198.175.65.11 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1726297308; cv=none; b=gpNLKkVRlB92mO9FRx4lnI3CKnNjL8C0iraU4t+TJl8XmzbsKGPkFP9C+pPnf6g4UBElmxcHW5/kSvbiqe6yyJYm3f2emEZtY6Y+inB2Qj22HZNCgaTHt1j45iq5YBeaitAniyOnkhmn9e99ChlH4ws+iX/cmVbShRWdMZiCa14= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1726297308; c=relaxed/simple; bh=Ih04t7D/0iaLudr3E/C/IlUJ8oePrTKqbytNL2ob+RE=; h=From:To:Cc:Subject:Date:Message-Id:In-Reply-To:References: MIME-Version; b=OeSR94SyjS7jqh6tlzu7kBWR/BR3aBFubYOnCF+2RwJT8POJV3eimuFubvz//SswT3106vrotenJhc3/4rHY1in5AORyyS2whAAztxydn4mxSKBFxsnRj+Pxbd2wlaqwifJHKl8WekFEGOmifLBCQUxa/t4+QQZ5GmrChge7wz0= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=linux.intel.com; spf=none smtp.mailfrom=linux.intel.com; dkim=pass (2048-bit key) header.d=intel.com header.i=@intel.com header.b=HaxM0CtN; arc=none smtp.client-ip=198.175.65.11 Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=linux.intel.com Authentication-Results: smtp.subspace.kernel.org; spf=none smtp.mailfrom=linux.intel.com Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=intel.com header.i=@intel.com header.b="HaxM0CtN" DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1726297307; x=1757833307; h=from:to:cc:subject:date:message-id:in-reply-to: references:mime-version:content-transfer-encoding; bh=Ih04t7D/0iaLudr3E/C/IlUJ8oePrTKqbytNL2ob+RE=; b=HaxM0CtNveJljxjyw4+W+QF7fK+uFLPCYXiP8ut73hutqeuK7idtQKWs DVJ4DuZ9mWPDHgLajnx0gV1tCoXVJ/ZpPMu34Mut/3ui40JzVlWIkndee jBZKvEbErxQzyhGvIU2kKC9Ae+Mv9ArbIdlgkRkd+DdiBXlGSI4lTpyCy reF6f27BWR0IqokpKv88BctktvpKvFeDOdc8N6VTyFcu1gT39KWAVJ/SJ n6mx3fp+92dfAC02cYoiNnYKFxVOPe1WLOfWRThwdCBKa6SAoQ1PlNekx sKHJQ6AUTOrSGmzDI4M0iT2Ym+b0kpJ3X6gAOhHcPRzTj0oeS/IynPk17 g==; X-CSE-ConnectionGUID: wEFe2iz2SKe4mHDfdNT2CA== X-CSE-MsgGUID: tfKzfjC0R3aSx5kxW83TuQ== X-IronPort-AV: E=McAfee;i="6700,10204,11194"; a="35778844" X-IronPort-AV: E=Sophos;i="6.10,228,1719903600"; d="scan'208";a="35778844" Received: from fmviesa006.fm.intel.com ([10.60.135.146]) by orvoesa103.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 14 Sep 2024 00:01:46 -0700 X-CSE-ConnectionGUID: DqgFYJEGQj+a1W0wwMSUuQ== X-CSE-MsgGUID: XZVUL+iFStWvfLybBSp6LQ== X-ExtLoop1: 1 X-IronPort-AV: E=Sophos;i="6.10,228,1719903600"; d="scan'208";a="67950986" Received: from emr.sh.intel.com ([10.112.229.56]) by 
fmviesa006.fm.intel.com with ESMTP; 14 Sep 2024 00:01:43 -0700 From: Dapeng Mi To: Sean Christopherson , Paolo Bonzini Cc: kvm@vger.kernel.org, linux-kernel@vger.kernel.org, Jim Mattson , Mingwei Zhang , Xiong Zhang , Zhenyu Wang , Like Xu , Jinrong Liang , Yongwei Ma , Dapeng Mi , Dapeng Mi Subject: [kvm-unit-tests patch v6 13/18] x86: pmu: Improve instruction and branches events verification Date: Sat, 14 Sep 2024 10:17:23 +0000 Message-Id: <20240914101728.33148-14-dapeng1.mi@linux.intel.com> X-Mailer: git-send-email 2.40.1 In-Reply-To: <20240914101728.33148-1-dapeng1.mi@linux.intel.com> References: <20240914101728.33148-1-dapeng1.mi@linux.intel.com> Precedence: bulk X-Mailing-List: kvm@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 If HW supports GLOBAL_CTRL MSR, enabling and disabling PMCs are moved in __precise_count_loop(). Thus, instructions and branches events can be verified against a precise count instead of a rough range. BTW, some intermittent failures on AMD processors using PerfMonV2 is seen due to variance in counts. This probably has to do with the way instructions leading to a VM-Entry or VM-Exit are accounted when counting retired instructions and branches. https://lore.kernel.org/all/6d512a14-ace1-41a3-801e-0beb41425734@amd.com/ So only enable this precise check for Intel processors. Signed-off-by: Dapeng Mi --- x86/pmu.c | 37 +++++++++++++++++++++++++++++++++++++ 1 file changed, 37 insertions(+) diff --git a/x86/pmu.c b/x86/pmu.c index 270f11b9..13c7c45d 100644 --- a/x86/pmu.c +++ b/x86/pmu.c @@ -19,6 +19,11 @@ #define EXPECTED_INSTR 17 #define EXPECTED_BRNCH 5 + +/* Enable GLOBAL_CTRL + disable GLOBAL_CTRL instructions */ +#define EXTRA_INSTRNS (3 + 3) +#define LOOP_INSTRNS (N * 10 + EXTRA_INSTRNS) +#define LOOP_BRANCHES (N) #define LOOP_ASM(_wrmsr) \ _wrmsr "\n\t" \ "mov %%ecx, %%edi; mov %%ebx, %%ecx;\n\t" \ @@ -123,6 +128,30 @@ static inline void loop(u64 cntrs) __precise_loop(cntrs); } +static void adjust_events_range(struct pmu_event *gp_events, + int instruction_idx, int branch_idx) +{ + /* + * If HW supports GLOBAL_CTRL MSR, enabling and disabling PMCs are + * moved in __precise_loop(). Thus, instructions and branches events + * can be verified against a precise count instead of a rough range. + * + * We see some intermittent failures on AMD processors using PerfMonV2 + * due to variance in counts. This probably has to do with the way + * instructions leading to a VM-Entry or VM-Exit are accounted when + * counting retired instructions and branches. Thus only enable the + * precise validation for Intel processors. 
+ */ + if (pmu.is_intel && this_cpu_has_perf_global_ctrl()) { + /* instructions event */ + gp_events[instruction_idx].min = LOOP_INSTRNS; + gp_events[instruction_idx].max = LOOP_INSTRNS; + /* branches event */ + gp_events[branch_idx].min = LOOP_BRANCHES; + gp_events[branch_idx].max = LOOP_BRANCHES; + } +} + volatile uint64_t irq_received; static void cnt_overflow(isr_regs_t *regs) @@ -832,6 +861,9 @@ static void check_invalid_rdpmc_gp(void) int main(int ac, char **av) { + int instruction_idx; + int branch_idx; + setup_vm(); handle_irq(PMI_VECTOR, cnt_overflow); buf = malloc(N*64); @@ -845,13 +877,18 @@ int main(int ac, char **av) } gp_events = (struct pmu_event *)intel_gp_events; gp_events_size = sizeof(intel_gp_events)/sizeof(intel_gp_events[0]); + instruction_idx = INTEL_INSTRUCTIONS_IDX; + branch_idx = INTEL_BRANCHES_IDX; report_prefix_push("Intel"); set_ref_cycle_expectations(); } else { gp_events_size = sizeof(amd_gp_events)/sizeof(amd_gp_events[0]); gp_events = (struct pmu_event *)amd_gp_events; + instruction_idx = AMD_INSTRUCTIONS_IDX; + branch_idx = AMD_BRANCHES_IDX; report_prefix_push("AMD"); } + adjust_events_range(gp_events, instruction_idx, branch_idx); printf("PMU version: %d\n", pmu.version); printf("GP counters: %d\n", pmu.nr_gp_counters); From patchwork Sat Sep 14 10:17:24 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: "Mi, Dapeng" X-Patchwork-Id: 13804293 Received: from mgamail.intel.com (mgamail.intel.com [198.175.65.11]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id C70411D094B; Sat, 14 Sep 2024 07:01:49 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=198.175.65.11 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1726297311; cv=none; b=NZ6bau/dZZEGhpjQAL7mRQtpYYk/VJuSxKUAp6CVHW6axiGvwwMfVHKNUm2uAqKIh8NSJq1Qf3uGHLlt6vqGR2MXFULO+wriEOBSpYy7n61WzqxFXt6wfssFcBmZJBGd1kNjPF+TErcimhjLnqzzCgo3wZeSlOOli8wNM0PHvA8= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1726297311; c=relaxed/simple; bh=bTi4JEytAfv9XQr8gZo88mP9Wtsmpx188j38MU3wAF8=; h=From:To:Cc:Subject:Date:Message-Id:In-Reply-To:References: MIME-Version; b=UVCYoW71kivBKkXmSS/vEpiYXSoKPVfs1oxfKXhimqmOZztj48C/rYslbKwaxIsCt6pdfDtSDmvyvAP52z6C0K78PcgdmP+1KN6kmkEybYnZqVwoFu0eQrCuVtkfNMZjUSl8C6ZncyYg7H95VLFwiu0k31KBqrRlCq8160QIJGw= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=linux.intel.com; spf=none smtp.mailfrom=linux.intel.com; dkim=pass (2048-bit key) header.d=intel.com header.i=@intel.com header.b=TXqMVrPH; arc=none smtp.client-ip=198.175.65.11 Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=linux.intel.com Authentication-Results: smtp.subspace.kernel.org; spf=none smtp.mailfrom=linux.intel.com Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=intel.com header.i=@intel.com header.b="TXqMVrPH" DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1726297310; x=1757833310; h=from:to:cc:subject:date:message-id:in-reply-to: references:mime-version:content-transfer-encoding; bh=bTi4JEytAfv9XQr8gZo88mP9Wtsmpx188j38MU3wAF8=; b=TXqMVrPHOonJMrqC4sq/GntlI4N6yv+zVz+woRutbic9dUDNESTkM6bH KMscDBWBIiJcSwzqd3mo7W92bgUPQPdT1qYwVO7ifXRJ/+LjbpbbeQ6pV 
d+/mKf/rEiES20hjCUzSeVIwzDnnXPSZX0ETPW2P4AxhHXoN+kGmY1t1s dlErG7X1je/8t9ZBswjKZSkLw/Bh8fd2BMm4Nd+KYZRPU5+a5Tzr05KJU 4XBEZoiL4ABcCJ43vgLjU1n+1yliuV7m9SeXQJ9DUSScL4qpg3SnHeHDs LamLOsi1NprCnTG6Wh39WA1qUnZ20JxiM10xZcj/Ej6cqelYRlNoOiZi6 A==; X-CSE-ConnectionGUID: nAvVpfcTQe+nvmlONoiCPA== X-CSE-MsgGUID: rRnZi0wrSfyfuECxklDgig== X-IronPort-AV: E=McAfee;i="6700,10204,11194"; a="35778864" X-IronPort-AV: E=Sophos;i="6.10,228,1719903600"; d="scan'208";a="35778864" Received: from fmviesa006.fm.intel.com ([10.60.135.146]) by orvoesa103.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 14 Sep 2024 00:01:50 -0700 X-CSE-ConnectionGUID: ASuQKp2xTxSTyR9A5j7IRw== X-CSE-MsgGUID: QHlGzxcRSwuZueFyHGuN/A== X-ExtLoop1: 1 X-IronPort-AV: E=Sophos;i="6.10,228,1719903600"; d="scan'208";a="67951004" Received: from emr.sh.intel.com ([10.112.229.56]) by fmviesa006.fm.intel.com with ESMTP; 14 Sep 2024 00:01:46 -0700 From: Dapeng Mi To: Sean Christopherson , Paolo Bonzini Cc: kvm@vger.kernel.org, linux-kernel@vger.kernel.org, Jim Mattson , Mingwei Zhang , Xiong Zhang , Zhenyu Wang , Like Xu , Jinrong Liang , Yongwei Ma , Dapeng Mi , Dapeng Mi Subject: [kvm-unit-tests patch v6 14/18] x86: pmu: Improve LLC misses event verification Date: Sat, 14 Sep 2024 10:17:24 +0000 Message-Id: <20240914101728.33148-15-dapeng1.mi@linux.intel.com> X-Mailer: git-send-email 2.40.1 In-Reply-To: <20240914101728.33148-1-dapeng1.mi@linux.intel.com> References: <20240914101728.33148-1-dapeng1.mi@linux.intel.com> Precedence: bulk X-Mailing-List: kvm@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 When running the pmu test on SPR, the following failure is sometimes reported: "1 <= 0 <= 1000000 FAIL: Intel: llc misses-4". Currently, whether an LLC miss occurs depends purely on probability. It is possible that no LLC miss happens during the whole loop(), especially as processors ship with larger and larger caches, which is exactly what we observed on SPR. Thus, add a clflush instruction into the loop() asm blob to ensure that at least one LLC miss is triggered.
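As a minimal sketch of the mechanism, separate from the actual change below (which folds the flush into LOOP_ASM and falls back to a nop when CLFLUSH is not available; the function name here is illustrative): evicting one line of the measured buffer right before the read loop guarantees at least one LLC miss, so a zero count really does indicate a broken event.

static inline void force_one_llc_miss(const void *line)
{
	asm volatile("clflush (%0)\n\t"	/* evict this line from the whole cache hierarchy */
		     "mfence\n\t"	/* ensure the flush completes before the measured loads */
		     :
		     : "r"(line)
		     : "memory");
}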
Suggested-by: Jim Mattson Signed-off-by: Dapeng Mi --- x86/pmu.c | 39 ++++++++++++++++++++++++++------------- 1 file changed, 26 insertions(+), 13 deletions(-) diff --git a/x86/pmu.c b/x86/pmu.c index 13c7c45d..c9160423 100644 --- a/x86/pmu.c +++ b/x86/pmu.c @@ -20,19 +20,30 @@ #define EXPECTED_BRNCH 5 -/* Enable GLOBAL_CTRL + disable GLOBAL_CTRL instructions */ -#define EXTRA_INSTRNS (3 + 3) +/* Enable GLOBAL_CTRL + disable GLOBAL_CTRL + clflush/mfence instructions */ +#define EXTRA_INSTRNS (3 + 3 + 2) #define LOOP_INSTRNS (N * 10 + EXTRA_INSTRNS) #define LOOP_BRANCHES (N) -#define LOOP_ASM(_wrmsr) \ +#define LOOP_ASM(_wrmsr, _clflush) \ _wrmsr "\n\t" \ "mov %%ecx, %%edi; mov %%ebx, %%ecx;\n\t" \ + _clflush "\n\t" \ + "mfence;\n\t" \ "1: mov (%1), %2; add $64, %1;\n\t" \ "nop; nop; nop; nop; nop; nop; nop;\n\t" \ "loop 1b;\n\t" \ "mov %%edi, %%ecx; xor %%eax, %%eax; xor %%edx, %%edx;\n\t" \ _wrmsr "\n\t" +#define _loop_asm(_wrmsr, _clflush) \ +do { \ + asm volatile(LOOP_ASM(_wrmsr, _clflush) \ + : "=b"(tmp), "=r"(tmp2), "=r"(tmp3) \ + : "a"(eax), "d"(edx), "c"(global_ctl), \ + "0"(N), "1"(buf) \ + : "edi"); \ +} while (0) + typedef struct { uint32_t ctr; uint32_t idx; @@ -89,14 +100,17 @@ static struct pmu_event *gp_events; static unsigned int gp_events_size; static unsigned int fixed_counters_num; - static inline void __loop(void) { unsigned long tmp, tmp2, tmp3; + u32 global_ctl = 0; + u32 eax = 0; + u32 edx = 0; - asm volatile(LOOP_ASM("nop") - : "=c"(tmp), "=r"(tmp2), "=r"(tmp3) - : "0"(N), "1"(buf)); + if (this_cpu_has(X86_FEATURE_CLFLUSH)) + _loop_asm("nop", "clflush (%1)"); + else + _loop_asm("nop", "nop"); } /* @@ -109,15 +123,14 @@ static inline void __loop(void) static inline void __precise_loop(u64 cntrs) { unsigned long tmp, tmp2, tmp3; - unsigned int global_ctl = pmu.msr_global_ctl; + u32 global_ctl = pmu.msr_global_ctl; u32 eax = cntrs & (BIT_ULL(32) - 1); u32 edx = cntrs >> 32; - asm volatile(LOOP_ASM("wrmsr") - : "=b"(tmp), "=r"(tmp2), "=r"(tmp3) - : "a"(eax), "d"(edx), "c"(global_ctl), - "0"(N), "1"(buf) - : "edi"); + if (this_cpu_has(X86_FEATURE_CLFLUSH)) + _loop_asm("wrmsr", "clflush (%1)"); + else + _loop_asm("wrmsr", "nop"); } static inline void loop(u64 cntrs) From patchwork Sat Sep 14 10:17:25 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: "Mi, Dapeng" X-Patchwork-Id: 13804294 Received: from mgamail.intel.com (mgamail.intel.com [198.175.65.11]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id DBB751D0972; Sat, 14 Sep 2024 07:01:52 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=198.175.65.11 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1726297314; cv=none; b=aQ14rG5DQMcGxE1a2urmP3oBd8ICK44IxrRdmkoRLaG64XE6omNMTK7hoQ6INJVBZWI+r0uqMZaQnV5v006FRIQZlDlR+u4WNY2Gdc5K3n3wCvSaOCvLaM96WdJaLAm0CEfERQE1BMFoQIJZUoKMTAv7wr8kvhrBnPokaJRb8d4= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1726297314; c=relaxed/simple; bh=6rkU41q/MyT0y9yMobn5IXzrkBZx/aO+BTHMgn2NnYU=; h=From:To:Cc:Subject:Date:Message-Id:In-Reply-To:References: MIME-Version; b=iBzOuuDbdsfMejesGCAR4PsUA0NsZ2r0dUvqikhYwGHKj1JolCX1jiCxLO7JxLURWCkv0DN2kwu1R9G7jcKAkFfyrZNajPU0zVN+j3PgX+dR+sMnVi6LzUNeFRS4KnDcWBmYzCOKQSWyCIKJdHcZFDMHeLVlczDNtpUr7yDbyDk= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) 
header.from=linux.intel.com; spf=none smtp.mailfrom=linux.intel.com; dkim=pass (2048-bit key) header.d=intel.com header.i=@intel.com header.b=bkMNjsMW; arc=none smtp.client-ip=198.175.65.11 Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=linux.intel.com Authentication-Results: smtp.subspace.kernel.org; spf=none smtp.mailfrom=linux.intel.com Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=intel.com header.i=@intel.com header.b="bkMNjsMW" DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1726297313; x=1757833313; h=from:to:cc:subject:date:message-id:in-reply-to: references:mime-version:content-transfer-encoding; bh=6rkU41q/MyT0y9yMobn5IXzrkBZx/aO+BTHMgn2NnYU=; b=bkMNjsMWGm0u1WAHuu8NiTjdvJIgq3hpInfxhHYc57aQIsga2yFs8O+S c15mwOMpBMlMpWu0EOqJ/jE3rrPjNpWZJTtPwB5waWjb3tH2on1mERECi HxBQu35UVxdkFNfc/ENzISxph3uh0MH9AbX6Egr1i4oZqa/BFZNA710mx u8KUzFTOqvkFc/UJ5Z4KUV4iksl/XUINWuXbpre3l1uEGNRJtUdDqFDVt Cmn3n3tRBmHfeyYPkV63A/vTLeTODjRu/ni2DE/ewD4+c1MbJAPvdaF8Z HOp553xNlY9VCbnn3g67VIVJeKduw/oEzi1iHNYw1USwBiKBB2vF6guag A==; X-CSE-ConnectionGUID: MY+juHGRT3ik3e12cp0Rew== X-CSE-MsgGUID: Ed8uzmJeSy6BAFMO665Ncg== X-IronPort-AV: E=McAfee;i="6700,10204,11194"; a="35778874" X-IronPort-AV: E=Sophos;i="6.10,228,1719903600"; d="scan'208";a="35778874" Received: from fmviesa006.fm.intel.com ([10.60.135.146]) by orvoesa103.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 14 Sep 2024 00:01:53 -0700 X-CSE-ConnectionGUID: RJiyu6iFRfOHfoErH0Qf7Q== X-CSE-MsgGUID: qDd4tpmRTemmeyynpoIaaw== X-ExtLoop1: 1 X-IronPort-AV: E=Sophos;i="6.10,228,1719903600"; d="scan'208";a="67951004" Received: from emr.sh.intel.com ([10.112.229.56]) by fmviesa006.fm.intel.com with ESMTP; 14 Sep 2024 00:01:50 -0700 From: Dapeng Mi To: Sean Christopherson , Paolo Bonzini Cc: kvm@vger.kernel.org, linux-kernel@vger.kernel.org, Jim Mattson , Mingwei Zhang , Xiong Zhang , Zhenyu Wang , Like Xu , Jinrong Liang , Yongwei Ma , Dapeng Mi , Dapeng Mi Subject: [kvm-unit-tests patch v6 15/18] x86: pmu: Adjust lower boundary of llc-misses event to 0 for legacy CPUs Date: Sat, 14 Sep 2024 10:17:25 +0000 Message-Id: <20240914101728.33148-16-dapeng1.mi@linux.intel.com> X-Mailer: git-send-email 2.40.1 In-Reply-To: <20240914101728.33148-1-dapeng1.mi@linux.intel.com> References: <20240914101728.33148-1-dapeng1.mi@linux.intel.com> Precedence: bulk X-Mailing-List: kvm@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 For these legacy Intel CPUs without clflush/clflushopt support, there is on way to force to trigger a LLC miss and the measured llc misses is possible to be 0. Thus adjust the lower boundary of llc-misses event to 0 to avoid possible false positive. Signed-off-by: Dapeng Mi --- x86/pmu.c | 10 ++++++++++ 1 file changed, 10 insertions(+) diff --git a/x86/pmu.c b/x86/pmu.c index c9160423..47b6305d 100644 --- a/x86/pmu.c +++ b/x86/pmu.c @@ -82,6 +82,7 @@ struct pmu_event { enum { INTEL_INSTRUCTIONS_IDX = 1, INTEL_REF_CYCLES_IDX = 2, + INTEL_LLC_MISSES_IDX = 4, INTEL_BRANCHES_IDX = 5, }; @@ -892,6 +893,15 @@ int main(int ac, char **av) gp_events_size = sizeof(intel_gp_events)/sizeof(intel_gp_events[0]); instruction_idx = INTEL_INSTRUCTIONS_IDX; branch_idx = INTEL_BRANCHES_IDX; + + /* + * For legacy Intel CPUS without clflush/clflushopt support, + * there is no way to force to trigger a LLC miss, thus set + * the minimum value to 0 to avoid false positives. 
+ */ + if (!this_cpu_has(X86_FEATURE_CLFLUSH)) + gp_events[INTEL_LLC_MISSES_IDX].min = 0; + report_prefix_push("Intel"); set_ref_cycle_expectations(); } else { From patchwork Sat Sep 14 10:17:26 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: "Mi, Dapeng" X-Patchwork-Id: 13804295 Received: from mgamail.intel.com (mgamail.intel.com [198.175.65.11]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 137BE1CEAA1; Sat, 14 Sep 2024 07:01:56 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=198.175.65.11 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1726297317; cv=none; b=PvI7imCrOZ9eZCEXAElgtfy4llTW1WbZK2pnchOPYYlLZI5nr1OxJW31zrwCQBTfCvMqMHLUlijelvAbvOVT4nnY7ZVXVRTjey5q+thmiXYLq9jFb6EG47PRuVCJxJokp6670UUMxpgojZ4rqVap65nDJ3YidXZ981f6mohnDPM= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1726297317; c=relaxed/simple; bh=Rz3BSwdtyqkIaqhFWc6IZEQz107z+w5qDNmQ8HUMorU=; h=From:To:Cc:Subject:Date:Message-Id:In-Reply-To:References: MIME-Version; b=sY52GCPxqXhlbuAaNiAqIG3UfzmolgDJcn5kZtim6aqsHUBEszwbnukMQgMBUUezUofgW1cgBQJ8qr7bdSV4Ou+0oY/iyRVL6dbnH3y2EhQx4hwKn4nquM45TV4xnWsfeGVvM+fI3YXL8iElt+FA/CULVp66EQzyzeCnIBYJZYc= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=linux.intel.com; spf=none smtp.mailfrom=linux.intel.com; dkim=pass (2048-bit key) header.d=intel.com header.i=@intel.com header.b=W2E6UzDW; arc=none smtp.client-ip=198.175.65.11 Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=linux.intel.com Authentication-Results: smtp.subspace.kernel.org; spf=none smtp.mailfrom=linux.intel.com Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=intel.com header.i=@intel.com header.b="W2E6UzDW" DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1726297316; x=1757833316; h=from:to:cc:subject:date:message-id:in-reply-to: references:mime-version:content-transfer-encoding; bh=Rz3BSwdtyqkIaqhFWc6IZEQz107z+w5qDNmQ8HUMorU=; b=W2E6UzDW/rtfvaiTxKpGvdupl3uji9rMwTmJ5SQYgalCrkHKWjqvumJ0 11IDsRqjrM4c3VZAOigOARzXt0KH17HOH8GfdVTwt5wweHOUylDzp9vZP WLqER8q846gJ2Zt6GbIjMnliyIEPTxlIOuqpccR4cIG5Ep9v1FaabGmoQ yfQ++1Gfbp3+zwnQ7Y2AAk1TSUP/3YXxrLeySKkE5+nZMHoP3oxlaSNNL y6ya/PeR8S4ixy63FEy0IwHXFFJvVejFb/C/hNSIzLHC8YqXt/wmUK1hz V1U9DnyswmHzKvH3fR9apMSvl/3ZBDqF9fu9x5hPABTO7dO3Q8BXlDQsf A==; X-CSE-ConnectionGUID: EHiE1RfhTrCFP355GR2Bhg== X-CSE-MsgGUID: TNTzKo5VTU2/tFintrL6mw== X-IronPort-AV: E=McAfee;i="6700,10204,11194"; a="35778882" X-IronPort-AV: E=Sophos;i="6.10,228,1719903600"; d="scan'208";a="35778882" Received: from fmviesa006.fm.intel.com ([10.60.135.146]) by orvoesa103.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 14 Sep 2024 00:01:56 -0700 X-CSE-ConnectionGUID: 3urytuaeQcK2OFXa6aiANg== X-CSE-MsgGUID: zwMVccmSSvGIBi+4NcqANQ== X-ExtLoop1: 1 X-IronPort-AV: E=Sophos;i="6.10,228,1719903600"; d="scan'208";a="67951007" Received: from emr.sh.intel.com ([10.112.229.56]) by fmviesa006.fm.intel.com with ESMTP; 14 Sep 2024 00:01:53 -0700 From: Dapeng Mi To: Sean Christopherson , Paolo Bonzini Cc: kvm@vger.kernel.org, linux-kernel@vger.kernel.org, Jim Mattson , Mingwei Zhang , Xiong Zhang , Zhenyu Wang , Like Xu , Jinrong Liang , Yongwei Ma , Dapeng Mi , Dapeng Mi Subject: 
[kvm-unit-tests patch v6 16/18] x86: pmu: Add IBPB indirect jump asm blob Date: Sat, 14 Sep 2024 10:17:26 +0000 Message-Id: <20240914101728.33148-17-dapeng1.mi@linux.intel.com> X-Mailer: git-send-email 2.40.1 In-Reply-To: <20240914101728.33148-1-dapeng1.mi@linux.intel.com> References: <20240914101728.33148-1-dapeng1.mi@linux.intel.com> Precedence: bulk X-Mailing-List: kvm@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 Currently the lower boundary of branch misses event is set to 0. Strictly speaking 0 shouldn't be a valid count since it can't tell us if branch misses event counter works correctly or even disabled. Whereas it's also possible and reasonable that branch misses event count is 0 especailly for such simple loop() program with advanced branch predictor. To eliminate such ambiguity and make branch misses event verification more acccurately, an extra IBPB indirect jump asm blob is appended and IBPB command is leveraged to clear the branch target buffer and force to cause a branch miss for the indirect jump. Suggested-by: Jim Mattson Signed-off-by: Dapeng Mi --- x86/pmu.c | 71 +++++++++++++++++++++++++++++++++++++++++++------------ 1 file changed, 56 insertions(+), 15 deletions(-) diff --git a/x86/pmu.c b/x86/pmu.c index 47b6305d..279d418d 100644 --- a/x86/pmu.c +++ b/x86/pmu.c @@ -19,25 +19,52 @@ #define EXPECTED_INSTR 17 #define EXPECTED_BRNCH 5 - -/* Enable GLOBAL_CTRL + disable GLOBAL_CTRL + clflush/mfence instructions */ -#define EXTRA_INSTRNS (3 + 3 + 2) +#define IBPB_JMP_INSTRNS 9 +#define IBPB_JMP_BRANCHES 2 + +#if defined(__i386__) || defined(_M_IX86) /* i386 */ +#define IBPB_JMP_ASM(_wrmsr) \ + "mov $1, %%eax; xor %%edx, %%edx;\n\t" \ + "mov $73, %%ecx;\n\t" \ + _wrmsr "\n\t" \ + "call 1f\n\t" \ + "1: pop %%eax\n\t" \ + "add $(2f-1b), %%eax\n\t" \ + "jmp *%%eax;\n\t" \ + "nop;\n\t" \ + "2: nop;\n\t" +#else /* x86_64 */ +#define IBPB_JMP_ASM(_wrmsr) \ + "mov $1, %%eax; xor %%edx, %%edx;\n\t" \ + "mov $73, %%ecx;\n\t" \ + _wrmsr "\n\t" \ + "call 1f\n\t" \ + "1: pop %%rax\n\t" \ + "add $(2f-1b), %%rax\n\t" \ + "jmp *%%rax;\n\t" \ + "nop;\n\t" \ + "2: nop;\n\t" +#endif + +/* GLOBAL_CTRL enable + disable + clflush/mfence + IBPB_JMP */ +#define EXTRA_INSTRNS (3 + 3 + 2 + IBPB_JMP_INSTRNS) #define LOOP_INSTRNS (N * 10 + EXTRA_INSTRNS) -#define LOOP_BRANCHES (N) -#define LOOP_ASM(_wrmsr, _clflush) \ - _wrmsr "\n\t" \ +#define LOOP_BRANCHES (N + IBPB_JMP_BRANCHES) +#define LOOP_ASM(_wrmsr1, _clflush, _wrmsr2) \ + _wrmsr1 "\n\t" \ "mov %%ecx, %%edi; mov %%ebx, %%ecx;\n\t" \ _clflush "\n\t" \ "mfence;\n\t" \ "1: mov (%1), %2; add $64, %1;\n\t" \ "nop; nop; nop; nop; nop; nop; nop;\n\t" \ "loop 1b;\n\t" \ + IBPB_JMP_ASM(_wrmsr2) \ "mov %%edi, %%ecx; xor %%eax, %%eax; xor %%edx, %%edx;\n\t" \ - _wrmsr "\n\t" + _wrmsr1 "\n\t" -#define _loop_asm(_wrmsr, _clflush) \ +#define _loop_asm(_wrmsr1, _clflush, _wrmsr2) \ do { \ - asm volatile(LOOP_ASM(_wrmsr, _clflush) \ + asm volatile(LOOP_ASM(_wrmsr1, _clflush, _wrmsr2) \ : "=b"(tmp), "=r"(tmp2), "=r"(tmp3) \ : "a"(eax), "d"(edx), "c"(global_ctl), \ "0"(N), "1"(buf) \ @@ -101,6 +128,12 @@ static struct pmu_event *gp_events; static unsigned int gp_events_size; static unsigned int fixed_counters_num; +static int has_ibpb(void) +{ + return this_cpu_has(X86_FEATURE_SPEC_CTRL) || + this_cpu_has(X86_FEATURE_AMD_IBPB); +} + static inline void __loop(void) { unsigned long tmp, tmp2, tmp3; @@ -108,10 +141,14 @@ static inline void __loop(void) u32 eax = 0; u32 edx = 0; - if (this_cpu_has(X86_FEATURE_CLFLUSH)) - 
_loop_asm("nop", "clflush (%1)"); + if (this_cpu_has(X86_FEATURE_CLFLUSH) && has_ibpb()) + _loop_asm("nop", "clflush (%1)", "wrmsr"); + else if (this_cpu_has(X86_FEATURE_CLFLUSH)) + _loop_asm("nop", "clflush (%1)", "nop"); + else if (has_ibpb()) + _loop_asm("nop", "nop", "wrmsr"); else - _loop_asm("nop", "nop"); + _loop_asm("nop", "nop", "nop"); } /* @@ -128,10 +165,14 @@ static inline void __precise_loop(u64 cntrs) u32 eax = cntrs & (BIT_ULL(32) - 1); u32 edx = cntrs >> 32; - if (this_cpu_has(X86_FEATURE_CLFLUSH)) - _loop_asm("wrmsr", "clflush (%1)"); + if (this_cpu_has(X86_FEATURE_CLFLUSH) && has_ibpb()) + _loop_asm("wrmsr", "clflush (%1)", "wrmsr"); + else if (this_cpu_has(X86_FEATURE_CLFLUSH)) + _loop_asm("wrmsr", "clflush (%1)", "nop"); + else if (has_ibpb()) + _loop_asm("wrmsr", "nop", "wrmsr"); else - _loop_asm("wrmsr", "nop"); + _loop_asm("wrmsr", "nop", "nop"); } static inline void loop(u64 cntrs) From patchwork Sat Sep 14 10:17:27 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: "Mi, Dapeng" X-Patchwork-Id: 13804296 Received: from mgamail.intel.com (mgamail.intel.com [198.175.65.11]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 345301D12EF; Sat, 14 Sep 2024 07:01:59 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=198.175.65.11 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1726297320; cv=none; b=e/rC7ebpFWH8JK8cSeRskeMfoL+rKcdTgTJozS3LNwe9SXlj8Pehjwu7sUm7oTUeVDES7HVhArphl/6sVQqEyIUPbQWozwykHXybuoyl5dMHe/akheYSPoXnUxqXvVs6XbloVFbfS78DCQukcbioSnWAgynNjIChii+c9tWS2gM= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1726297320; c=relaxed/simple; bh=RPa+ulwIzhMZ5m/sqo9YLmXKAMy+I/zbCrWW8L0dxrQ=; h=From:To:Cc:Subject:Date:Message-Id:In-Reply-To:References: MIME-Version; b=jkY21ReU8bX/nmx4gUIiX9D3O/HQl/Eu0Gcuic+3++E3KCC69P59RyaszRjGD+E/DBqNHVxU8b/Cb0nGHTiO/KrDb2hAEjDkl6SHj+axyZg9wcbMcS5/+OqDIkJXesX3K9aEX+MpbY3eaZFrlWvBqYclk9zePNNNzcWpabt9DsQ= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=linux.intel.com; spf=none smtp.mailfrom=linux.intel.com; dkim=pass (2048-bit key) header.d=intel.com header.i=@intel.com header.b=mE05eoHG; arc=none smtp.client-ip=198.175.65.11 Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=linux.intel.com Authentication-Results: smtp.subspace.kernel.org; spf=none smtp.mailfrom=linux.intel.com Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=intel.com header.i=@intel.com header.b="mE05eoHG" DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1726297319; x=1757833319; h=from:to:cc:subject:date:message-id:in-reply-to: references:mime-version:content-transfer-encoding; bh=RPa+ulwIzhMZ5m/sqo9YLmXKAMy+I/zbCrWW8L0dxrQ=; b=mE05eoHGS0WGqZHUb21T1Sycd/92C1XKUSEa8C/bAqcemTJdiDrzxi3w ke9b7w3YAA78rvfpudTUvot4pLE9hwOQLMkr91sOmCjgk6hRZ0FMeFK1I nUJTd2hdiraDiX2YzuzHlDZ7njZ6/h2ac/7t7gXMHYcN6tq6Cqh+k2U0Y Y9aU9rzB+eNJATRqHKNQpJ3whBbdMeNApikzs6aup2r6Qgyy97d5zVEiQ kEX5ucBfFrWuK/VIsnb/GSSqLK1djVTEzyImnRYMNWWfeYUqErTcxgxar nufj0vZB0d4RlKS6o+FnvioDGng0gpqGu5BVh633VQhsjxRda3l1BS1SE w==; X-CSE-ConnectionGUID: zlmavc13Q/6/MKTaELOtfQ== X-CSE-MsgGUID: +6ioKSAIRseWeS7jSHOxaQ== X-IronPort-AV: E=McAfee;i="6700,10204,11194"; a="35778890" 
X-IronPort-AV: E=Sophos;i="6.10,228,1719903600"; d="scan'208";a="35778890" Received: from fmviesa006.fm.intel.com ([10.60.135.146]) by orvoesa103.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 14 Sep 2024 00:01:59 -0700 X-CSE-ConnectionGUID: xX9dpccoQfuljbLCf59VfA== X-CSE-MsgGUID: EvYNJMgrQauRR3P1fSDofw== X-ExtLoop1: 1 X-IronPort-AV: E=Sophos;i="6.10,228,1719903600"; d="scan'208";a="67951014" Received: from emr.sh.intel.com ([10.112.229.56]) by fmviesa006.fm.intel.com with ESMTP; 14 Sep 2024 00:01:56 -0700 From: Dapeng Mi To: Sean Christopherson , Paolo Bonzini Cc: kvm@vger.kernel.org, linux-kernel@vger.kernel.org, Jim Mattson , Mingwei Zhang , Xiong Zhang , Zhenyu Wang , Like Xu , Jinrong Liang , Yongwei Ma , Dapeng Mi , Dapeng Mi Subject: [kvm-unit-tests patch v6 17/18] x86: pmu: Adjust lower boundary of branch-misses event Date: Sat, 14 Sep 2024 10:17:27 +0000 Message-Id: <20240914101728.33148-18-dapeng1.mi@linux.intel.com> X-Mailer: git-send-email 2.40.1 In-Reply-To: <20240914101728.33148-1-dapeng1.mi@linux.intel.com> References: <20240914101728.33148-1-dapeng1.mi@linux.intel.com> Precedence: bulk X-Mailing-List: kvm@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 Since the IBPB command is added to force to trigger a branch miss at least, the lower boundary of branch misses event is increased to 1 by default. For these CPUs without IBPB support, adjust dynamically the lower boundary to 0 to avoid false positive. Signed-off-by: Dapeng Mi --- x86/pmu.c | 25 +++++++++++++++++++++---- 1 file changed, 21 insertions(+), 4 deletions(-) diff --git a/x86/pmu.c b/x86/pmu.c index 279d418d..c7848fd1 100644 --- a/x86/pmu.c +++ b/x86/pmu.c @@ -90,12 +90,12 @@ struct pmu_event { {"llc references", 0x4f2e, 1, 2*N}, {"llc misses", 0x412e, 1, 1*N}, {"branches", 0x00c4, 1*N, 1.1*N}, - {"branch misses", 0x00c5, 0, 0.1*N}, + {"branch misses", 0x00c5, 1, 0.1*N}, }, amd_gp_events[] = { {"core cycles", 0x0076, 1*N, 50*N}, {"instructions", 0x00c0, 10*N, 10.2*N}, {"branches", 0x00c2, 1*N, 1.1*N}, - {"branch misses", 0x00c3, 0, 0.1*N}, + {"branch misses", 0x00c3, 1, 0.1*N}, }, fixed_events[] = { {"fixed 0", MSR_CORE_PERF_FIXED_CTR0, 10*N, 10.2*N}, {"fixed 1", MSR_CORE_PERF_FIXED_CTR0 + 1, 1*N, 30*N}, @@ -111,6 +111,7 @@ enum { INTEL_REF_CYCLES_IDX = 2, INTEL_LLC_MISSES_IDX = 4, INTEL_BRANCHES_IDX = 5, + INTEL_BRANCH_MISS_IDX = 6, }; /* @@ -120,6 +121,7 @@ enum { enum { AMD_INSTRUCTIONS_IDX = 1, AMD_BRANCHES_IDX = 2, + AMD_BRANCH_MISS_IDX = 3, }; char *buf; @@ -184,7 +186,8 @@ static inline void loop(u64 cntrs) } static void adjust_events_range(struct pmu_event *gp_events, - int instruction_idx, int branch_idx) + int instruction_idx, int branch_idx, + int branch_miss_idx) { /* * If HW supports GLOBAL_CTRL MSR, enabling and disabling PMCs are @@ -205,6 +208,17 @@ static void adjust_events_range(struct pmu_event *gp_events, gp_events[branch_idx].min = LOOP_BRANCHES; gp_events[branch_idx].max = LOOP_BRANCHES; } + + /* + * For CPUs without IBPB support, no way to force to trigger a + * branch miss and the measured branch misses is possible to be + * 0. Thus overwrite the lower boundary of branch misses event + * to 0 to avoid false positive. 
+ */ + if (!has_ibpb()) { + /* branch misses event */ + gp_events[branch_miss_idx].min = 0; + } } volatile uint64_t irq_received; @@ -918,6 +932,7 @@ int main(int ac, char **av) { int instruction_idx; int branch_idx; + int branch_miss_idx; setup_vm(); handle_irq(PMI_VECTOR, cnt_overflow); @@ -934,6 +949,7 @@ int main(int ac, char **av) gp_events_size = sizeof(intel_gp_events)/sizeof(intel_gp_events[0]); instruction_idx = INTEL_INSTRUCTIONS_IDX; branch_idx = INTEL_BRANCHES_IDX; + branch_miss_idx = INTEL_BRANCH_MISS_IDX; /* * For legacy Intel CPUS without clflush/clflushopt support, @@ -950,9 +966,10 @@ int main(int ac, char **av) gp_events = (struct pmu_event *)amd_gp_events; instruction_idx = AMD_INSTRUCTIONS_IDX; branch_idx = AMD_BRANCHES_IDX; + branch_miss_idx = AMD_BRANCH_MISS_IDX; report_prefix_push("AMD"); } - adjust_events_range(gp_events, instruction_idx, branch_idx); + adjust_events_range(gp_events, instruction_idx, branch_idx, branch_miss_idx); printf("PMU version: %d\n", pmu.version); printf("GP counters: %d\n", pmu.nr_gp_counters); From patchwork Sat Sep 14 10:17:28 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: "Mi, Dapeng" X-Patchwork-Id: 13804297 Received: from mgamail.intel.com (mgamail.intel.com [198.175.65.11]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 682361D12EA; Sat, 14 Sep 2024 07:02:02 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=198.175.65.11 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1726297323; cv=none; b=uUUNO/mIEbD4FVbIlgqZkaWyqSNZ9jiHkwbMn+lKNciX7vn8URXvcsMGusprsECuArTF02RTDEIRAdJaUu+5HFHYYF3jWHf1HU9QEZkgXMGMKdluo3auuT2NDr/zrWMAci32qcKXiMaJGsSmXzCrJXxi+9iQ3D2ol8lBgFV/XTk= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1726297323; c=relaxed/simple; bh=M4FmLtoUVjDaLJG+tUmv0c1pIfIWLYH5Bz71t6kstTk=; h=From:To:Cc:Subject:Date:Message-Id:In-Reply-To:References: MIME-Version; b=P97t05FJPfDA7sX3WBrcMmmv15EGnExirYin21siezJgvFIeYRu+uiAjKP6Y8BC5zYCQgD9Vr1hmdBTguHnRWLaeI85eEYBVIvyyVOAzf8kYwAghA0Q5lZD2NXep0kwEyWLHXGV+OzzcE5sOX9nBD6ZoZkTBWnNlCPmoB+OOUuI= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=linux.intel.com; spf=none smtp.mailfrom=linux.intel.com; dkim=pass (2048-bit key) header.d=intel.com header.i=@intel.com header.b=XK7UerGB; arc=none smtp.client-ip=198.175.65.11 Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=linux.intel.com Authentication-Results: smtp.subspace.kernel.org; spf=none smtp.mailfrom=linux.intel.com Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=intel.com header.i=@intel.com header.b="XK7UerGB" DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1726297322; x=1757833322; h=from:to:cc:subject:date:message-id:in-reply-to: references:mime-version:content-transfer-encoding; bh=M4FmLtoUVjDaLJG+tUmv0c1pIfIWLYH5Bz71t6kstTk=; b=XK7UerGBQWPJJq6U5+UUErH8aVe4yFiDko5YXGF5ZfnagahZMYyvJ2ZW GRVZEqZ8T4rPcSBdCU7/G4QeybRLSkebk+kJIAQTvnnEqqXyhdydj2bh8 l7qbmq/cp549v2xSnUd1rtihVIbQEf8xKf8q0I0lqV/eC/7T1mc7lBp++ GBdjTZ721qAZ39nnbzkZnbwt2Y+AmqgEavBNrM9HT+U5/rhYWyQLN5FEw CMyrO6mZzq8lbvXddCKWptbwNDQOT9IfNVIvluJfzrP1K/WZb5FOUro97 9utEJotSsm4wDZv+koYi6FebmCfyeVywcHDCKxuXBtBNC+OrV+Jav4uly w==; 
X-CSE-ConnectionGUID: uNThJpO9Rk2ecFFYHAsemQ== X-CSE-MsgGUID: LG8Msil4Tp2QgYOHPZNtwQ== X-IronPort-AV: E=McAfee;i="6700,10204,11194"; a="35778899" X-IronPort-AV: E=Sophos;i="6.10,228,1719903600"; d="scan'208";a="35778899" Received: from fmviesa006.fm.intel.com ([10.60.135.146]) by orvoesa103.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 14 Sep 2024 00:02:02 -0700 X-CSE-ConnectionGUID: TuQxA9nDRt+TpaV1XfE+6g== X-CSE-MsgGUID: FqI4piiFRryiZ6XGpGAoYA== X-ExtLoop1: 1 X-IronPort-AV: E=Sophos;i="6.10,228,1719903600"; d="scan'208";a="67951026" Received: from emr.sh.intel.com ([10.112.229.56]) by fmviesa006.fm.intel.com with ESMTP; 14 Sep 2024 00:01:59 -0700 From: Dapeng Mi To: Sean Christopherson , Paolo Bonzini Cc: kvm@vger.kernel.org, linux-kernel@vger.kernel.org, Jim Mattson , Mingwei Zhang , Xiong Zhang , Zhenyu Wang , Like Xu , Jinrong Liang , Yongwei Ma , Dapeng Mi , Dapeng Mi Subject: [kvm-unit-tests patch v6 18/18] x86: pmu: Optimize emulated instruction validation Date: Sat, 14 Sep 2024 10:17:28 +0000 Message-Id: <20240914101728.33148-19-dapeng1.mi@linux.intel.com> X-Mailer: git-send-email 2.40.1 In-Reply-To: <20240914101728.33148-1-dapeng1.mi@linux.intel.com> References: <20240914101728.33148-1-dapeng1.mi@linux.intel.com> Precedence: bulk X-Mailing-List: kvm@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 For support CPUs supporting PERF_GLOBAL_CTRL MSR, the validation for emulated instruction can be improved to check against precise counts for instructions and branches events instead of a rough range. Move enabling and disabling PERF_GLOBAL_CTRL MSR into kvm_fep_asm blob, thus instructions and branches events can be verified against precise counts. Signed-off-by: Dapeng Mi --- x86/pmu.c | 108 ++++++++++++++++++++++++++++++++---------------------- 1 file changed, 65 insertions(+), 43 deletions(-) diff --git a/x86/pmu.c b/x86/pmu.c index c7848fd1..3a5659b2 100644 --- a/x86/pmu.c +++ b/x86/pmu.c @@ -14,11 +14,6 @@ #define N 1000000 -// These values match the number of instructions and branches in the -// assembly block in check_emulated_instr(). -#define EXPECTED_INSTR 17 -#define EXPECTED_BRNCH 5 - #define IBPB_JMP_INSTRNS 9 #define IBPB_JMP_BRANCHES 2 @@ -71,6 +66,40 @@ do { \ : "edi"); \ } while (0) +/* the number of instructions and branches of the kvm_fep_asm() blob */ +#define KVM_FEP_INSTR 22 +#define KVM_FEP_BRNCH 5 + +/* + * KVM_FEP is a magic prefix that forces emulation so + * 'KVM_FEP "jne label\n"' just counts as a single instruction. + */ +#define kvm_fep_asm(_wrmsr) \ +do { \ + asm volatile( \ + _wrmsr "\n\t" \ + "mov %%ecx, %%edi;\n\t" \ + "mov $0x0, %%eax;\n\t" \ + "cmp $0x0, %%eax;\n\t" \ + KVM_FEP "jne 1f\n\t" \ + KVM_FEP "jne 1f\n\t" \ + KVM_FEP "jne 1f\n\t" \ + KVM_FEP "jne 1f\n\t" \ + KVM_FEP "jne 1f\n\t" \ + "mov $0xa, %%eax; cpuid;\n\t" \ + "mov $0xa, %%eax; cpuid;\n\t" \ + "mov $0xa, %%eax; cpuid;\n\t" \ + "mov $0xa, %%eax; cpuid;\n\t" \ + "mov $0xa, %%eax; cpuid;\n\t" \ + "1: mov %%edi, %%ecx; \n\t" \ + "xor %%eax, %%eax; \n\t" \ + "xor %%edx, %%edx;\n\t" \ + _wrmsr "\n\t" \ + : \ + : "a"(eax), "d"(edx), "c"(ecx) \ + : "ebx", "edi"); \ +} while (0) + typedef struct { uint32_t ctr; uint32_t idx; @@ -672,6 +701,7 @@ static void check_running_counter_wrmsr(void) static void check_emulated_instr(void) { + u32 eax, edx, ecx; uint64_t status, instr_start, brnch_start; uint64_t gp_counter_width = (1ull << pmu.gp_counter_width) - 1; unsigned int branch_idx = pmu.is_intel ? 
@@ -679,6 +709,7 @@ static void check_emulated_instr(void) unsigned int instruction_idx = pmu.is_intel ? INTEL_INSTRUCTIONS_IDX : AMD_INSTRUCTIONS_IDX; + pmu_counter_t brnch_cnt = { .ctr = MSR_GP_COUNTERx(0), /* branch instructions */ @@ -694,55 +725,46 @@ static void check_emulated_instr(void) if (this_cpu_has_perf_global_status()) pmu_clear_global_status(); - start_event(&brnch_cnt); - start_event(&instr_cnt); + __start_event(&brnch_cnt, 0); + __start_event(&instr_cnt, 0); - brnch_start = -EXPECTED_BRNCH; - instr_start = -EXPECTED_INSTR; + brnch_start = -KVM_FEP_BRNCH; + instr_start = -KVM_FEP_INSTR; wrmsr(MSR_GP_COUNTERx(0), brnch_start & gp_counter_width); wrmsr(MSR_GP_COUNTERx(1), instr_start & gp_counter_width); - // KVM_FEP is a magic prefix that forces emulation so - // 'KVM_FEP "jne label\n"' just counts as a single instruction. - asm volatile( - "mov $0x0, %%eax\n" - "cmp $0x0, %%eax\n" - KVM_FEP "jne label\n" - KVM_FEP "jne label\n" - KVM_FEP "jne label\n" - KVM_FEP "jne label\n" - KVM_FEP "jne label\n" - "mov $0xa, %%eax\n" - "cpuid\n" - "mov $0xa, %%eax\n" - "cpuid\n" - "mov $0xa, %%eax\n" - "cpuid\n" - "mov $0xa, %%eax\n" - "cpuid\n" - "mov $0xa, %%eax\n" - "cpuid\n" - "label:\n" - : - : - : "eax", "ebx", "ecx", "edx"); - if (this_cpu_has_perf_global_ctrl()) - wrmsr(pmu.msr_global_ctl, 0); + if (this_cpu_has_perf_global_ctrl()) { + eax = BIT(0) | BIT(1); + ecx = pmu.msr_global_ctl; + edx = 0; + kvm_fep_asm("wrmsr"); + } else { + eax = ecx = edx = 0; + kvm_fep_asm("nop"); + } - stop_event(&brnch_cnt); - stop_event(&instr_cnt); + __stop_event(&brnch_cnt); + __stop_event(&instr_cnt); // Check that the end count - start count is at least the expected // number of instructions and branches. - report(instr_cnt.count - instr_start >= EXPECTED_INSTR, - "instruction count"); - report(brnch_cnt.count - brnch_start >= EXPECTED_BRNCH, - "branch count"); + if (this_cpu_has_perf_global_ctrl()) { + report(instr_cnt.count - instr_start == KVM_FEP_INSTR, + "instruction count"); + report(brnch_cnt.count - brnch_start == KVM_FEP_BRNCH, + "branch count"); + } else { + report(instr_cnt.count - instr_start >= KVM_FEP_INSTR, + "instruction count"); + report(brnch_cnt.count - brnch_start >= KVM_FEP_BRNCH, + "branch count"); + } + if (this_cpu_has_perf_global_status()) { // Additionally check that those counters overflowed properly. status = rdmsr(pmu.msr_global_status); - report(status & 1, "branch counter overflow"); - report(status & 2, "instruction counter overflow"); + report(status & BIT_ULL(0), "branch counter overflow"); + report(status & BIT_ULL(1), "instruction counter overflow"); } report_prefix_pop();
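For reference, the new constants can be cross-checked against the kvm_fep_asm() blob above. The breakdown below is a sketch that assumes exactly one of the two boundary wrmsr instructions retires inside the counting window, which is what KVM_FEP_INSTR = 22 (one less than the 23 instructions in the blob) implies:

/*
 * Retired instructions inside kvm_fep_asm("wrmsr"), assuming one boundary
 * wrmsr is counted and the other is not:
 *
 *   mov %ecx,%edi                              1
 *   mov $0x0,%eax ; cmp $0x0,%eax              2
 *   5 x KVM_FEP jne 1f                         5   (each emulated jump retires once)
 *   5 x (mov $0xa,%eax ; cpuid)               10
 *   mov %edi,%ecx ; xor %eax ; xor %edx        3
 *   one boundary wrmsr                         1
 *                                             --
 *                                             22  = KVM_FEP_INSTR
 *
 * Retired branch instructions: the five (not-taken) jne = 5 = KVM_FEP_BRNCH.
 */

When PERF_GLOBAL_CTRL is not available the test keeps the ">=" checks, so this exact budget only matters on the precise (wrmsr) path.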