From patchwork Wed Feb 15 01:07:02 2023
X-Patchwork-Submitter: Raghavendra Rao Ananta
X-Patchwork-Id: 13141137
Date: Wed, 15 Feb 2023 01:07:02 +0000
In-Reply-To: <20230215010717.3612794-1-rananta@google.com>
References: <20230215010717.3612794-1-rananta@google.com>
Message-ID: <20230215010717.3612794-2-rananta@google.com>
Subject: [REPOST PATCH 01/16] tools: arm64: Import perf_event.h
From: Raghavendra Rao Ananta
To: Oliver Upton, Reiji Watanabe, Marc Zyngier, Ricardo Koller,
    James Morse, Suzuki K Poulose
Cc: Paolo Bonzini, Jing Zhang, Colton Lewis, Raghavendra Rao Anata,
    linux-arm-kernel@lists.infradead.org, kvmarm@lists.linux.dev,
    linux-kernel@vger.kernel.org, kvm@vger.kernel.org

From: Reiji Watanabe

Copy perf_event.h from the kernel's arch/arm64/include/asm/perf_event.h.
The following patches will use macros defined in this header.

Signed-off-by: Reiji Watanabe
Signed-off-by: Raghavendra Rao Ananta
---
 tools/arch/arm64/include/asm/perf_event.h | 258 ++++++++++++++++++++++
 1 file changed, 258 insertions(+)
 create mode 100644 tools/arch/arm64/include/asm/perf_event.h

diff --git a/tools/arch/arm64/include/asm/perf_event.h b/tools/arch/arm64/include/asm/perf_event.h
new file mode 100644
index 0000000000000..97e49a4d4969f
--- /dev/null
+++ b/tools/arch/arm64/include/asm/perf_event.h
@@ -0,0 +1,258 @@
+/* SPDX-License-Identifier: GPL-2.0-only */
+/*
+ * Copyright (C) 2012 ARM Ltd.
+ */
+
+#ifndef __ASM_PERF_EVENT_H
+#define __ASM_PERF_EVENT_H
+
+#define ARMV8_PMU_MAX_COUNTERS 32
+#define ARMV8_PMU_COUNTER_MASK (ARMV8_PMU_MAX_COUNTERS - 1)
+
+/*
+ * Common architectural and microarchitectural event numbers.
+ */
+#define ARMV8_PMUV3_PERFCTR_SW_INCR 0x0000
+#define ARMV8_PMUV3_PERFCTR_L1I_CACHE_REFILL 0x0001
+#define ARMV8_PMUV3_PERFCTR_L1I_TLB_REFILL 0x0002
+#define ARMV8_PMUV3_PERFCTR_L1D_CACHE_REFILL 0x0003
+#define ARMV8_PMUV3_PERFCTR_L1D_CACHE 0x0004
+#define ARMV8_PMUV3_PERFCTR_L1D_TLB_REFILL 0x0005
+#define ARMV8_PMUV3_PERFCTR_LD_RETIRED 0x0006
+#define ARMV8_PMUV3_PERFCTR_ST_RETIRED 0x0007
+#define ARMV8_PMUV3_PERFCTR_INST_RETIRED 0x0008
+#define ARMV8_PMUV3_PERFCTR_EXC_TAKEN 0x0009
+#define ARMV8_PMUV3_PERFCTR_EXC_RETURN 0x000A
+#define ARMV8_PMUV3_PERFCTR_CID_WRITE_RETIRED 0x000B
+#define ARMV8_PMUV3_PERFCTR_PC_WRITE_RETIRED 0x000C
+#define ARMV8_PMUV3_PERFCTR_BR_IMMED_RETIRED 0x000D
+#define ARMV8_PMUV3_PERFCTR_BR_RETURN_RETIRED 0x000E
+#define ARMV8_PMUV3_PERFCTR_UNALIGNED_LDST_RETIRED 0x000F
+#define ARMV8_PMUV3_PERFCTR_BR_MIS_PRED 0x0010
+#define ARMV8_PMUV3_PERFCTR_CPU_CYCLES 0x0011
+#define ARMV8_PMUV3_PERFCTR_BR_PRED 0x0012
+#define ARMV8_PMUV3_PERFCTR_MEM_ACCESS 0x0013
+#define ARMV8_PMUV3_PERFCTR_L1I_CACHE 0x0014
+#define ARMV8_PMUV3_PERFCTR_L1D_CACHE_WB 0x0015
+#define ARMV8_PMUV3_PERFCTR_L2D_CACHE 0x0016
+#define ARMV8_PMUV3_PERFCTR_L2D_CACHE_REFILL 0x0017
+#define ARMV8_PMUV3_PERFCTR_L2D_CACHE_WB 0x0018
+#define ARMV8_PMUV3_PERFCTR_BUS_ACCESS 0x0019
+#define ARMV8_PMUV3_PERFCTR_MEMORY_ERROR 0x001A
+#define ARMV8_PMUV3_PERFCTR_INST_SPEC 0x001B
+#define ARMV8_PMUV3_PERFCTR_TTBR_WRITE_RETIRED 0x001C
+#define ARMV8_PMUV3_PERFCTR_BUS_CYCLES 0x001D
+#define ARMV8_PMUV3_PERFCTR_CHAIN 0x001E
+#define ARMV8_PMUV3_PERFCTR_L1D_CACHE_ALLOCATE 0x001F
+#define ARMV8_PMUV3_PERFCTR_L2D_CACHE_ALLOCATE 0x0020
+#define ARMV8_PMUV3_PERFCTR_BR_RETIRED 0x0021
+#define ARMV8_PMUV3_PERFCTR_BR_MIS_PRED_RETIRED 0x0022
+#define ARMV8_PMUV3_PERFCTR_STALL_FRONTEND 0x0023
+#define ARMV8_PMUV3_PERFCTR_STALL_BACKEND 0x0024
+#define ARMV8_PMUV3_PERFCTR_L1D_TLB 0x0025
+#define ARMV8_PMUV3_PERFCTR_L1I_TLB 0x0026
+#define ARMV8_PMUV3_PERFCTR_L2I_CACHE 0x0027
+#define ARMV8_PMUV3_PERFCTR_L2I_CACHE_REFILL 0x0028
+#define ARMV8_PMUV3_PERFCTR_L3D_CACHE_ALLOCATE 0x0029
+#define ARMV8_PMUV3_PERFCTR_L3D_CACHE_REFILL 0x002A
+#define ARMV8_PMUV3_PERFCTR_L3D_CACHE 0x002B
+#define ARMV8_PMUV3_PERFCTR_L3D_CACHE_WB 0x002C
+#define ARMV8_PMUV3_PERFCTR_L2D_TLB_REFILL 0x002D
+#define ARMV8_PMUV3_PERFCTR_L2I_TLB_REFILL 0x002E
+#define ARMV8_PMUV3_PERFCTR_L2D_TLB 0x002F
+#define ARMV8_PMUV3_PERFCTR_L2I_TLB 0x0030
+#define ARMV8_PMUV3_PERFCTR_REMOTE_ACCESS 0x0031
+#define ARMV8_PMUV3_PERFCTR_LL_CACHE 0x0032
+#define ARMV8_PMUV3_PERFCTR_LL_CACHE_MISS 0x0033
+#define ARMV8_PMUV3_PERFCTR_DTLB_WALK 0x0034
+#define ARMV8_PMUV3_PERFCTR_ITLB_WALK 0x0035
+#define ARMV8_PMUV3_PERFCTR_LL_CACHE_RD 0x0036
+#define ARMV8_PMUV3_PERFCTR_LL_CACHE_MISS_RD 0x0037
+#define ARMV8_PMUV3_PERFCTR_REMOTE_ACCESS_RD 0x0038
+#define ARMV8_PMUV3_PERFCTR_L1D_CACHE_LMISS_RD 0x0039
+#define ARMV8_PMUV3_PERFCTR_OP_RETIRED 0x003A
+#define ARMV8_PMUV3_PERFCTR_OP_SPEC 0x003B
+#define ARMV8_PMUV3_PERFCTR_STALL 0x003C
+#define ARMV8_PMUV3_PERFCTR_STALL_SLOT_BACKEND 0x003D
+#define ARMV8_PMUV3_PERFCTR_STALL_SLOT_FRONTEND 0x003E
+#define ARMV8_PMUV3_PERFCTR_STALL_SLOT 0x003F
+
+/* Statistical profiling extension microarchitectural events */
+#define ARMV8_SPE_PERFCTR_SAMPLE_POP 0x4000
+#define ARMV8_SPE_PERFCTR_SAMPLE_FEED 0x4001
+#define ARMV8_SPE_PERFCTR_SAMPLE_FILTRATE 0x4002
+#define ARMV8_SPE_PERFCTR_SAMPLE_COLLISION 0x4003
+
+/* AMUv1 architecture events */
+#define ARMV8_AMU_PERFCTR_CNT_CYCLES 0x4004
+#define ARMV8_AMU_PERFCTR_STALL_BACKEND_MEM 0x4005
+
+/* long-latency read miss events */
+#define ARMV8_PMUV3_PERFCTR_L1I_CACHE_LMISS 0x4006
+#define ARMV8_PMUV3_PERFCTR_L2D_CACHE_LMISS_RD 0x4009
+#define ARMV8_PMUV3_PERFCTR_L2I_CACHE_LMISS 0x400A
+#define ARMV8_PMUV3_PERFCTR_L3D_CACHE_LMISS_RD 0x400B
+
+/* Trace buffer events */
+#define ARMV8_PMUV3_PERFCTR_TRB_WRAP 0x400C
+#define ARMV8_PMUV3_PERFCTR_TRB_TRIG 0x400E
+
+/* Trace unit events */
+#define ARMV8_PMUV3_PERFCTR_TRCEXTOUT0 0x4010
+#define ARMV8_PMUV3_PERFCTR_TRCEXTOUT1 0x4011
+#define ARMV8_PMUV3_PERFCTR_TRCEXTOUT2 0x4012
+#define ARMV8_PMUV3_PERFCTR_TRCEXTOUT3 0x4013
+#define ARMV8_PMUV3_PERFCTR_CTI_TRIGOUT4 0x4018
+#define ARMV8_PMUV3_PERFCTR_CTI_TRIGOUT5 0x4019
+#define ARMV8_PMUV3_PERFCTR_CTI_TRIGOUT6 0x401A
+#define ARMV8_PMUV3_PERFCTR_CTI_TRIGOUT7 0x401B
+
+/* additional latency from alignment events */
+#define ARMV8_PMUV3_PERFCTR_LDST_ALIGN_LAT 0x4020
+#define ARMV8_PMUV3_PERFCTR_LD_ALIGN_LAT 0x4021
+#define ARMV8_PMUV3_PERFCTR_ST_ALIGN_LAT 0x4022
+
+/* Armv8.5 Memory Tagging Extension events */
+#define ARMV8_MTE_PERFCTR_MEM_ACCESS_CHECKED 0x4024
+#define ARMV8_MTE_PERFCTR_MEM_ACCESS_CHECKED_RD 0x4025
+#define ARMV8_MTE_PERFCTR_MEM_ACCESS_CHECKED_WR 0x4026
+
+/* ARMv8 recommended implementation defined event types */
+#define ARMV8_IMPDEF_PERFCTR_L1D_CACHE_RD 0x0040
+#define ARMV8_IMPDEF_PERFCTR_L1D_CACHE_WR 0x0041
+#define ARMV8_IMPDEF_PERFCTR_L1D_CACHE_REFILL_RD 0x0042
+#define ARMV8_IMPDEF_PERFCTR_L1D_CACHE_REFILL_WR 0x0043
+#define ARMV8_IMPDEF_PERFCTR_L1D_CACHE_REFILL_INNER 0x0044
+#define ARMV8_IMPDEF_PERFCTR_L1D_CACHE_REFILL_OUTER 0x0045
+#define ARMV8_IMPDEF_PERFCTR_L1D_CACHE_WB_VICTIM 0x0046
+#define ARMV8_IMPDEF_PERFCTR_L1D_CACHE_WB_CLEAN 0x0047
+#define ARMV8_IMPDEF_PERFCTR_L1D_CACHE_INVAL 0x0048
+
+#define ARMV8_IMPDEF_PERFCTR_L1D_TLB_REFILL_RD 0x004C
+#define ARMV8_IMPDEF_PERFCTR_L1D_TLB_REFILL_WR 0x004D
+#define ARMV8_IMPDEF_PERFCTR_L1D_TLB_RD 0x004E
+#define ARMV8_IMPDEF_PERFCTR_L1D_TLB_WR 0x004F
+#define ARMV8_IMPDEF_PERFCTR_L2D_CACHE_RD 0x0050
+#define ARMV8_IMPDEF_PERFCTR_L2D_CACHE_WR 0x0051
+#define ARMV8_IMPDEF_PERFCTR_L2D_CACHE_REFILL_RD 0x0052
+#define ARMV8_IMPDEF_PERFCTR_L2D_CACHE_REFILL_WR 0x0053
+
+#define ARMV8_IMPDEF_PERFCTR_L2D_CACHE_WB_VICTIM 0x0056
+#define ARMV8_IMPDEF_PERFCTR_L2D_CACHE_WB_CLEAN 0x0057
+#define ARMV8_IMPDEF_PERFCTR_L2D_CACHE_INVAL 0x0058
+
+#define ARMV8_IMPDEF_PERFCTR_L2D_TLB_REFILL_RD 0x005C
+#define ARMV8_IMPDEF_PERFCTR_L2D_TLB_REFILL_WR 0x005D
+#define ARMV8_IMPDEF_PERFCTR_L2D_TLB_RD 0x005E
+#define ARMV8_IMPDEF_PERFCTR_L2D_TLB_WR 0x005F
+#define ARMV8_IMPDEF_PERFCTR_BUS_ACCESS_RD 0x0060
+#define ARMV8_IMPDEF_PERFCTR_BUS_ACCESS_WR 0x0061
+#define ARMV8_IMPDEF_PERFCTR_BUS_ACCESS_SHARED 0x0062
+#define ARMV8_IMPDEF_PERFCTR_BUS_ACCESS_NOT_SHARED 0x0063
+#define ARMV8_IMPDEF_PERFCTR_BUS_ACCESS_NORMAL 0x0064
+#define ARMV8_IMPDEF_PERFCTR_BUS_ACCESS_PERIPH 0x0065
+#define ARMV8_IMPDEF_PERFCTR_MEM_ACCESS_RD 0x0066
+#define ARMV8_IMPDEF_PERFCTR_MEM_ACCESS_WR 0x0067
+#define ARMV8_IMPDEF_PERFCTR_UNALIGNED_LD_SPEC 0x0068
+#define ARMV8_IMPDEF_PERFCTR_UNALIGNED_ST_SPEC 0x0069
+#define ARMV8_IMPDEF_PERFCTR_UNALIGNED_LDST_SPEC 0x006A
+
+#define ARMV8_IMPDEF_PERFCTR_LDREX_SPEC 0x006C
+#define ARMV8_IMPDEF_PERFCTR_STREX_PASS_SPEC 0x006D
+#define ARMV8_IMPDEF_PERFCTR_STREX_FAIL_SPEC 0x006E
+#define ARMV8_IMPDEF_PERFCTR_STREX_SPEC 0x006F
+#define ARMV8_IMPDEF_PERFCTR_LD_SPEC 0x0070
+#define ARMV8_IMPDEF_PERFCTR_ST_SPEC 0x0071
+#define ARMV8_IMPDEF_PERFCTR_LDST_SPEC 0x0072
+#define ARMV8_IMPDEF_PERFCTR_DP_SPEC 0x0073
+#define ARMV8_IMPDEF_PERFCTR_ASE_SPEC 0x0074
+#define ARMV8_IMPDEF_PERFCTR_VFP_SPEC 0x0075
+#define ARMV8_IMPDEF_PERFCTR_PC_WRITE_SPEC 0x0076
+#define ARMV8_IMPDEF_PERFCTR_CRYPTO_SPEC 0x0077
+#define ARMV8_IMPDEF_PERFCTR_BR_IMMED_SPEC 0x0078
+#define ARMV8_IMPDEF_PERFCTR_BR_RETURN_SPEC 0x0079
+#define ARMV8_IMPDEF_PERFCTR_BR_INDIRECT_SPEC 0x007A
+
+#define ARMV8_IMPDEF_PERFCTR_ISB_SPEC 0x007C
+#define ARMV8_IMPDEF_PERFCTR_DSB_SPEC 0x007D
+#define ARMV8_IMPDEF_PERFCTR_DMB_SPEC 0x007E
+
+#define ARMV8_IMPDEF_PERFCTR_EXC_UNDEF 0x0081
+#define ARMV8_IMPDEF_PERFCTR_EXC_SVC 0x0082
+#define ARMV8_IMPDEF_PERFCTR_EXC_PABORT 0x0083
+#define ARMV8_IMPDEF_PERFCTR_EXC_DABORT 0x0084
+
+#define ARMV8_IMPDEF_PERFCTR_EXC_IRQ 0x0086
+#define ARMV8_IMPDEF_PERFCTR_EXC_FIQ 0x0087
+#define ARMV8_IMPDEF_PERFCTR_EXC_SMC 0x0088
+
+#define ARMV8_IMPDEF_PERFCTR_EXC_HVC 0x008A
+#define ARMV8_IMPDEF_PERFCTR_EXC_TRAP_PABORT 0x008B
+#define ARMV8_IMPDEF_PERFCTR_EXC_TRAP_DABORT 0x008C
+#define ARMV8_IMPDEF_PERFCTR_EXC_TRAP_OTHER 0x008D
+#define ARMV8_IMPDEF_PERFCTR_EXC_TRAP_IRQ 0x008E
+#define ARMV8_IMPDEF_PERFCTR_EXC_TRAP_FIQ 0x008F
+#define ARMV8_IMPDEF_PERFCTR_RC_LD_SPEC 0x0090
+#define ARMV8_IMPDEF_PERFCTR_RC_ST_SPEC 0x0091
+
+#define ARMV8_IMPDEF_PERFCTR_L3D_CACHE_RD 0x00A0
+#define ARMV8_IMPDEF_PERFCTR_L3D_CACHE_WR 0x00A1
+#define ARMV8_IMPDEF_PERFCTR_L3D_CACHE_REFILL_RD 0x00A2
+#define ARMV8_IMPDEF_PERFCTR_L3D_CACHE_REFILL_WR 0x00A3
+
+#define ARMV8_IMPDEF_PERFCTR_L3D_CACHE_WB_VICTIM 0x00A6
+#define ARMV8_IMPDEF_PERFCTR_L3D_CACHE_WB_CLEAN 0x00A7
+#define ARMV8_IMPDEF_PERFCTR_L3D_CACHE_INVAL 0x00A8
+
+/*
+ * Per-CPU PMCR: config reg
+ */
+#define ARMV8_PMU_PMCR_E (1 << 0) /* Enable all counters */
+#define ARMV8_PMU_PMCR_P (1 << 1) /* Reset all counters */
+#define ARMV8_PMU_PMCR_C (1 << 2) /* Cycle counter reset */
+#define ARMV8_PMU_PMCR_D (1 << 3) /* CCNT counts every 64th cpu cycle */
+#define ARMV8_PMU_PMCR_X (1 << 4) /* Export to ETM */
+#define ARMV8_PMU_PMCR_DP (1 << 5) /* Disable CCNT if non-invasive debug */
+#define ARMV8_PMU_PMCR_LC (1 << 6) /* Overflow on 64 bit cycle counter */
+#define ARMV8_PMU_PMCR_LP (1 << 7) /* Long event counter enable */
+#define ARMV8_PMU_PMCR_N_SHIFT 11 /* Number of counters supported */
+#define ARMV8_PMU_PMCR_N (0x1f << ARMV8_PMU_PMCR_N_SHIFT)
+#define ARMV8_PMU_PMCR_MASK 0xff /* Mask for writable bits */
+
+/*
+ * PMOVSR: counters overflow flag status reg
+ */
+#define ARMV8_PMU_OVSR_MASK 0xffffffff /* Mask for writable bits */
+#define ARMV8_PMU_OVERFLOWED_MASK ARMV8_PMU_OVSR_MASK
+
+/*
+ * PMXEVTYPER: Event selection reg
+ */
+#define ARMV8_PMU_EVTYPE_MASK 0xc800ffff /* Mask for writable bits */
+#define ARMV8_PMU_EVTYPE_EVENT 0xffff /* Mask for EVENT bits */
+
+/*
+ * Event filters for PMUv3
+ */
+#define ARMV8_PMU_EXCLUDE_EL1 (1U << 31)
+#define ARMV8_PMU_EXCLUDE_EL0 (1U << 30)
+#define ARMV8_PMU_INCLUDE_EL2 (1U << 27)
+
+/*
+ * PMUSERENR: user enable reg
+ */
+#define ARMV8_PMU_USERENR_MASK 0xf /* Mask for writable bits */
+#define ARMV8_PMU_USERENR_EN (1 << 0) /* PMU regs can be accessed at EL0 */
+#define ARMV8_PMU_USERENR_SW (1 << 1) /* PMSWINC can be written at EL0 */
+#define ARMV8_PMU_USERENR_CR (1 << 2) /* Cycle counter can be read at EL0 */
+#define ARMV8_PMU_USERENR_ER (1 << 3) /* Event counter can be read at EL0 */
+
+/* PMMIR_EL1.SLOTS mask */
+#define ARMV8_PMU_SLOTS_MASK 0xff
+
+#define ARMV8_PMU_BUS_SLOTS_SHIFT 8
+#define ARMV8_PMU_BUS_SLOTS_MASK 0xff
+#define ARMV8_PMU_BUS_WIDTH_SHIFT 16
+#define ARMV8_PMU_BUS_WIDTH_MASK 0xf
+
+#endif /* __ASM_PERF_EVENT_H */
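
For orientation: the PMCR_EL0 field macros imported above are what the
following selftest patches use to decode the PMU configuration register.
Below is a minimal, self-contained sketch of that decoding; the macro
values are copied from the header above, and the example PMCR value is
made up purely for illustration.

#include <stdint.h>
#include <stdio.h>

#define ARMV8_PMU_PMCR_E	(1 << 0)	/* Enable all counters */
#define ARMV8_PMU_PMCR_N_SHIFT	11		/* Number of counters supported */
#define ARMV8_PMU_PMCR_N	(0x1f << ARMV8_PMU_PMCR_N_SHIFT)

int main(void)
{
	/* Example value only: PMCR_EL0.N = 6, PMCR_EL0.E = 1 */
	uint64_t pmcr = 0x3001;
	uint64_t n = (pmcr & ARMV8_PMU_PMCR_N) >> ARMV8_PMU_PMCR_N_SHIFT;

	printf("event counters: %lu, enabled: %d\n",
	       (unsigned long)n, !!(pmcr & ARMV8_PMU_PMCR_E));
	return 0;
}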
From patchwork Wed Feb 15 01:07:03 2023
X-Patchwork-Submitter: Raghavendra Rao Ananta
X-Patchwork-Id: 13141107
Date: Wed, 15 Feb 2023 01:07:03 +0000
In-Reply-To: <20230215010717.3612794-1-rananta@google.com>
References: <20230215010717.3612794-1-rananta@google.com>
Message-ID: <20230215010717.3612794-3-rananta@google.com>
Subject: [REPOST PATCH 02/16] KVM: selftests: aarch64: Introduce vpmu_counter_access test
From: Raghavendra Rao Ananta
To: Oliver Upton, Reiji Watanabe, Marc Zyngier, Ricardo Koller,
    James Morse, Suzuki K Poulose
Cc: Paolo Bonzini, Jing Zhang, Colton Lewis, Raghavendra Rao Anata,
    linux-arm-kernel@lists.infradead.org, kvmarm@lists.linux.dev,
    linux-kernel@vger.kernel.org, kvm@vger.kernel.org

From: Reiji Watanabe

Introduce the vpmu_counter_access test for arm64 platforms. The test
configures PMUv3 for a vCPU, sets PMCR_EL0.N for the vCPU, and checks
whether the guest consistently sees the same number of PMU event
counters (PMCR_EL0.N) that userspace set. The test is run with each of
the PMCR_EL0.N values from 0 to 31 (for PMCR_EL0.N values greater than
the host value, the test expects KVM_SET_ONE_REG for PMCR_EL0 to fail).

Signed-off-by: Reiji Watanabe
Signed-off-by: Raghavendra Rao Ananta
---
 tools/testing/selftests/kvm/Makefile          |   1 +
 .../kvm/aarch64/vpmu_counter_access.c         | 207 ++++++++++++++++++
 2 files changed, 208 insertions(+)
 create mode 100644 tools/testing/selftests/kvm/aarch64/vpmu_counter_access.c

diff --git a/tools/testing/selftests/kvm/Makefile b/tools/testing/selftests/kvm/Makefile
index 1750f91dd9362..b27fea0ce5918 100644
--- a/tools/testing/selftests/kvm/Makefile
+++ b/tools/testing/selftests/kvm/Makefile
@@ -143,6 +143,7 @@ TEST_GEN_PROGS_aarch64 += aarch64/psci_test
 TEST_GEN_PROGS_aarch64 += aarch64/vcpu_width_config
 TEST_GEN_PROGS_aarch64 += aarch64/vgic_init
 TEST_GEN_PROGS_aarch64 += aarch64/vgic_irq
+TEST_GEN_PROGS_aarch64 += aarch64/vpmu_counter_access
 TEST_GEN_PROGS_aarch64 += access_tracking_perf_test
 TEST_GEN_PROGS_aarch64 += demand_paging_test
 TEST_GEN_PROGS_aarch64 += dirty_log_test
diff --git a/tools/testing/selftests/kvm/aarch64/vpmu_counter_access.c b/tools/testing/selftests/kvm/aarch64/vpmu_counter_access.c
new file mode 100644
index 0000000000000..7a4333f64daef
--- /dev/null
+++ b/tools/testing/selftests/kvm/aarch64/vpmu_counter_access.c
@@ -0,0 +1,207 @@
+// SPDX-License-Identifier: GPL-2.0-only
+/*
+ * vpmu_counter_access - Test vPMU event counter access
+ *
+ * Copyright (c) 2022 Google LLC.
+ *
+ * This test checks if the guest can see the same number of the PMU event
+ * counters (PMCR_EL0.N) that userspace sets.
+ * This test runs only when KVM_CAP_ARM_PMU_V3 is supported on the host.
+ */
+#include
+#include
+#include
+#include
+#include
+#include
+
+/* The max number of the PMU event counters (excluding the cycle counter) */
+#define ARMV8_PMU_MAX_GENERAL_COUNTERS (ARMV8_PMU_MAX_COUNTERS - 1)
+
+/*
+ * The guest is configured with PMUv3 with @expected_pmcr_n number of
+ * event counters.
+ * Check if @expected_pmcr_n is consistent with PMCR_EL0.N.
+ */
+static void guest_code(uint64_t expected_pmcr_n)
+{
+	uint64_t pmcr, pmcr_n;
+
+	GUEST_ASSERT(expected_pmcr_n <= ARMV8_PMU_MAX_GENERAL_COUNTERS);
+
+	pmcr = read_sysreg(pmcr_el0);
+	pmcr_n = FIELD_GET(ARMV8_PMU_PMCR_N, pmcr);
+
+	/* Make sure that PMCR_EL0.N indicates the value userspace set */
+	GUEST_ASSERT_2(pmcr_n == expected_pmcr_n, pmcr_n, expected_pmcr_n);
+
+	GUEST_DONE();
+}
+
+#define GICD_BASE_GPA 0x8000000ULL
+#define GICR_BASE_GPA 0x80A0000ULL
+
+/* Create a VM that has one vCPU with PMUv3 configured. */
+static struct kvm_vm *create_vpmu_vm(void *guest_code, struct kvm_vcpu **vcpup,
+				     int *gic_fd)
+{
+	struct kvm_vm *vm;
+	struct kvm_vcpu *vcpu;
+	struct kvm_vcpu_init init;
+	uint8_t pmuver;
+	uint64_t dfr0, irq = 23;
+	struct kvm_device_attr irq_attr = {
+		.group = KVM_ARM_VCPU_PMU_V3_CTRL,
+		.attr = KVM_ARM_VCPU_PMU_V3_IRQ,
+		.addr = (uint64_t)&irq,
+	};
+	struct kvm_device_attr init_attr = {
+		.group = KVM_ARM_VCPU_PMU_V3_CTRL,
+		.attr = KVM_ARM_VCPU_PMU_V3_INIT,
+	};
+
+	vm = vm_create(1);
+
+	/* Create vCPU with PMUv3 */
+	vm_ioctl(vm, KVM_ARM_PREFERRED_TARGET, &init);
+	init.features[0] |= (1 << KVM_ARM_VCPU_PMU_V3);
+	vcpu = aarch64_vcpu_add(vm, 0, &init, guest_code);
+	*gic_fd = vgic_v3_setup(vm, 1, 64, GICD_BASE_GPA, GICR_BASE_GPA);
+
+	/* Make sure that PMUv3 support is indicated in the ID register */
+	vcpu_get_reg(vcpu, KVM_ARM64_SYS_REG(SYS_ID_AA64DFR0_EL1), &dfr0);
+	pmuver = FIELD_GET(ARM64_FEATURE_MASK(ID_AA64DFR0_PMUVER), dfr0);
+	TEST_ASSERT(pmuver != ID_AA64DFR0_PMUVER_IMP_DEF &&
+		    pmuver >= ID_AA64DFR0_PMUVER_8_0,
+		    "Unexpected PMUVER (0x%x) on the vCPU with PMUv3", pmuver);
+
+	/* Initialize vPMU */
+	vcpu_ioctl(vcpu, KVM_SET_DEVICE_ATTR, &irq_attr);
+	vcpu_ioctl(vcpu, KVM_SET_DEVICE_ATTR, &init_attr);
+
+	*vcpup = vcpu;
+	return vm;
+}
+
+static void run_vcpu(struct kvm_vcpu *vcpu, uint64_t pmcr_n)
+{
+	struct ucall uc;
+
+	vcpu_args_set(vcpu, 1, pmcr_n);
+	vcpu_run(vcpu);
+	switch (get_ucall(vcpu, &uc)) {
+	case UCALL_ABORT:
+		REPORT_GUEST_ASSERT_2(uc, "values:%#lx %#lx");
+		break;
+	case UCALL_DONE:
+		break;
+	default:
+		TEST_FAIL("Unknown ucall %lu", uc.cmd);
+		break;
+	}
+}
+
+/*
+ * Create a guest with one vCPU, set the PMCR_EL0.N for the vCPU to @pmcr_n,
+ * and run the test.
+ */
+static void run_test(uint64_t pmcr_n)
+{
+	struct kvm_vm *vm;
+	struct kvm_vcpu *vcpu;
+	int gic_fd;
+	uint64_t sp, pmcr, pmcr_orig;
+	struct kvm_vcpu_init init;
+
+	pr_debug("Test with pmcr_n %lu\n", pmcr_n);
+	vm = create_vpmu_vm(guest_code, &vcpu, &gic_fd);
+
+	/* Save the initial sp to restore it later to run the guest again */
+	vcpu_get_reg(vcpu, ARM64_CORE_REG(sp_el1), &sp);
+
+	/* Update the PMCR_EL0.N with @pmcr_n */
+	vcpu_get_reg(vcpu, KVM_ARM64_SYS_REG(SYS_PMCR_EL0), &pmcr_orig);
+	pmcr = pmcr_orig & ~ARMV8_PMU_PMCR_N;
+	pmcr |= (pmcr_n << ARMV8_PMU_PMCR_N_SHIFT);
+	vcpu_set_reg(vcpu, KVM_ARM64_SYS_REG(SYS_PMCR_EL0), pmcr);
+
+	run_vcpu(vcpu, pmcr_n);
+
+	/*
+	 * Reset and re-initialize the vCPU, and run the guest code again to
+	 * check if PMCR_EL0.N is preserved.
+	 */
+	vm_ioctl(vm, KVM_ARM_PREFERRED_TARGET, &init);
+	init.features[0] |= (1 << KVM_ARM_VCPU_PMU_V3);
+	aarch64_vcpu_setup(vcpu, &init);
+	vcpu_set_reg(vcpu, ARM64_CORE_REG(sp_el1), sp);
+	vcpu_set_reg(vcpu, ARM64_CORE_REG(regs.pc), (uint64_t)guest_code);
+
+	run_vcpu(vcpu, pmcr_n);
+
+	close(gic_fd);
+	kvm_vm_free(vm);
+}
+
+/*
+ * Create a guest with one vCPU, and attempt to set the PMCR_EL0.N for
+ * the vCPU to @pmcr_n, which is larger than the host value.
+ * The attempt should fail as @pmcr_n is too big to set for the vCPU.
+ */
+static void run_error_test(uint64_t pmcr_n)
+{
+	struct kvm_vm *vm;
+	struct kvm_vcpu *vcpu;
+	int gic_fd, ret;
+	uint64_t pmcr, pmcr_orig;
+
+	pr_debug("Error test with pmcr_n %lu (larger than the host)\n", pmcr_n);
+	vm = create_vpmu_vm(guest_code, &vcpu, &gic_fd);
+
+	/* Update the PMCR_EL0.N with @pmcr_n */
+	vcpu_get_reg(vcpu, KVM_ARM64_SYS_REG(SYS_PMCR_EL0), &pmcr_orig);
+	pmcr = pmcr_orig & ~ARMV8_PMU_PMCR_N;
+	pmcr |= (pmcr_n << ARMV8_PMU_PMCR_N_SHIFT);
+
+	/* This should fail as @pmcr_n is too big to set for the vCPU */
+	ret = __vcpu_set_reg(vcpu, KVM_ARM64_SYS_REG(SYS_PMCR_EL0), pmcr);
+	TEST_ASSERT(ret, "Setting PMCR to 0x%lx (orig PMCR 0x%lx) didn't fail",
+		    pmcr, pmcr_orig);
+
+	close(gic_fd);
+	kvm_vm_free(vm);
+}
+
+/*
+ * Return the default number of implemented PMU event counters excluding
+ * the cycle counter (i.e. PMCR_EL0.N value) for the guest.
+ */
+static uint64_t get_pmcr_n_limit(void)
+{
+	struct kvm_vm *vm;
+	struct kvm_vcpu *vcpu;
+	int gic_fd;
+	uint64_t pmcr;
+
+	vm = create_vpmu_vm(guest_code, &vcpu, &gic_fd);
+	vcpu_get_reg(vcpu, KVM_ARM64_SYS_REG(SYS_PMCR_EL0), &pmcr);
+	close(gic_fd);
+	kvm_vm_free(vm);
+	return FIELD_GET(ARMV8_PMU_PMCR_N, pmcr);
+}
+
+int main(void)
+{
+	uint64_t i, pmcr_n;
+
+	TEST_REQUIRE(kvm_has_cap(KVM_CAP_ARM_PMU_V3));
+
+	pmcr_n = get_pmcr_n_limit();
+	for (i = 0; i <= pmcr_n; i++)
+		run_test(i);
+
+	for (i = pmcr_n + 1; i < ARMV8_PMU_MAX_COUNTERS; i++)
+		run_error_test(i);
+
+	return 0;
+}
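
A note on the error path above: run_error_test() relies on the selftest's
__vcpu_set_reg() returning the raw result of the underlying ioctl rather
than asserting success. As a sketch of what that wrapper boils down to,
KVM_SET_ONE_REG and struct kvm_one_reg below are the real KVM userspace
ABI, while the helper name set_one_reg() is ours, for illustration only:

#include <stdint.h>
#include <sys/ioctl.h>
#include <linux/kvm.h>

/*
 * Sketch: set one vCPU register via the raw KVM ABI. run_error_test()
 * expects the equivalent call to return non-zero (failure) when the
 * PMCR_EL0.N value exceeds the host's number of counters.
 */
static int set_one_reg(int vcpu_fd, uint64_t reg_id, uint64_t val)
{
	struct kvm_one_reg reg = {
		.id = reg_id,	/* e.g. KVM_ARM64_SYS_REG(SYS_PMCR_EL0) */
		.addr = (uint64_t)&val,
	};

	return ioctl(vcpu_fd, KVM_SET_ONE_REG, &reg);
}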
From patchwork Wed Feb 15 01:07:04 2023
X-Patchwork-Submitter: Raghavendra Rao Ananta
X-Patchwork-Id: 13141138
Date: Wed, 15 Feb 2023 01:07:04 +0000
In-Reply-To: <20230215010717.3612794-1-rananta@google.com>
References: <20230215010717.3612794-1-rananta@google.com>
Message-ID: <20230215010717.3612794-4-rananta@google.com>
Subject: [REPOST PATCH 03/16] KVM: selftests: aarch64: vPMU register test for implemented counters
From: Raghavendra Rao Ananta
To: Oliver Upton, Reiji Watanabe, Marc Zyngier, Ricardo Koller,
    James Morse, Suzuki K Poulose
Cc: Paolo Bonzini, Jing Zhang, Colton Lewis, Raghavendra Rao Anata,
    linux-arm-kernel@lists.infradead.org, kvmarm@lists.linux.dev,
    linux-kernel@vger.kernel.org, kvm@vger.kernel.org

From: Reiji Watanabe

Add a new test case to the vpmu_counter_access test to check if PMU
registers or their bits for implemented counters on the vCPU are
readable/writable as expected, and can be programmed to count events.

Signed-off-by: Reiji Watanabe
Signed-off-by: Raghavendra Rao Ananta
---
 .../kvm/aarch64/vpmu_counter_access.c         | 350 +++++++++++++++++-
 1 file changed, 347 insertions(+), 3 deletions(-)

diff --git a/tools/testing/selftests/kvm/aarch64/vpmu_counter_access.c b/tools/testing/selftests/kvm/aarch64/vpmu_counter_access.c
index 7a4333f64daef..b6593eee2be3d 100644
--- a/tools/testing/selftests/kvm/aarch64/vpmu_counter_access.c
+++ b/tools/testing/selftests/kvm/aarch64/vpmu_counter_access.c
@@ -5,7 +5,8 @@
  * Copyright (c) 2022 Google LLC.
  *
  * This test checks if the guest can see the same number of the PMU event
- * counters (PMCR_EL0.N) that userspace sets.
+ * counters (PMCR_EL0.N) that userspace sets, and if the guest can access
+ * those counters.
  * This test runs only when KVM_CAP_ARM_PMU_V3 is supported on the host.
  */
 #include
@@ -18,14 +19,348 @@
 /* The max number of the PMU event counters (excluding the cycle counter) */
 #define ARMV8_PMU_MAX_GENERAL_COUNTERS (ARMV8_PMU_MAX_COUNTERS - 1)
 
+/*
+ * The macros and functions below for reading/writing PMEVT{CNTR,TYPER}_EL0
+ * were basically copied from arch/arm64/kernel/perf_event.c.
+ */
+#define PMEVN_CASE(n, case_macro) \
+	case n: case_macro(n); break
+
+#define PMEVN_SWITCH(x, case_macro)		\
+	do {					\
+		switch (x) {			\
+		PMEVN_CASE(0, case_macro);	\
+		PMEVN_CASE(1, case_macro);	\
+		PMEVN_CASE(2, case_macro);	\
+		PMEVN_CASE(3, case_macro);	\
+		PMEVN_CASE(4, case_macro);	\
+		PMEVN_CASE(5, case_macro);	\
+		PMEVN_CASE(6, case_macro);	\
+		PMEVN_CASE(7, case_macro);	\
+		PMEVN_CASE(8, case_macro);	\
+		PMEVN_CASE(9, case_macro);	\
+		PMEVN_CASE(10, case_macro);	\
+		PMEVN_CASE(11, case_macro);	\
+		PMEVN_CASE(12, case_macro);	\
+		PMEVN_CASE(13, case_macro);	\
+		PMEVN_CASE(14, case_macro);	\
+		PMEVN_CASE(15, case_macro);	\
+		PMEVN_CASE(16, case_macro);	\
+		PMEVN_CASE(17, case_macro);	\
+		PMEVN_CASE(18, case_macro);	\
+		PMEVN_CASE(19, case_macro);	\
+		PMEVN_CASE(20, case_macro);	\
+		PMEVN_CASE(21, case_macro);	\
+		PMEVN_CASE(22, case_macro);	\
+		PMEVN_CASE(23, case_macro);	\
+		PMEVN_CASE(24, case_macro);	\
+		PMEVN_CASE(25, case_macro);	\
+		PMEVN_CASE(26, case_macro);	\
+		PMEVN_CASE(27, case_macro);	\
+		PMEVN_CASE(28, case_macro);	\
+		PMEVN_CASE(29, case_macro);	\
+		PMEVN_CASE(30, case_macro);	\
+		default:			\
+			GUEST_ASSERT_1(0, x);	\
+		}				\
+	} while (0)
+
+#define RETURN_READ_PMEVCNTRN(n) \
+	return read_sysreg(pmevcntr##n##_el0)
+static unsigned long read_pmevcntrn(int n)
+{
+	PMEVN_SWITCH(n, RETURN_READ_PMEVCNTRN);
+	return 0;
+}
+
+#define WRITE_PMEVCNTRN(n) \
+	write_sysreg(val, pmevcntr##n##_el0)
+static void write_pmevcntrn(int n, unsigned long val)
+{
+	PMEVN_SWITCH(n, WRITE_PMEVCNTRN);
+	isb();
+}
+
+#define READ_PMEVTYPERN(n) \
+	return read_sysreg(pmevtyper##n##_el0)
+static unsigned long read_pmevtypern(int n)
+{
+	PMEVN_SWITCH(n, READ_PMEVTYPERN);
+	return 0;
+}
+
+#define WRITE_PMEVTYPERN(n) \
+	write_sysreg(val, pmevtyper##n##_el0)
+static void write_pmevtypern(int n, unsigned long val)
+{
+	PMEVN_SWITCH(n, WRITE_PMEVTYPERN);
+	isb();
+}
+
+/* Read PMEVTCNTR_EL0 through PMXEVCNTR_EL0 */
+static inline unsigned long read_sel_evcntr(int sel)
+{
+	write_sysreg(sel, pmselr_el0);
+	isb();
+	return read_sysreg(pmxevcntr_el0);
+}
+
+/* Write PMEVTCNTR_EL0 through PMXEVCNTR_EL0 */
+static inline void write_sel_evcntr(int sel, unsigned long val)
+{
+	write_sysreg(sel, pmselr_el0);
+	isb();
+	write_sysreg(val, pmxevcntr_el0);
+	isb();
+}
+
+/* Read PMEVTYPER_EL0 through PMXEVTYPER_EL0 */
+static inline unsigned long read_sel_evtyper(int sel)
+{
+	write_sysreg(sel, pmselr_el0);
+	isb();
+	return read_sysreg(pmxevtyper_el0);
+}
+
+/* Write PMEVTYPER_EL0 through PMXEVTYPER_EL0 */
+static inline void write_sel_evtyper(int sel, unsigned long val)
+{
+	write_sysreg(sel, pmselr_el0);
+	isb();
+	write_sysreg(val, pmxevtyper_el0);
+	isb();
+}
+
+static inline void enable_counter(int idx)
+{
+	uint64_t v = read_sysreg(pmcntenset_el0);
+
+	write_sysreg(BIT(idx) | v, pmcntenset_el0);
+	isb();
+}
+
+static inline void disable_counter(int idx)
+{
+	uint64_t v = read_sysreg(pmcntenset_el0);
+
+	write_sysreg(BIT(idx) | v, pmcntenclr_el0);
+	isb();
+}
+
+/*
+ * The pmc_accessor structure has pointers to PMEVT{CNTR,TYPER}_EL0
+ * accessors that test cases will use. Each of the accessors will
+ * either directly reads/writes PMEVT{CNTR,TYPER}_EL0
+ * (i.e. {read,write}_pmev{cnt,type}rn()), or reads/writes them through
+ * PMXEV{CNTR,TYPER}_EL0 (i.e. {read,write}_sel_ev{cnt,type}r()).
+ *
+ * This is used to test that combinations of those accessors provide
+ * the consistent behavior.
+ */
+struct pmc_accessor {
+	/* A function to be used to read PMEVTCNTR_EL0 */
+	unsigned long	(*read_cntr)(int idx);
+	/* A function to be used to write PMEVTCNTR_EL0 */
+	void		(*write_cntr)(int idx, unsigned long val);
+	/* A function to be used to read PMEVTYPER_EL0 */
+	unsigned long	(*read_typer)(int idx);
+	/* A function to be used to write PMEVTYPER_EL0 */
+	void		(*write_typer)(int idx, unsigned long val);
+};
+
+struct pmc_accessor pmc_accessors[] = {
+	/* test with all direct accesses */
+	{ read_pmevcntrn, write_pmevcntrn, read_pmevtypern, write_pmevtypern },
+	/* test with all indirect accesses */
+	{ read_sel_evcntr, write_sel_evcntr, read_sel_evtyper, write_sel_evtyper },
+	/* read with direct accesses, and write with indirect accesses */
+	{ read_pmevcntrn, write_sel_evcntr, read_pmevtypern, write_sel_evtyper },
+	/* read with indirect accesses, and write with direct accesses */
+	{ read_sel_evcntr, write_pmevcntrn, read_sel_evtyper, write_pmevtypern },
+};
+
+static void pmu_disable_reset(void)
+{
+	uint64_t pmcr = read_sysreg(pmcr_el0);
+
+	/* Reset all counters, disabling them */
+	pmcr &= ~ARMV8_PMU_PMCR_E;
+	write_sysreg(pmcr | ARMV8_PMU_PMCR_P, pmcr_el0);
+	isb();
+}
+
+static void pmu_enable(void)
+{
+	uint64_t pmcr = read_sysreg(pmcr_el0);
+
+	/* Reset all counters, and enable the PMU */
+	pmcr |= ARMV8_PMU_PMCR_E;
+	write_sysreg(pmcr | ARMV8_PMU_PMCR_P, pmcr_el0);
+	isb();
+}
+
+static bool pmu_event_is_supported(uint64_t event)
+{
+	GUEST_ASSERT_1(event < 64, event);
+	return (read_sysreg(pmceid0_el0) & BIT(event));
+}
+
+#define GUEST_ASSERT_BITMAP_REG(regname, mask, set_expected)		   \
+{									   \
+	uint64_t _tval = read_sysreg(regname);				   \
+									   \
+	if (set_expected)						   \
+		GUEST_ASSERT_3((_tval & mask), _tval, mask, set_expected); \
+	else								   \
+		GUEST_ASSERT_3(!(_tval & mask), _tval, mask, set_expected);\
+}
+
+/*
+ * Check if @mask bits in {PMCNTEN,PMINTEN,PMOVS}{SET,CLR} registers
+ * are set or cleared as specified in @set_expected.
+ */
+static void check_bitmap_pmu_regs(uint64_t mask, bool set_expected)
+{
+	GUEST_ASSERT_BITMAP_REG(pmcntenset_el0, mask, set_expected);
+	GUEST_ASSERT_BITMAP_REG(pmcntenclr_el0, mask, set_expected);
+	GUEST_ASSERT_BITMAP_REG(pmintenset_el1, mask, set_expected);
+	GUEST_ASSERT_BITMAP_REG(pmintenclr_el1, mask, set_expected);
+	GUEST_ASSERT_BITMAP_REG(pmovsset_el0, mask, set_expected);
+	GUEST_ASSERT_BITMAP_REG(pmovsclr_el0, mask, set_expected);
+}
+
+/*
+ * Check if the bit in {PMCNTEN,PMINTEN,PMOVS}{SET,CLR} registers corresponding
+ * to the specified counter (@pmc_idx) can be read/written as expected.
+ * When @set_op is true, it tries to set the bit for the counter in
+ * those registers by writing the SET registers (the bit won't be set
+ * if the counter is not implemented though).
+ * Otherwise, it tries to clear the bits in the registers by writing
+ * the CLR registers.
+ * Then, it checks if the values indicated in the registers are as expected.
+ */
+static void test_bitmap_pmu_regs(int pmc_idx, bool set_op)
+{
+	uint64_t pmcr_n, test_bit = BIT(pmc_idx);
+	bool set_expected = false;
+
+	if (set_op) {
+		write_sysreg(test_bit, pmcntenset_el0);
+		write_sysreg(test_bit, pmintenset_el1);
+		write_sysreg(test_bit, pmovsset_el0);
+
+		/* The bit will be set only if the counter is implemented */
+		pmcr_n = FIELD_GET(ARMV8_PMU_PMCR_N, read_sysreg(pmcr_el0));
+		set_expected = (pmc_idx < pmcr_n) ? true : false;
+	} else {
+		write_sysreg(test_bit, pmcntenclr_el0);
+		write_sysreg(test_bit, pmintenclr_el1);
+		write_sysreg(test_bit, pmovsclr_el0);
+	}
+	check_bitmap_pmu_regs(test_bit, set_expected);
+}
+
+/*
+ * Tests for reading/writing registers for the (implemented) event counter
+ * specified by @pmc_idx.
+ */
+static void test_access_pmc_regs(struct pmc_accessor *acc, int pmc_idx)
+{
+	uint64_t write_data, read_data, read_data_prev;
+
+	/* Disable all PMCs and reset all PMCs to zero. */
+	pmu_disable_reset();
+
+
+	/*
+	 * Tests for reading/writing {PMCNTEN,PMINTEN,PMOVS}{SET,CLR}_EL1.
+	 */
+
+	/* Make sure that the bits in those registers are set to 0 */
+	test_bitmap_pmu_regs(pmc_idx, false);
+	/* Test if setting the bit in those registers works */
+	test_bitmap_pmu_regs(pmc_idx, true);
+	/* Test if clearing the bit in those registers works */
+	test_bitmap_pmu_regs(pmc_idx, false);
+
+
+	/*
+	 * Tests for reading/writing the event type register.
+	 */
+
+	read_data = acc->read_typer(pmc_idx);
+	/*
+	 * Set the event type register to an arbitrary value just for testing
+	 * of reading/writing the register.
+	 * The Arm ARM says that for events 0x0000 to 0x003F,
+	 * the value indicated in the PMEVTYPER_EL0.evtCount field is
+	 * the value written to the field even when the specified event
+	 * is not supported.
+	 */
+	write_data = (ARMV8_PMU_EXCLUDE_EL1 | ARMV8_PMUV3_PERFCTR_INST_RETIRED);
+	acc->write_typer(pmc_idx, write_data);
+	read_data = acc->read_typer(pmc_idx);
+	GUEST_ASSERT_4(read_data == write_data,
+		       pmc_idx, acc, read_data, write_data);
+
+
+	/*
+	 * Tests for reading/writing the event count register.
+	 */
+
+	read_data = acc->read_cntr(pmc_idx);
+
+	/* The count value must be 0, as it is not used after the reset */
+	GUEST_ASSERT_3(read_data == 0, pmc_idx, acc, read_data);
+
+	write_data = read_data + pmc_idx + 0x12345;
+	acc->write_cntr(pmc_idx, write_data);
+	read_data = acc->read_cntr(pmc_idx);
+	GUEST_ASSERT_4(read_data == write_data,
+		       pmc_idx, acc, read_data, write_data);
+
+
+	/* The following test requires the INST_RETIRED event support. */
+	if (!pmu_event_is_supported(ARMV8_PMUV3_PERFCTR_INST_RETIRED))
+		return;
+
+	pmu_enable();
+	acc->write_typer(pmc_idx, ARMV8_PMUV3_PERFCTR_INST_RETIRED);
+
+	/*
+	 * Make sure that the counter doesn't count the INST_RETIRED
+	 * event when disabled, and the counter counts the event when enabled.
+	 */
+	disable_counter(pmc_idx);
+	read_data_prev = acc->read_cntr(pmc_idx);
+	read_data = acc->read_cntr(pmc_idx);
+	GUEST_ASSERT_4(read_data == read_data_prev,
+		       pmc_idx, acc, read_data, read_data_prev);
+
+	enable_counter(pmc_idx);
+	read_data = acc->read_cntr(pmc_idx);
+
+	/*
+	 * The counter should be increased by at least 1, as there is at
+	 * least one instruction between enabling the counter and reading
+	 * the counter (the test assumes that all event counters are not
+	 * being used by the host's higher priority events).
+	 */
+	GUEST_ASSERT_4(read_data > read_data_prev,
+		       pmc_idx, acc, read_data, read_data_prev);
+}
+
 /*
  * The guest is configured with PMUv3 with @expected_pmcr_n number of
  * event counters.
- * Check if @expected_pmcr_n is consistent with PMCR_EL0.N.
+ * Check if @expected_pmcr_n is consistent with PMCR_EL0.N, and
+ * if reading/writing PMU registers for implemented counters can work
+ * as expected.
  */
 static void guest_code(uint64_t expected_pmcr_n)
 {
 	uint64_t pmcr, pmcr_n;
+	int i, pmc;
 
 	GUEST_ASSERT(expected_pmcr_n <= ARMV8_PMU_MAX_GENERAL_COUNTERS);
 
@@ -35,6 +370,15 @@ static void guest_code(uint64_t expected_pmcr_n)
 	/* Make sure that PMCR_EL0.N indicates the value userspace set */
 	GUEST_ASSERT_2(pmcr_n == expected_pmcr_n, pmcr_n, expected_pmcr_n);
 
+	/*
+	 * Tests for reading/writing PMU registers for implemented counters.
+	 * Use each combination of PMEVT{CNTR,TYPER}_EL0 accessor functions.
+	 */
+	for (i = 0; i < ARRAY_SIZE(pmc_accessors); i++) {
+		for (pmc = 0; pmc < pmcr_n; pmc++)
+			test_access_pmc_regs(&pmc_accessors[i], pmc);
+	}
+
 	GUEST_DONE();
 }
 
@@ -91,7 +435,7 @@ static void run_vcpu(struct kvm_vcpu *vcpu, uint64_t pmcr_n)
 	vcpu_run(vcpu);
 	switch (get_ucall(vcpu, &uc)) {
 	case UCALL_ABORT:
-		REPORT_GUEST_ASSERT_2(uc, "values:%#lx %#lx");
+		REPORT_GUEST_ASSERT_4(uc, "values:%#lx %#lx %#lx %#lx");
 		break;
 	case UCALL_DONE:
 		break;
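
A note on the PMEVN_SWITCH construct added above: the mrs/msr instructions
encode the system register name as an immediate, so a runtime counter index
cannot be passed to read_sysreg()/write_sysreg() directly; the switch
expands the index into 31 cases that each name a pmevcntr<n>_el0 or
pmevtyper<n>_el0 register literally. The following self-contained demo
shows the same token-pasting pattern, with plain variables standing in for
the system registers so it runs anywhere:

#include <stdio.h>

static unsigned long counter0, counter1, counter2;

/* Token pasting turns a literal case index into a literal identifier */
#define READ_COUNTER(n)	counter##n
#define CASE_READ(n)	case n: return READ_COUNTER(n)

static unsigned long read_counter(int n)
{
	/* The real code generates cases 0..30 via PMEVN_CASE/PMEVN_SWITCH */
	switch (n) {
	CASE_READ(0);
	CASE_READ(1);
	CASE_READ(2);
	default:
		return 0;
	}
}

int main(void)
{
	counter1 = 42;
	printf("%lu\n", read_counter(1));	/* prints 42 */
	return 0;
}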
From patchwork Wed Feb 15 01:07:05 2023
X-Patchwork-Submitter: Raghavendra Rao Ananta
X-Patchwork-Id: 13141108
Date: Wed, 15 Feb 2023 01:07:05 +0000
In-Reply-To: <20230215010717.3612794-1-rananta@google.com>
References: <20230215010717.3612794-1-rananta@google.com>
Message-ID: <20230215010717.3612794-5-rananta@google.com>
Subject: [REPOST PATCH 04/16] KVM: selftests: aarch64: vPMU register test for unimplemented counters
From: Raghavendra Rao Ananta
To: Oliver Upton, Reiji Watanabe, Marc Zyngier, Ricardo Koller,
    James Morse, Suzuki K Poulose
Cc: Paolo Bonzini, Jing Zhang, Colton Lewis, Raghavendra Rao Anata,
    linux-arm-kernel@lists.infradead.org, kvmarm@lists.linux.dev,
    linux-kernel@vger.kernel.org, kvm@vger.kernel.org

From: Reiji Watanabe

Add a new test case to the vpmu_counter_access test to check if PMU
registers or their bits for unimplemented counters are not accessible
or are RAZ, as expected.

Signed-off-by: Reiji Watanabe
Signed-off-by: Raghavendra Rao Ananta
---
 .../kvm/aarch64/vpmu_counter_access.c         | 111 ++++++++++++++++--
 .../selftests/kvm/include/aarch64/processor.h |   1 +
 2 files changed, 102 insertions(+), 10 deletions(-)

diff --git a/tools/testing/selftests/kvm/aarch64/vpmu_counter_access.c b/tools/testing/selftests/kvm/aarch64/vpmu_counter_access.c
index b6593eee2be3d..453f0dd240f44 100644
--- a/tools/testing/selftests/kvm/aarch64/vpmu_counter_access.c
+++ b/tools/testing/selftests/kvm/aarch64/vpmu_counter_access.c
@@ -5,8 +5,8 @@
  * Copyright (c) 2022 Google LLC.
  *
  * This test checks if the guest can see the same number of the PMU event
- * counters (PMCR_EL0.N) that userspace sets, and if the guest can access
- * those counters.
+ * counters (PMCR_EL0.N) that userspace sets, if the guest can access
+ * those counters, and if the guest cannot access any other counters.
  * This test runs only when KVM_CAP_ARM_PMU_V3 is supported on the host.
  */
 #include
@@ -20,7 +20,7 @@
 #define ARMV8_PMU_MAX_GENERAL_COUNTERS (ARMV8_PMU_MAX_COUNTERS - 1)
 
 /*
- * The macros and functions below for reading/writing PMEVT{CNTR,TYPER}_EL0
+ * The macros and functions below for reading/writing PMEV{CNTR,TYPER}_EL0
  * were basically copied from arch/arm64/kernel/perf_event.c.
  */
 #define PMEVN_CASE(n, case_macro) \
@@ -148,9 +148,9 @@ static inline void disable_counter(int idx)
 }
 
 /*
- * The pmc_accessor structure has pointers to PMEVT{CNTR,TYPER}_EL0
+ * The pmc_accessor structure has pointers to PMEV{CNTR,TYPER}_EL0
  * accessors that test cases will use. Each of the accessors will
- * either directly reads/writes PMEVT{CNTR,TYPER}_EL0
+ * either directly reads/writes PMEV{CNTR,TYPER}_EL0
  * (i.e. {read,write}_pmev{cnt,type}rn()), or reads/writes them through
  * PMXEV{CNTR,TYPER}_EL0 (i.e. {read,write}_sel_ev{cnt,type}r()).
  *
@@ -179,6 +179,51 @@ struct pmc_accessor pmc_accessors[] = {
 	{ read_sel_evcntr, write_pmevcntrn, read_sel_evtyper, write_pmevtypern },
 };
 
+#define INVALID_EC	(-1ul)
+uint64_t expected_ec = INVALID_EC;
+uint64_t op_end_addr;
+
+static void guest_sync_handler(struct ex_regs *regs)
+{
+	uint64_t esr, ec;
+
+	esr = read_sysreg(esr_el1);
+	ec = (esr >> ESR_EC_SHIFT) & ESR_EC_MASK;
+	GUEST_ASSERT_4(op_end_addr && (expected_ec == ec),
+		       regs->pc, esr, ec, expected_ec);
+
+	/* Will go back to op_end_addr after the handler exits */
+	regs->pc = op_end_addr;
+
+	/*
+	 * Clear op_end_addr, and set expected_ec to INVALID_EC
+	 * as a sign that an exception has occurred.
+	 */
+	op_end_addr = 0;
+	expected_ec = INVALID_EC;
+}
+
+/*
+ * Run the given operation that should trigger an exception with the
+ * given exception class. The exception handler (guest_sync_handler)
+ * will reset op_end_addr to 0, and expected_ec to INVALID_EC, and
+ * will come back to the instruction at the @done_label.
+ * The @done_label must be a unique label in this test program.
+ */
+#define TEST_EXCEPTION(ec, ops, done_label)		\
+{							\
+	extern int done_label;				\
+							\
+	WRITE_ONCE(op_end_addr, (uint64_t)&done_label);	\
+	GUEST_ASSERT(ec != INVALID_EC);			\
+	WRITE_ONCE(expected_ec, ec);			\
+	dsb(ish);					\
+	ops;						\
+	asm volatile(#done_label":");			\
+	GUEST_ASSERT(!op_end_addr);			\
+	GUEST_ASSERT(expected_ec == INVALID_EC);	\
+}
+
 static void pmu_disable_reset(void)
 {
 	uint64_t pmcr = read_sysreg(pmcr_el0);
@@ -350,16 +395,38 @@ static void test_access_pmc_regs(struct pmc_accessor *acc, int pmc_idx)
 		       pmc_idx, acc, read_data, read_data_prev);
 }
 
+/*
+ * Tests for reading/writing registers for the unimplemented event counter
+ * specified by @pmc_idx (>= PMCR_EL0.N).
+ */
+static void test_access_invalid_pmc_regs(struct pmc_accessor *acc, int pmc_idx)
+{
+	/*
+	 * Reading/writing the event count/type registers should cause
+	 * an UNDEFINED exception.
+	 */
+	TEST_EXCEPTION(ESR_EC_UNKNOWN, acc->read_cntr(pmc_idx), inv_rd_cntr);
+	TEST_EXCEPTION(ESR_EC_UNKNOWN, acc->write_cntr(pmc_idx, 0), inv_wr_cntr);
+	TEST_EXCEPTION(ESR_EC_UNKNOWN, acc->read_typer(pmc_idx), inv_rd_typer);
+	TEST_EXCEPTION(ESR_EC_UNKNOWN, acc->write_typer(pmc_idx, 0), inv_wr_typer);
+	/*
+	 * The bit corresponding to the (unimplemented) counter in
+	 * {PMCNTEN,PMOVS}{SET,CLR}_EL1 registers should be RAZ.
+	 */
+	test_bitmap_pmu_regs(pmc_idx, 1);
+	test_bitmap_pmu_regs(pmc_idx, 0);
+}
+
 /*
  * The guest is configured with PMUv3 with @expected_pmcr_n number of
  * event counters.
  * Check if @expected_pmcr_n is consistent with PMCR_EL0.N, and
- * if reading/writing PMU registers for implemented counters can work
- * as expected.
+ * if reading/writing PMU registers for implemented or unimplemented
+ * counters can work as expected.
  */
 static void guest_code(uint64_t expected_pmcr_n)
 {
-	uint64_t pmcr, pmcr_n;
+	uint64_t pmcr, pmcr_n, unimp_mask;
 	int i, pmc;
 
 	GUEST_ASSERT(expected_pmcr_n <= ARMV8_PMU_MAX_GENERAL_COUNTERS);
@@ -370,15 +437,31 @@ static void guest_code(uint64_t expected_pmcr_n)
 	/* Make sure that PMCR_EL0.N indicates the value userspace set */
 	GUEST_ASSERT_2(pmcr_n == expected_pmcr_n, pmcr_n, expected_pmcr_n);
 
+	/*
+	 * Make sure that (RAZ) bits corresponding to unimplemented event
+	 * counters in {PMCNTEN,PMOVS}{SET,CLR}_EL1 registers are reset to zero.
+	 * (NOTE: bits for implemented event counters are reset to UNKNOWN)
+	 */
+	unimp_mask = GENMASK_ULL(ARMV8_PMU_MAX_GENERAL_COUNTERS - 1, pmcr_n);
+	check_bitmap_pmu_regs(unimp_mask, false);
+
 	/*
 	 * Tests for reading/writing PMU registers for implemented counters.
-	 * Use each combination of PMEVT{CNTR,TYPER}_EL0 accessor functions.
+	 * Use each combination of PMEV{CNTR,TYPER}_EL0 accessor functions.
 	 */
 	for (i = 0; i < ARRAY_SIZE(pmc_accessors); i++) {
 		for (pmc = 0; pmc < pmcr_n; pmc++)
 			test_access_pmc_regs(&pmc_accessors[i], pmc);
 	}
 
+	/*
+	 * Tests for reading/writing PMU registers for unimplemented counters.
+	 * Use each combination of PMEV{CNTR,TYPER}_EL0 accessor functions.
+ */
+	for (i = 0; i < ARRAY_SIZE(pmc_accessors); i++) {
+		for (pmc = pmcr_n; pmc < ARMV8_PMU_MAX_GENERAL_COUNTERS; pmc++)
+			test_access_invalid_pmc_regs(&pmc_accessors[i], pmc);
+	}
 
 	GUEST_DONE();
 }
@@ -392,7 +475,7 @@ static struct kvm_vm *create_vpmu_vm(void *guest_code, struct kvm_vcpu **vcpup,
 	struct kvm_vm *vm;
 	struct kvm_vcpu *vcpu;
 	struct kvm_vcpu_init init;
-	uint8_t pmuver;
+	uint8_t pmuver, ec;
 	uint64_t dfr0, irq = 23;
 	struct kvm_device_attr irq_attr = {
 		.group = KVM_ARM_VCPU_PMU_V3_CTRL,
@@ -405,11 +488,18 @@ static struct kvm_vm *create_vpmu_vm(void *guest_code, struct kvm_vcpu **vcpup,
 	};
 
 	vm = vm_create(1);
+	vm_init_descriptor_tables(vm);
+	/* Catch exceptions for easier debugging */
+	for (ec = 0; ec < ESR_EC_NUM; ec++) {
+		vm_install_sync_handler(vm, VECTOR_SYNC_CURRENT, ec,
+					guest_sync_handler);
+	}
 
 	/* Create vCPU with PMUv3 */
 	vm_ioctl(vm, KVM_ARM_PREFERRED_TARGET, &init);
 	init.features[0] |= (1 << KVM_ARM_VCPU_PMU_V3);
 	vcpu = aarch64_vcpu_add(vm, 0, &init, guest_code);
+	vcpu_init_descriptor_tables(vcpu);
 	*gic_fd = vgic_v3_setup(vm, 1, 64, GICD_BASE_GPA, GICR_BASE_GPA);
 
 	/* Make sure that PMUv3 support is indicated in the ID register */
@@ -478,6 +568,7 @@ static void run_test(uint64_t pmcr_n)
 	vm_ioctl(vm, KVM_ARM_PREFERRED_TARGET, &init);
 	init.features[0] |= (1 << KVM_ARM_VCPU_PMU_V3);
 	aarch64_vcpu_setup(vcpu, &init);
+	vcpu_init_descriptor_tables(vcpu);
 	vcpu_set_reg(vcpu, ARM64_CORE_REG(sp_el1), sp);
 	vcpu_set_reg(vcpu, ARM64_CORE_REG(regs.pc), (uint64_t)guest_code);
 
diff --git a/tools/testing/selftests/kvm/include/aarch64/processor.h b/tools/testing/selftests/kvm/include/aarch64/processor.h
index 5f977528e09c0..52d87809356c8 100644
--- a/tools/testing/selftests/kvm/include/aarch64/processor.h
+++ b/tools/testing/selftests/kvm/include/aarch64/processor.h
@@ -104,6 +104,7 @@ enum {
 #define ESR_EC_SHIFT		26
 #define ESR_EC_MASK		(ESR_EC_NUM - 1)
 
+#define ESR_EC_UNKNOWN		0x0
 #define ESR_EC_SVC64		0x15
 #define ESR_EC_IABT		0x21
 #define ESR_EC_DABT		0x25

From patchwork Wed Feb 15 01:07:06 2023
X-Patchwork-Submitter: Raghavendra Rao Ananta
X-Patchwork-Id: 13141134
Date: Wed, 15 Feb 2023 01:07:06 +0000
In-Reply-To: <20230215010717.3612794-1-rananta@google.com>
References: <20230215010717.3612794-1-rananta@google.com>
Message-ID: <20230215010717.3612794-6-rananta@google.com>
Subject: [REPOST PATCH 05/16] selftests: KVM: aarch64: Refactor the vPMU counter access tests
From: Raghavendra Rao Ananta
To: Oliver Upton, Reiji Watanabe, Marc Zyngier, Ricardo Koller, James Morse, Suzuki K Poulose
Cc: Paolo Bonzini, Jing Zhang, Colton Lewis, Raghavendra Rao Anata, linux-arm-kernel@lists.infradead.org, kvmarm@lists.linux.dev, linux-kernel@vger.kernel.org, kvm@vger.kernel.org

Refactor the existing counter access tests into their own independent functions and make running the tests generic, to make way for the upcoming tests. As a part of the refactoring, rename the test file to the more generic name vpmu_test.c.
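[Editor's note: a minimal sketch of the flow this refactor introduces, assuming only the helpers added in the diff below; the wrapper name run_one_stage is hypothetical. Each test stage publishes its parameters through the global guest_data, which run_vcpu() syncs into the guest before entry:]

	static void run_one_stage(uint64_t pmcr_n)
	{
		struct vpmu_vm *vpmu_vm;

		/* Publish the stage and its parameters to the guest */
		guest_data.test_stage = TEST_STAGE_COUNTER_ACCESS;
		guest_data.expected_pmcr_n = pmcr_n;

		vpmu_vm = create_vpmu_vm(guest_code);	/* VM + vCPU + vGIC */
		run_vcpu(vpmu_vm->vcpu);	/* syncs guest_data, runs until GUEST_DONE */
		destroy_vpmu_vm(vpmu_vm);	/* closes gic_fd and frees the VM */
	}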
No functional change intended. Signed-off-by: Raghavendra Rao Ananta --- tools/testing/selftests/kvm/Makefile | 2 +- .../{vpmu_counter_access.c => vpmu_test.c} | 142 ++++++++++++------ 2 files changed, 100 insertions(+), 44 deletions(-) rename tools/testing/selftests/kvm/aarch64/{vpmu_counter_access.c => vpmu_test.c} (88%) diff --git a/tools/testing/selftests/kvm/Makefile b/tools/testing/selftests/kvm/Makefile index b27fea0ce5918..a4d262e139b18 100644 --- a/tools/testing/selftests/kvm/Makefile +++ b/tools/testing/selftests/kvm/Makefile @@ -143,7 +143,7 @@ TEST_GEN_PROGS_aarch64 += aarch64/psci_test TEST_GEN_PROGS_aarch64 += aarch64/vcpu_width_config TEST_GEN_PROGS_aarch64 += aarch64/vgic_init TEST_GEN_PROGS_aarch64 += aarch64/vgic_irq -TEST_GEN_PROGS_aarch64 += aarch64/vpmu_counter_access +TEST_GEN_PROGS_aarch64 += aarch64/vpmu_test TEST_GEN_PROGS_aarch64 += access_tracking_perf_test TEST_GEN_PROGS_aarch64 += demand_paging_test TEST_GEN_PROGS_aarch64 += dirty_log_test diff --git a/tools/testing/selftests/kvm/aarch64/vpmu_counter_access.c b/tools/testing/selftests/kvm/aarch64/vpmu_test.c similarity index 88% rename from tools/testing/selftests/kvm/aarch64/vpmu_counter_access.c rename to tools/testing/selftests/kvm/aarch64/vpmu_test.c index 453f0dd240f44..d72c3c9b9c39f 100644 --- a/tools/testing/selftests/kvm/aarch64/vpmu_counter_access.c +++ b/tools/testing/selftests/kvm/aarch64/vpmu_test.c @@ -1,6 +1,6 @@ // SPDX-License-Identifier: GPL-2.0-only /* - * vpmu_counter_access - Test vPMU event counter access + * vpmu_test - Test the vPMU * * Copyright (c) 2022 Google LLC. * @@ -147,6 +147,11 @@ static inline void disable_counter(int idx) isb(); } +static inline uint64_t get_pmcr_n(void) +{ + return FIELD_GET(ARMV8_PMU_PMCR_N, read_sysreg(pmcr_el0)); +} + /* * The pmc_accessor structure has pointers to PMEV{CNTR,TYPER}_EL0 * accessors that test cases will use. Each of the accessors will @@ -183,6 +188,23 @@ struct pmc_accessor pmc_accessors[] = { uint64_t expected_ec = INVALID_EC; uint64_t op_end_addr; +struct vpmu_vm { + struct kvm_vm *vm; + struct kvm_vcpu *vcpu; + int gic_fd; +}; + +enum test_stage { + TEST_STAGE_COUNTER_ACCESS = 1, +}; + +struct guest_data { + enum test_stage test_stage; + uint64_t expected_pmcr_n; +}; + +static struct guest_data guest_data; + static void guest_sync_handler(struct ex_regs *regs) { uint64_t esr, ec; @@ -295,7 +317,7 @@ static void test_bitmap_pmu_regs(int pmc_idx, bool set_op) write_sysreg(test_bit, pmovsset_el0); /* The bit will be set only if the counter is implemented */ - pmcr_n = FIELD_GET(ARMV8_PMU_PMCR_N, read_sysreg(pmcr_el0)); + pmcr_n = get_pmcr_n(); set_expected = (pmc_idx < pmcr_n) ? true : false; } else { write_sysreg(test_bit, pmcntenclr_el0); @@ -424,15 +446,14 @@ static void test_access_invalid_pmc_regs(struct pmc_accessor *acc, int pmc_idx) * if reading/writing PMU registers for implemented or unimplemented * counters can work as expected. 
*/ -static void guest_code(uint64_t expected_pmcr_n) +static void guest_counter_access_test(uint64_t expected_pmcr_n) { - uint64_t pmcr, pmcr_n, unimp_mask; + uint64_t pmcr_n, unimp_mask; int i, pmc; GUEST_ASSERT(expected_pmcr_n <= ARMV8_PMU_MAX_GENERAL_COUNTERS); - pmcr = read_sysreg(pmcr_el0); - pmcr_n = FIELD_GET(ARMV8_PMU_PMCR_N, pmcr); + pmcr_n = get_pmcr_n(); /* Make sure that PMCR_EL0.N indicates the value userspace set */ GUEST_ASSERT_2(pmcr_n == expected_pmcr_n, pmcr_n, expected_pmcr_n); @@ -462,6 +483,18 @@ static void guest_code(uint64_t expected_pmcr_n) for (pmc = pmcr_n; pmc < ARMV8_PMU_MAX_GENERAL_COUNTERS; pmc++) test_access_invalid_pmc_regs(&pmc_accessors[i], pmc); } +} + +static void guest_code(void) +{ + switch (guest_data.test_stage) { + case TEST_STAGE_COUNTER_ACCESS: + guest_counter_access_test(guest_data.expected_pmcr_n); + break; + default: + GUEST_ASSERT_1(0, guest_data.test_stage); + } + GUEST_DONE(); } @@ -469,14 +502,14 @@ static void guest_code(uint64_t expected_pmcr_n) #define GICR_BASE_GPA 0x80A0000ULL /* Create a VM that has one vCPU with PMUv3 configured. */ -static struct kvm_vm *create_vpmu_vm(void *guest_code, struct kvm_vcpu **vcpup, - int *gic_fd) +static struct vpmu_vm *create_vpmu_vm(void *guest_code) { struct kvm_vm *vm; struct kvm_vcpu *vcpu; struct kvm_vcpu_init init; uint8_t pmuver, ec; uint64_t dfr0, irq = 23; + struct vpmu_vm *vpmu_vm; struct kvm_device_attr irq_attr = { .group = KVM_ARM_VCPU_PMU_V3_CTRL, .attr = KVM_ARM_VCPU_PMU_V3_IRQ, @@ -487,7 +520,10 @@ static struct kvm_vm *create_vpmu_vm(void *guest_code, struct kvm_vcpu **vcpup, .attr = KVM_ARM_VCPU_PMU_V3_INIT, }; - vm = vm_create(1); + vpmu_vm = calloc(1, sizeof(*vpmu_vm)); + TEST_ASSERT(vpmu_vm, "Failed to allocate vpmu_vm"); + + vpmu_vm->vm = vm = vm_create(1); vm_init_descriptor_tables(vm); /* Catch exceptions for easier debugging */ for (ec = 0; ec < ESR_EC_NUM; ec++) { @@ -498,9 +534,9 @@ static struct kvm_vm *create_vpmu_vm(void *guest_code, struct kvm_vcpu **vcpup, /* Create vCPU with PMUv3 */ vm_ioctl(vm, KVM_ARM_PREFERRED_TARGET, &init); init.features[0] |= (1 << KVM_ARM_VCPU_PMU_V3); - vcpu = aarch64_vcpu_add(vm, 0, &init, guest_code); + vpmu_vm->vcpu = vcpu = aarch64_vcpu_add(vm, 0, &init, guest_code); vcpu_init_descriptor_tables(vcpu); - *gic_fd = vgic_v3_setup(vm, 1, 64, GICD_BASE_GPA, GICR_BASE_GPA); + vpmu_vm->gic_fd = vgic_v3_setup(vm, 1, 64, GICD_BASE_GPA, GICR_BASE_GPA); /* Make sure that PMUv3 support is indicated in the ID register */ vcpu_get_reg(vcpu, KVM_ARM64_SYS_REG(SYS_ID_AA64DFR0_EL1), &dfr0); @@ -513,15 +549,21 @@ static struct kvm_vm *create_vpmu_vm(void *guest_code, struct kvm_vcpu **vcpup, vcpu_ioctl(vcpu, KVM_SET_DEVICE_ATTR, &irq_attr); vcpu_ioctl(vcpu, KVM_SET_DEVICE_ATTR, &init_attr); - *vcpup = vcpu; - return vm; + return vpmu_vm; +} + +static void destroy_vpmu_vm(struct vpmu_vm *vpmu_vm) +{ + close(vpmu_vm->gic_fd); + kvm_vm_free(vpmu_vm->vm); + free(vpmu_vm); } -static void run_vcpu(struct kvm_vcpu *vcpu, uint64_t pmcr_n) +static void run_vcpu(struct kvm_vcpu *vcpu) { struct ucall uc; - vcpu_args_set(vcpu, 1, pmcr_n); + sync_global_to_guest(vcpu->vm, guest_data); vcpu_run(vcpu); switch (get_ucall(vcpu, &uc)) { case UCALL_ABORT: @@ -539,16 +581,18 @@ static void run_vcpu(struct kvm_vcpu *vcpu, uint64_t pmcr_n) * Create a guest with one vCPU, set the PMCR_EL0.N for the vCPU to @pmcr_n, * and run the test. 
*/ -static void run_test(uint64_t pmcr_n) +static void run_counter_access_test(uint64_t pmcr_n) { - struct kvm_vm *vm; + struct vpmu_vm *vpmu_vm; struct kvm_vcpu *vcpu; - int gic_fd; uint64_t sp, pmcr, pmcr_orig; struct kvm_vcpu_init init; + guest_data.expected_pmcr_n = pmcr_n; + pr_debug("Test with pmcr_n %lu\n", pmcr_n); - vm = create_vpmu_vm(guest_code, &vcpu, &gic_fd); + vpmu_vm = create_vpmu_vm(guest_code); + vcpu = vpmu_vm->vcpu; /* Save the initial sp to restore them later to run the guest again */ vcpu_get_reg(vcpu, ARM64_CORE_REG(sp_el1), &sp); @@ -559,23 +603,22 @@ static void run_test(uint64_t pmcr_n) pmcr |= (pmcr_n << ARMV8_PMU_PMCR_N_SHIFT); vcpu_set_reg(vcpu, KVM_ARM64_SYS_REG(SYS_PMCR_EL0), pmcr); - run_vcpu(vcpu, pmcr_n); + run_vcpu(vcpu); /* * Reset and re-initialize the vCPU, and run the guest code again to * check if PMCR_EL0.N is preserved. */ - vm_ioctl(vm, KVM_ARM_PREFERRED_TARGET, &init); + vm_ioctl(vpmu_vm->vm, KVM_ARM_PREFERRED_TARGET, &init); init.features[0] |= (1 << KVM_ARM_VCPU_PMU_V3); aarch64_vcpu_setup(vcpu, &init); vcpu_init_descriptor_tables(vcpu); vcpu_set_reg(vcpu, ARM64_CORE_REG(sp_el1), sp); vcpu_set_reg(vcpu, ARM64_CORE_REG(regs.pc), (uint64_t)guest_code); - run_vcpu(vcpu, pmcr_n); + run_vcpu(vcpu); - close(gic_fd); - kvm_vm_free(vm); + destroy_vpmu_vm(vpmu_vm); } /* @@ -583,15 +626,18 @@ static void run_test(uint64_t pmcr_n) * the vCPU to @pmcr_n, which is larger than the host value. * The attempt should fail as @pmcr_n is too big to set for the vCPU. */ -static void run_error_test(uint64_t pmcr_n) +static void run_counter_access_error_test(uint64_t pmcr_n) { - struct kvm_vm *vm; + struct vpmu_vm *vpmu_vm; struct kvm_vcpu *vcpu; - int gic_fd, ret; + int ret; uint64_t pmcr, pmcr_orig; + guest_data.expected_pmcr_n = pmcr_n; + pr_debug("Error test with pmcr_n %lu (larger than the host)\n", pmcr_n); - vm = create_vpmu_vm(guest_code, &vcpu, &gic_fd); + vpmu_vm = create_vpmu_vm(guest_code); + vcpu = vpmu_vm->vcpu; /* Update the PMCR_EL0.N with @pmcr_n */ vcpu_get_reg(vcpu, KVM_ARM64_SYS_REG(SYS_PMCR_EL0), &pmcr_orig); @@ -603,8 +649,25 @@ static void run_error_test(uint64_t pmcr_n) TEST_ASSERT(ret, "Setting PMCR to 0x%lx (orig PMCR 0x%lx) didn't fail", pmcr, pmcr_orig); - close(gic_fd); - kvm_vm_free(vm); + destroy_vpmu_vm(vpmu_vm); +} + +static void run_counter_access_tests(uint64_t pmcr_n) +{ + uint64_t i; + + guest_data.test_stage = TEST_STAGE_COUNTER_ACCESS; + + for (i = 0; i <= pmcr_n; i++) + run_counter_access_test(i); + + for (i = pmcr_n + 1; i < ARMV8_PMU_MAX_COUNTERS; i++) + run_counter_access_error_test(i); +} + +static void run_tests(uint64_t pmcr_n) +{ + run_counter_access_tests(pmcr_n); } /* @@ -613,30 +676,23 @@ static void run_error_test(uint64_t pmcr_n) */ static uint64_t get_pmcr_n_limit(void) { - struct kvm_vm *vm; - struct kvm_vcpu *vcpu; - int gic_fd; + struct vpmu_vm *vpmu_vm; uint64_t pmcr; - vm = create_vpmu_vm(guest_code, &vcpu, &gic_fd); - vcpu_get_reg(vcpu, KVM_ARM64_SYS_REG(SYS_PMCR_EL0), &pmcr); - close(gic_fd); - kvm_vm_free(vm); + vpmu_vm = create_vpmu_vm(guest_code); + vcpu_get_reg(vpmu_vm->vcpu, KVM_ARM64_SYS_REG(SYS_PMCR_EL0), &pmcr); + destroy_vpmu_vm(vpmu_vm); return FIELD_GET(ARMV8_PMU_PMCR_N, pmcr); } int main(void) { - uint64_t i, pmcr_n; + uint64_t pmcr_n; TEST_REQUIRE(kvm_has_cap(KVM_CAP_ARM_PMU_V3)); pmcr_n = get_pmcr_n_limit(); - for (i = 0; i <= pmcr_n; i++) - run_test(i); - - for (i = pmcr_n + 1; i < ARMV8_PMU_MAX_COUNTERS; i++) - run_error_test(i); + run_tests(pmcr_n); return 0; } From patchwork Wed Feb 15 
01:07:07 2023
X-Patchwork-Submitter: Raghavendra Rao Ananta
X-Patchwork-Id: 13141133
Date: Wed, 15 Feb 2023 01:07:07 +0000
In-Reply-To: <20230215010717.3612794-1-rananta@google.com>
References: <20230215010717.3612794-1-rananta@google.com>
Message-ID: <20230215010717.3612794-7-rananta@google.com>
Subject: [REPOST PATCH 06/16] tools: arm64: perf_event: Define Cycle counter enable/overflow bits
From: Raghavendra Rao Ananta
To: Oliver Upton, Reiji Watanabe, Marc Zyngier, Ricardo Koller, James Morse, Suzuki K Poulose
Cc: Paolo Bonzini, Jing Zhang, Colton Lewis, Raghavendra Rao Anata, linux-arm-kernel@lists.infradead.org, kvmarm@lists.linux.dev, linux-kernel@vger.kernel.org, kvm@vger.kernel.org

Add the definitions of ARMV8_PMU_CNTOVS_C (Cycle counter overflow bit) for the overflow status registers and ARMV8_PMU_CNTENSET_C (Cycle counter enable bit) for the PMCNTENSET_EL0 register.

Signed-off-by: Raghavendra Rao Ananta
---
 tools/arch/arm64/include/asm/perf_event.h | 7 +++++++
 1 file changed, 7 insertions(+)

diff --git a/tools/arch/arm64/include/asm/perf_event.h b/tools/arch/arm64/include/asm/perf_event.h
index 97e49a4d4969f..8ce23aabf6fe6 100644
--- a/tools/arch/arm64/include/asm/perf_event.h
+++ b/tools/arch/arm64/include/asm/perf_event.h
@@ -222,9 +222,11 @@
 /*
  * PMOVSR: counters overflow flag status reg
  */
+#define ARMV8_PMU_CNTOVS_C	(1 << 31) /* Cycle counter overflow bit */
 #define ARMV8_PMU_OVSR_MASK		0xffffffff	/* Mask for writable bits */
 #define ARMV8_PMU_OVERFLOWED_MASK	ARMV8_PMU_OVSR_MASK
 
+
 /*
  * PMXEVTYPER: Event selection reg
  */
@@ -247,6 +249,11 @@
 #define ARMV8_PMU_USERENR_CR	(1 << 2) /* Cycle counter can be read at EL0 */
 #define ARMV8_PMU_USERENR_ER	(1 << 3) /* Event counter can be read at EL0 */
 
+/*
+ * PMCNTENSET: Count Enable set reg
+ */
+#define ARMV8_PMU_CNTENSET_C	(1 << 31) /* Cycle counter enable bit */
+
 /* PMMIR_EL1.SLOTS mask */
 #define ARMV8_PMU_SLOTS_MASK	0xff
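[Editor's note: an illustrative guest-side fragment, not part of the patch, showing how these two bit definitions are meant to be used. read_sysreg()/write_sysreg() are the accessors already used throughout the selftest:]

	/* Enable the cycle counter: set bit 31 of PMCNTENSET_EL0 */
	write_sysreg(read_sysreg(pmcntenset_el0) | ARMV8_PMU_CNTENSET_C, pmcntenset_el0);
	isb();

	/* ... run a workload ... */

	/* Test for (and clear) a cycle counter overflow via PMOVSCLR_EL0 */
	if (read_sysreg(pmovsclr_el0) & ARMV8_PMU_CNTOVS_C)
		write_sysreg(ARMV8_PMU_CNTOVS_C, pmovsclr_el0);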
From patchwork Wed Feb 15 01:07:08 2023
X-Patchwork-Submitter: Raghavendra Rao Ananta
X-Patchwork-Id: 13141135
Date: Wed, 15 Feb 2023 01:07:08 +0000
In-Reply-To: <20230215010717.3612794-1-rananta@google.com>
References: <20230215010717.3612794-1-rananta@google.com>
Message-ID: <20230215010717.3612794-8-rananta@google.com>
Subject: [REPOST PATCH 07/16] selftests: KVM: aarch64: Add PMU cycle counter helpers
From: Raghavendra Rao Ananta
To: Oliver Upton, Reiji Watanabe, Marc Zyngier, Ricardo Koller, James Morse, Suzuki K Poulose
Cc: Paolo Bonzini, Jing Zhang, Colton Lewis, Raghavendra Rao Anata, linux-arm-kernel@lists.infradead.org, kvmarm@lists.linux.dev, linux-kernel@vger.kernel.org, kvm@vger.kernel.org

Add basic helpers for the test to access the cycle counter registers. The helpers will be used in the upcoming patches to run the tests related to the cycle counter.

No functional change intended.

Signed-off-by: Raghavendra Rao Ananta
---
 .../testing/selftests/kvm/aarch64/vpmu_test.c | 40 +++++++++++++++++++
 1 file changed, 40 insertions(+)

diff --git a/tools/testing/selftests/kvm/aarch64/vpmu_test.c b/tools/testing/selftests/kvm/aarch64/vpmu_test.c
index d72c3c9b9c39f..15aebc7d7dc94 100644
--- a/tools/testing/selftests/kvm/aarch64/vpmu_test.c
+++ b/tools/testing/selftests/kvm/aarch64/vpmu_test.c
@@ -147,6 +147,46 @@ static inline void disable_counter(int idx)
 	isb();
 }
 
+static inline uint64_t read_cycle_counter(void)
+{
+	return read_sysreg(pmccntr_el0);
+}
+
+static inline void reset_cycle_counter(void)
+{
+	uint64_t v = read_sysreg(pmcr_el0);
+
+	write_sysreg(ARMV8_PMU_PMCR_C | v, pmcr_el0);
+	isb();
+}
+
+static inline void enable_cycle_counter(void)
+{
+	uint64_t v = read_sysreg(pmcntenset_el0);
+
+	write_sysreg(ARMV8_PMU_CNTENSET_C | v, pmcntenset_el0);
+	isb();
+}
+
+static inline void disable_cycle_counter(void)
+{
+	uint64_t v = read_sysreg(pmcntenset_el0);
+
+	write_sysreg(ARMV8_PMU_CNTENSET_C | v, pmcntenclr_el0);
+	isb();
+}
+
+static inline void write_pmccfiltr(unsigned long val)
+{
+	write_sysreg(val, pmccfiltr_el0);
+	isb();
+}
+
+static inline uint64_t read_pmccfiltr(void)
+{
+	return read_sysreg(pmccfiltr_el0);
+}
+
 static inline uint64_t get_pmcr_n(void)
 {
 	return FIELD_GET(ARMV8_PMU_PMCR_N, read_sysreg(pmcr_el0));
 }
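[Editor's note: the intended usage of these helpers, as a sketch; it mirrors the test_cycles_count() check added later in the series:]

	pmu_enable();
	reset_cycle_counter();		/* PMCR_EL0.C zeroes PMCCNTR_EL0 */
	write_pmccfiltr(0);		/* count cycles at EL1 and EL0 */
	enable_cycle_counter();		/* PMCNTENSET_EL0.C */

	/* ... workload to be measured ... */

	cycles = read_cycle_counter();	/* PMCCNTR_EL0 */
	disable_cycle_counter();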
From patchwork Wed Feb 15 01:07:09 2023
X-Patchwork-Submitter: Raghavendra Rao Ananta
X-Patchwork-Id: 13141136
Date: Wed, 15 Feb 2023 01:07:09 +0000
In-Reply-To: <20230215010717.3612794-1-rananta@google.com>
References: <20230215010717.3612794-1-rananta@google.com>
Message-ID: <20230215010717.3612794-9-rananta@google.com>
Subject: [REPOST PATCH 08/16] selftests: KVM: aarch64: Consider PMU event filters for VM creation
From: Raghavendra Rao Ananta
To: Oliver Upton, Reiji Watanabe, Marc Zyngier, Ricardo Koller, James Morse, Suzuki K Poulose
Cc: Paolo Bonzini, Jing Zhang, Colton Lewis, Raghavendra Rao Anata, linux-arm-kernel@lists.infradead.org, kvmarm@lists.linux.dev, linux-kernel@vger.kernel.org, kvm@vger.kernel.org

Accept a list of KVM PMU event filters as an argument while creating a VM via create_vpmu_vm(). Upcoming patches will leverage this to test the event filters' functionality.

No functional change intended.
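[Editor's note: a hedged sketch of the new calling convention, based on the diff below. A NULL filter list keeps the default behavior (all events allowed); a non-NULL list is applied in order, and a zero-initialized entry (.nevents == 0) terminates it:]

	struct kvm_pmu_event_filter filters[MAX_EVENT_FILTERS_PER_VM] = {
		/* first filter is ALLOW, so all unlisted events default to deny */
		{ .base_event = ARMV8_PMUV3_PERFCTR_INST_RETIRED,
		  .nevents = 1, .action = KVM_PMU_EVENT_ALLOW },
	};

	vpmu_vm = create_vpmu_vm(guest_code, filters);	/* filters applied before vPMU init */

	vpmu_vm = create_vpmu_vm(guest_code, NULL);	/* unchanged behavior for older tests */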
Signed-off-by: Raghavendra Rao Ananta
---
 .../testing/selftests/kvm/aarch64/vpmu_test.c | 64 +++++++++++++++++--
 1 file changed, 60 insertions(+), 4 deletions(-)

diff --git a/tools/testing/selftests/kvm/aarch64/vpmu_test.c b/tools/testing/selftests/kvm/aarch64/vpmu_test.c
index 15aebc7d7dc94..2b3a4fa3afa9c 100644
--- a/tools/testing/selftests/kvm/aarch64/vpmu_test.c
+++ b/tools/testing/selftests/kvm/aarch64/vpmu_test.c
@@ -15,10 +15,14 @@
 #include
 #include
 #include
+#include
 
 /* The max number of the PMU event counters (excluding the cycle counter) */
 #define ARMV8_PMU_MAX_GENERAL_COUNTERS	(ARMV8_PMU_MAX_COUNTERS - 1)
 
+/* The max number of event numbers that's supported */
+#define ARMV8_PMU_MAX_EVENTS		64
+
 /*
  * The macros and functions below for reading/writing PMEV{CNTR,TYPER}_EL0
  * were basically copied from arch/arm64/kernel/perf_event.c.
@@ -224,6 +228,8 @@ struct pmc_accessor pmc_accessors[] = {
 	{ read_sel_evcntr, write_pmevcntrn, read_sel_evtyper, write_pmevtypern },
 };
 
+#define MAX_EVENT_FILTERS_PER_VM	10
+
 #define INVALID_EC	(-1ul)
 uint64_t expected_ec = INVALID_EC;
 uint64_t op_end_addr;
@@ -232,6 +238,7 @@ struct vpmu_vm {
 	struct kvm_vm *vm;
 	struct kvm_vcpu *vcpu;
 	int gic_fd;
+	unsigned long *pmu_filter;
 };
 
 enum test_stage {
@@ -541,8 +548,51 @@ static void guest_code(void)
 #define GICD_BASE_GPA	0x8000000ULL
 #define GICR_BASE_GPA	0x80A0000ULL
 
+static unsigned long *
+set_event_filters(struct kvm_vcpu *vcpu, struct kvm_pmu_event_filter *pmu_event_filters)
+{
+	int j;
+	unsigned long *pmu_filter;
+	struct kvm_device_attr filter_attr = {
+		.group = KVM_ARM_VCPU_PMU_V3_CTRL,
+		.attr = KVM_ARM_VCPU_PMU_V3_FILTER,
+	};
+
+	/*
+	 * Setting up the bitmap is similar to what KVM does.
+	 * If the first filter denies an event, default all the others to allow,
+	 * and vice-versa.
+	 */
+	pmu_filter = bitmap_zalloc(ARMV8_PMU_MAX_EVENTS);
+	TEST_ASSERT(pmu_filter, "Failed to allocate the pmu_filter");
+
+	if (pmu_event_filters[0].action == KVM_PMU_EVENT_DENY)
+		bitmap_fill(pmu_filter, ARMV8_PMU_MAX_EVENTS);
+
+	for (j = 0; j < MAX_EVENT_FILTERS_PER_VM; j++) {
+		struct kvm_pmu_event_filter *pmu_event_filter = &pmu_event_filters[j];
+
+		if (!pmu_event_filter->nevents)
+			break;
+
+		pr_debug("Applying event filter:: event: 0x%x; action: %s\n",
+			 pmu_event_filter->base_event,
+			 pmu_event_filter->action == KVM_PMU_EVENT_ALLOW ? "ALLOW" : "DENY");
+
+		filter_attr.addr = (uint64_t) pmu_event_filter;
+		vcpu_ioctl(vcpu, KVM_SET_DEVICE_ATTR, &filter_attr);
+
+		if (pmu_event_filter->action == KVM_PMU_EVENT_ALLOW)
+			__set_bit(pmu_event_filter->base_event, pmu_filter);
+		else
+			__clear_bit(pmu_event_filter->base_event, pmu_filter);
+	}
+
+	return pmu_filter;
+}
+
 /* Create a VM that has one vCPU with PMUv3 configured. */
-static struct vpmu_vm *create_vpmu_vm(void *guest_code)
+static struct vpmu_vm *
+create_vpmu_vm(void *guest_code, struct kvm_pmu_event_filter *pmu_event_filters)
 {
 	struct kvm_vm *vm;
 	struct kvm_vcpu *vcpu;
@@ -586,6 +636,9 @@ static struct vpmu_vm *create_vpmu_vm(void *guest_code)
 		    "Unexpected PMUVER (0x%x) on the vCPU with PMUv3", pmuver);
 
 	/* Initialize vPMU */
+	if (pmu_event_filters)
+		vpmu_vm->pmu_filter = set_event_filters(vcpu, pmu_event_filters);
+
 	vcpu_ioctl(vcpu, KVM_SET_DEVICE_ATTR, &irq_attr);
 	vcpu_ioctl(vcpu, KVM_SET_DEVICE_ATTR, &init_attr);
 
@@ -594,6 +647,8 @@ static void destroy_vpmu_vm(struct vpmu_vm *vpmu_vm)
 {
+	if (vpmu_vm->pmu_filter)
+		bitmap_free(vpmu_vm->pmu_filter);
 	close(vpmu_vm->gic_fd);
 	kvm_vm_free(vpmu_vm->vm);
 	free(vpmu_vm);
@@ -631,7 +686,7 @@ static void run_counter_access_test(uint64_t pmcr_n)
 	guest_data.expected_pmcr_n = pmcr_n;
 
 	pr_debug("Test with pmcr_n %lu\n", pmcr_n);
-	vpmu_vm = create_vpmu_vm(guest_code);
+	vpmu_vm = create_vpmu_vm(guest_code, NULL);
 	vcpu = vpmu_vm->vcpu;
 
 	/* Save the initial sp to restore them later to run the guest again */
@@ -676,7 +731,7 @@ static void run_counter_access_error_test(uint64_t pmcr_n)
 	guest_data.expected_pmcr_n = pmcr_n;
 
 	pr_debug("Error test with pmcr_n %lu (larger than the host)\n", pmcr_n);
-	vpmu_vm = create_vpmu_vm(guest_code);
+	vpmu_vm = create_vpmu_vm(guest_code, NULL);
 	vcpu = vpmu_vm->vcpu;
 
 	/* Update the PMCR_EL0.N with @pmcr_n */
@@ -719,9 +774,10 @@ static uint64_t get_pmcr_n_limit(void)
 	struct vpmu_vm *vpmu_vm;
 	uint64_t pmcr;
 
-	vpmu_vm = create_vpmu_vm(guest_code);
+	vpmu_vm = create_vpmu_vm(guest_code, NULL);
 	vcpu_get_reg(vpmu_vm->vcpu, KVM_ARM64_SYS_REG(SYS_PMCR_EL0), &pmcr);
 	destroy_vpmu_vm(vpmu_vm);
+
 	return FIELD_GET(ARMV8_PMU_PMCR_N, pmcr);
 }

From patchwork Wed Feb 15 01:07:10 2023
X-Patchwork-Submitter: Raghavendra Rao Ananta
X-Patchwork-Id: 13141139
Date: Wed, 15 Feb 2023 01:07:10 +0000
In-Reply-To: <20230215010717.3612794-1-rananta@google.com>
References: <20230215010717.3612794-1-rananta@google.com>
Message-ID: <20230215010717.3612794-10-rananta@google.com>
Subject: [REPOST PATCH 09/16] selftests: KVM: aarch64: Add KVM PMU event filter test
From: Raghavendra Rao Ananta
To: Oliver Upton, Reiji Watanabe, Marc Zyngier, Ricardo Koller, James Morse, Suzuki K Poulose
Cc: Paolo Bonzini, Jing Zhang, Colton Lewis, Raghavendra Rao Anata, linux-arm-kernel@lists.infradead.org, kvmarm@lists.linux.dev, linux-kernel@vger.kernel.org, kvm@vger.kernel.org

Add tests to validate KVM's KVM_ARM_VCPU_PMU_V3_FILTER attribute by applying a series of filters to allow or deny events from userspace. Validation is done by the guest, which should be able to count only the events that are allowed. The workload to execute a precise number of instructions (execute_precise_instrs() and precise_instrs_loop()) is taken from the kvm-unit-tests' arm/pmu.c.
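[Editor's note: the arithmetic behind that precise workload, worked for the value the test actually uses. The counted block executes the isb, the nop, and two instructions (subs + b.gt) per loop iteration, so:]

	/* target: num = 100 instructions */
	loop  = (num - 2) / 2;	/* (100 - 2) / 2 = 49 iterations */
	total = 2 + 2 * loop;	/* isb + nop + 2 * 49 = 100      */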
Signed-off-by: Raghavendra Rao Ananta
---
 .../testing/selftests/kvm/aarch64/vpmu_test.c | 261 +++++++++++++++++-
 1 file changed, 258 insertions(+), 3 deletions(-)

diff --git a/tools/testing/selftests/kvm/aarch64/vpmu_test.c b/tools/testing/selftests/kvm/aarch64/vpmu_test.c
index 2b3a4fa3afa9c..3dfb770b538e9 100644
--- a/tools/testing/selftests/kvm/aarch64/vpmu_test.c
+++ b/tools/testing/selftests/kvm/aarch64/vpmu_test.c
@@ -2,12 +2,21 @@
 /*
  * vpmu_test - Test the vPMU
  *
- * Copyright (c) 2022 Google LLC.
+ * The test suite contains a series of checks to validate the vPMU
+ * functionality. This test runs only when KVM_CAP_ARM_PMU_V3 is
+ * supported on the host. The tests include:
  *
- * This test checks if the guest can see the same number of the PMU event
+ * 1. Check if the guest can see the same number of the PMU event
  * counters (PMCR_EL0.N) that userspace sets, if the guest can access
  * those counters, and if the guest cannot access any other counters.
- * This test runs only when KVM_CAP_ARM_PMU_V3 is supported on the host.
+ *
+ * 2. Test the functionality of KVM's KVM_ARM_VCPU_PMU_V3_FILTER
+ * attribute by applying a series of filters in various combinations
+ * of allowing or denying the events. The guest validates it by
+ * checking if it's able to count only the events that are allowed.
+ *
+ * Copyright (c) 2022 Google LLC.
+ *
  */
 #include
 #include
@@ -230,6 +239,12 @@ struct pmc_accessor pmc_accessors[] = {
 
 #define MAX_EVENT_FILTERS_PER_VM	10
 
+#define EVENT_ALLOW(ev) \
+	{.base_event = ev, .nevents = 1, .action = KVM_PMU_EVENT_ALLOW}
+
+#define EVENT_DENY(ev) \
+	{.base_event = ev, .nevents = 1, .action = KVM_PMU_EVENT_DENY}
+
 #define INVALID_EC	(-1ul)
 uint64_t expected_ec = INVALID_EC;
 uint64_t op_end_addr;
@@ -243,11 +258,13 @@ struct vpmu_vm {
 
 enum test_stage {
 	TEST_STAGE_COUNTER_ACCESS = 1,
+	TEST_STAGE_KVM_EVENT_FILTER,
 };
 
 struct guest_data {
 	enum test_stage test_stage;
 	uint64_t expected_pmcr_n;
+	unsigned long *pmu_filter;
 };
 
 static struct guest_data guest_data;
@@ -329,6 +346,113 @@ static bool pmu_event_is_supported(uint64_t event)
 	GUEST_ASSERT_3(!(_tval & mask), _tval, mask, set_expected);\
 }
 
+
+/*
+ * Extra instructions inserted by the compiler would be difficult to compensate
+ * for, so hand assemble everything between, and including, the PMCR accesses
+ * to start and stop counting. isb instructions are inserted to make sure the
+ * pmccntr read after this function returns the exact number of instructions
+ * executed in the controlled block. Total instrs = isb + nop + 2*loop = 2 + 2*loop.
+ */
+static inline void precise_instrs_loop(int loop, uint32_t pmcr)
+{
+	uint64_t pmcr64 = pmcr;
+
+	asm volatile(
+	"	msr	pmcr_el0, %[pmcr]\n"
+	"	isb\n"
+	"1:	subs	%w[loop], %w[loop], #1\n"
+	"	b.gt	1b\n"
+	"	nop\n"
+	"	msr	pmcr_el0, xzr\n"
+	"	isb\n"
+	: [loop] "+r" (loop)
+	: [pmcr] "r" (pmcr64)
+	: "cc");
+}
+
+/*
+ * Execute a known number of guest instructions. Only even instruction counts
+ * greater than or equal to 4 are supported by the in-line assembly code. The
+ * control register (PMCR_EL0) is initialized with the provided value (allowing
+ * for example for the cycle counter or event counters to be reset). At the end
+ * of the exact instruction loop, zero is written to PMCR_EL0 to disable
+ * counting, allowing the cycle counter or event counters to be read at the
+ * leisure of the calling code.
+ */
+static void execute_precise_instrs(int num, uint32_t pmcr)
+{
+	int loop = (num - 2) / 2;
+
+	GUEST_ASSERT_2(num >= 4 && ((num - 2) % 2 == 0), num, loop);
+	precise_instrs_loop(loop, pmcr);
+}
+
+static void test_instructions_count(int pmc_idx, bool expect_count)
+{
+	int i;
+	struct pmc_accessor *acc;
+	uint64_t cnt;
+	int instrs_count = 100;
+
+	enable_counter(pmc_idx);
+
+	/* Test the event using all the possible ways to configure the event */
+	for (i = 0; i < ARRAY_SIZE(pmc_accessors); i++) {
+		acc = &pmc_accessors[i];
+
+		pmu_disable_reset();
+
+		acc->write_typer(pmc_idx, ARMV8_PMUV3_PERFCTR_INST_RETIRED);
+
+		/* Enable the PMU and execute a precise number of instructions as a workload */
+		execute_precise_instrs(instrs_count, read_sysreg(pmcr_el0) | ARMV8_PMU_PMCR_E);
+
+		/* If a count is expected, the counter should be increased by 'instrs_count' */
+		cnt = acc->read_cntr(pmc_idx);
+		GUEST_ASSERT_4(expect_count == (cnt == instrs_count),
+			       i, expect_count, cnt, instrs_count);
+	}
+
+	disable_counter(pmc_idx);
+}
+
+static void test_cycles_count(bool expect_count)
+{
+	uint64_t cnt;
+
+	pmu_enable();
+	reset_cycle_counter();
+
+	/* Count cycles in EL0 and EL1 */
+	write_pmccfiltr(0);
+	enable_cycle_counter();
+
+	cnt = read_cycle_counter();
+
+	/*
+	 * If a count is expected by the test, the cycle counter should be increased by
+	 * at least 1, as there is at least one instruction between enabling the
+	 * counter and reading the counter.
+	 */
+	GUEST_ASSERT_2(expect_count == (cnt > 0), cnt, expect_count);
+
+	disable_cycle_counter();
+	pmu_disable_reset();
+}
+
+static void test_event_count(uint64_t event, int pmc_idx, bool expect_count)
+{
+	switch (event) {
+	case ARMV8_PMUV3_PERFCTR_INST_RETIRED:
+		test_instructions_count(pmc_idx, expect_count);
+		break;
+	case ARMV8_PMUV3_PERFCTR_CPU_CYCLES:
+		test_cycles_count(expect_count);
+		break;
+	}
+}
+
 /*
  * Check if @mask bits in {PMCNTEN,PMINTEN,PMOVS}{SET,CLR} registers
  * are set or cleared as specified in @set_expected.
@@ -532,12 +656,37 @@ static void guest_counter_access_test(uint64_t expected_pmcr_n)
 	}
 }
 
+static void guest_event_filter_test(unsigned long *pmu_filter)
+{
+	uint64_t event;
+
+	/*
+	 * Check if PMCEIDx_EL0 is advertised as configured by the userspace.
+	 * It's possible that even though the userspace allowed it, it may not be supported
+	 * by the hardware and could be advertised as 'disabled'. Hence, only validate against
+	 * the events that are advertised.
+	 *
+	 * Furthermore, check if the event is in fact counting if enabled, or vice-versa.
+	 */
+	for (event = 0; event < ARMV8_PMU_MAX_EVENTS - 1; event++) {
+		if (pmu_event_is_supported(event)) {
+			GUEST_ASSERT_1(test_bit(event, pmu_filter), event);
+			test_event_count(event, 0, true);
+		} else {
+			test_event_count(event, 0, false);
+		}
+	}
+}
+
 static void guest_code(void)
 {
 	switch (guest_data.test_stage) {
 	case TEST_STAGE_COUNTER_ACCESS:
 		guest_counter_access_test(guest_data.expected_pmcr_n);
 		break;
+	case TEST_STAGE_KVM_EVENT_FILTER:
+		guest_event_filter_test(guest_data.pmu_filter);
+		break;
 	default:
 		GUEST_ASSERT_1(0, guest_data.test_stage);
 	}
@@ -760,9 +909,115 @@ static void run_counter_access_tests(uint64_t pmcr_n)
 		run_counter_access_error_test(i);
 }
 
+static struct kvm_pmu_event_filter pmu_event_filters[][MAX_EVENT_FILTERS_PER_VM] = {
+	/*
+	 * Each set of events denotes a filter configuration for that VM.
+	 * During VM creation, the filters will be applied in the sequence mentioned here.
+ */
+	{
+		EVENT_ALLOW(ARMV8_PMUV3_PERFCTR_INST_RETIRED),
+	},
+	{
+		EVENT_ALLOW(ARMV8_PMUV3_PERFCTR_INST_RETIRED),
+		EVENT_ALLOW(ARMV8_PMUV3_PERFCTR_CPU_CYCLES),
+	},
+	{
+		EVENT_ALLOW(ARMV8_PMUV3_PERFCTR_INST_RETIRED),
+		EVENT_ALLOW(ARMV8_PMUV3_PERFCTR_CPU_CYCLES),
+		EVENT_DENY(ARMV8_PMUV3_PERFCTR_INST_RETIRED),
+	},
+	{
+		EVENT_ALLOW(ARMV8_PMUV3_PERFCTR_INST_RETIRED),
+		EVENT_ALLOW(ARMV8_PMUV3_PERFCTR_CPU_CYCLES),
+		EVENT_DENY(ARMV8_PMUV3_PERFCTR_INST_RETIRED),
+		EVENT_DENY(ARMV8_PMUV3_PERFCTR_CPU_CYCLES),
+	},
+	{
+		EVENT_DENY(ARMV8_PMUV3_PERFCTR_INST_RETIRED),
+		EVENT_DENY(ARMV8_PMUV3_PERFCTR_CPU_CYCLES),
+		EVENT_ALLOW(ARMV8_PMUV3_PERFCTR_CPU_CYCLES),
+		EVENT_ALLOW(ARMV8_PMUV3_PERFCTR_INST_RETIRED),
+	},
+	{
+		EVENT_DENY(ARMV8_PMUV3_PERFCTR_INST_RETIRED),
+		EVENT_ALLOW(ARMV8_PMUV3_PERFCTR_CPU_CYCLES),
+		EVENT_ALLOW(ARMV8_PMUV3_PERFCTR_INST_RETIRED),
+	},
+	{
+		EVENT_DENY(ARMV8_PMUV3_PERFCTR_INST_RETIRED),
+		EVENT_DENY(ARMV8_PMUV3_PERFCTR_CPU_CYCLES),
+	},
+	{
+		EVENT_DENY(ARMV8_PMUV3_PERFCTR_INST_RETIRED),
+	},
+};
+
+static void run_kvm_event_filter_error_tests(void)
+{
+	int ret;
+	struct kvm_vm *vm;
+	struct kvm_vcpu *vcpu;
+	struct vpmu_vm *vpmu_vm;
+	struct kvm_vcpu_init init;
+	struct kvm_pmu_event_filter pmu_event_filter = EVENT_ALLOW(ARMV8_PMUV3_PERFCTR_CPU_CYCLES);
+	struct kvm_device_attr filter_attr = {
+		.group = KVM_ARM_VCPU_PMU_V3_CTRL,
+		.attr = KVM_ARM_VCPU_PMU_V3_FILTER,
+		.addr = (uint64_t) &pmu_event_filter,
+	};
+
+	/* KVM should not allow configuring filters after the PMU is initialized */
+	vpmu_vm = create_vpmu_vm(guest_code, NULL);
+	ret = __vcpu_ioctl(vpmu_vm->vcpu, KVM_SET_DEVICE_ATTR, &filter_attr);
+	TEST_ASSERT(ret == -1 && errno == EBUSY,
+		    "Failed to disallow setting an event filter after PMU init");
+	destroy_vpmu_vm(vpmu_vm);
+
+	/* Check for invalid event filter setting */
+	vm = vm_create(1);
+	vm_ioctl(vm, KVM_ARM_PREFERRED_TARGET, &init);
+	init.features[0] |= (1 << KVM_ARM_VCPU_PMU_V3);
+	vcpu = aarch64_vcpu_add(vm, 0, &init, guest_code);
+
+	pmu_event_filter.base_event = UINT16_MAX;
+	pmu_event_filter.nevents = 5;
+	ret = __vcpu_ioctl(vcpu, KVM_SET_DEVICE_ATTR, &filter_attr);
+	TEST_ASSERT(ret == -1 && errno == EINVAL, "Failed check for invalid filter configuration");
+	kvm_vm_free(vm);
+}
+
+static void run_kvm_event_filter_test(void)
+{
+	int i;
+	struct vpmu_vm *vpmu_vm;
+	struct kvm_vm *vm;
+	vm_vaddr_t pmu_filter_gva;
+	size_t pmu_filter_bmap_sz = BITS_TO_LONGS(ARMV8_PMU_MAX_EVENTS) * sizeof(unsigned long);
+
+	guest_data.test_stage = TEST_STAGE_KVM_EVENT_FILTER;
+
+	/* Test for valid filter configurations */
+	for (i = 0; i < ARRAY_SIZE(pmu_event_filters); i++) {
+		vpmu_vm = create_vpmu_vm(guest_code, pmu_event_filters[i]);
+		vm = vpmu_vm->vm;
+
+		pmu_filter_gva = vm_vaddr_alloc(vm, pmu_filter_bmap_sz, KVM_UTIL_MIN_VADDR);
+		memcpy(addr_gva2hva(vm, pmu_filter_gva), vpmu_vm->pmu_filter, pmu_filter_bmap_sz);
+		guest_data.pmu_filter = (unsigned long *) pmu_filter_gva;
+
+		run_vcpu(vpmu_vm->vcpu);
+
+		destroy_vpmu_vm(vpmu_vm);
+	}
+
+	/* Check if KVM is handling the errors correctly */
+	run_kvm_event_filter_error_tests();
+}
+
 static void run_tests(uint64_t pmcr_n)
 {
 	run_counter_access_tests(pmcr_n);
+	run_kvm_event_filter_test();
 }
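[Editor's note: a worked example of the ordering semantics described in set_event_filters() above, using the third configuration in this table:]

	/*
	 * filters = { ALLOW(INST_RETIRED), ALLOW(CPU_CYCLES), DENY(INST_RETIRED) }
	 *
	 * first action == ALLOW -> bitmap defaults to deny-all
	 * ALLOW(INST_RETIRED)   -> bitmap = { INST_RETIRED }
	 * ALLOW(CPU_CYCLES)     -> bitmap = { INST_RETIRED, CPU_CYCLES }
	 * DENY(INST_RETIRED)    -> bitmap = { CPU_CYCLES }
	 *
	 * The guest is therefore expected to count only CPU_CYCLES.
	 */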
From patchwork Wed Feb 15 01:07:11 2023
X-Patchwork-Submitter: Raghavendra Rao Ananta
X-Patchwork-Id: 13141147
Date: Wed, 15 Feb 2023 01:07:11 +0000
In-Reply-To: <20230215010717.3612794-1-rananta@google.com>
References: <20230215010717.3612794-1-rananta@google.com>
Message-ID: <20230215010717.3612794-11-rananta@google.com>
Subject: [REPOST PATCH 10/16] selftests: KVM: aarch64: Add KVM EVTYPE filter PMU test
From: Raghavendra Rao Ananta
To: Oliver Upton, Reiji Watanabe, Marc Zyngier, Ricardo Koller, James Morse, Suzuki K Poulose
Cc: Paolo Bonzini, Jing Zhang, Colton Lewis, Raghavendra Rao Anata, linux-arm-kernel@lists.infradead.org, kvmarm@lists.linux.dev, linux-kernel@vger.kernel.org, kvm@vger.kernel.org

KVM doesn't allow the guests to modify the filter types, such as counting events in nonsecure/secure-EL2, EL3, and so on. Validate this by force-configuring the bits in PMXEVTYPER_EL0, PMEVTYPERn_EL0, and PMCCFILTR_EL0 registers. The test extends further by trying to create an event for counting only in EL2, and validates that the counter does not advance.

Signed-off-by: Raghavendra Rao Ananta
---
 .../testing/selftests/kvm/aarch64/vpmu_test.c | 85 +++++++++++++++++++
 1 file changed, 85 insertions(+)

diff --git a/tools/testing/selftests/kvm/aarch64/vpmu_test.c b/tools/testing/selftests/kvm/aarch64/vpmu_test.c
index 3dfb770b538e9..5c166df245589 100644
--- a/tools/testing/selftests/kvm/aarch64/vpmu_test.c
+++ b/tools/testing/selftests/kvm/aarch64/vpmu_test.c
@@ -15,6 +15,10 @@
  * of allowing or denying the events. The guest validates it by
  * checking if it's able to count only the events that are allowed.
  *
+ * 3. KVM doesn't allow the guest to count the events attributed with
+ * higher exception levels (EL2, EL3). Verify this functionality by
+ * configuring and trying to count the events for EL2 in the guest.
+ *
  * Copyright (c) 2022 Google LLC.
 *
 */
@@ -23,6 +27,7 @@
 #include
 #include
 #include
+#include
 #include
 #include
@@ -259,6 +264,7 @@ struct vpmu_vm {
 enum test_stage {
 	TEST_STAGE_COUNTER_ACCESS = 1,
 	TEST_STAGE_KVM_EVENT_FILTER,
+	TEST_STAGE_KVM_EVTYPE_FILTER,
 };
 
 struct guest_data {
@@ -678,6 +684,70 @@ static void guest_event_filter_test(unsigned long *pmu_filter)
 	}
 }
 
+static void guest_evtype_filter_test(void)
+{
+	int i;
+	struct pmc_accessor *acc;
+	uint64_t typer, cnt;
+	struct arm_smccc_res res;
+
+	pmu_enable();
+
+	/*
+	 * KVM blocks the guests from creating events for counting in Secure/Non-Secure Hyp (EL2),
+	 * Monitor (EL3), and Multithreading configuration. It applies the mask
+	 * ARMV8_PMU_EVTYPE_MASK against guest accesses to PMXEVTYPER_EL0, PMEVTYPERn_EL0,
+	 * and PMCCFILTR_EL0 registers to prevent this. Check if KVM honors this using all possible
+	 * ways to configure the EVTYPER.
+	 */
+	for (i = 0; i < ARRAY_SIZE(pmc_accessors); i++) {
+		acc = &pmc_accessors[i];
+
+		/* Set all filter bits (31-24), readback, and check against the mask */
+		acc->write_typer(0, 0xff000000);
+		typer = acc->read_typer(0);
+
+		GUEST_ASSERT_2((typer | ARMV8_PMU_EVTYPE_EVENT) == ARMV8_PMU_EVTYPE_MASK,
+				typer | ARMV8_PMU_EVTYPE_EVENT, ARMV8_PMU_EVTYPE_MASK);
+
+		/*
+		 * Regardless of ARMV8_PMU_EVTYPE_MASK, KVM sets perf attr.exclude_hv
+		 * to not count NS-EL2 events. Verify this functionality by configuring
+		 * a NS-EL2 event, for which the count shouldn't increment.
+		 */
+		typer = ARMV8_PMUV3_PERFCTR_INST_RETIRED;
+		typer |= ARMV8_PMU_INCLUDE_EL2 | ARMV8_PMU_EXCLUDE_EL1 | ARMV8_PMU_EXCLUDE_EL0;
+		acc->write_typer(0, typer);
+		acc->write_cntr(0, 0);
+		enable_counter(0);
+
+		/* Issue a hypercall to enter EL2 and return */
+		memset(&res, 0, sizeof(res));
+		smccc_hvc(ARM_SMCCC_VERSION_FUNC_ID, 0, 0, 0, 0, 0, 0, 0, &res);
+
+		cnt = acc->read_cntr(0);
+		GUEST_ASSERT_3(cnt == 0, cnt, typer, i);
+	}
+
+	/* Check the same sequence for the Cycle counter */
+	write_pmccfiltr(0xff000000);
+	typer = read_pmccfiltr();
+	GUEST_ASSERT_2((typer | ARMV8_PMU_EVTYPE_EVENT) == ARMV8_PMU_EVTYPE_MASK,
+			typer | ARMV8_PMU_EVTYPE_EVENT, ARMV8_PMU_EVTYPE_MASK);
+
+	typer = ARMV8_PMU_INCLUDE_EL2 | ARMV8_PMU_EXCLUDE_EL1 | ARMV8_PMU_EXCLUDE_EL0;
+	write_pmccfiltr(typer);
+	reset_cycle_counter();
+	enable_cycle_counter();
+
+	/* Issue a hypercall to enter EL2 and return */
+	memset(&res, 0, sizeof(res));
+	smccc_hvc(ARM_SMCCC_VERSION_FUNC_ID, 0, 0, 0, 0, 0, 0, 0, &res);
+
+	cnt = read_cycle_counter();
+	GUEST_ASSERT_2(cnt == 0, cnt, typer);
+}
+
 static void guest_code(void)
 {
 	switch (guest_data.test_stage) {
@@ -687,6 +757,9 @@ static void guest_code(void)
 	case TEST_STAGE_KVM_EVENT_FILTER:
 		guest_event_filter_test(guest_data.pmu_filter);
 		break;
+	case TEST_STAGE_KVM_EVTYPE_FILTER:
+		guest_evtype_filter_test();
+		break;
 	default:
 		GUEST_ASSERT_1(0, guest_data.test_stage);
 	}
@@ -1014,10 +1087,22 @@ static void run_kvm_event_filter_test(void)
 	run_kvm_event_filter_error_tests();
 }
 
+static void run_kvm_evtype_filter_test(void)
+{
+	struct vpmu_vm *vpmu_vm;
+
+	guest_data.test_stage = TEST_STAGE_KVM_EVTYPE_FILTER;
+
+	vpmu_vm = create_vpmu_vm(guest_code, NULL);
+	run_vcpu(vpmu_vm->vcpu);
+	destroy_vpmu_vm(vpmu_vm);
+}
+
 static void run_tests(uint64_t pmcr_n)
 {
 	run_counter_access_tests(pmcr_n);
 	run_kvm_event_filter_test();
+	run_kvm_evtype_filter_test();
 }
 
 /*
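To make the readback assertion above concrete: with the values from the perf_event.h header imported earlier in this series (ARMV8_PMU_EVTYPE_MASK = 0xc800ffff and ARMV8_PMU_EVTYPE_EVENT = 0xffff — assumed here from the kernel header of that era, worth double-checking), a guest write of 0xff000000 should read back with only the writable attribute bits surviving. The arithmetic can be sanity-checked on any host, no PMU access needed:

	#include <assert.h>
	#include <stdint.h>

	#define ARMV8_PMU_EVTYPE_MASK	0xc800ffff	/* writable bits: P, U, NSH + event number */
	#define ARMV8_PMU_EVTYPE_EVENT	0x0000ffff	/* event number field */

	int main(void)
	{
		/* What should survive when the guest writes 0xff000000 to the EVTYPER */
		uint32_t typer = 0xff000000 & ARMV8_PMU_EVTYPE_MASK;	/* -> 0xc8000000 */

		/* The check guest_evtype_filter_test() performs on the readback */
		assert((typer | ARMV8_PMU_EVTYPE_EVENT) == ARMV8_PMU_EVTYPE_MASK);
		return 0;
	}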
From patchwork Wed Feb 15 01:07:12 2023
X-Patchwork-Submitter: Raghavendra Rao Ananta
X-Patchwork-Id: 13141149
Date: Wed, 15 Feb 2023 01:07:12 +0000
In-Reply-To: <20230215010717.3612794-1-rananta@google.com>
References: <20230215010717.3612794-1-rananta@google.com>
Message-ID: <20230215010717.3612794-12-rananta@google.com>
Subject: [REPOST PATCH 11/16] selftests: KVM: aarch64: Add vCPU migration test for PMU
From: Raghavendra Rao Ananta
To: Oliver Upton, Reiji Watanabe, Marc Zyngier, Ricardo Koller, James Morse, Suzuki K Poulose
Cc: Paolo Bonzini, Jing Zhang, Colton Lewis, Raghavendra Rao Ananta, linux-arm-kernel@lists.infradead.org, kvmarm@lists.linux.dev, linux-kernel@vger.kernel.org, kvm@vger.kernel.org

Implement a stress test for KVM by frequently force-migrating the vCPU to random pCPUs in the system. This validates KVM's save/restore functionality and the starting/stopping of PMU counters as needed.

Signed-off-by: Raghavendra Rao Ananta
---
 .../testing/selftests/kvm/aarch64/vpmu_test.c | 195 +++++++++++++++++-
 1 file changed, 193 insertions(+), 2 deletions(-)

diff --git a/tools/testing/selftests/kvm/aarch64/vpmu_test.c b/tools/testing/selftests/kvm/aarch64/vpmu_test.c
index 5c166df245589..0c9d801f4e602 100644
--- a/tools/testing/selftests/kvm/aarch64/vpmu_test.c
+++ b/tools/testing/selftests/kvm/aarch64/vpmu_test.c
@@ -19,9 +19,15 @@
  * higher exception levels (EL2, EL3). Verify this functionality by
  * configuring and trying to count the events for EL2 in the guest.
  *
+ * 4. Since the PMU registers are per-cpu, stress KVM by frequently
+ * migrating the guest vCPU to random pCPUs in the system, and check
+ * if the vPMU is still behaving as expected.
+ *
  * Copyright (c) 2022 Google LLC.
  *
  */
+#define _GNU_SOURCE
+
 #include
 #include
 #include
@@ -30,6 +36,11 @@
 #include
 #include
 #include
+#include
+#include
+#include
+
+#include "delay.h"
 
 /* The max number of the PMU event counters (excluding the cycle counter) */
 #define ARMV8_PMU_MAX_GENERAL_COUNTERS (ARMV8_PMU_MAX_COUNTERS - 1)
@@ -37,6 +48,8 @@
 /* The max number of event numbers that's supported */
 #define ARMV8_PMU_MAX_EVENTS 64
 
+#define msecs_to_usecs(msec) ((msec) * 1000LL)
+
 /*
  * The macros and functions below for reading/writing PMEV{CNTR,TYPER}_EL0
  * were basically copied from arch/arm64/kernel/perf_event.c.
@@ -265,6 +278,7 @@ enum test_stage {
 	TEST_STAGE_COUNTER_ACCESS = 1,
 	TEST_STAGE_KVM_EVENT_FILTER,
 	TEST_STAGE_KVM_EVTYPE_FILTER,
+	TEST_STAGE_VCPU_MIGRATION,
 };
 
 struct guest_data {
@@ -275,6 +289,19 @@ struct guest_data {
 
 static struct guest_data guest_data;
 
+#define VCPU_MIGRATIONS_TEST_ITERS_DEF 1000
+#define VCPU_MIGRATIONS_TEST_MIGRATION_FREQ_MS 2
+
+struct test_args {
+	int vcpu_migration_test_iter;
+	int vcpu_migration_test_migrate_freq_ms;
+};
+
+static struct test_args test_args = {
+	.vcpu_migration_test_iter = VCPU_MIGRATIONS_TEST_ITERS_DEF,
+	.vcpu_migration_test_migrate_freq_ms = VCPU_MIGRATIONS_TEST_MIGRATION_FREQ_MS,
+};
+
 static void guest_sync_handler(struct ex_regs *regs)
 {
 	uint64_t esr, ec;
@@ -352,7 +379,6 @@ static bool pmu_event_is_supported(uint64_t event)
 	GUEST_ASSERT_3(!(_tval & mask), _tval, mask, set_expected);\
 }
 
-
 /*
  * Extra instructions inserted by the compiler would be difficult to compensate
  * for, so hand assemble everything between, and including, the PMCR accesses
@@ -459,6 +485,13 @@ static void test_event_count(uint64_t event, int pmc_idx, bool expect_count)
 	}
 }
 
+static void test_basic_pmu_functionality(void)
+{
+	/* Test events on generic and cycle counters */
+	test_instructions_count(0, true);
+	test_cycles_count(true);
+}
+
 /*
  * Check if @mask bits in {PMCNTEN,PMINTEN,PMOVS}{SET,CLR} registers
  * are set or cleared as specified in @set_expected.
@@ -748,6 +781,16 @@ static void guest_evtype_filter_test(void)
 	GUEST_ASSERT_2(cnt == 0, cnt, typer);
 }
 
+static void guest_vcpu_migration_test(void)
+{
+	/*
+	 * While the userspace continuously migrates this vCPU to random pCPUs,
+	 * run basic PMU functionalities and verify the results.
+	 */
+	while (test_args.vcpu_migration_test_iter--)
+		test_basic_pmu_functionality();
+}
+
 static void guest_code(void)
 {
 	switch (guest_data.test_stage) {
@@ -760,6 +803,9 @@ static void guest_code(void)
 	case TEST_STAGE_KVM_EVTYPE_FILTER:
 		guest_evtype_filter_test();
 		break;
+	case TEST_STAGE_VCPU_MIGRATION:
+		guest_vcpu_migration_test();
+		break;
 	default:
 		GUEST_ASSERT_1(0, guest_data.test_stage);
 	}
@@ -837,6 +883,7 @@ create_vpmu_vm(void *guest_code, struct kvm_pmu_event_filter *pmu_event_filters)
 
 	vpmu_vm->vm = vm = vm_create(1);
 	vm_init_descriptor_tables(vm);
+	/* Catch exceptions for easier debugging */
 	for (ec = 0; ec < ESR_EC_NUM; ec++) {
 		vm_install_sync_handler(vm, VECTOR_SYNC_CURRENT, ec,
 				guest_sync_handler);
@@ -881,6 +928,8 @@ static void run_vcpu(struct kvm_vcpu *vcpu)
 	struct ucall uc;
 
 	sync_global_to_guest(vcpu->vm, guest_data);
+	sync_global_to_guest(vcpu->vm, test_args);
+
 	vcpu_run(vcpu);
 	switch (get_ucall(vcpu, &uc)) {
 	case UCALL_ABORT:
@@ -1098,11 +1147,112 @@ static void run_kvm_evtype_filter_test(void)
 	destroy_vpmu_vm(vpmu_vm);
 }
 
+struct vcpu_migrate_data {
+	struct vpmu_vm *vpmu_vm;
+	pthread_t *pt_vcpu;
+	bool vcpu_done;
+};
+
+static void *run_vcpus_migrate_test_func(void *arg)
+{
+	struct vcpu_migrate_data *migrate_data = arg;
+	struct vpmu_vm *vpmu_vm = migrate_data->vpmu_vm;
+
+	run_vcpu(vpmu_vm->vcpu);
+	migrate_data->vcpu_done = true;
+
+	return NULL;
+}
+
+static uint32_t get_pcpu(void)
+{
+	uint32_t pcpu;
+	unsigned int nproc_conf;
+	cpu_set_t online_cpuset;
+
+	nproc_conf = get_nprocs_conf();
+	sched_getaffinity(0, sizeof(cpu_set_t), &online_cpuset);
+
+	/* Randomly find an available pCPU to place the vCPU on */
+	do {
+		pcpu = rand() % nproc_conf;
+	} while (!CPU_ISSET(pcpu, &online_cpuset));
+
+	return pcpu;
+}
+
+static int migrate_vcpu(struct vcpu_migrate_data *migrate_data)
+{
+	int ret;
+	cpu_set_t cpuset;
+	uint32_t new_pcpu = get_pcpu();
+
+	CPU_ZERO(&cpuset);
+	CPU_SET(new_pcpu, &cpuset);
+
+	pr_debug("Migrating vCPU to pCPU: %u\n", new_pcpu);
+
+	ret = pthread_setaffinity_np(*migrate_data->pt_vcpu, sizeof(cpuset), &cpuset);
+
+	/* Allow the error where the vCPU thread is already finished */
+	TEST_ASSERT(ret == 0 || ret == ESRCH,
+			"Failed to migrate the vCPU to pCPU: %u; ret: %d\n", new_pcpu, ret);
+
+	return ret;
+}
+
+static void *vcpus_migrate_func(void *arg)
+{
+	struct vcpu_migrate_data *migrate_data = arg;
+
+	while (!migrate_data->vcpu_done) {
+		usleep(msecs_to_usecs(test_args.vcpu_migration_test_migrate_freq_ms));
+		migrate_vcpu(migrate_data);
+	}
+
+	return NULL;
+}
+
+static void run_vcpu_migration_test(uint64_t pmcr_n)
+{
+	int ret;
+	struct vpmu_vm *vpmu_vm;
+	pthread_t pt_vcpu, pt_sched;
+	struct vcpu_migrate_data migrate_data = {
+		.pt_vcpu = &pt_vcpu,
+		.vcpu_done = false,
+	};
+
+	__TEST_REQUIRE(get_nprocs() >= 2, "At least two pCPUs needed for vCPU migration test");
+
+	guest_data.test_stage = TEST_STAGE_VCPU_MIGRATION;
+	guest_data.expected_pmcr_n = pmcr_n;
+
+	migrate_data.vpmu_vm = vpmu_vm = create_vpmu_vm(guest_code, NULL);
+
+	/* Initialize random number generation for migrating vCPUs to random pCPUs */
+	srand(time(NULL));
+
+	/* Spawn a vCPU thread */
+	ret = pthread_create(&pt_vcpu, NULL, run_vcpus_migrate_test_func, &migrate_data);
+	TEST_ASSERT(!ret, "Failed to create the vCPU thread");
+
+	/* Spawn a scheduler thread to force-migrate vCPUs to various pCPUs */
+	ret = pthread_create(&pt_sched, NULL, vcpus_migrate_func, &migrate_data);
+	TEST_ASSERT(!ret, "Failed to create the scheduler thread for migrating the vCPUs");
+
+	pthread_join(pt_sched, NULL);
+	pthread_join(pt_vcpu, NULL);
+
+	destroy_vpmu_vm(vpmu_vm);
+}
+
 static void run_tests(uint64_t pmcr_n)
 {
 	run_counter_access_tests(pmcr_n);
 	run_kvm_event_filter_test();
 	run_kvm_evtype_filter_test();
+	run_vcpu_migration_test(pmcr_n);
 }
 
 /*
@@ -1121,12 +1271,53 @@ static uint64_t get_pmcr_n_limit(void)
 	return FIELD_GET(ARMV8_PMU_PMCR_N, pmcr);
 }
 
-int main(void)
+static void print_help(char *name)
+{
+	pr_info("Usage: %s [-h] [-i vcpu_migration_test_iterations] [-m vcpu_migration_freq_ms]\n",
+		name);
+	pr_info("\t-i: Number of iterations of the vCPU migration test (default: %u)\n",
+		VCPU_MIGRATIONS_TEST_ITERS_DEF);
+	pr_info("\t-m: Frequency (in ms) at which vCPUs migrate to a different pCPU (default: %u)\n",
+		VCPU_MIGRATIONS_TEST_MIGRATION_FREQ_MS);
+	pr_info("\t-h: print this help screen\n");
+}
+
+static bool parse_args(int argc, char *argv[])
+{
+	int opt;
+
+	while ((opt = getopt(argc, argv, "hi:m:")) != -1) {
+		switch (opt) {
+		case 'i':
+			test_args.vcpu_migration_test_iter =
+				atoi_positive("Nr vCPU migration iterations", optarg);
+			break;
+		case 'm':
+			test_args.vcpu_migration_test_migrate_freq_ms =
+				atoi_positive("vCPU migration frequency", optarg);
+			break;
+		case 'h':
+		default:
+			goto err;
+		}
+	}
+
+	return true;
+
+err:
+	print_help(argv[0]);
+	return false;
+}
+
+int main(int argc, char *argv[])
 {
 	uint64_t pmcr_n;
 
 	TEST_REQUIRE(kvm_has_cap(KVM_CAP_ARM_PMU_V3));
 
+	if (!parse_args(argc, argv))
+		exit(KSFT_SKIP);
+
 	pmcr_n = get_pmcr_n_limit();
 	run_tests(pmcr_n);
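For readers unfamiliar with the affinity plumbing the migration test leans on, here is a minimal standalone version of the same pattern (identical glibc APIs, with the KVM plumbing stripped out and error handling reduced to asserts; like the test itself, it assumes at least two online pCPUs):

	#define _GNU_SOURCE
	#include <assert.h>
	#include <pthread.h>
	#include <sched.h>
	#include <stdio.h>

	/* Pin a thread to a single pCPU, as migrate_vcpu() does for the vCPU thread */
	static void pin_thread(pthread_t thread, int pcpu)
	{
		cpu_set_t cpuset;

		CPU_ZERO(&cpuset);
		CPU_SET(pcpu, &cpuset);
		assert(pthread_setaffinity_np(thread, sizeof(cpuset), &cpuset) == 0);
	}

	int main(void)
	{
		int i;

		/* Bounce the current thread between pCPU 0 and pCPU 1 */
		for (i = 0; i < 10; i++) {
			pin_thread(pthread_self(), i % 2);
			printf("iteration %d: now on pCPU %d\n", i, sched_getcpu());
		}
		return 0;
	}

The test does exactly this from a second "scheduler" thread, except the target pCPU is random and the victim is the vCPU thread rather than the caller.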
From patchwork Wed Feb 15 01:07:13 2023
X-Patchwork-Submitter: Raghavendra Rao Ananta
X-Patchwork-Id: 13141152
Date: Wed, 15 Feb 2023 01:07:13 +0000
In-Reply-To: <20230215010717.3612794-1-rananta@google.com>
References: <20230215010717.3612794-1-rananta@google.com>
Message-ID: <20230215010717.3612794-13-rananta@google.com>
Subject: [REPOST PATCH 12/16] selftests: KVM: aarch64: Test PMU overflow/IRQ functionality
From: Raghavendra Rao Ananta
To: Oliver Upton, Reiji Watanabe, Marc Zyngier, Ricardo Koller, James Morse, Suzuki K Poulose
Cc: Paolo Bonzini, Jing Zhang, Colton Lewis, Raghavendra Rao Ananta, linux-arm-kernel@lists.infradead.org, kvmarm@lists.linux.dev, linux-kernel@vger.kernel.org, kvm@vger.kernel.org

Extend the vCPU migration test to also validate the vPMU's functionality when set up for overflow conditions.
Signed-off-by: Raghavendra Rao Ananta
---
 .../testing/selftests/kvm/aarch64/vpmu_test.c | 223 ++++++++++++++++--
 1 file changed, 198 insertions(+), 25 deletions(-)

diff --git a/tools/testing/selftests/kvm/aarch64/vpmu_test.c b/tools/testing/selftests/kvm/aarch64/vpmu_test.c
index 0c9d801f4e602..066dc17fa3906 100644
--- a/tools/testing/selftests/kvm/aarch64/vpmu_test.c
+++ b/tools/testing/selftests/kvm/aarch64/vpmu_test.c
@@ -21,7 +21,9 @@
  *
  * 4. Since the PMU registers are per-cpu, stress KVM by frequently
  * migrating the guest vCPU to random pCPUs in the system, and check
- * if the vPMU is still behaving as expected.
+ * if the vPMU is still behaving as expected. The sub-tests include
+ * testing basic functionalities such as basic counters behavior,
+ * overflow, and overflow interrupts.
  *
  * Copyright (c) 2022 Google LLC.
  *
@@ -41,13 +43,27 @@
 #include
 #include "delay.h"
+#include "gic.h"
+#include "spinlock.h"
 
 /* The max number of the PMU event counters (excluding the cycle counter) */
 #define ARMV8_PMU_MAX_GENERAL_COUNTERS (ARMV8_PMU_MAX_COUNTERS - 1)
 
+/* The cycle counter bit position that's common among the PMU registers */
+#define ARMV8_PMU_CYCLE_COUNTER_IDX 31
+
 /* The max number of event numbers that's supported */
 #define ARMV8_PMU_MAX_EVENTS 64
 
+#define PMU_IRQ 23
+
+#define COUNT_TO_OVERFLOW 0xFULL
+#define PRE_OVERFLOW_32 (GENMASK(31, 0) - COUNT_TO_OVERFLOW + 1)
+#define PRE_OVERFLOW_64 (GENMASK(63, 0) - COUNT_TO_OVERFLOW + 1)
+
+#define GICD_BASE_GPA 0x8000000ULL
+#define GICR_BASE_GPA 0x80A0000ULL
+
 #define msecs_to_usecs(msec) ((msec) * 1000LL)
 
 /*
@@ -162,6 +178,17 @@ static inline void write_sel_evtyper(int sel, unsigned long val)
 	isb();
 }
 
+static inline void write_pmovsclr(unsigned long val)
+{
+	write_sysreg(val, pmovsclr_el0);
+	isb();
+}
+
+static unsigned long read_pmovsclr(void)
+{
+	return read_sysreg(pmovsclr_el0);
+}
+
 static inline void enable_counter(int idx)
 {
 	uint64_t v = read_sysreg(pmcntenset_el0);
@@ -178,11 +205,33 @@ static inline void disable_counter(int idx)
 	isb();
 }
 
+static inline void enable_irq(int idx)
+{
+	/* PMINTENSET_EL1 is write-1-to-set; untouched bits are unaffected */
+	write_sysreg(BIT(idx), pmintenset_el1);
+	isb();
+}
+
+static inline void disable_irq(int idx)
+{
+	/* PMINTENCLR_EL1 is write-1-to-clear; untouched bits are unaffected */
+	write_sysreg(BIT(idx), pmintenclr_el1);
+	isb();
+}
+
 static inline uint64_t read_cycle_counter(void)
 {
 	return read_sysreg(pmccntr_el0);
 }
 
+static inline void write_cycle_counter(uint64_t v)
+{
+	write_sysreg(v, pmccntr_el0);
+	isb();
+}
+
 static inline void reset_cycle_counter(void)
 {
 	uint64_t v = read_sysreg(pmcr_el0);
@@ -289,6 +338,15 @@ struct guest_data {
 
 static struct guest_data guest_data;
 
+/* Data to communicate among guest threads */
+struct guest_irq_data {
+	uint32_t pmc_idx_bmap;
+	uint32_t irq_received_bmap;
+	struct spinlock lock;
+};
+
+static struct guest_irq_data guest_irq_data;
+
 #define VCPU_MIGRATIONS_TEST_ITERS_DEF 1000
 #define VCPU_MIGRATIONS_TEST_MIGRATION_FREQ_MS 2
 
@@ -322,6 +380,79 @@ static void guest_sync_handler(struct ex_regs *regs)
 	expected_ec = INVALID_EC;
 }
 
+static void guest_validate_irq(int pmc_idx, uint32_t pmovsclr, uint32_t pmc_idx_bmap)
+{
+	/*
+	 * Fail if there's an interrupt from unexpected PMCs.
+	 * All the expected events' IRQs may not arrive at the same time.
+	 * Hence, check if the interrupt is valid only if it's expected.
+	 */
+	if (pmovsclr & BIT(pmc_idx)) {
+		GUEST_ASSERT_3(pmc_idx_bmap & BIT(pmc_idx), pmc_idx, pmovsclr, pmc_idx_bmap);
+		write_pmovsclr(BIT(pmc_idx));
+	}
+}
+
+static void guest_irq_handler(struct ex_regs *regs)
+{
+	uint32_t pmc_idx_bmap;
+	uint64_t i, pmcr_n = get_pmcr_n();
+	uint32_t pmovsclr = read_pmovsclr();
+	unsigned int intid = gic_get_and_ack_irq();
+
+	/* No other IRQ apart from the PMU IRQ is expected */
+	GUEST_ASSERT_1(intid == PMU_IRQ, intid);
+
+	spin_lock(&guest_irq_data.lock);
+	pmc_idx_bmap = READ_ONCE(guest_irq_data.pmc_idx_bmap);
+
+	for (i = 0; i < pmcr_n; i++)
+		guest_validate_irq(i, pmovsclr, pmc_idx_bmap);
+	guest_validate_irq(ARMV8_PMU_CYCLE_COUNTER_IDX, pmovsclr, pmc_idx_bmap);
+
+	/* Mark the IRQ as received for the corresponding PMCs */
+	WRITE_ONCE(guest_irq_data.irq_received_bmap, pmovsclr);
+	spin_unlock(&guest_irq_data.lock);
+
+	gic_set_eoi(intid);
+}
+
+static int pmu_irq_received(int pmc_idx)
+{
+	bool irq_received;
+
+	spin_lock(&guest_irq_data.lock);
+	irq_received = READ_ONCE(guest_irq_data.irq_received_bmap) & BIT(pmc_idx);
+	WRITE_ONCE(guest_irq_data.irq_received_bmap, guest_irq_data.pmc_idx_bmap & ~BIT(pmc_idx));
+	spin_unlock(&guest_irq_data.lock);
+
+	return irq_received;
+}
+
+static void pmu_irq_init(int pmc_idx)
+{
+	write_pmovsclr(BIT(pmc_idx));
+
+	spin_lock(&guest_irq_data.lock);
+	WRITE_ONCE(guest_irq_data.irq_received_bmap, guest_irq_data.pmc_idx_bmap & ~BIT(pmc_idx));
+	WRITE_ONCE(guest_irq_data.pmc_idx_bmap, guest_irq_data.pmc_idx_bmap | BIT(pmc_idx));
+	spin_unlock(&guest_irq_data.lock);
+
+	enable_irq(pmc_idx);
+}
+
+static void pmu_irq_exit(int pmc_idx)
+{
+	write_pmovsclr(BIT(pmc_idx));
+
+	spin_lock(&guest_irq_data.lock);
+	WRITE_ONCE(guest_irq_data.irq_received_bmap, guest_irq_data.pmc_idx_bmap & ~BIT(pmc_idx));
+	WRITE_ONCE(guest_irq_data.pmc_idx_bmap, guest_irq_data.pmc_idx_bmap & ~BIT(pmc_idx));
+	spin_unlock(&guest_irq_data.lock);
+
+	disable_irq(pmc_idx);
+}
+
 /*
  * Run the given operation that should trigger an exception with the
  * given exception class. The exception handler (guest_sync_handler)
@@ -420,12 +551,20 @@ static void execute_precise_instrs(int num, uint32_t pmcr)
 	precise_instrs_loop(loop, pmcr);
 }
 
-static void test_instructions_count(int pmc_idx, bool expect_count)
+static void test_instructions_count(int pmc_idx, bool expect_count, bool test_overflow)
 {
 	int i;
 	struct pmc_accessor *acc;
-	uint64_t cnt;
-	int instrs_count = 100;
+	uint64_t cntr_val = 0;
+	int instrs_count = 500;
+
+	if (test_overflow) {
+		/* Overflow scenarios can only be tested when a count is expected */
+		GUEST_ASSERT_1(expect_count, pmc_idx);
+
+		cntr_val = PRE_OVERFLOW_32;
+		pmu_irq_init(pmc_idx);
+	}
 
 	enable_counter(pmc_idx);
 
@@ -433,41 +572,68 @@ static void test_instructions_count(int pmc_idx, bool expect_count)
 	for (i = 0; i < ARRAY_SIZE(pmc_accessors); i++) {
 		acc = &pmc_accessors[i];
 
-		pmu_disable_reset();
-
+		acc->write_cntr(pmc_idx, cntr_val);
 		acc->write_typer(pmc_idx, ARMV8_PMUV3_PERFCTR_INST_RETIRED);
 
-		/* Enable the PMU and execute precisely number of instructions as a workload */
-		execute_precise_instrs(instrs_count, read_sysreg(pmcr_el0) | ARMV8_PMU_PMCR_E);
+		/*
+		 * Enable the PMU and execute a precise number of instructions as a workload.
+		 * Since execute_precise_instrs() disables the PMU at the end, 'instrs_count'
+		 * should have enough instructions to raise an IRQ.
+		 */
+		execute_precise_instrs(instrs_count, ARMV8_PMU_PMCR_E);
 
-		/* If a count is expected, the counter should be increased by 'instrs_count' */
-		cnt = acc->read_cntr(pmc_idx);
-		GUEST_ASSERT_4(expect_count == (cnt == instrs_count),
-				i, expect_count, cnt, instrs_count);
+		/*
+		 * If an overflow is expected, only check the overflow flag.
+		 * As the overflow interrupt is enabled, the interrupt would add additional
+		 * instructions and mess up the precise instruction count. Hence, measure
+		 * the instruction count only when the test is not set up for an overflow.
+		 */
+		if (test_overflow) {
+			GUEST_ASSERT_2(pmu_irq_received(pmc_idx), pmc_idx, i);
+		} else {
+			uint64_t cnt = acc->read_cntr(pmc_idx);
+
+			GUEST_ASSERT_4(expect_count == (cnt == instrs_count),
+					pmc_idx, i, cnt, expect_count);
+		}
 	}
 
-	disable_counter(pmc_idx);
+	if (test_overflow)
+		pmu_irq_exit(pmc_idx);
 }
 
-static void test_cycles_count(bool expect_count)
+static void test_cycles_count(bool expect_count, bool test_overflow)
 {
 	uint64_t cnt;
 
-	pmu_enable();
-	reset_cycle_counter();
+	if (test_overflow) {
+		/* Overflow scenarios can only be tested when a count is expected */
+		GUEST_ASSERT(expect_count);
+
+		write_cycle_counter(PRE_OVERFLOW_64);
+		pmu_irq_init(ARMV8_PMU_CYCLE_COUNTER_IDX);
+	} else {
+		reset_cycle_counter();
+	}
 
 	/* Count cycles in EL0 and EL1 */
 	write_pmccfiltr(0);
 	enable_cycle_counter();
 
+	/* Enable the PMU and execute a precise number of instructions as a workload */
+	execute_precise_instrs(500, read_sysreg(pmcr_el0) | ARMV8_PMU_PMCR_E);
 	cnt = read_cycle_counter();
 
 	/*
 	 * If a count is expected by the test, the cycle counter should be increased by
-	 * at least 1, as there is at least one instruction between enabling the
+	 * at least 1, as there are a number of instructions between enabling the
 	 * counter and reading the counter.
 	 */
 	GUEST_ASSERT_2(expect_count == (cnt > 0), cnt, expect_count);
 
+	if (test_overflow) {
+		GUEST_ASSERT_2(pmu_irq_received(ARMV8_PMU_CYCLE_COUNTER_IDX), cnt, expect_count);
+		pmu_irq_exit(ARMV8_PMU_CYCLE_COUNTER_IDX);
+	}
 
 	disable_cycle_counter();
 	pmu_disable_reset();
@@ -477,19 +643,28 @@ static void test_event_count(uint64_t event, int pmc_idx, bool expect_count)
 {
 	switch (event) {
 	case ARMV8_PMUV3_PERFCTR_INST_RETIRED:
-		test_instructions_count(pmc_idx, expect_count);
+		test_instructions_count(pmc_idx, expect_count, false);
 		break;
 	case ARMV8_PMUV3_PERFCTR_CPU_CYCLES:
-		test_cycles_count(expect_count);
+		test_cycles_count(expect_count, false);
 		break;
 	}
 }
 
 static void test_basic_pmu_functionality(void)
 {
+	local_irq_disable();
+	gic_init(GIC_V3, 1, (void *)GICD_BASE_GPA, (void *)GICR_BASE_GPA);
+	gic_irq_enable(PMU_IRQ);
+	local_irq_enable();
+
 	/* Test events on generic and cycle counters */
-	test_instructions_count(0, true);
-	test_cycles_count(true);
+	test_instructions_count(0, true, false);
+	test_cycles_count(true, false);
+
+	/* Test overflow with interrupts on generic and cycle counters */
+	test_instructions_count(0, true, true);
+	test_cycles_count(true, true);
 }
 
 /*
@@ -813,9 +988,6 @@ static void guest_code(void)
 	GUEST_DONE();
 }
 
-#define GICD_BASE_GPA 0x8000000ULL
-#define GICR_BASE_GPA 0x80A0000ULL
-
 static unsigned long *
 set_event_filters(struct kvm_vcpu *vcpu, struct kvm_pmu_event_filter *pmu_event_filters)
 {
@@ -866,7 +1038,7 @@ create_vpmu_vm(void *guest_code, struct kvm_pmu_event_filter *pmu_event_filters)
 	struct kvm_vcpu *vcpu;
 	struct kvm_vcpu_init init;
 	uint8_t pmuver, ec;
-	uint64_t dfr0, irq = 23;
+	uint64_t dfr0, irq = PMU_IRQ;
 	struct vpmu_vm *vpmu_vm;
 	struct kvm_device_attr irq_attr = {
 		.group = KVM_ARM_VCPU_PMU_V3_CTRL,
@@ -883,6 +1055,7 @@ create_vpmu_vm(void *guest_code, struct kvm_pmu_event_filter *pmu_event_filters)
 
 	vpmu_vm->vm = vm = vm_create(1);
 	vm_init_descriptor_tables(vm);
+	vm_install_exception_handler(vm, VECTOR_IRQ_CURRENT, guest_irq_handler);
 
 	/* Catch exceptions for easier debugging */
 	for (ec = 0; ec < ESR_EC_NUM; ec++) {
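A note on the PRE_OVERFLOW_* seeding this patch introduces: PRE_OVERFLOW_32 works out to 0xfffffff1, so after COUNT_TO_OVERFLOW (0xf) increments the 32-bit counter wraps to zero, which is what sets that counter's PMOVS bit and raises the interrupt the guest handler checks for. The arithmetic can be modeled standalone (a host-side sketch; the pmovs variable is only a stand-in for PMOVSSET, not a register access):

	#include <assert.h>
	#include <stdint.h>

	#define COUNT_TO_OVERFLOW	0xfULL
	#define PRE_OVERFLOW_32		((uint32_t)(UINT32_MAX - COUNT_TO_OVERFLOW + 1))

	int main(void)
	{
		uint32_t cnt = PRE_OVERFLOW_32;	/* 0xfffffff1 */
		uint32_t pmovs = 0;		/* stand-in for PMOVSSET, counter 0 */
		int i;

		for (i = 0; i < (int)COUNT_TO_OVERFLOW; i++) {
			uint32_t prev = cnt++;

			if (prev > cnt)			/* wrapped past 2^32 - 1 */
				pmovs |= 1u << 0;	/* the PMU would raise its IRQ here */
		}

		assert(cnt == 0 && (pmovs & 1));	/* wrapped exactly once */
		return 0;
	}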
From patchwork Wed Feb 15 01:07:14 2023
X-Patchwork-Submitter: Raghavendra Rao Ananta
X-Patchwork-Id: 13141150
Date: Wed, 15 Feb 2023 01:07:14 +0000
In-Reply-To: <20230215010717.3612794-1-rananta@google.com>
References: <20230215010717.3612794-1-rananta@google.com>
Message-ID: <20230215010717.3612794-14-rananta@google.com>
Subject: [REPOST PATCH 13/16] selftests: KVM: aarch64: Test chained events for PMU
From: Raghavendra Rao Ananta
To: Oliver Upton, Reiji Watanabe, Marc Zyngier, Ricardo Koller, James Morse, Suzuki K Poulose
Cc: Paolo Bonzini, Jing Zhang, Colton Lewis, Raghavendra Rao Ananta, linux-arm-kernel@lists.infradead.org, kvmarm@lists.linux.dev, linux-kernel@vger.kernel.org, kvm@vger.kernel.org

Extend the vPMU's vCPU migration test to validate chained events, and their overflow conditions.
Signed-off-by: Raghavendra Rao Ananta
---
 .../testing/selftests/kvm/aarch64/vpmu_test.c | 76 ++++++++++++++++++-
 1 file changed, 75 insertions(+), 1 deletion(-)

diff --git a/tools/testing/selftests/kvm/aarch64/vpmu_test.c b/tools/testing/selftests/kvm/aarch64/vpmu_test.c
index 066dc17fa3906..de725f4339ad5 100644
--- a/tools/testing/selftests/kvm/aarch64/vpmu_test.c
+++ b/tools/testing/selftests/kvm/aarch64/vpmu_test.c
@@ -23,7 +23,7 @@
  * migrating the guest vCPU to random pCPUs in the system, and check
  * if the vPMU is still behaving as expected. The sub-tests include
  * testing basic functionalities such as basic counters behavior,
- * overflow, and overflow interrupts.
+ * overflow, overflow interrupts, and chained events.
  *
  * Copyright (c) 2022 Google LLC.
  *
@@ -61,6 +61,8 @@
 #define PRE_OVERFLOW_32 (GENMASK(31, 0) - COUNT_TO_OVERFLOW + 1)
 #define PRE_OVERFLOW_64 (GENMASK(63, 0) - COUNT_TO_OVERFLOW + 1)
 
+#define ALL_SET_64 GENMASK(63, 0)
+
 #define GICD_BASE_GPA 0x8000000ULL
 #define GICR_BASE_GPA 0x80A0000ULL
 
@@ -639,6 +641,75 @@ static void test_cycles_count(bool expect_count, bool test_overflow)
 	pmu_disable_reset();
 }
 
+static void test_chained_count(int pmc_idx)
+{
+	int i, chained_pmc_idx;
+	struct pmc_accessor *acc;
+	uint64_t pmcr_n, cnt, cntr_val;
+
+	/* The test needs at least two PMCs */
+	pmcr_n = get_pmcr_n();
+	GUEST_ASSERT_1(pmcr_n >= 2, pmcr_n);
+
+	/*
+	 * The chained counter's idx is always chained with (pmc_idx + 1).
+	 * pmc_idx should be even as the chained event doesn't count on
+	 * odd numbered counters.
+	 */
+	GUEST_ASSERT_1(pmc_idx % 2 == 0, pmc_idx);
+
+	/*
+	 * The max counter idx that the chained counter can occupy is
+	 * (pmcr_n - 1), while the actual event sits on (pmcr_n - 2).
+	 */
+	chained_pmc_idx = pmc_idx + 1;
+	GUEST_ASSERT(chained_pmc_idx < pmcr_n);
+
+	enable_counter(chained_pmc_idx);
+	pmu_irq_init(chained_pmc_idx);
+
+	/* Configure the chained event using all the possible ways */
+	for (i = 0; i < ARRAY_SIZE(pmc_accessors); i++) {
+		acc = &pmc_accessors[i];
+
+		/* Test if the chained counter increments when the base event overflows */
+
+		cntr_val = 1;
+		acc->write_cntr(chained_pmc_idx, cntr_val);
+		acc->write_typer(chained_pmc_idx, ARMV8_PMUV3_PERFCTR_CHAIN);
+
+		/* Chain the counter with pmc_idx that's configured for an overflow */
+		test_instructions_count(pmc_idx, true, true);
+
+		/*
+		 * pmc_idx is also configured to run for all the ARRAY_SIZE(pmc_accessors)
+		 * combinations. Hence, chained_pmc_idx is expected to be
+		 * cntr_val + ARRAY_SIZE(pmc_accessors).
+		 */
+		cnt = acc->read_cntr(chained_pmc_idx);
+		GUEST_ASSERT_4(cnt == cntr_val + ARRAY_SIZE(pmc_accessors),
+				pmc_idx, i, cnt, cntr_val + ARRAY_SIZE(pmc_accessors));
+
+		/* Test for the overflow of the chained counter itself */
+
+		cntr_val = ALL_SET_64;
+		acc->write_cntr(chained_pmc_idx, cntr_val);
+
+		test_instructions_count(pmc_idx, true, true);
+
+		/*
+		 * At this point, an interrupt should've been fired for the chained
+		 * counter (which validates the overflow bit), and the counter should've
+		 * wrapped around to ARRAY_SIZE(pmc_accessors) - 1.
+		 */
+		cnt = acc->read_cntr(chained_pmc_idx);
+		GUEST_ASSERT_4(cnt == ARRAY_SIZE(pmc_accessors) - 1,
+				pmc_idx, i, cnt, ARRAY_SIZE(pmc_accessors));
+	}
+
+	pmu_irq_exit(chained_pmc_idx);
+}
+
 static void test_event_count(uint64_t event, int pmc_idx, bool expect_count)
 {
 	switch (event) {
@@ -665,6 +736,9 @@ static void test_basic_pmu_functionality(void)
 	/* Test overflow with interrupts on generic and cycle counters */
 	test_instructions_count(0, true, true);
 	test_cycles_count(true, true);
+
+	/* Test chained events */
+	test_chained_count(0);
 }
 
 /*
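One way to reason about the expected values in test_chained_count(): the CHAIN event makes the odd counter increment once per overflow of its even-numbered neighbor, so the pair effectively forms one 64-bit counter. A small host-side model of that behavior (a sketch; it simplifies to one low-half overflow per workload run, which is what the test arranges):

	#include <assert.h>
	#include <stdint.h>

	/* Model an {even, odd} counter pair chained via the CHAIN event */
	struct chained_pair {
		uint32_t low;	/* even counter: counts the programmed event */
		uint32_t high;	/* odd counter: bumped once per low-half overflow */
	};

	static void pair_add(struct chained_pair *p, uint64_t n)
	{
		uint64_t sum = (uint64_t)p->low + n;

		p->low = (uint32_t)sum;
		p->high += (uint32_t)(sum >> 32);	/* each carry is one CHAIN increment */
	}

	int main(void)
	{
		/* Seed like the test: low half about to overflow, chained counter at 1 */
		struct chained_pair p = { .low = 0xfffffff1, .high = 1 };

		pair_add(&p, 0xf);		/* one overflow of the low half */
		assert(p.high == 2);		/* chained counter moved by exactly 1 */

		/* Overflow the chained counter itself, as the ALL_SET_64 case does */
		p.high = UINT32_MAX;
		pair_add(&p, 0x100000000ULL);	/* force one more carry */
		assert(p.high == 0);		/* wraps, setting its own overflow bit */
		return 0;
	}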
From patchwork Wed Feb 15 01:07:15 2023
X-Patchwork-Submitter: Raghavendra Rao Ananta
X-Patchwork-Id: 13141151
Date: Wed, 15 Feb 2023 01:07:15 +0000
In-Reply-To: <20230215010717.3612794-1-rananta@google.com>
References: <20230215010717.3612794-1-rananta@google.com>
Message-ID: <20230215010717.3612794-15-rananta@google.com>
Subject: [REPOST PATCH 14/16] selftests: KVM: aarch64: Add PMU test to chain all the counters
From: Raghavendra Rao Ananta
To: Oliver Upton, Reiji Watanabe, Marc Zyngier, Ricardo Koller, James Morse, Suzuki K Poulose
Cc: Paolo Bonzini, Jing Zhang, Colton Lewis, Raghavendra Rao Ananta, linux-arm-kernel@lists.infradead.org, kvmarm@lists.linux.dev, linux-kernel@vger.kernel.org, kvm@vger.kernel.org

Extend the vCPU migration test to occupy all the vPMU counters, by configuring chained events on alternate counter-ids, chaining them with their corresponding predecessor counters, and verifying the extended behavior.

Signed-off-by: Raghavendra Rao Ananta
---
 .../testing/selftests/kvm/aarch64/vpmu_test.c | 60 +++++++++++++++++++
 1 file changed, 60 insertions(+)

diff --git a/tools/testing/selftests/kvm/aarch64/vpmu_test.c b/tools/testing/selftests/kvm/aarch64/vpmu_test.c
index de725f4339ad5..fd00acb9391c8 100644
--- a/tools/testing/selftests/kvm/aarch64/vpmu_test.c
+++ b/tools/testing/selftests/kvm/aarch64/vpmu_test.c
@@ -710,6 +710,63 @@ static void test_chained_count(int pmc_idx)
 	pmu_irq_exit(chained_pmc_idx);
 }
 
+static void test_chain_all_counters(void)
+{
+	int i;
+	uint64_t cnt, pmcr_n = get_pmcr_n();
+	struct pmc_accessor *acc = &pmc_accessors[0];
+
+	/*
+	 * Test the occupancy of all the event counters, by chaining the
+	 * alternate counters. The test assumes that the host hasn't
+	 * occupied any counters. Hence, if the test fails, it could be
+	 * because all the counters weren't available to the guest or
+	 * there's actually a bug in KVM.
+	 */
+
+	/*
+	 * Configure even numbered counters to count cpu-cycles, and chain
+	 * each of them with its odd numbered counter.
+	 */
+	for (i = 0; i < pmcr_n; i++) {
+		if (i % 2) {
+			acc->write_typer(i, ARMV8_PMUV3_PERFCTR_CHAIN);
+			acc->write_cntr(i, 1);
+		} else {
+			pmu_irq_init(i);
+			acc->write_cntr(i, PRE_OVERFLOW_32);
+			acc->write_typer(i, ARMV8_PMUV3_PERFCTR_CPU_CYCLES);
+		}
+		enable_counter(i);
+	}
+
+	/* Introduce some cycles */
+	execute_precise_instrs(500, ARMV8_PMU_PMCR_E);
+
+	/*
+	 * An overflow interrupt should've arrived for all the even numbered
+	 * counters but none for the odd numbered ones. The odd numbered ones
+	 * should've incremented exactly by 1.
+	 */
+	for (i = 0; i < pmcr_n; i++) {
+		if (i % 2) {
+			GUEST_ASSERT_1(!pmu_irq_received(i), i);
+
+			cnt = acc->read_cntr(i);
+			GUEST_ASSERT_2(cnt == 2, i, cnt);
+		} else {
+			GUEST_ASSERT_1(pmu_irq_received(i), i);
+		}
+	}
+
+	/* Cleanup the states */
+	for (i = 0; i < pmcr_n; i++) {
+		if (i % 2 == 0)
+			pmu_irq_exit(i);
+		disable_counter(i);
+	}
+}
+
 static void test_event_count(uint64_t event, int pmc_idx, bool expect_count)
 {
 	switch (event) {
@@ -739,6 +796,9 @@ static void test_basic_pmu_functionality(void)
 
 	/* Test chained events */
 	test_chained_count(0);
+
+	/* Test running chained events on all the implemented counters */
+	test_chain_all_counters();
 }
 
 /*
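The expected end state of test_chain_all_counters() per counter index can be summarized with a few lines of host code (a sketch; pmcr_n stands for the PMCR_EL0.N value the guest reads, and the expectations mirror the test's seeding above):

	#include <stdio.h>

	/* Expected post-workload state for each counter in test_chain_all_counters() */
	static void print_expected_state(int pmcr_n)
	{
		int i;

		for (i = 0; i < pmcr_n; i++) {
			if (i % 2)	/* odd: CHAIN event, seeded to 1, one carry */
				printf("counter %d: value 2, no overflow IRQ\n", i);
			else		/* even: cpu-cycles, seeded to PRE_OVERFLOW_32 */
				printf("counter %d: wrapped, overflow IRQ expected\n", i);
		}
	}

	int main(void)
	{
		print_expected_state(6);	/* e.g. an implementation with 6 counters */
		return 0;
	}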
From patchwork Wed Feb 15 01:07:16 2023
X-Patchwork-Submitter: Raghavendra Rao Ananta
X-Patchwork-Id: 13141223
Date: Wed, 15 Feb 2023 01:07:16 +0000
In-Reply-To: <20230215010717.3612794-1-rananta@google.com>
References: <20230215010717.3612794-1-rananta@google.com>
Message-ID: <20230215010717.3612794-16-rananta@google.com>
Subject: [REPOST PATCH 15/16] selftests: KVM: aarch64: Add multi-vCPU support for vPMU VM creation
From: Raghavendra Rao Ananta
To: Oliver Upton, Reiji Watanabe, Marc Zyngier, Ricardo Koller, James Morse, Suzuki K Poulose
Cc: Paolo Bonzini, Jing Zhang, Colton Lewis, Raghavendra Rao Ananta, linux-arm-kernel@lists.infradead.org, kvmarm@lists.linux.dev, linux-kernel@vger.kernel.org, kvm@vger.kernel.org

The PMU test's create_vpmu_vm() currently creates a VM with only
Signed-off-by: Raghavendra Rao Ananta
---
 .../testing/selftests/kvm/aarch64/vpmu_test.c | 82 +++++++++++--------
 1 file changed, 49 insertions(+), 33 deletions(-)

diff --git a/tools/testing/selftests/kvm/aarch64/vpmu_test.c b/tools/testing/selftests/kvm/aarch64/vpmu_test.c
index fd00acb9391c8..239fc7e06b3b9 100644
--- a/tools/testing/selftests/kvm/aarch64/vpmu_test.c
+++ b/tools/testing/selftests/kvm/aarch64/vpmu_test.c
@@ -320,7 +320,8 @@ uint64_t op_end_addr;

 struct vpmu_vm {
 	struct kvm_vm *vm;
-	struct kvm_vcpu *vcpu;
+	int nr_vcpus;
+	struct kvm_vcpu **vcpus;
 	int gic_fd;
 	unsigned long *pmu_filter;
 };
@@ -1164,10 +1165,11 @@ set_event_filters(struct kvm_vcpu *vcpu, struct kvm_pmu_event_filter *pmu_event_
 	return pmu_filter;
 }

-/* Create a VM that has one vCPU with PMUv3 configured. */
+/* Create a VM with PMUv3 configured. */
 static struct vpmu_vm *
-create_vpmu_vm(void *guest_code, struct kvm_pmu_event_filter *pmu_event_filters)
+create_vpmu_vm(int nr_vcpus, void *guest_code, struct kvm_pmu_event_filter *pmu_event_filters)
 {
+	int i;
 	struct kvm_vm *vm;
 	struct kvm_vcpu *vcpu;
 	struct kvm_vcpu_init init;
@@ -1187,7 +1189,11 @@ create_vpmu_vm(void *guest_code, struct kvm_pmu_event_filter *pmu_event_filters)
 	vpmu_vm = calloc(1, sizeof(*vpmu_vm));
 	TEST_ASSERT(vpmu_vm, "Failed to allocate vpmu_vm");

-	vpmu_vm->vm = vm = vm_create(1);
+	vpmu_vm->vcpus = calloc(nr_vcpus, sizeof(struct kvm_vcpu *));
+	TEST_ASSERT(vpmu_vm->vcpus, "Failed to allocate kvm_vcpus");
+	vpmu_vm->nr_vcpus = nr_vcpus;
+
+	vpmu_vm->vm = vm = vm_create(nr_vcpus);
 	vm_init_descriptor_tables(vm);
 	vm_install_exception_handler(vm, VECTOR_IRQ_CURRENT, guest_irq_handler);
@@ -1197,26 +1203,35 @@ create_vpmu_vm(void *guest_code, struct kvm_pmu_event_filter *pmu_event_filters)
 		guest_sync_handler);
 	}

-	/* Create vCPU with PMUv3 */
+	/* Create vCPUs with PMUv3 */
 	vm_ioctl(vm, KVM_ARM_PREFERRED_TARGET, &init);
 	init.features[0] |= (1 << KVM_ARM_VCPU_PMU_V3);
-	vpmu_vm->vcpu = vcpu = aarch64_vcpu_add(vm, 0, &init, guest_code);
-	vcpu_init_descriptor_tables(vcpu);
-	vpmu_vm->gic_fd = vgic_v3_setup(vm, 1, 64, GICD_BASE_GPA, GICR_BASE_GPA);

-	/* Make sure that PMUv3 support is indicated in the ID register */
-	vcpu_get_reg(vcpu, KVM_ARM64_SYS_REG(SYS_ID_AA64DFR0_EL1), &dfr0);
-	pmuver = FIELD_GET(ARM64_FEATURE_MASK(ID_AA64DFR0_PMUVER), dfr0);
-	TEST_ASSERT(pmuver != ID_AA64DFR0_PMUVER_IMP_DEF &&
-		    pmuver >= ID_AA64DFR0_PMUVER_8_0,
-		    "Unexpected PMUVER (0x%x) on the vCPU with PMUv3", pmuver);
+	for (i = 0; i < nr_vcpus; i++) {
+		vpmu_vm->vcpus[i] = vcpu = aarch64_vcpu_add(vm, i, &init, guest_code);
+		vcpu_init_descriptor_tables(vcpu);
+	}

-	/* Initialize vPMU */
-	if (pmu_event_filters)
-		vpmu_vm->pmu_filter = set_event_filters(vcpu, pmu_event_filters);
+	/* vGIC setup is expected after the vCPUs are created but before the vPMU is initialized */
+	vpmu_vm->gic_fd = vgic_v3_setup(vm, nr_vcpus, 64, GICD_BASE_GPA, GICR_BASE_GPA);

-	vcpu_ioctl(vcpu, KVM_SET_DEVICE_ATTR, &irq_attr);
-	vcpu_ioctl(vcpu, KVM_SET_DEVICE_ATTR, &init_attr);
+	for (i = 0; i < nr_vcpus; i++) {
+		vcpu = vpmu_vm->vcpus[i];
+
+		/* Make sure that PMUv3 support is indicated in the ID register */
+		vcpu_get_reg(vcpu, KVM_ARM64_SYS_REG(SYS_ID_AA64DFR0_EL1), &dfr0);
+		pmuver = FIELD_GET(ARM64_FEATURE_MASK(ID_AA64DFR0_PMUVER), dfr0);
+		TEST_ASSERT(pmuver != ID_AA64DFR0_PMUVER_IMP_DEF &&
+			    pmuver >= ID_AA64DFR0_PMUVER_8_0,
+			    "Unexpected PMUVER (0x%x) on the vCPU %d with PMUv3", pmuver, i);
+
+		/* Initialize vPMU */
+		if (pmu_event_filters)
+			vpmu_vm->pmu_filter = set_event_filters(vcpu, pmu_event_filters);
+
+		vcpu_ioctl(vcpu, KVM_SET_DEVICE_ATTR, &irq_attr);
+		vcpu_ioctl(vcpu, KVM_SET_DEVICE_ATTR, &init_attr);
+	}

 	return vpmu_vm;
 }
@@ -1227,6 +1242,7 @@ static void destroy_vpmu_vm(struct vpmu_vm *vpmu_vm)
 		bitmap_free(vpmu_vm->pmu_filter);
 	close(vpmu_vm->gic_fd);
 	kvm_vm_free(vpmu_vm->vm);
+	free(vpmu_vm->vcpus);
 	free(vpmu_vm);
 }
@@ -1264,8 +1280,8 @@ static void run_counter_access_test(uint64_t pmcr_n)
 	guest_data.expected_pmcr_n = pmcr_n;

 	pr_debug("Test with pmcr_n %lu\n", pmcr_n);
-	vpmu_vm = create_vpmu_vm(guest_code, NULL);
-	vcpu = vpmu_vm->vcpu;
+	vpmu_vm = create_vpmu_vm(1, guest_code, NULL);
+	vcpu = vpmu_vm->vcpus[0];

 	/* Save the initial sp to restore them later to run the guest again */
 	vcpu_get_reg(vcpu, ARM64_CORE_REG(sp_el1), &sp);
@@ -1309,8 +1325,8 @@ static void run_counter_access_error_test(uint64_t pmcr_n)
 	guest_data.expected_pmcr_n = pmcr_n;

 	pr_debug("Error test with pmcr_n %lu (larger than the host)\n", pmcr_n);
-	vpmu_vm = create_vpmu_vm(guest_code, NULL);
-	vcpu = vpmu_vm->vcpu;
+	vpmu_vm = create_vpmu_vm(1, guest_code, NULL);
+	vcpu = vpmu_vm->vcpus[0];

 	/* Update the PMCR_EL0.N with @pmcr_n */
 	vcpu_get_reg(vcpu, KVM_ARM64_SYS_REG(SYS_PMCR_EL0), &pmcr_orig);
@@ -1396,8 +1412,8 @@ static void run_kvm_event_filter_error_tests(void)
 	};

 	/* KVM should not allow configuring filters after the PMU is initialized */
-	vpmu_vm = create_vpmu_vm(guest_code, NULL);
-	ret = __vcpu_ioctl(vpmu_vm->vcpu, KVM_SET_DEVICE_ATTR, &filter_attr);
+	vpmu_vm = create_vpmu_vm(1, guest_code, NULL);
+	ret = __vcpu_ioctl(vpmu_vm->vcpus[0], KVM_SET_DEVICE_ATTR, &filter_attr);
 	TEST_ASSERT(ret == -1 && errno == EBUSY,
 		    "Failed to disallow setting an event filter after PMU init");
 	destroy_vpmu_vm(vpmu_vm);
@@ -1427,14 +1443,14 @@ static void run_kvm_event_filter_test(void)

 	/* Test for valid filter configurations */
 	for (i = 0; i < ARRAY_SIZE(pmu_event_filters); i++) {
-		vpmu_vm = create_vpmu_vm(guest_code, pmu_event_filters[i]);
+		vpmu_vm = create_vpmu_vm(1, guest_code, pmu_event_filters[i]);
 		vm = vpmu_vm->vm;

 		pmu_filter_gva = vm_vaddr_alloc(vm, pmu_filter_bmap_sz, KVM_UTIL_MIN_VADDR);
 		memcpy(addr_gva2hva(vm, pmu_filter_gva), vpmu_vm->pmu_filter, pmu_filter_bmap_sz);
 		guest_data.pmu_filter = (unsigned long *) pmu_filter_gva;

-		run_vcpu(vpmu_vm->vcpu);
+		run_vcpu(vpmu_vm->vcpus[0]);

 		destroy_vpmu_vm(vpmu_vm);
 	}
@@ -1449,8 +1465,8 @@ static void run_kvm_evtype_filter_test(void)

 	guest_data.test_stage = TEST_STAGE_KVM_EVTYPE_FILTER;

-	vpmu_vm = create_vpmu_vm(guest_code, NULL);
-	run_vcpu(vpmu_vm->vcpu);
+	vpmu_vm = create_vpmu_vm(1, guest_code, NULL);
+	run_vcpu(vpmu_vm->vcpus[0]);
 	destroy_vpmu_vm(vpmu_vm);
 }
@@ -1465,7 +1481,7 @@ static void *run_vcpus_migrate_test_func(void *arg)
 	struct vcpu_migrate_data *migrate_data = arg;
 	struct vpmu_vm *vpmu_vm = migrate_data->vpmu_vm;

-	run_vcpu(vpmu_vm->vcpu);
+	run_vcpu(vpmu_vm->vcpus[0]);
 	migrate_data->vcpu_done = true;

 	return NULL;
@@ -1535,7 +1551,7 @@ static void run_vcpu_migration_test(uint64_t pmcr_n)
 	guest_data.test_stage = TEST_STAGE_VCPU_MIGRATION;
 	guest_data.expected_pmcr_n = pmcr_n;

-	migrate_data.vpmu_vm = vpmu_vm = create_vpmu_vm(guest_code, NULL);
+	migrate_data.vpmu_vm = vpmu_vm = create_vpmu_vm(1, guest_code, NULL);

 	/* Initialize random number generation for migrating vCPUs to random pCPUs */
 	srand(time(NULL));
@@ -1571,8 +1587,8 @@ static uint64_t get_pmcr_n_limit(void)
 	struct vpmu_vm *vpmu_vm;
 	uint64_t pmcr;

-	vpmu_vm = create_vpmu_vm(guest_code, NULL);
-	vcpu_get_reg(vpmu_vm->vcpu, KVM_ARM64_SYS_REG(SYS_PMCR_EL0), &pmcr);
+	vpmu_vm = create_vpmu_vm(1, guest_code, NULL);
+	vcpu_get_reg(vpmu_vm->vcpus[0], KVM_ARM64_SYS_REG(SYS_PMCR_EL0), &pmcr);
 	destroy_vpmu_vm(vpmu_vm);

 	return FIELD_GET(ARMV8_PMU_PMCR_N, pmcr);
From patchwork Wed Feb 15 01:07:17 2023
X-Patchwork-Submitter: Raghavendra Rao Ananta
X-Patchwork-Id: 13141222
Date: Wed, 15 Feb 2023 01:07:17 +0000
In-Reply-To: <20230215010717.3612794-1-rananta@google.com>
References: <20230215010717.3612794-1-rananta@google.com>
Message-ID: <20230215010717.3612794-17-rananta@google.com>
Subject: [REPOST PATCH 16/16] selftests: KVM: aarch64: Extend the vCPU migration test to multi-vCPUs
From: Raghavendra Rao Ananta
To: Oliver Upton, Reiji Watanabe, Marc Zyngier, Ricardo Koller, James Morse, Suzuki K Poulose
Cc: Paolo Bonzini, Jing Zhang, Colton Lewis, Raghavendra Rao Anata, linux-arm-kernel@lists.infradead.org, kvmarm@lists.linux.dev, linux-kernel@vger.kernel.org, kvm@vger.kernel.org

To test KVM's handling of multiple vCPU contexts that are frequently
migrated across random pCPUs in the system, extend the test to create
a multi-vCPU VM and validate the behavior.
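With the new '-n' option wired up at the end of this patch, the stress
can be scaled; a hypothetical invocation (assuming the built test binary
keeps the source file's name) would be:

	./vpmu_test -n 4 -i 1000 -m 2

which runs 1000 iterations of the PMU sub-tests on each of four vCPU
threads, while a scheduler thread force-migrates every still-running
vCPU thread to a random pCPU every 2 ms.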
Signed-off-by: Raghavendra Rao Ananta
---
 .../testing/selftests/kvm/aarch64/vpmu_test.c | 166 ++++++++++++------
 1 file changed, 114 insertions(+), 52 deletions(-)

diff --git a/tools/testing/selftests/kvm/aarch64/vpmu_test.c b/tools/testing/selftests/kvm/aarch64/vpmu_test.c
index 239fc7e06b3b9..c9d8e5f9a22ab 100644
--- a/tools/testing/selftests/kvm/aarch64/vpmu_test.c
+++ b/tools/testing/selftests/kvm/aarch64/vpmu_test.c
@@ -19,11 +19,12 @@
  * higher exception levels (EL2, EL3). Verify this functionality by
  * configuring and trying to count the events for EL2 in the guest.
  *
- * 4. Since the PMU registers are per-cpu, stress KVM by frequently
- *    migrating the guest vCPU to random pCPUs in the system, and check
- *    if the vPMU is still behaving as expected. The sub-tests include
- *    testing basic functionalities such as basic counters behavior,
- *    overflow, overflow interrupts, and chained events.
+ * 4. Since the PMU registers are per-cpu, stress KVM by creating a
+ *    multi-vCPU VM, then frequently migrate the guest vCPUs to random
+ *    pCPUs in the system, and check if the vPMU is still behaving as
+ *    expected. The sub-tests include testing basic functionalities such
+ *    as basic counters behavior, overflow, overflow interrupts, and
+ *    chained events.
  *
  * Copyright (c) 2022 Google LLC.
  *
@@ -348,19 +349,22 @@ struct guest_irq_data {
 	struct spinlock lock;
 };

-static struct guest_irq_data guest_irq_data;
+static struct guest_irq_data guest_irq_data[KVM_MAX_VCPUS];

 #define VCPU_MIGRATIONS_TEST_ITERS_DEF		1000
 #define VCPU_MIGRATIONS_TEST_MIGRATION_FREQ_MS	2
+#define VCPU_MIGRATIONS_TEST_NR_VCPUS_DEF	2

 struct test_args {
 	int vcpu_migration_test_iter;
 	int vcpu_migration_test_migrate_freq_ms;
+	int vcpu_migration_test_nr_vcpus;
 };

 static struct test_args test_args = {
 	.vcpu_migration_test_iter = VCPU_MIGRATIONS_TEST_ITERS_DEF,
 	.vcpu_migration_test_migrate_freq_ms = VCPU_MIGRATIONS_TEST_MIGRATION_FREQ_MS,
+	.vcpu_migration_test_nr_vcpus = VCPU_MIGRATIONS_TEST_NR_VCPUS_DEF,
 };

 static void guest_sync_handler(struct ex_regs *regs)
@@ -396,26 +400,34 @@ static void guest_validate_irq(int pmc_idx, uint32_t pmovsclr, uint32_t pmc_idx_
 	}
 }

+static struct guest_irq_data *get_irq_data(void)
+{
+	uint32_t cpu = guest_get_vcpuid();
+
+	return &guest_irq_data[cpu];
+}
+
 static void guest_irq_handler(struct ex_regs *regs)
 {
 	uint32_t pmc_idx_bmap;
 	uint64_t i, pmcr_n = get_pmcr_n();
 	uint32_t pmovsclr = read_pmovsclr();
 	unsigned int intid = gic_get_and_ack_irq();
+	struct guest_irq_data *irq_data = get_irq_data();

 	/* No other IRQ apart from the PMU IRQ is expected */
 	GUEST_ASSERT_1(intid == PMU_IRQ, intid);

-	spin_lock(&guest_irq_data.lock);
-	pmc_idx_bmap = READ_ONCE(guest_irq_data.pmc_idx_bmap);
+	spin_lock(&irq_data->lock);
+	pmc_idx_bmap = READ_ONCE(irq_data->pmc_idx_bmap);

 	for (i = 0; i < pmcr_n; i++)
 		guest_validate_irq(i, pmovsclr, pmc_idx_bmap);
 	guest_validate_irq(ARMV8_PMU_CYCLE_COUNTER_IDX, pmovsclr, pmc_idx_bmap);

 	/* Mark IRQ as received for the corresponding PMCs */
-	WRITE_ONCE(guest_irq_data.irq_received_bmap, pmovsclr);
-	spin_unlock(&guest_irq_data.lock);
+	WRITE_ONCE(irq_data->irq_received_bmap, pmovsclr);
+	spin_unlock(&irq_data->lock);

 	gic_set_eoi(intid);
 }
@@ -423,35 +435,40 @@ static void guest_irq_handler(struct ex_regs *regs)
 static int pmu_irq_received(int pmc_idx)
 {
 	bool irq_received;
+	struct guest_irq_data *irq_data = get_irq_data();

-	spin_lock(&guest_irq_data.lock);
-	irq_received = READ_ONCE(guest_irq_data.irq_received_bmap) & BIT(pmc_idx);
-	WRITE_ONCE(guest_irq_data.irq_received_bmap, guest_irq_data.pmc_idx_bmap & ~BIT(pmc_idx));
-	spin_unlock(&guest_irq_data.lock);
+	spin_lock(&irq_data->lock);
+	irq_received = READ_ONCE(irq_data->irq_received_bmap) & BIT(pmc_idx);
+	WRITE_ONCE(irq_data->irq_received_bmap, irq_data->pmc_idx_bmap & ~BIT(pmc_idx));
+	spin_unlock(&irq_data->lock);

 	return irq_received;
 }

 static void pmu_irq_init(int pmc_idx)
 {
+	struct guest_irq_data *irq_data = get_irq_data();
+
 	write_pmovsclr(BIT(pmc_idx));

-	spin_lock(&guest_irq_data.lock);
-	WRITE_ONCE(guest_irq_data.irq_received_bmap, guest_irq_data.pmc_idx_bmap & ~BIT(pmc_idx));
-	WRITE_ONCE(guest_irq_data.pmc_idx_bmap, guest_irq_data.pmc_idx_bmap | BIT(pmc_idx));
-	spin_unlock(&guest_irq_data.lock);
+	spin_lock(&irq_data->lock);
+	WRITE_ONCE(irq_data->irq_received_bmap, irq_data->pmc_idx_bmap & ~BIT(pmc_idx));
+	WRITE_ONCE(irq_data->pmc_idx_bmap, irq_data->pmc_idx_bmap | BIT(pmc_idx));
+	spin_unlock(&irq_data->lock);

 	enable_irq(pmc_idx);
 }
 static void pmu_irq_exit(int pmc_idx)
 {
+	struct guest_irq_data *irq_data = get_irq_data();
+
 	write_pmovsclr(BIT(pmc_idx));

-	spin_lock(&guest_irq_data.lock);
-	WRITE_ONCE(guest_irq_data.irq_received_bmap, guest_irq_data.pmc_idx_bmap & ~BIT(pmc_idx));
-	WRITE_ONCE(guest_irq_data.pmc_idx_bmap, guest_irq_data.pmc_idx_bmap & ~BIT(pmc_idx));
-	spin_unlock(&guest_irq_data.lock);
+	spin_lock(&irq_data->lock);
+	WRITE_ONCE(irq_data->irq_received_bmap, irq_data->pmc_idx_bmap & ~BIT(pmc_idx));
+	WRITE_ONCE(irq_data->pmc_idx_bmap, irq_data->pmc_idx_bmap & ~BIT(pmc_idx));
+	spin_unlock(&irq_data->lock);

 	disable_irq(pmc_idx);
 }
@@ -783,7 +800,8 @@ static void test_event_count(uint64_t event, int pmc_idx, bool expect_count)
 static void test_basic_pmu_functionality(void)
 {
 	local_irq_disable();
-	gic_init(GIC_V3, 1, (void *)GICD_BASE_GPA, (void *)GICR_BASE_GPA);
+	gic_init(GIC_V3, test_args.vcpu_migration_test_nr_vcpus,
+		(void *)GICD_BASE_GPA, (void *)GICR_BASE_GPA);
 	gic_irq_enable(PMU_IRQ);
 	local_irq_enable();
@@ -1093,11 +1111,13 @@ static void guest_evtype_filter_test(void)

 static void guest_vcpu_migration_test(void)
 {
+	int iter = test_args.vcpu_migration_test_iter;
+
 	/*
 	 * While the userspace continuously migrates this vCPU to random pCPUs,
 	 * run basic PMU functionalities and verify the results.
 	 */
-	while (test_args.vcpu_migration_test_iter--)
+	while (iter--)
 		test_basic_pmu_functionality();
 }
@@ -1472,17 +1492,23 @@ static void run_kvm_evtype_filter_test(void)

 struct vcpu_migrate_data {
 	struct vpmu_vm *vpmu_vm;
-	pthread_t *pt_vcpu;
-	bool vcpu_done;
+	pthread_t *pt_vcpus;
+	unsigned long *vcpu_done_map;
+	pthread_mutex_t vcpu_done_map_lock;
 };

+struct vcpu_migrate_data migrate_data;
+
 static void *run_vcpus_migrate_test_func(void *arg)
 {
-	struct vcpu_migrate_data *migrate_data = arg;
-	struct vpmu_vm *vpmu_vm = migrate_data->vpmu_vm;
+	struct vpmu_vm *vpmu_vm = migrate_data.vpmu_vm;
+	unsigned int vcpu_idx = (unsigned long)arg;

-	run_vcpu(vpmu_vm->vcpus[0]);
-	migrate_data->vcpu_done = true;
+	run_vcpu(vpmu_vm->vcpus[vcpu_idx]);
+
+	pthread_mutex_lock(&migrate_data.vcpu_done_map_lock);
+	__set_bit(vcpu_idx, migrate_data.vcpu_done_map);
+	pthread_mutex_unlock(&migrate_data.vcpu_done_map_lock);

 	return NULL;
 }
@@ -1504,7 +1530,7 @@ static uint32_t get_pcpu(void)
 	return pcpu;
 }

-static int migrate_vcpu(struct vcpu_migrate_data *migrate_data)
+static int migrate_vcpu(int vcpu_idx)
 {
 	int ret;
 	cpu_set_t cpuset;
@@ -1513,9 +1539,9 @@ static int migrate_vcpu(struct vcpu_migrate_data *migrate_data)
 	CPU_ZERO(&cpuset);
 	CPU_SET(new_pcpu, &cpuset);

-	pr_debug("Migrating vCPU to pCPU: %u\n", new_pcpu);
+	pr_debug("Migrating vCPU %d to pCPU: %u\n", vcpu_idx, new_pcpu);

-	ret = pthread_setaffinity_np(*migrate_data->pt_vcpu, sizeof(cpuset), &cpuset);
+	ret = pthread_setaffinity_np(migrate_data.pt_vcpus[vcpu_idx], sizeof(cpuset), &cpuset);

 	/* Allow the error where the vCPU thread is already finished */
 	TEST_ASSERT(ret == 0 || ret == ESRCH,
@@ -1526,48 +1552,74 @@ static void *vcpus_migrate_func(void *arg)
 {
-	struct vcpu_migrate_data *migrate_data = arg;
+	struct vpmu_vm *vpmu_vm = migrate_data.vpmu_vm;
+	int i, n_done, nr_vcpus = vpmu_vm->nr_vcpus;
+	bool vcpu_done;

-	while (!migrate_data->vcpu_done) {
+	do {
 		usleep(msecs_to_usecs(test_args.vcpu_migration_test_migrate_freq_ms));
-		migrate_vcpu(migrate_data);
-	}
+		for (n_done = 0, i = 0; i < nr_vcpus; i++) {
+			pthread_mutex_lock(&migrate_data.vcpu_done_map_lock);
+			vcpu_done = test_bit(i, migrate_data.vcpu_done_map);
+			pthread_mutex_unlock(&migrate_data.vcpu_done_map_lock);
+
+			if (vcpu_done) {
+				n_done++;
+				continue;
+			}
+
+			migrate_vcpu(i);
+		}
+
+	} while (nr_vcpus != n_done);

 	return NULL;
 }

 static void run_vcpu_migration_test(uint64_t pmcr_n)
 {
-	int ret;
+	int i, nr_vcpus, ret;
 	struct vpmu_vm *vpmu_vm;
-	pthread_t pt_vcpu, pt_sched;
-	struct vcpu_migrate_data migrate_data = {
-		.pt_vcpu = &pt_vcpu,
-		.vcpu_done = false,
-	};
+	pthread_t pt_sched, *pt_vcpus;

 	__TEST_REQUIRE(get_nprocs() >= 2,
 			"At least two pCPUs needed for vCPU migration test");

 	guest_data.test_stage = TEST_STAGE_VCPU_MIGRATION;
 	guest_data.expected_pmcr_n = pmcr_n;

-	migrate_data.vpmu_vm = vpmu_vm = create_vpmu_vm(1, guest_code, NULL);
+	nr_vcpus = test_args.vcpu_migration_test_nr_vcpus;
+
+	migrate_data.vcpu_done_map = bitmap_zalloc(nr_vcpus);
+	TEST_ASSERT(migrate_data.vcpu_done_map, "Failed to create vCPU done bitmap");
+	pthread_mutex_init(&migrate_data.vcpu_done_map_lock, NULL);
+
+	migrate_data.pt_vcpus = pt_vcpus = calloc(nr_vcpus, sizeof(*pt_vcpus));
+	TEST_ASSERT(pt_vcpus, "Failed to create vCPU thread pointers");
+
+	migrate_data.vpmu_vm = vpmu_vm = create_vpmu_vm(nr_vcpus, guest_code, NULL);

 	/* Initialize random number generation for migrating vCPUs to random pCPUs */
 	srand(time(NULL));

-	/* Spawn a vCPU thread */
-	ret = pthread_create(&pt_vcpu, NULL, run_vcpus_migrate_test_func, &migrate_data);
-	TEST_ASSERT(!ret, "Failed to create the vCPU thread");
+	/* Spawn vCPU threads */
+	for (i = 0; i < nr_vcpus; i++) {
+		ret = pthread_create(&pt_vcpus[i], NULL,
+					run_vcpus_migrate_test_func, (void *)(unsigned long)i);
+		TEST_ASSERT(!ret, "Failed to create the vCPU thread: %d", i);
+	}

 	/* Spawn a scheduler thread to force-migrate vCPUs to various pCPUs */
-	ret = pthread_create(&pt_sched, NULL, vcpus_migrate_func, &migrate_data);
+	ret = pthread_create(&pt_sched, NULL, vcpus_migrate_func, NULL);
 	TEST_ASSERT(!ret, "Failed to create the scheduler thread for migrating the vCPUs");

 	pthread_join(pt_sched, NULL);
-	pthread_join(pt_vcpu, NULL);
+
+	for (i = 0; i < nr_vcpus; i++)
+		pthread_join(pt_vcpus[i], NULL);

 	destroy_vpmu_vm(vpmu_vm);
+	free(pt_vcpus);
+	bitmap_free(migrate_data.vcpu_done_map);
 }

 static void run_tests(uint64_t pmcr_n)
@@ -1596,12 +1648,14 @@ static uint64_t get_pmcr_n_limit(void)

 static void print_help(char *name)
 {
-	pr_info("Usage: %s [-h] [-i vcpu_migration_test_iterations] [-m vcpu_migration_freq_ms]\n",
-		name);
+	pr_info("Usage: %s [-h] [-i vcpu_migration_test_iterations] [-m vcpu_migration_freq_ms]"
+		" [-n vcpu_migration_nr_vcpus]\n", name);
 	pr_info("\t-i: Number of iterations of vCPU migrations test (default: %u)\n",
 		VCPU_MIGRATIONS_TEST_ITERS_DEF);
 	pr_info("\t-m: Frequency (in ms) of vCPUs to migrate to different pCPU. (default: %u)\n",
 		VCPU_MIGRATIONS_TEST_MIGRATION_FREQ_MS);
+	pr_info("\t-n: Number of vCPUs for vCPU migrations test (default: %u)\n",
+		VCPU_MIGRATIONS_TEST_NR_VCPUS_DEF);
 	pr_info("\t-h: print this help screen\n");
 }
@@ -1609,7 +1663,7 @@ static bool parse_args(int argc, char *argv[])
 {
 	int opt;

-	while ((opt = getopt(argc, argv, "hi:m:")) != -1) {
+	while ((opt = getopt(argc, argv, "hi:m:n:")) != -1) {
 		switch (opt) {
 		case 'i':
 			test_args.vcpu_migration_test_iter =
@@ -1619,6 +1673,14 @@ static bool parse_args(int argc, char *argv[])
 			test_args.vcpu_migration_test_migrate_freq_ms =
 				atoi_positive("vCPU migration frequency", optarg);
 			break;
+		case 'n':
+			test_args.vcpu_migration_test_nr_vcpus =
+				atoi_positive("Nr vCPUs for vCPU migrations", optarg);
+			if (test_args.vcpu_migration_test_nr_vcpus > KVM_MAX_VCPUS) {
+				pr_info("Max allowed vCPUs: %u\n", KVM_MAX_VCPUS);
+				goto err;
+			}
+			break;
 		case 'h':
 		default:
 			goto err;