
[47/47] KVM: arm64: nv: Add trap forwarding for FEAT_FGT2 described registers

Message ID 20241001024356.1096072-48-anshuman.khandual@arm.com
State New
Series KVM: arm64: Enable FGU (Fine Grained Undefined) for FEAT_FGT2 registers

Commit Message

Anshuman Khandual Oct. 1, 2024, 2:43 a.m. UTC
Describe the remaining MDCR_EL2 trap controls, and associate them with all
the FEAT_FGT2-exposed system registers they allow to be trapped.

Cc: Marc Zyngier <maz@kernel.org>
Cc: Oliver Upton <oliver.upton@linux.dev>
Cc: James Morse <james.morse@arm.com>
Cc: Suzuki K Poulose <suzuki.poulose@arm.com>
Cc: linux-arm-kernel@lists.infradead.org
Cc: kvmarm@lists.linux.dev
Cc: linux-kernel@vger.kernel.org
Signed-off-by: Anshuman Khandual <anshuman.khandual@arm.com>
---
 arch/arm64/include/asm/kvm_arm.h  |   2 +
 arch/arm64/include/asm/kvm_host.h |   2 +
 arch/arm64/kvm/emulate-nested.c   | 262 ++++++++++++++++++++++++++++++
 3 files changed, 266 insertions(+)

Comments

Oliver Upton Oct. 1, 2024, 2:46 p.m. UTC | #1
Hi Anshuman,

On Tue, Oct 01, 2024 at 08:13:56AM +0530, Anshuman Khandual wrote:
> +#define check_cntr_accessible(num)						\
> +static enum trap_behaviour check_cntr_accessible_##num(struct kvm_vcpu *vcpu)	\
> +{										\
> +	u64 mdcr_el2 = __vcpu_sys_reg(vcpu, MDCR_EL2);				\
> +	int cntr = FIELD_GET(MDCR_EL2_HPMN_MASK, mdcr_el2);			\
> +										\
> +	if (num >= cntr)							\
> +		return BEHAVE_FORWARD_ANY;					\
> +	return BEHAVE_HANDLE_LOCALLY;						\
> +}										\
> +
> +check_cntr_accessible(0)
> +check_cntr_accessible(1)
> +check_cntr_accessible(2)
> +check_cntr_accessible(3)
> +check_cntr_accessible(4)
> +check_cntr_accessible(5)
> +check_cntr_accessible(6)
> +check_cntr_accessible(7)
> +check_cntr_accessible(8)
> +check_cntr_accessible(9)
> +check_cntr_accessible(10)
> +check_cntr_accessible(11)
> +check_cntr_accessible(12)
> +check_cntr_accessible(13)
> +check_cntr_accessible(14)
> +check_cntr_accessible(15)
> +check_cntr_accessible(16)
> +check_cntr_accessible(17)
> +check_cntr_accessible(18)
> +check_cntr_accessible(19)
> +check_cntr_accessible(20)
> +check_cntr_accessible(21)
> +check_cntr_accessible(22)
> +check_cntr_accessible(23)
> +check_cntr_accessible(24)
> +check_cntr_accessible(25)
> +check_cntr_accessible(26)
> +check_cntr_accessible(27)
> +check_cntr_accessible(28)
> +check_cntr_accessible(29)
> +check_cntr_accessible(30)

I'd rather we not use templates for this problem. It bloats the kernel text
as well as the trap encoding space.

I have a patch in the nested PMU series that uses a single complex trap
ID to evaluate HPMN, and derives the index from ESR_EL2. I think it
could also be extended to the PMEVCNTSVR<n> range as well.

Also, keep in mind that the HPMN trap is annoying since it affects Host
EL0 in addition to 'guest' ELs.

[*]: https://lore.kernel.org/kvmarm/20240827002235.1753237-9-oliver.upton@linux.dev/
Anshuman Khandual Oct. 3, 2024, 4:16 a.m. UTC | #2
On 10/1/24 20:16, Oliver Upton wrote:
> Hi Anshuman,
> 
> On Tue, Oct 01, 2024 at 08:13:56AM +0530, Anshuman Khandual wrote:
>> +#define check_cntr_accessible(num)						\
>> +static enum trap_behaviour check_cntr_accessible_##num(struct kvm_vcpu *vcpu)	\
>> +{										\
>> +	u64 mdcr_el2 = __vcpu_sys_reg(vcpu, MDCR_EL2);				\
>> +	int cntr = FIELD_GET(MDCR_EL2_HPMN_MASK, mdcr_el2);			\
>> +										\
>> +	if (num >= cntr)							\
>> +		return BEHAVE_FORWARD_ANY;					\
>> +	return BEHAVE_HANDLE_LOCALLY;						\
>> +}										\
>> +
>> +check_cntr_accessible(0)
>> +check_cntr_accessible(1)
>> +check_cntr_accessible(2)
>> +check_cntr_accessible(3)
>> +check_cntr_accessible(4)
>> +check_cntr_accessible(5)
>> +check_cntr_accessible(6)
>> +check_cntr_accessible(7)
>> +check_cntr_accessible(8)
>> +check_cntr_accessible(9)
>> +check_cntr_accessible(10)
>> +check_cntr_accessible(11)
>> +check_cntr_accessible(12)
>> +check_cntr_accessible(13)
>> +check_cntr_accessible(14)
>> +check_cntr_accessible(15)
>> +check_cntr_accessible(16)
>> +check_cntr_accessible(17)
>> +check_cntr_accessible(18)
>> +check_cntr_accessible(19)
>> +check_cntr_accessible(20)
>> +check_cntr_accessible(21)
>> +check_cntr_accessible(22)
>> +check_cntr_accessible(23)
>> +check_cntr_accessible(24)
>> +check_cntr_accessible(25)
>> +check_cntr_accessible(26)
>> +check_cntr_accessible(27)
>> +check_cntr_accessible(28)
>> +check_cntr_accessible(29)
>> +check_cntr_accessible(30)
> 
> I'd rather we not use templates for this problem. It bloats the kernel text
> as well as the trap encoding space.

Alright, fair point.

> 
> I have a patch in the nested PMU series that uses a single complex trap
> ID to evaluate HPMN, and derives the index from ESR_EL2. I think it
> could also be extended to the PMEVCNTSVR<n> range as well.

Just for reference - here is the complex trap ID function you mentioned,
from the link below.

static enum trap_behaviour check_mdcr_hpmn(struct kvm_vcpu *vcpu)
{
	u32 sysreg = esr_sys64_to_sysreg(kvm_vcpu_get_esr(vcpu));
	u64 mask = kvm_pmu_accessible_counter_mask(vcpu);
	unsigned int idx;

	switch (sysreg) {
	case SYS_PMEVTYPERn_EL0(0) ... SYS_PMEVTYPERn_EL0(30):
	case SYS_PMEVCNTRn_EL0(0) ... SYS_PMEVCNTRn_EL0(30):

---------------------------------------------------------------------
Just add the new system register range here?

+	case SYS_PMEVCNTSVR_EL1(0) ... SYS_PMEVCNTSVR_EL1(30):
---------------------------------------------------------------------

		idx = (sys_reg_CRm(sysreg) & 0x3) << 3 | sys_reg_Op2(sysreg);
		break;
	case SYS_PMXEVTYPER_EL0:
	case SYS_PMXEVCNTR_EL0:
		idx = __vcpu_sys_reg(vcpu, PMSELR_EL0) & ARMV8_PMU_COUNTER_MASK;
		break;
	default:
		/* Someone used this trap helper for something else... */
		KVM_BUG_ON(1, vcpu->kvm);
		return BEHAVE_HANDLE_LOCALLY;
	}

	/*
	 * Programming HPMN=0 is CONSTRAINED UNPREDICTABLE if FEAT_HPMN0 isn't
	 * implemented. Since KVM's ability to emulate HPMN=0 does not directly
	 * depend on hardware (all PMU registers are trapped), make the
	 * implementation choice that all counters are included in the second
	 * range reserved for EL2/EL3.
	 */
	return !(BIT(idx) & mask) ? (BEHAVE_FORWARD_RW | BEHAVE_IN_HOST_EL0) :
			BEHAVE_HANDLE_LOCALLY;
}
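
For reference, the idx derivation above just unpacks the counter number
from the architected encoding, which packs n[4:3] into CRm[1:0] and
n[2:0] into Op2 for the PMEVCNTR<n>_EL0 / PMEVTYPER<n>_EL0 ranges.
Worked through for n = 13 (a sketch assuming that layout):

	PMEVCNTR13_EL0: CRm = 0b1001 (0b10 : n[4:3]), Op2 = 0b101 (n[2:0])

	idx = (sys_reg_CRm(sysreg) & 0x3) << 3 | sys_reg_Op2(sysreg)
	    = (0b1001 & 0x3) << 3 | 0b101
	    = (1 << 3) | 5
	    = 13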

> 
> Also, keep in mind that the HPMN trap is annoying since it affects Host
> EL0 in addition to 'guest' ELs.

Does this require any more special handling beyond the above complex trap
ID function?

> 
> [*]: https://lore.kernel.org/kvmarm/20240827002235.1753237-9-oliver.upton@linux.dev/
>
Oliver Upton Oct. 4, 2024, 5:01 a.m. UTC | #3
On Thu, Oct 03, 2024 at 09:46:08AM +0530, Anshuman Khandual wrote:
> > I have a patch in the nested PMU series that uses a single complex trap
> > ID to evaluate HPMN, and derives the index from ESR_EL2. I think it
> > could also be extended to the PMEVCNTSVR<n> range as well.
> 
> Just for reference - here is the complex trap ID function you mentioned,
> from the link below.
> 
> static enum trap_behaviour check_mdcr_hpmn(struct kvm_vcpu *vcpu)
> {
> 	u32 sysreg = esr_sys64_to_sysreg(kvm_vcpu_get_esr(vcpu));
> 	u64 mask = kvm_pmu_accessible_counter_mask(vcpu);
> 	unsigned int idx;
> 
> 	switch (sysreg) {
> 	case SYS_PMEVTYPERn_EL0(0) ... SYS_PMEVTYPERn_EL0(30):
> 	case SYS_PMEVCNTRn_EL0(0) ... SYS_PMEVCNTRn_EL0(30):
> 
> ---------------------------------------------------------------------
> Just add the new system register range here?
> 
> +	case SYS_PMEVCNTSVR_EL1(0) ... SYS_PMEVCNTSVR_EL1(30):
> ---------------------------------------------------------------------
> 
> 		idx = (sys_reg_CRm(sysreg) & 0x3) << 3 | sys_reg_Op2(sysreg);
> 		break;

Yes, so long as the layout of encodings matches the established pattern
for value / type registers (I haven't checked this).

> > 
> > Also, keep in mind that the HPMN trap is annoying since it affects Host
> > EL0 in addition to 'guest' ELs.
> 
> Does this require any more special handling beyond the above complex trap
> ID function?

There's another patch in that series I linked that allows EL2 traps to
describe behavior that takes effect in host EL0.

So I don't believe there's anything in particular related to HPMN that
you need to evaluate. I wanted to mention it because some of the PMU
related traps besides HPMN take effect in Host EL0, so do keep it in
mind.

With that said, I haven't seen an FGT yet that applies to Host EL0.
Anshuman Khandual Oct. 21, 2024, 4:01 a.m. UTC | #4
On 10/4/24 10:31, Oliver Upton wrote:
> On Thu, Oct 03, 2024 at 09:46:08AM +0530, Anshuman Khandual wrote:
>>> I have a patch in the nested PMU series that uses a single complex trap
>>> ID to evaluate HPMN, and derives the index from ESR_EL2. I think it
>>> could also be extended to the PMEVCNTSVR<n> range as well.
>>
>> Just for reference - here is the complex trap ID function you mentioned,
>> from the link below.
>>
>> static enum trap_behaviour check_mdcr_hpmn(struct kvm_vcpu *vcpu)
>> {
>> 	u32 sysreg = esr_sys64_to_sysreg(kvm_vcpu_get_esr(vcpu));
>> 	u64 mask = kvm_pmu_accessible_counter_mask(vcpu);
>> 	unsigned int idx;
>>
>> 	switch (sysreg) {
>> 	case SYS_PMEVTYPERn_EL0(0) ... SYS_PMEVTYPERn_EL0(30):
>> 	case SYS_PMEVCNTRn_EL0(0) ... SYS_PMEVCNTRn_EL0(30):
>>
>> ---------------------------------------------------------------------
>> Just add the new system register range here?
>>
>> +	case SYS_PMEVCNTSVR_EL1(0) ... SYS_PMEVCNTSVR_EL1(30):
>> ---------------------------------------------------------------------
>>
>> 		idx = (sys_reg_CRm(sysreg) & 0x3) << 3 | sys_reg_Op2(sysreg);
>> 		break;
> 
> Yes, so long as the layout of encodings matches the established pattern
> for value / type registers (I haven't checked this).
> 
>>>
>>> Also, keep in mind that the HPMN trap is annoying since it affects Host
>>> EL0 in addition to 'guest' ELs.
>>
>> Does this require any more special handling beyond the above complex trap
>> ID function?
> 
> There's another patch in that series I linked that allows EL2 traps to
> describe behavior that takes effect in host EL0.
> 
> So I don't believe there's anything in particular related to HPMN that
> you need to evaluate. I wanted to mention it because some of the PMU
> related traps besides HPMN take effect in Host EL0, so do keep it in
> mind.
> 
> With that said, I haven't seen an FGT yet that applies to Host EL0.
> 

Hello Oliver,

Should I rebase this series on the latest series you posted earlier this
month [1]? Also, have you had a chance to look at the other KVM patches
here? Please do let me know if they too need any modification.

  KVM: arm64: nv: Add FEAT_FGT2 registers access from virtual EL2
  KVM: arm64: nv: Add FEAT_FGT2 registers based FGU handling
  KVM: arm64: nv: Add trap forwarding for FEAT_FGT2 described registers

[1] https://lore.kernel.org/kvmarm/20241007174559.1830205-1-oliver.upton@linux.dev/

- Anshuman
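
For what it's worth, DDI 0601 lists PMEVCNTSVR<n>_EL1 as op0=2, op1=0,
CRn=14, CRm=0b10:n[4:3], Op2=n[2:0], i.e. the same CRm/Op2 packing as
PMEVCNTR<n>_EL0, so the index derivation in check_mdcr_hpmn() should
recover n unchanged for the saved-value range. A standalone sketch of
that check (the SYS_REG, PMEVCNTR_EL0 and PMEVCNTSVR_EL1 macros below
are illustrative reconstructions of those encodings, not the kernel's
definitions):

#include <assert.h>
#include <stdint.h>

/* Encoding field shifts, as in arch/arm64/include/asm/sysreg.h */
#define Op0_shift	19
#define Op1_shift	16
#define CRn_shift	12
#define CRm_shift	8
#define Op2_shift	5

#define SYS_REG(op0, op1, crn, crm, op2)				\
	(((uint32_t)(op0) << Op0_shift) | ((uint32_t)(op1) << Op1_shift) | \
	 ((uint32_t)(crn) << CRn_shift) | ((uint32_t)(crm) << CRm_shift) | \
	 ((uint32_t)(op2) << Op2_shift))

/* PMEVCNTR<n>_EL0: op0=3, op1=3, CRn=14, CRm=0b10:n[4:3], Op2=n[2:0] */
#define PMEVCNTR_EL0(n)		SYS_REG(3, 3, 14, 8 | ((n) >> 3), (n) & 0x7)
/* PMEVCNTSVR<n>_EL1: op0=2, op1=0, CRn=14, same CRm/Op2 packing */
#define PMEVCNTSVR_EL1(n)	SYS_REG(2, 0, 14, 8 | ((n) >> 3), (n) & 0x7)

/* The counter index derivation used by check_mdcr_hpmn() */
static unsigned int idx_of(uint32_t reg)
{
	unsigned int crm = (reg >> CRm_shift) & 0xf;
	unsigned int op2 = (reg >> Op2_shift) & 0x7;

	return (crm & 0x3) << 3 | op2;
}

int main(void)
{
	/* n = 0..30 must round-trip for both register families */
	for (unsigned int n = 0; n < 31; n++) {
		assert(idx_of(PMEVCNTR_EL0(n)) == n);
		assert(idx_of(PMEVCNTSVR_EL1(n)) == n);
	}
	return 0;
}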

Patch

diff --git a/arch/arm64/include/asm/kvm_arm.h b/arch/arm64/include/asm/kvm_arm.h
index 449bccffd529..850fac9a7840 100644
--- a/arch/arm64/include/asm/kvm_arm.h
+++ b/arch/arm64/include/asm/kvm_arm.h
@@ -323,6 +323,7 @@ 
 #define MDCR_EL2_TTRF		(UL(1) << 19)
 #define MDCR_EL2_HPMD		(UL(1) << 17)
+#define MDCR_EL2_EnSPM		(UL(1) << 15)
 #define MDCR_EL2_TPMS		(UL(1) << 14)
 #define MDCR_EL2_E2PB_MASK	(UL(0x3))
 #define MDCR_EL2_E2PB_SHIFT	(UL(12))
 #define MDCR_EL2_TDRA		(UL(1) << 11)
@@ -333,6 +334,7 @@ 
 #define MDCR_EL2_TPM		(UL(1) << 6)
 #define MDCR_EL2_TPMCR		(UL(1) << 5)
 #define MDCR_EL2_HPMN_MASK	(UL(0x1F))
+#define MDCR_EL2_HPMN_SHIFT	(UL(0))
 #define MDCR_EL2_RES0		(GENMASK(63, 37) |	\
 				 GENMASK(35, 30) |	\
 				 GENMASK(25, 24) |	\
diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
index ca98f6d810c2..802ad88235af 100644
--- a/arch/arm64/include/asm/kvm_host.h
+++ b/arch/arm64/include/asm/kvm_host.h
@@ -433,6 +433,7 @@  enum vcpu_sysreg {
 	PMINTENSET_EL1,	/* Interrupt Enable Set Register */
 	PMOVSSET_EL0,	/* Overflow Flag Status Set Register */
 	PMUSERENR_EL0,	/* User Enable Register */
+	SPMSELR_EL0,	/* System PMU Select Register */
 
 	/* Pointer Authentication Registers in a strict increasing order. */
 	APIAKEYLO_EL1,
@@ -491,6 +492,7 @@  enum vcpu_sysreg {
 	CNTHP_CVAL_EL2,
 	CNTHV_CTL_EL2,
 	CNTHV_CVAL_EL2,
+	SPMACCESSR_EL2, /* System PMU Access Register */
 
 	__VNCR_START__,	/* Any VNCR-capable reg goes after this point */
 
diff --git a/arch/arm64/kvm/emulate-nested.c b/arch/arm64/kvm/emulate-nested.c
index f22a5f10ffe5..d66722c71b45 100644
--- a/arch/arm64/kvm/emulate-nested.c
+++ b/arch/arm64/kvm/emulate-nested.c
@@ -75,6 +75,7 @@  enum cgt_group_id {
 	CGT_MDCR_TDRA,
 	CGT_MDCR_E2PB,
 	CGT_MDCR_TPMS,
+	CGT_MDCR_EnSPM,
 	CGT_MDCR_TTRF,
 	CGT_MDCR_E2TB,
 	CGT_MDCR_TDCC,
@@ -120,6 +121,38 @@  enum cgt_group_id {
 	__COMPLEX_CONDITIONS__,
 	CGT_CNTHCTL_EL1PCTEN = __COMPLEX_CONDITIONS__,
 	CGT_CNTHCTL_EL1PTEN,
+	CGT_SPMSEL_SPMACCESS,
+	CGT_CNTR_ACCESSIBLE_0,
+	CGT_CNTR_ACCESSIBLE_1,
+	CGT_CNTR_ACCESSIBLE_2,
+	CGT_CNTR_ACCESSIBLE_3,
+	CGT_CNTR_ACCESSIBLE_4,
+	CGT_CNTR_ACCESSIBLE_5,
+	CGT_CNTR_ACCESSIBLE_6,
+	CGT_CNTR_ACCESSIBLE_7,
+	CGT_CNTR_ACCESSIBLE_8,
+	CGT_CNTR_ACCESSIBLE_9,
+	CGT_CNTR_ACCESSIBLE_10,
+	CGT_CNTR_ACCESSIBLE_11,
+	CGT_CNTR_ACCESSIBLE_12,
+	CGT_CNTR_ACCESSIBLE_13,
+	CGT_CNTR_ACCESSIBLE_14,
+	CGT_CNTR_ACCESSIBLE_15,
+	CGT_CNTR_ACCESSIBLE_16,
+	CGT_CNTR_ACCESSIBLE_17,
+	CGT_CNTR_ACCESSIBLE_18,
+	CGT_CNTR_ACCESSIBLE_19,
+	CGT_CNTR_ACCESSIBLE_20,
+	CGT_CNTR_ACCESSIBLE_21,
+	CGT_CNTR_ACCESSIBLE_22,
+	CGT_CNTR_ACCESSIBLE_23,
+	CGT_CNTR_ACCESSIBLE_24,
+	CGT_CNTR_ACCESSIBLE_25,
+	CGT_CNTR_ACCESSIBLE_26,
+	CGT_CNTR_ACCESSIBLE_27,
+	CGT_CNTR_ACCESSIBLE_28,
+	CGT_CNTR_ACCESSIBLE_29,
+	CGT_CNTR_ACCESSIBLE_30,
 
 	CGT_CPTR_TTA,
 
@@ -344,6 +377,12 @@  static const struct trap_bits coarse_trap_bits[] = {
 		.mask		= MDCR_EL2_TPMS,
 		.behaviour	= BEHAVE_FORWARD_ANY,
 	},
+	[CGT_MDCR_EnSPM] = {
+		.index		= MDCR_EL2,
+		.value		= MDCR_EL2_EnSPM,
+		.mask		= MDCR_EL2_EnSPM,
+		.behaviour	= BEHAVE_FORWARD_ANY,
+	},
 	[CGT_MDCR_TTRF] = {
 		.index		= MDCR_EL2,
 		.value		= MDCR_EL2_TTRF,
@@ -498,6 +537,65 @@  static enum trap_behaviour check_cptr_tta(struct kvm_vcpu *vcpu)
 	return BEHAVE_HANDLE_LOCALLY;
 }
 
+static enum trap_behaviour check_spmsel_spmaccess(struct kvm_vcpu *vcpu)
+{
+	u64 spmaccessr_el2, spmselr_el0;
+	int syspmusel;
+
+	if (__vcpu_sys_reg(vcpu, MDCR_EL2) & MDCR_EL2_EnSPM) {
+		spmselr_el0 = __vcpu_sys_reg(vcpu, SPMSELR_EL0);
+		spmaccessr_el2 = __vcpu_sys_reg(vcpu, SPMACCESSR_EL2);
+		syspmusel = FIELD_GET(SPMSELR_EL0_SYSPMUSEL_MASK, spmselr_el0);
+
+		if (((spmaccessr_el2 >> (syspmusel * 2)) & 0x3) == 0x0)
+			return BEHAVE_FORWARD_ANY;
+	}
+	return BEHAVE_HANDLE_LOCALLY;
+}
+
+#define check_cntr_accessible(num)						\
+static enum trap_behaviour check_cntr_accessible_##num(struct kvm_vcpu *vcpu)	\
+{										\
+	u64 mdcr_el2 = __vcpu_sys_reg(vcpu, MDCR_EL2);				\
+	int cntr = FIELD_GET(MDCR_EL2_HPMN_MASK, mdcr_el2);			\
+										\
+	if (num >= cntr)							\
+		return BEHAVE_FORWARD_ANY;					\
+	return BEHAVE_HANDLE_LOCALLY;						\
+}										\
+
+check_cntr_accessible(0)
+check_cntr_accessible(1)
+check_cntr_accessible(2)
+check_cntr_accessible(3)
+check_cntr_accessible(4)
+check_cntr_accessible(5)
+check_cntr_accessible(6)
+check_cntr_accessible(7)
+check_cntr_accessible(8)
+check_cntr_accessible(9)
+check_cntr_accessible(10)
+check_cntr_accessible(11)
+check_cntr_accessible(12)
+check_cntr_accessible(13)
+check_cntr_accessible(14)
+check_cntr_accessible(15)
+check_cntr_accessible(16)
+check_cntr_accessible(17)
+check_cntr_accessible(18)
+check_cntr_accessible(19)
+check_cntr_accessible(20)
+check_cntr_accessible(21)
+check_cntr_accessible(22)
+check_cntr_accessible(23)
+check_cntr_accessible(24)
+check_cntr_accessible(25)
+check_cntr_accessible(26)
+check_cntr_accessible(27)
+check_cntr_accessible(28)
+check_cntr_accessible(29)
+check_cntr_accessible(30)
+
 #define CCC(id, fn)				\
 	[id - __COMPLEX_CONDITIONS__] = fn
 
@@ -505,6 +603,38 @@  static const complex_condition_check ccc[] = {
 	CCC(CGT_CNTHCTL_EL1PCTEN, check_cnthctl_el1pcten),
 	CCC(CGT_CNTHCTL_EL1PTEN, check_cnthctl_el1pten),
 	CCC(CGT_CPTR_TTA, check_cptr_tta),
+	CCC(CGT_SPMSEL_SPMACCESS, check_spmsel_spmaccess),
+	CCC(CGT_CNTR_ACCESSIBLE_0, check_cntr_accessible_0),
+	CCC(CGT_CNTR_ACCESSIBLE_1, check_cntr_accessible_1),
+	CCC(CGT_CNTR_ACCESSIBLE_2, check_cntr_accessible_2),
+	CCC(CGT_CNTR_ACCESSIBLE_3, check_cntr_accessible_3),
+	CCC(CGT_CNTR_ACCESSIBLE_4, check_cntr_accessible_4),
+	CCC(CGT_CNTR_ACCESSIBLE_5, check_cntr_accessible_5),
+	CCC(CGT_CNTR_ACCESSIBLE_6, check_cntr_accessible_6),
+	CCC(CGT_CNTR_ACCESSIBLE_7, check_cntr_accessible_7),
+	CCC(CGT_CNTR_ACCESSIBLE_8, check_cntr_accessible_8),
+	CCC(CGT_CNTR_ACCESSIBLE_9, check_cntr_accessible_9),
+	CCC(CGT_CNTR_ACCESSIBLE_10, check_cntr_accessible_10),
+	CCC(CGT_CNTR_ACCESSIBLE_11, check_cntr_accessible_11),
+	CCC(CGT_CNTR_ACCESSIBLE_12, check_cntr_accessible_12),
+	CCC(CGT_CNTR_ACCESSIBLE_13, check_cntr_accessible_13),
+	CCC(CGT_CNTR_ACCESSIBLE_14, check_cntr_accessible_14),
+	CCC(CGT_CNTR_ACCESSIBLE_15, check_cntr_accessible_15),
+	CCC(CGT_CNTR_ACCESSIBLE_16, check_cntr_accessible_16),
+	CCC(CGT_CNTR_ACCESSIBLE_17, check_cntr_accessible_17),
+	CCC(CGT_CNTR_ACCESSIBLE_18, check_cntr_accessible_18),
+	CCC(CGT_CNTR_ACCESSIBLE_19, check_cntr_accessible_19),
+	CCC(CGT_CNTR_ACCESSIBLE_20, check_cntr_accessible_20),
+	CCC(CGT_CNTR_ACCESSIBLE_21, check_cntr_accessible_21),
+	CCC(CGT_CNTR_ACCESSIBLE_22, check_cntr_accessible_22),
+	CCC(CGT_CNTR_ACCESSIBLE_23, check_cntr_accessible_23),
+	CCC(CGT_CNTR_ACCESSIBLE_24, check_cntr_accessible_24),
+	CCC(CGT_CNTR_ACCESSIBLE_25, check_cntr_accessible_25),
+	CCC(CGT_CNTR_ACCESSIBLE_26, check_cntr_accessible_26),
+	CCC(CGT_CNTR_ACCESSIBLE_27, check_cntr_accessible_27),
+	CCC(CGT_CNTR_ACCESSIBLE_28, check_cntr_accessible_28),
+	CCC(CGT_CNTR_ACCESSIBLE_29, check_cntr_accessible_29),
+	CCC(CGT_CNTR_ACCESSIBLE_30, check_cntr_accessible_30),
 };
 
 /*
@@ -912,6 +1042,7 @@  static const struct encoding_to_trap_config encoding_to_cgt[] __initconst = {
 	SR_TRAP(SYS_ERXPFGF_EL1,	CGT_HCR_nFIEN),
 	SR_TRAP(SYS_ERXPFGCTL_EL1,	CGT_HCR_nFIEN),
 	SR_TRAP(SYS_ERXPFGCDN_EL1,	CGT_HCR_nFIEN),
+
 	SR_TRAP(SYS_PMCR_EL0,		CGT_MDCR_TPM_TPMCR),
 	SR_TRAP(SYS_PMCNTENSET_EL0,	CGT_MDCR_TPM),
 	SR_TRAP(SYS_PMCNTENCLR_EL0,	CGT_MDCR_TPM),
@@ -1085,6 +1216,7 @@  static const struct encoding_to_trap_config encoding_to_cgt[] __initconst = {
 	SR_TRAP(SYS_PMSIRR_EL1,		CGT_MDCR_TPMS),
 	SR_TRAP(SYS_PMSLATFR_EL1,	CGT_MDCR_TPMS),
 	SR_TRAP(SYS_PMSNEVFR_EL1,	CGT_MDCR_TPMS),
+
 	SR_TRAP(SYS_TRFCR_EL1,		CGT_MDCR_TTRF),
 	SR_TRAP(SYS_TRBBASER_EL1,	CGT_MDCR_E2TB),
 	SR_TRAP(SYS_TRBLIMITR_EL1,	CGT_MDCR_E2TB),
@@ -1092,6 +1224,136 @@  static const struct encoding_to_trap_config encoding_to_cgt[] __initconst = {
 	SR_TRAP(SYS_TRBPTR_EL1, 	CGT_MDCR_E2TB),
 	SR_TRAP(SYS_TRBSR_EL1, 		CGT_MDCR_E2TB),
 	SR_TRAP(SYS_TRBTRG_EL1,		CGT_MDCR_E2TB),
+
+	SR_TRAP(SYS_MDSTEPOP_EL1,	CGT_MDCR_TDE_TDA),
+	SR_TRAP(SYS_TRBMPAM_EL1,	CGT_MDCR_E2TB),
+	SR_TRAP(SYS_PMSDSFR_EL1,	CGT_MDCR_TPMS),
+
+	SR_TRAP(SYS_SPMDEVAFF_EL1,	CGT_MDCR_EnSPM),
+	SR_TRAP(SYS_SPMCGCR0_EL1,	CGT_MDCR_EnSPM),
+	SR_TRAP(SYS_SPMCGCR1_EL1,	CGT_MDCR_EnSPM),
+	SR_TRAP(SYS_SPMIIDR_EL1,	CGT_MDCR_EnSPM),
+	SR_TRAP(SYS_SPMDEVARCH_EL1,	CGT_MDCR_EnSPM),
+	SR_TRAP(SYS_SPMCFGR_EL1,	CGT_MDCR_EnSPM),
+	SR_TRAP(SYS_SPMSCR_EL1,		CGT_MDCR_EnSPM),
+	SR_TRAP(SYS_SPMACCESSR_EL1,	CGT_MDCR_EnSPM),
+	SR_TRAP(SYS_SPMCR_EL0,		CGT_SPMSEL_SPMACCESS),
+	SR_TRAP(SYS_SPMOVSCLR_EL0,	CGT_SPMSEL_SPMACCESS),
+	SR_TRAP(SYS_SPMOVSSET_EL0,	CGT_SPMSEL_SPMACCESS),
+	SR_TRAP(SYS_SPMINTENCLR_EL1,	CGT_SPMSEL_SPMACCESS),
+	SR_TRAP(SYS_SPMINTENSET_EL1,	CGT_SPMSEL_SPMACCESS),
+	SR_TRAP(SYS_SPMCNTENCLR_EL0,	CGT_SPMSEL_SPMACCESS),
+	SR_TRAP(SYS_SPMCNTENSET_EL0,	CGT_SPMSEL_SPMACCESS),
+	SR_TRAP(SYS_SPMSELR_EL0,	CGT_MDCR_EnSPM),
+
+	SR_TRAP(SYS_SPMEVTYPER_EL0(0),	CGT_SPMSEL_SPMACCESS),
+	SR_TRAP(SYS_SPMEVTYPER_EL0(1),	CGT_SPMSEL_SPMACCESS),
+	SR_TRAP(SYS_SPMEVTYPER_EL0(2),	CGT_SPMSEL_SPMACCESS),
+	SR_TRAP(SYS_SPMEVTYPER_EL0(3),	CGT_SPMSEL_SPMACCESS),
+	SR_TRAP(SYS_SPMEVTYPER_EL0(4),	CGT_SPMSEL_SPMACCESS),
+	SR_TRAP(SYS_SPMEVTYPER_EL0(5),	CGT_SPMSEL_SPMACCESS),
+	SR_TRAP(SYS_SPMEVTYPER_EL0(6),	CGT_SPMSEL_SPMACCESS),
+	SR_TRAP(SYS_SPMEVTYPER_EL0(7),	CGT_SPMSEL_SPMACCESS),
+	SR_TRAP(SYS_SPMEVTYPER_EL0(8),	CGT_SPMSEL_SPMACCESS),
+	SR_TRAP(SYS_SPMEVTYPER_EL0(9),	CGT_SPMSEL_SPMACCESS),
+	SR_TRAP(SYS_SPMEVTYPER_EL0(10),	CGT_SPMSEL_SPMACCESS),
+	SR_TRAP(SYS_SPMEVTYPER_EL0(11),	CGT_SPMSEL_SPMACCESS),
+	SR_TRAP(SYS_SPMEVTYPER_EL0(12),	CGT_SPMSEL_SPMACCESS),
+	SR_TRAP(SYS_SPMEVTYPER_EL0(13),	CGT_SPMSEL_SPMACCESS),
+	SR_TRAP(SYS_SPMEVTYPER_EL0(14),	CGT_SPMSEL_SPMACCESS),
+	SR_TRAP(SYS_SPMEVTYPER_EL0(15),	CGT_SPMSEL_SPMACCESS),
+
+	SR_TRAP(SYS_SPMEVFILTR_EL0(0),	CGT_SPMSEL_SPMACCESS),
+	SR_TRAP(SYS_SPMEVFILTR_EL0(1),	CGT_SPMSEL_SPMACCESS),
+	SR_TRAP(SYS_SPMEVFILTR_EL0(2),	CGT_SPMSEL_SPMACCESS),
+	SR_TRAP(SYS_SPMEVFILTR_EL0(3),	CGT_SPMSEL_SPMACCESS),
+	SR_TRAP(SYS_SPMEVFILTR_EL0(4),	CGT_SPMSEL_SPMACCESS),
+	SR_TRAP(SYS_SPMEVFILTR_EL0(5),	CGT_SPMSEL_SPMACCESS),
+	SR_TRAP(SYS_SPMEVFILTR_EL0(6),	CGT_SPMSEL_SPMACCESS),
+	SR_TRAP(SYS_SPMEVFILTR_EL0(7),	CGT_SPMSEL_SPMACCESS),
+	SR_TRAP(SYS_SPMEVFILTR_EL0(8),	CGT_SPMSEL_SPMACCESS),
+	SR_TRAP(SYS_SPMEVFILTR_EL0(9),	CGT_SPMSEL_SPMACCESS),
+	SR_TRAP(SYS_SPMEVFILTR_EL0(10),	CGT_SPMSEL_SPMACCESS),
+	SR_TRAP(SYS_SPMEVFILTR_EL0(11),	CGT_SPMSEL_SPMACCESS),
+	SR_TRAP(SYS_SPMEVFILTR_EL0(12),	CGT_SPMSEL_SPMACCESS),
+	SR_TRAP(SYS_SPMEVFILTR_EL0(13),	CGT_SPMSEL_SPMACCESS),
+	SR_TRAP(SYS_SPMEVFILTR_EL0(14),	CGT_SPMSEL_SPMACCESS),
+	SR_TRAP(SYS_SPMEVFILTR_EL0(15),	CGT_SPMSEL_SPMACCESS),
+
+	SR_TRAP(SYS_SPMEVFILT2R_EL0(0),		CGT_SPMSEL_SPMACCESS),
+	SR_TRAP(SYS_SPMEVFILT2R_EL0(1),		CGT_SPMSEL_SPMACCESS),
+	SR_TRAP(SYS_SPMEVFILT2R_EL0(2),		CGT_SPMSEL_SPMACCESS),
+	SR_TRAP(SYS_SPMEVFILT2R_EL0(3),		CGT_SPMSEL_SPMACCESS),
+	SR_TRAP(SYS_SPMEVFILT2R_EL0(4),		CGT_SPMSEL_SPMACCESS),
+	SR_TRAP(SYS_SPMEVFILT2R_EL0(5),		CGT_SPMSEL_SPMACCESS),
+	SR_TRAP(SYS_SPMEVFILT2R_EL0(6),		CGT_SPMSEL_SPMACCESS),
+	SR_TRAP(SYS_SPMEVFILT2R_EL0(7),		CGT_SPMSEL_SPMACCESS),
+	SR_TRAP(SYS_SPMEVFILT2R_EL0(8),		CGT_SPMSEL_SPMACCESS),
+	SR_TRAP(SYS_SPMEVFILT2R_EL0(9),		CGT_SPMSEL_SPMACCESS),
+	SR_TRAP(SYS_SPMEVFILT2R_EL0(10),	CGT_SPMSEL_SPMACCESS),
+	SR_TRAP(SYS_SPMEVFILT2R_EL0(11),	CGT_SPMSEL_SPMACCESS),
+	SR_TRAP(SYS_SPMEVFILT2R_EL0(12),	CGT_SPMSEL_SPMACCESS),
+	SR_TRAP(SYS_SPMEVFILT2R_EL0(13),	CGT_SPMSEL_SPMACCESS),
+	SR_TRAP(SYS_SPMEVFILT2R_EL0(14),	CGT_SPMSEL_SPMACCESS),
+	SR_TRAP(SYS_SPMEVFILT2R_EL0(15),	CGT_SPMSEL_SPMACCESS),
+
+	SR_TRAP(SYS_SPMEVCNTR_EL0(0),	CGT_SPMSEL_SPMACCESS),
+	SR_TRAP(SYS_SPMEVCNTR_EL0(1),	CGT_SPMSEL_SPMACCESS),
+	SR_TRAP(SYS_SPMEVCNTR_EL0(2),	CGT_SPMSEL_SPMACCESS),
+	SR_TRAP(SYS_SPMEVCNTR_EL0(3),	CGT_SPMSEL_SPMACCESS),
+	SR_TRAP(SYS_SPMEVCNTR_EL0(4),	CGT_SPMSEL_SPMACCESS),
+	SR_TRAP(SYS_SPMEVCNTR_EL0(5),	CGT_SPMSEL_SPMACCESS),
+	SR_TRAP(SYS_SPMEVCNTR_EL0(6),	CGT_SPMSEL_SPMACCESS),
+	SR_TRAP(SYS_SPMEVCNTR_EL0(7),	CGT_SPMSEL_SPMACCESS),
+	SR_TRAP(SYS_SPMEVCNTR_EL0(8),	CGT_SPMSEL_SPMACCESS),
+	SR_TRAP(SYS_SPMEVCNTR_EL0(9),	CGT_SPMSEL_SPMACCESS),
+	SR_TRAP(SYS_SPMEVCNTR_EL0(10),	CGT_SPMSEL_SPMACCESS),
+	SR_TRAP(SYS_SPMEVCNTR_EL0(11),	CGT_SPMSEL_SPMACCESS),
+	SR_TRAP(SYS_SPMEVCNTR_EL0(12),	CGT_SPMSEL_SPMACCESS),
+	SR_TRAP(SYS_SPMEVCNTR_EL0(13),	CGT_SPMSEL_SPMACCESS),
+	SR_TRAP(SYS_SPMEVCNTR_EL0(14),	CGT_SPMSEL_SPMACCESS),
+	SR_TRAP(SYS_SPMEVCNTR_EL0(15),	CGT_SPMSEL_SPMACCESS),
+
+	SR_TRAP(SYS_PMEVCNTSVR_EL1(0),	CGT_CNTR_ACCESSIBLE_0),
+	SR_TRAP(SYS_PMEVCNTSVR_EL1(1),	CGT_CNTR_ACCESSIBLE_1),
+	SR_TRAP(SYS_PMEVCNTSVR_EL1(2),	CGT_CNTR_ACCESSIBLE_2),
+	SR_TRAP(SYS_PMEVCNTSVR_EL1(3),	CGT_CNTR_ACCESSIBLE_3),
+	SR_TRAP(SYS_PMEVCNTSVR_EL1(4),	CGT_CNTR_ACCESSIBLE_4),
+	SR_TRAP(SYS_PMEVCNTSVR_EL1(5),	CGT_CNTR_ACCESSIBLE_5),
+	SR_TRAP(SYS_PMEVCNTSVR_EL1(6),	CGT_CNTR_ACCESSIBLE_6),
+	SR_TRAP(SYS_PMEVCNTSVR_EL1(7),	CGT_CNTR_ACCESSIBLE_7),
+	SR_TRAP(SYS_PMEVCNTSVR_EL1(8),	CGT_CNTR_ACCESSIBLE_8),
+	SR_TRAP(SYS_PMEVCNTSVR_EL1(9),	CGT_CNTR_ACCESSIBLE_9),
+	SR_TRAP(SYS_PMEVCNTSVR_EL1(10),	CGT_CNTR_ACCESSIBLE_10),
+	SR_TRAP(SYS_PMEVCNTSVR_EL1(11),	CGT_CNTR_ACCESSIBLE_11),
+	SR_TRAP(SYS_PMEVCNTSVR_EL1(12),	CGT_CNTR_ACCESSIBLE_12),
+	SR_TRAP(SYS_PMEVCNTSVR_EL1(13),	CGT_CNTR_ACCESSIBLE_13),
+	SR_TRAP(SYS_PMEVCNTSVR_EL1(14),	CGT_CNTR_ACCESSIBLE_14),
+	SR_TRAP(SYS_PMEVCNTSVR_EL1(15),	CGT_CNTR_ACCESSIBLE_15),
+	SR_TRAP(SYS_PMEVCNTSVR_EL1(16),	CGT_CNTR_ACCESSIBLE_16),
+	SR_TRAP(SYS_PMEVCNTSVR_EL1(17),	CGT_CNTR_ACCESSIBLE_17),
+	SR_TRAP(SYS_PMEVCNTSVR_EL1(18),	CGT_CNTR_ACCESSIBLE_18),
+	SR_TRAP(SYS_PMEVCNTSVR_EL1(19),	CGT_CNTR_ACCESSIBLE_19),
+	SR_TRAP(SYS_PMEVCNTSVR_EL1(20),	CGT_CNTR_ACCESSIBLE_20),
+	SR_TRAP(SYS_PMEVCNTSVR_EL1(21),	CGT_CNTR_ACCESSIBLE_21),
+	SR_TRAP(SYS_PMEVCNTSVR_EL1(22),	CGT_CNTR_ACCESSIBLE_22),
+	SR_TRAP(SYS_PMEVCNTSVR_EL1(23),	CGT_CNTR_ACCESSIBLE_23),
+	SR_TRAP(SYS_PMEVCNTSVR_EL1(24),	CGT_CNTR_ACCESSIBLE_24),
+	SR_TRAP(SYS_PMEVCNTSVR_EL1(25),	CGT_CNTR_ACCESSIBLE_25),
+	SR_TRAP(SYS_PMEVCNTSVR_EL1(26),	CGT_CNTR_ACCESSIBLE_26),
+	SR_TRAP(SYS_PMEVCNTSVR_EL1(27),	CGT_CNTR_ACCESSIBLE_27),
+	SR_TRAP(SYS_PMEVCNTSVR_EL1(28),	CGT_CNTR_ACCESSIBLE_28),
+	SR_TRAP(SYS_PMEVCNTSVR_EL1(29),	CGT_CNTR_ACCESSIBLE_29),
+	SR_TRAP(SYS_PMEVCNTSVR_EL1(30),	CGT_CNTR_ACCESSIBLE_30),
+
+	SR_TRAP(SYS_MDSELR_EL1,		CGT_MDCR_TDE_TDA),
+	SR_TRAP(SYS_PMUACR_EL1,		CGT_MDCR_TPM),
+	SR_TRAP(SYS_PMICFILTR_EL0,	CGT_MDCR_TPM),
+	SR_TRAP(SYS_PMICNTR_EL0,	CGT_MDCR_TPM),
+	SR_TRAP(SYS_PMIAR_EL1,		CGT_MDCR_TPM),
+	SR_TRAP(SYS_PMECR_EL1,		CGT_MDCR_TPM),
+	SR_TRAP(SYS_PMZR_EL0,		CGT_MDCR_TPM),
+
 	SR_TRAP(SYS_CPACR_EL1,		CGT_CPTR_TCPAC),
 	SR_TRAP(SYS_AMUSERENR_EL0,	CGT_CPTR_TAM),
 	SR_TRAP(SYS_AMCFGR_EL0,		CGT_CPTR_TAM),