From patchwork Tue Mar 11 04:03:17 2025
X-Patchwork-Submitter: Zhenyu Ye
X-Patchwork-Id: 14011059
From: Zhenyu Ye
Subject: [PATCH v1 1/5] arm64/sysreg: add HDBSS related register information
Date: Tue, 11 Mar 2025 12:03:17 +0800
Message-ID: <20250311040321.1460-2-yezhenyu2@huawei.com>
In-Reply-To: <20250311040321.1460-1-yezhenyu2@huawei.com>
References: <20250311040321.1460-1-yezhenyu2@huawei.com>
X-Mailing-List: kvm@vger.kernel.org

From: eillon

The ARM architecture added the HDBSS feature and descriptions of the
related registers (HDBSSBR_EL2/HDBSSPROD_EL2) in the DDI0601 (ID121123)
release. Add them to Linux.
Signed-off-by: eillon
---
 arch/arm64/include/asm/esr.h          |  2 ++
 arch/arm64/include/asm/kvm_arm.h      |  1 +
 arch/arm64/include/asm/sysreg.h       |  4 ++++
 arch/arm64/tools/sysreg               | 28 +++++++++++++++++++++++++++
 tools/arch/arm64/include/asm/sysreg.h |  4 ++++
 5 files changed, 39 insertions(+)

diff --git a/arch/arm64/include/asm/esr.h b/arch/arm64/include/asm/esr.h
index d1b1a33f9a8b..a33befe0999a 100644
--- a/arch/arm64/include/asm/esr.h
+++ b/arch/arm64/include/asm/esr.h
@@ -147,6 +147,8 @@
 #define ESR_ELx_CM		(UL(1) << ESR_ELx_CM_SHIFT)

 /* ISS2 field definitions for Data Aborts */
+#define ESR_ELx_HDBSSF_SHIFT	(11)
+#define ESR_ELx_HDBSSF		(UL(1) << ESR_ELx_HDBSSF_SHIFT)
 #define ESR_ELx_TnD_SHIFT	(10)
 #define ESR_ELx_TnD		(UL(1) << ESR_ELx_TnD_SHIFT)
 #define ESR_ELx_TagAccess_SHIFT	(9)

diff --git a/arch/arm64/include/asm/kvm_arm.h b/arch/arm64/include/asm/kvm_arm.h
index c2417a424b98..80793ef57f8b 100644
--- a/arch/arm64/include/asm/kvm_arm.h
+++ b/arch/arm64/include/asm/kvm_arm.h
@@ -122,6 +122,7 @@
 			 TCR_EL2_ORGN0_MASK | TCR_EL2_IRGN0_MASK)

 /* VTCR_EL2 Registers bits */
+#define VTCR_EL2_HDBSS		(1UL << 45)
 #define VTCR_EL2_DS		TCR_EL2_DS
 #define VTCR_EL2_RES1		(1U << 31)
 #define VTCR_EL2_HD		(1 << 22)

diff --git a/arch/arm64/include/asm/sysreg.h b/arch/arm64/include/asm/sysreg.h
index 05ea5223d2d5..b727772c06fb 100644
--- a/arch/arm64/include/asm/sysreg.h
+++ b/arch/arm64/include/asm/sysreg.h
@@ -522,6 +522,10 @@
 #define SYS_VTCR_EL2			sys_reg(3, 4, 2, 1, 2)

 #define SYS_VNCR_EL2			sys_reg(3, 4, 2, 2, 0)
+
+#define SYS_HDBSSBR_EL2			sys_reg(3, 4, 2, 3, 2)
+#define SYS_HDBSSPROD_EL2		sys_reg(3, 4, 2, 3, 3)
+
 #define SYS_HAFGRTR_EL2			sys_reg(3, 4, 3, 1, 6)
 #define SYS_SPSR_EL2			sys_reg(3, 4, 4, 0, 0)
 #define SYS_ELR_EL2			sys_reg(3, 4, 4, 0, 1)

diff --git a/arch/arm64/tools/sysreg b/arch/arm64/tools/sysreg
index 762ee084b37c..c2aea1e7fd22 100644
--- a/arch/arm64/tools/sysreg
+++ b/arch/arm64/tools/sysreg
@@ -2876,6 +2876,34 @@ Sysreg GCSPR_EL2	3	4	2	5	1
 Fields	GCSPR_ELx
 EndSysreg

+Sysreg HDBSSBR_EL2	3	4	2	3	2
+Res0	63:56
+Field	55:12	BADDR
+Res0	11:4
+Enum	3:0	SZ
+	0b0001	8KB
+	0b0010	16KB
+	0b0011	32KB
+	0b0100	64KB
+	0b0101	128KB
+	0b0110	256KB
+	0b0111	512KB
+	0b1000	1MB
+	0b1001	2MB
+EndEnum
+EndSysreg
+
+Sysreg HDBSSPROD_EL2	3	4	2	3	3
+Res0	63:32
+Enum	31:26	FSC
+	0b000000	OK
+	0b010000	ExternalAbort
+	0b101000	GPF
+EndEnum
+Res0	25:19
+Field	18:0	INDEX
+EndSysreg
+
 Sysreg DACR32_EL2	3	4	3	0	0
 Res0	63:32
 Field	31:30	D15

diff --git a/tools/arch/arm64/include/asm/sysreg.h b/tools/arch/arm64/include/asm/sysreg.h
index 150416682e2c..95fc6a4ee655 100644
--- a/tools/arch/arm64/include/asm/sysreg.h
+++ b/tools/arch/arm64/include/asm/sysreg.h
@@ -518,6 +518,10 @@
 #define SYS_VTCR_EL2			sys_reg(3, 4, 2, 1, 2)

 #define SYS_VNCR_EL2			sys_reg(3, 4, 2, 2, 0)
+
+#define SYS_HDBSSBR_EL2			sys_reg(3, 4, 2, 3, 2)
+#define SYS_HDBSSPROD_EL2		sys_reg(3, 4, 2, 3, 3)
+
 #define SYS_HAFGRTR_EL2			sys_reg(3, 4, 3, 1, 6)
 #define SYS_SPSR_EL2			sys_reg(3, 4, 4, 0, 0)
 #define SYS_ELR_EL2			sys_reg(3, 4, 4, 0, 1)
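A note on the SZ encoding above: each step doubles the buffer size, starting
at 8KB for 0b0001, so SZ maps to bytes as 8KB << (SZ - 1). A minimal decoding
sketch for illustration only (not part of the series; the function name is
ours):

#include <stddef.h>

/* Sketch: decode HDBSSBR_EL2.SZ (0b0001 = 8KB ... 0b1001 = 2MB). */
static size_t hdbss_sz_to_bytes(unsigned int sz)
{
	if (sz < 1 || sz > 9)	/* values outside 0b0001..0b1001 are reserved */
		return 0;
	return (size_t)(8 * 1024) << (sz - 1);
}

Assuming the 8-byte entries that patch 4's flush loop reads, the largest
(2MB) buffer holds 262,144 entries, which the 19-bit INDEX field of
HDBSSPROD_EL2 can index with room to spare.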
From patchwork Tue Mar 11 04:03:18 2025
X-Patchwork-Submitter: Zhenyu Ye
X-Patchwork-Id: 14011060
From: Zhenyu Ye
Subject: [PATCH v1 2/5] arm64/kvm: support setting the DBM attr during memory abort
Date: Tue, 11 Mar 2025 12:03:18 +0800
Message-ID: <20250311040321.1460-3-yezhenyu2@huawei.com>
In-Reply-To: <20250311040321.1460-1-yezhenyu2@huawei.com>
References: <20250311040321.1460-1-yezhenyu2@huawei.com>
X-Mailing-List: kvm@vger.kernel.org

From: eillon

Since ARMv8.1, page table entries have supported the DBM attribute.
Support setting this attribute during user_mem_abort().
Signed-off-by: eillon
---
 arch/arm64/include/asm/kvm_pgtable.h | 3 +++
 arch/arm64/kvm/hyp/pgtable.c         | 6 ++++++
 2 files changed, 9 insertions(+)

diff --git a/arch/arm64/include/asm/kvm_pgtable.h b/arch/arm64/include/asm/kvm_pgtable.h
index 6b9d274052c7..35648d7f08f5 100644
--- a/arch/arm64/include/asm/kvm_pgtable.h
+++ b/arch/arm64/include/asm/kvm_pgtable.h
@@ -86,6 +86,8 @@ typedef u64 kvm_pte_t;

 #define KVM_PTE_LEAF_ATTR_HI_S2_XN	BIT(54)

+#define KVM_PTE_LEAF_ATTR_HI_S2_DBM	BIT(51)
+
 #define KVM_PTE_LEAF_ATTR_HI_S1_GP	BIT(50)

 #define KVM_PTE_LEAF_ATTR_S2_PERMS	(KVM_PTE_LEAF_ATTR_LO_S2_S2AP_R | \
@@ -252,6 +254,7 @@ enum kvm_pgtable_prot {
 	KVM_PGTABLE_PROT_DEVICE			= BIT(3),
 	KVM_PGTABLE_PROT_NORMAL_NC		= BIT(4),
+	KVM_PGTABLE_PROT_DBM			= BIT(5),

 	KVM_PGTABLE_PROT_SW0			= BIT(55),
 	KVM_PGTABLE_PROT_SW1			= BIT(56),

diff --git a/arch/arm64/kvm/hyp/pgtable.c b/arch/arm64/kvm/hyp/pgtable.c
index df5cc74a7dd0..3ea6bdbc02a0 100644
--- a/arch/arm64/kvm/hyp/pgtable.c
+++ b/arch/arm64/kvm/hyp/pgtable.c
@@ -700,6 +700,9 @@ static int stage2_set_prot_attr(struct kvm_pgtable *pgt, enum kvm_pgtable_prot p
 	if (prot & KVM_PGTABLE_PROT_W)
 		attr |= KVM_PTE_LEAF_ATTR_LO_S2_S2AP_W;

+	if (prot & KVM_PGTABLE_PROT_DBM)
+		attr |= KVM_PTE_LEAF_ATTR_HI_S2_DBM;
+
 	if (!kvm_lpa2_is_enabled())
 		attr |= FIELD_PREP(KVM_PTE_LEAF_ATTR_LO_S2_SH, sh);

@@ -1309,6 +1312,9 @@ int kvm_pgtable_stage2_relax_perms(struct kvm_pgtable *pgt, u64 addr,
 	if (prot & KVM_PGTABLE_PROT_W)
 		set |= KVM_PTE_LEAF_ATTR_LO_S2_S2AP_W;

+	if (prot & KVM_PGTABLE_PROT_DBM)
+		set |= KVM_PTE_LEAF_ATTR_HI_S2_DBM;
+
 	if (prot & KVM_PGTABLE_PROT_X)
 		clr |= KVM_PTE_LEAF_ATTR_HI_S2_XN;
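For context on what the DBM attribute buys here: under FEAT_HAFDBS, when a
stage-2 descriptor has DBM (bit 51) set, hardware responds to a guest write
by setting the S2AP write-permission bit itself rather than raising a
permission fault, so the write permission doubles as a hardware-maintained
dirty record. A sketch of that interpretation (assumed architectural
semantics, not code from the series; the macro and function names are ours):

#include <stdint.h>
#include <stdbool.h>

/* Stage-2 leaf attribute bits used by this series (see kvm_pgtable.h). */
#define S2_PTE_DBM	(1ULL << 51)	/* KVM_PTE_LEAF_ATTR_HI_S2_DBM */
#define S2_PTE_S2AP_W	(1ULL << 7)	/* S2AP[1]: write permission */

/*
 * Sketch: with DBM set, hardware flips S2AP_W from 0 to 1 on the first
 * write instead of faulting, so "DBM set and writable" can be read as
 * "this page has been dirtied".
 */
static bool s2_pte_hw_dirty(uint64_t pte)
{
	return (pte & S2_PTE_DBM) && (pte & S2_PTE_S2AP_W);
}

The HDBSS feature in the following patches builds on this: instead of
software having to scan page tables for such entries, hardware also appends
a record to a per-vCPU buffer.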
From patchwork Tue Mar 11 04:03:19 2025
X-Patchwork-Submitter: Zhenyu Ye
X-Patchwork-Id: 14011063
From: Zhenyu Ye
Subject: [PATCH v1 3/5] arm64/kvm: use an ioctl to enable/disable the HDBSS feature
Date: Tue, 11 Mar 2025 12:03:19 +0800
Message-ID: <20250311040321.1460-4-yezhenyu2@huawei.com>
In-Reply-To: <20250311040321.1460-1-yezhenyu2@huawei.com>
References: <20250311040321.1460-1-yezhenyu2@huawei.com>
X-Mailing-List: kvm@vger.kernel.org

From: eillon

On ARM64, the buffer size used by the HDBSS feature is configurable.
Therefore, we cannot enable the HDBSS feature during KVM initialization;
instead, we should enable it when a live migration is triggered, at which
point the buffer size can be configured by the user.

Add the KVM_CAP_ARM_HW_DIRTY_STATE_TRACK capability to enable/disable this
feature. Users (such as QEMU) can invoke the KVM_ENABLE_CAP ioctl to enable
HDBSS at the beginning of the migration, and disable the feature by invoking
the ioctl again at the end of the migration with size set to 0.

Signed-off-by: eillon
---
 arch/arm64/include/asm/cpufeature.h | 12 +++++
 arch/arm64/include/asm/kvm_host.h   |  6 +++
 arch/arm64/include/asm/kvm_mmu.h    | 12 +++++
 arch/arm64/include/asm/sysreg.h     | 12 +++++
 arch/arm64/kvm/arm.c                | 70 +++++++++++++++++++++++++++++
 arch/arm64/kvm/hyp/vhe/switch.c     |  1 +
 arch/arm64/kvm/mmu.c                |  3 ++
 arch/arm64/kvm/reset.c              |  7 +++
 include/linux/kvm_host.h            |  1 +
 include/uapi/linux/kvm.h            |  1 +
 tools/include/uapi/linux/kvm.h      |  1 +
 11 files changed, 126 insertions(+)

diff --git a/arch/arm64/include/asm/cpufeature.h b/arch/arm64/include/asm/cpufeature.h
index e0e4478f5fb5..c76d51506562 100644
--- a/arch/arm64/include/asm/cpufeature.h
+++ b/arch/arm64/include/asm/cpufeature.h
@@ -743,6 +743,18 @@ static __always_inline bool system_supports_fpsimd(void)
 	return alternative_has_cap_likely(ARM64_HAS_FPSIMD);
 }

+static inline bool system_supports_hdbss(void)
+{
+	u64 mmfr1;
+	u32 val;
+
+	mmfr1 = read_sanitised_ftr_reg(SYS_ID_AA64MMFR1_EL1);
+	val = cpuid_feature_extract_unsigned_field(mmfr1,
+						ID_AA64MMFR1_EL1_HAFDBS_SHIFT);
+
+	return val == ID_AA64MMFR1_EL1_HAFDBS_HDBSS;
+}
+
 static inline bool system_uses_hw_pan(void)
 {
 	return alternative_has_cap_unlikely(ARM64_HAS_PAN);

diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
index d919557af5e5..bd73ee92b12c 100644
--- a/arch/arm64/include/asm/kvm_host.h
+++ b/arch/arm64/include/asm/kvm_host.h
@@ -787,6 +787,12 @@ struct kvm_vcpu_arch {

 	/* Per-vcpu CCSIDR override or NULL */
 	u32 *ccsidr;
+
+	/* HDBSS registers info */
+	struct {
+		u64 br_el2;
+		u64 prod_el2;
+	} hdbss;
 };

 /*

diff --git a/arch/arm64/include/asm/kvm_mmu.h b/arch/arm64/include/asm/kvm_mmu.h
index b98ac6aa631f..ed5b68c2085e 100644
--- a/arch/arm64/include/asm/kvm_mmu.h
+++ b/arch/arm64/include/asm/kvm_mmu.h
@@ -330,6 +330,18 @@ static __always_inline void __load_stage2(struct kvm_s2_mmu *mmu,
 	asm(ALTERNATIVE("nop", "isb", ARM64_WORKAROUND_SPECULATIVE_AT));
 }

+static __always_inline void __load_hdbss(struct kvm_vcpu *vcpu)
+{
+	if (!vcpu->kvm->enable_hdbss)
+		return;
+
+	write_sysreg_s(vcpu->arch.hdbss.br_el2, SYS_HDBSSBR_EL2);
+	write_sysreg_s(vcpu->arch.hdbss.prod_el2, SYS_HDBSSPROD_EL2);
+
+	dsb(sy);
+	isb();
+}
+
 static inline struct kvm *kvm_s2_mmu_to_kvm(struct kvm_s2_mmu *mmu)
 {
 	return container_of(mmu->arch, struct kvm, arch);
diff --git a/arch/arm64/include/asm/sysreg.h b/arch/arm64/include/asm/sysreg.h
index b727772c06fb..3040eac74f8c 100644
--- a/arch/arm64/include/asm/sysreg.h
+++ b/arch/arm64/include/asm/sysreg.h
@@ -1105,6 +1105,18 @@
 #define GCS_CAP(x)	((((unsigned long)x) & GCS_CAP_ADDR_MASK) | \
 					GCS_CAP_VALID_TOKEN)

+/*
+ * Definitions for the HDBSS feature
+ */
+#define HDBSS_MAX_SIZE		HDBSSBR_EL2_SZ_2MB
+
+#define HDBSSBR_EL2(baddr, sz)	(((baddr) & GENMASK(55, 12 + sz)) | \
+				 ((sz) << HDBSSBR_EL2_SZ_SHIFT))
+#define HDBSSBR_BADDR(br)	((br) & GENMASK(55, (12 + HDBSSBR_SZ(br))))
+#define HDBSSBR_SZ(br)		(((br) & HDBSSBR_EL2_SZ_MASK) >> HDBSSBR_EL2_SZ_SHIFT)
+
+#define HDBSSPROD_IDX(prod)	(((prod) & HDBSSPROD_EL2_INDEX_MASK) >> HDBSSPROD_EL2_INDEX_SHIFT)
+
 #define ARM64_FEATURE_FIELD_BITS	4

 /* Defined for compatibility only, do not add new users. */

diff --git a/arch/arm64/kvm/arm.c b/arch/arm64/kvm/arm.c
index 0160b4924351..825cfef3b1c2 100644
--- a/arch/arm64/kvm/arm.c
+++ b/arch/arm64/kvm/arm.c
@@ -80,6 +80,70 @@ int kvm_arch_vcpu_should_kick(struct kvm_vcpu *vcpu)
 	return kvm_vcpu_exiting_guest_mode(vcpu) == IN_GUEST_MODE;
 }

+static int kvm_cap_arm_enable_hdbss(struct kvm *kvm,
+				    struct kvm_enable_cap *cap)
+{
+	unsigned long i;
+	struct kvm_vcpu *vcpu;
+	struct page *hdbss_pg;
+	int size = cap->args[0];
+
+	if (!system_supports_hdbss()) {
+		kvm_err("This system does not support HDBSS!\n");
+		return -EINVAL;
+	}
+
+	if (size < 0 || size > HDBSS_MAX_SIZE) {
+		kvm_err("Invalid HDBSS buffer size: %d!\n", size);
+		return -EINVAL;
+	}
+
+	/* Enable the HDBSS feature if size > 0, otherwise disable it. */
+	if (size) {
+		kvm->enable_hdbss = true;
+		kvm->arch.mmu.vtcr |= VTCR_EL2_HD | VTCR_EL2_HDBSS;
+
+		kvm_for_each_vcpu(i, vcpu, kvm) {
+			hdbss_pg = alloc_pages(GFP_KERNEL, size);
+			if (!hdbss_pg) {
+				kvm_err("Alloc HDBSS buffer failed!\n");
+				return -EINVAL;
+			}
+
+			vcpu->arch.hdbss.br_el2 = HDBSSBR_EL2(page_to_phys(hdbss_pg), size);
+			vcpu->arch.hdbss.prod_el2 = 0;
+
+			/*
+			 * We should kick vcpus out of guest mode here to
+			 * load new vtcr value to vtcr_el2 register when
+			 * re-enter guest mode.
+			 */
+			kvm_vcpu_kick(vcpu);
+		}
+
+		kvm_info("Enable HDBSS success, HDBSS buffer size: %d\n", size);
+	} else if (kvm->enable_hdbss) {
+		kvm->arch.mmu.vtcr &= ~(VTCR_EL2_HD | VTCR_EL2_HDBSS);
+
+		kvm_for_each_vcpu(i, vcpu, kvm) {
+			/* Kick vcpus to flush hdbss buffer. */
+			kvm_vcpu_kick(vcpu);
+
+			hdbss_pg = phys_to_page(HDBSSBR_BADDR(vcpu->arch.hdbss.br_el2));
+			if (hdbss_pg)
+				__free_pages(hdbss_pg, HDBSSBR_SZ(vcpu->arch.hdbss.br_el2));
+
+			vcpu->arch.hdbss.br_el2 = 0;
+			vcpu->arch.hdbss.prod_el2 = 0;
+		}
+
+		kvm->enable_hdbss = false;
+		kvm_info("Disable HDBSS success\n");
+	}
+
+	return 0;
+}
+
 int kvm_vm_ioctl_enable_cap(struct kvm *kvm,
 			    struct kvm_enable_cap *cap)
 {
@@ -125,6 +189,9 @@ int kvm_vm_ioctl_enable_cap(struct kvm *kvm,
 		}
 		mutex_unlock(&kvm->slots_lock);
 		break;
+	case KVM_CAP_ARM_HW_DIRTY_STATE_TRACK:
+		r = kvm_cap_arm_enable_hdbss(kvm, cap);
+		break;
 	default:
 		break;
 	}
@@ -393,6 +460,9 @@ int kvm_vm_ioctl_check_extension(struct kvm *kvm, long ext)
 	case KVM_CAP_ARM_SUPPORTED_REG_MASK_RANGES:
 		r = BIT(0);
 		break;
+	case KVM_CAP_ARM_HW_DIRTY_STATE_TRACK:
+		r = system_supports_hdbss();
+		break;
 	default:
 		r = 0;
 	}

diff --git a/arch/arm64/kvm/hyp/vhe/switch.c b/arch/arm64/kvm/hyp/vhe/switch.c
index 647737d6e8d0..6b633a219e4d 100644
--- a/arch/arm64/kvm/hyp/vhe/switch.c
+++ b/arch/arm64/kvm/hyp/vhe/switch.c
@@ -256,6 +256,7 @@ void kvm_vcpu_load_vhe(struct kvm_vcpu *vcpu)
 	__vcpu_load_switch_sysregs(vcpu);
 	__vcpu_load_activate_traps(vcpu);
 	__load_stage2(vcpu->arch.hw_mmu, vcpu->arch.hw_mmu->arch);
+	__load_hdbss(vcpu);
 }

 void kvm_vcpu_put_vhe(struct kvm_vcpu *vcpu)

diff --git a/arch/arm64/kvm/mmu.c b/arch/arm64/kvm/mmu.c
index 1f55b0c7b11d..9c11e2292b1e 100644
--- a/arch/arm64/kvm/mmu.c
+++ b/arch/arm64/kvm/mmu.c
@@ -1703,6 +1703,9 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
 	if (writable)
 		prot |= KVM_PGTABLE_PROT_W;

+	if (kvm->enable_hdbss && logging_active)
+		prot |= KVM_PGTABLE_PROT_DBM;
+
 	if (exec_fault)
 		prot |= KVM_PGTABLE_PROT_X;

diff --git a/arch/arm64/kvm/reset.c b/arch/arm64/kvm/reset.c
index 803e11b0dc8f..4e518f9a3df0 100644
--- a/arch/arm64/kvm/reset.c
+++ b/arch/arm64/kvm/reset.c
@@ -153,12 +153,19 @@ bool kvm_arm_vcpu_is_finalized(struct kvm_vcpu *vcpu)
 void kvm_arm_vcpu_destroy(struct kvm_vcpu *vcpu)
 {
 	void *sve_state = vcpu->arch.sve_state;
+	struct page *hdbss_pg;

 	kvm_unshare_hyp(vcpu, vcpu + 1);
 	if (sve_state)
 		kvm_unshare_hyp(sve_state, sve_state + vcpu_sve_state_size(vcpu));
 	kfree(sve_state);
 	kfree(vcpu->arch.ccsidr);
+
+	if (vcpu->arch.hdbss.br_el2) {
+		hdbss_pg = phys_to_page(HDBSSBR_BADDR(vcpu->arch.hdbss.br_el2));
+		if (hdbss_pg)
+			__free_pages(hdbss_pg, HDBSSBR_SZ(vcpu->arch.hdbss.br_el2));
+	}
 }

 static void kvm_vcpu_reset_sve(struct kvm_vcpu *vcpu)

diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h
index f34f4cfaa513..aae37141c4a6 100644
--- a/include/linux/kvm_host.h
+++ b/include/linux/kvm_host.h
@@ -862,6 +862,7 @@ struct kvm {
 	struct xarray mem_attr_array;
 #endif
 	char stats_id[KVM_STATS_NAME_SIZE];
+	bool enable_hdbss;
 };

 #define kvm_err(fmt, ...) \
diff --git a/include/uapi/linux/kvm.h b/include/uapi/linux/kvm.h
index 45e6d8fca9b9..748891902426 100644
--- a/include/uapi/linux/kvm.h
+++ b/include/uapi/linux/kvm.h
@@ -929,6 +929,7 @@ struct kvm_enable_cap {
 #define KVM_CAP_PRE_FAULT_MEMORY 236
 #define KVM_CAP_X86_APIC_BUS_CYCLES_NS 237
 #define KVM_CAP_X86_GUEST_MODE 238
+#define KVM_CAP_ARM_HW_DIRTY_STATE_TRACK 239

 struct kvm_irq_routing_irqchip {
 	__u32 irqchip;

diff --git a/tools/include/uapi/linux/kvm.h b/tools/include/uapi/linux/kvm.h
index 502ea63b5d2e..27d58b751e77 100644
--- a/tools/include/uapi/linux/kvm.h
+++ b/tools/include/uapi/linux/kvm.h
@@ -933,6 +933,7 @@ struct kvm_enable_cap {
 #define KVM_CAP_PRE_FAULT_MEMORY 236
 #define KVM_CAP_X86_APIC_BUS_CYCLES_NS 237
 #define KVM_CAP_X86_GUEST_MODE 238
+#define KVM_CAP_ARM_HW_DIRTY_STATE_TRACK 239

 struct kvm_irq_routing_irqchip {
 	__u32 irqchip;
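To show how userspace would drive the new capability, here is a hedged
sketch rather than QEMU code; vm_fd is an assumed KVM VM file descriptor and
set_hdbss() is our name. Per the patch above, args[0] is the per-vCPU
allocation order (so the buffer is PAGE_SIZE << size bytes, assuming 4KB
pages), and 0 disables the feature again:

#include <string.h>
#include <sys/ioctl.h>
#include <linux/kvm.h>

/* Sketch: enable HDBSS with the given order, or disable with size == 0. */
static int set_hdbss(int vm_fd, unsigned long size)
{
	struct kvm_enable_cap cap;

	memset(&cap, 0, sizeof(cap));
	cap.cap = KVM_CAP_ARM_HW_DIRTY_STATE_TRACK;
	cap.args[0] = size;

	return ioctl(vm_fd, KVM_ENABLE_CAP, &cap);
}

Calling set_hdbss(vm_fd, 0) at the end of migration frees the per-vCPU
buffers, matching the disable path above; availability can be probed first
with KVM_CHECK_EXTENSION on the same capability.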
From patchwork Tue Mar 11 04:03:20 2025
X-Patchwork-Submitter: Zhenyu Ye
X-Patchwork-Id: 14011061
From: Zhenyu Ye
Subject: [PATCH v1 4/5] arm64/kvm: support handling the HDBSSF event
Date: Tue, 11 Mar 2025 12:03:20 +0800
Message-ID: <20250311040321.1460-5-yezhenyu2@huawei.com>
In-Reply-To: <20250311040321.1460-1-yezhenyu2@huawei.com>
References: <20250311040321.1460-1-yezhenyu2@huawei.com>
X-Mailing-List: kvm@vger.kernel.org

From: eillon

Update the dirty bitmap based on the HDBSS buffer. Similar to the
implementation of the x86 PML feature, KVM flushes the buffers on all
VM-Exits, thus we only need to kick running vCPUs to force a VM-Exit.

Signed-off-by: eillon
---
 arch/arm64/kvm/arm.c         | 10 ++++++++
 arch/arm64/kvm/handle_exit.c | 47 ++++++++++++++++++++++++++++++++++++
 arch/arm64/kvm/mmu.c         |  7 ++++++
 3 files changed, 64 insertions(+)

diff --git a/arch/arm64/kvm/arm.c b/arch/arm64/kvm/arm.c
index 825cfef3b1c2..fceceeead011 100644
--- a/arch/arm64/kvm/arm.c
+++ b/arch/arm64/kvm/arm.c
@@ -1845,7 +1845,17 @@ long kvm_arch_vcpu_ioctl(struct file *filp,

 void kvm_arch_sync_dirty_log(struct kvm *kvm, struct kvm_memory_slot *memslot)
 {
+	/*
+	 * Flush all CPUs' dirty log buffers to the dirty_bitmap.  Called
+	 * before reporting dirty_bitmap to userspace.  KVM flushes the
+	 * buffers on all VM-Exits, thus we only need to kick running vCPUs
+	 * to force a VM-Exit.
+	 */
+	struct kvm_vcpu *vcpu;
+	unsigned long i;

+	kvm_for_each_vcpu(i, vcpu, kvm)
+		kvm_vcpu_kick(vcpu);
 }

 static int kvm_vm_ioctl_set_device_addr(struct kvm *kvm,

diff --git a/arch/arm64/kvm/handle_exit.c b/arch/arm64/kvm/handle_exit.c
index 512d152233ff..db9d7e1f72bf 100644
--- a/arch/arm64/kvm/handle_exit.c
+++ b/arch/arm64/kvm/handle_exit.c
@@ -330,6 +330,50 @@ static exit_handle_fn kvm_get_exit_handler(struct kvm_vcpu *vcpu)
 	return arm_exit_handlers[esr_ec];
 }

+#define HDBSS_ENTRY_VALID_SHIFT	0
+#define HDBSS_ENTRY_VALID_MASK	(1UL << HDBSS_ENTRY_VALID_SHIFT)
+#define HDBSS_ENTRY_IPA_SHIFT	12
+#define HDBSS_ENTRY_IPA_MASK	GENMASK_ULL(55, HDBSS_ENTRY_IPA_SHIFT)
+
+static void kvm_flush_hdbss_buffer(struct kvm_vcpu *vcpu)
+{
+	int idx, curr_idx;
+	u64 *hdbss_buf;
+
+	if (!vcpu->kvm->enable_hdbss)
+		return;
+
+	dsb(sy);
+	isb();
+	curr_idx = HDBSSPROD_IDX(read_sysreg_s(SYS_HDBSSPROD_EL2));
+
+	/* Do nothing if HDBSS buffer is empty or br_el2 is NULL */
+	if (curr_idx == 0 || vcpu->arch.hdbss.br_el2 == 0)
+		return;
+
+	hdbss_buf = page_address(phys_to_page(HDBSSBR_BADDR(vcpu->arch.hdbss.br_el2)));
+	if (!hdbss_buf) {
+		kvm_err("Enter flush hdbss buffer with buffer == NULL!");
+		return;
+	}
+
+	for (idx = 0; idx < curr_idx; idx++) {
+		u64 gpa;
+
+		gpa = hdbss_buf[idx];
+		if (!(gpa & HDBSS_ENTRY_VALID_MASK))
+			continue;
+
+		gpa = gpa & HDBSS_ENTRY_IPA_MASK;
+		kvm_vcpu_mark_page_dirty(vcpu, gpa >> PAGE_SHIFT);
+	}
+
+	/* reset HDBSS index */
+	write_sysreg_s(0, SYS_HDBSSPROD_EL2);
+	dsb(sy);
+	isb();
+}
+
 /*
  * We may be single-stepping an emulated instruction. If the emulation
  * has been completed in the kernel, we can return to userspace with a
@@ -365,6 +409,9 @@ int handle_exit(struct kvm_vcpu *vcpu, int exception_index)
 {
 	struct kvm_run *run = vcpu->run;

+	if (vcpu->kvm->enable_hdbss)
+		kvm_flush_hdbss_buffer(vcpu);
+
 	if (ARM_SERROR_PENDING(exception_index)) {
 		/*
 		 * The SError is handled by handle_exit_early(). If the guest

diff --git a/arch/arm64/kvm/mmu.c b/arch/arm64/kvm/mmu.c
index 9c11e2292b1e..3e0781ae0ae1 100644
--- a/arch/arm64/kvm/mmu.c
+++ b/arch/arm64/kvm/mmu.c
@@ -1790,6 +1790,13 @@ int kvm_handle_guest_abort(struct kvm_vcpu *vcpu)
 	ipa = fault_ipa = kvm_vcpu_get_fault_ipa(vcpu);
 	is_iabt = kvm_vcpu_trap_is_iabt(vcpu);

+	/*
+	 * The HDBSS buffer has already been flushed on entry to
+	 * handle_trap_exceptions(); nothing to do here.
+	 */
+	if (ESR_ELx_ISS2(esr) & ESR_ELx_HDBSSF)
+		return 1;
+
 	if (esr_fsc_is_translation_fault(esr)) {
 		/* Beyond sanitised PARange (which is the IPA limit) */
 		if (fault_ipa >= BIT_ULL(get_kvm_ipa_limit())) {
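On the consumer side, KVM_GET_DIRTY_LOG is what reaches
kvm_arch_sync_dirty_log() above, so every retrieval now kicks running vCPUs
and drains their HDBSS buffers into the bitmap first. A userspace sketch
(fetch_dirty_log() is our name; the memslot is assumed to already have dirty
logging enabled):

#include <stdlib.h>
#include <sys/ioctl.h>
#include <linux/kvm.h>

/* Sketch: fetch the dirty bitmap for one memslot.  nr_pages is the slot
 * size in guest pages; the caller frees the returned buffer. */
static void *fetch_dirty_log(int vm_fd, unsigned int slot, size_t nr_pages)
{
	struct kvm_dirty_log log = { .slot = slot };
	void *bitmap = calloc((nr_pages + 63) / 64, sizeof(unsigned long));

	if (!bitmap)
		return NULL;

	log.dirty_bitmap = bitmap;
	/* Kicks vCPUs via kvm_arch_sync_dirty_log(), draining HDBSS. */
	if (ioctl(vm_fd, KVM_GET_DIRTY_LOG, &log) < 0) {
		free(bitmap);
		return NULL;
	}
	return bitmap;
}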
From patchwork Tue Mar 11 04:03:21 2025
X-Patchwork-Submitter: Zhenyu Ye
X-Patchwork-Id: 14011062
From: Zhenyu Ye
Subject: [PATCH v1 5/5] arm64/config: add config to control whether to enable the HDBSS feature
Date: Tue, 11 Mar 2025 12:03:21 +0800
Message-ID: <20250311040321.1460-6-yezhenyu2@huawei.com>
In-Reply-To: <20250311040321.1460-1-yezhenyu2@huawei.com>
References: <20250311040321.1460-1-yezhenyu2@huawei.com>
X-Mailing-List: kvm@vger.kernel.org

From: eillon

The HDBSS feature introduces new system registers (HDBSSBR_EL2 and
HDBSSPROD_EL2), which depend on armv9.5-a assembler support. Add the
ARM64_HDBSS config option to control whether the HDBSS feature is enabled.
Signed-off-by: eillon
---
 arch/arm64/Kconfig                  | 19 +++++++++++++++++++
 arch/arm64/Makefile                 |  4 +++-
 arch/arm64/include/asm/cpufeature.h |  3 +++
 3 files changed, 25 insertions(+), 1 deletion(-)

diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig
index 940343beb3d4..3458261eb14b 100644
--- a/arch/arm64/Kconfig
+++ b/arch/arm64/Kconfig
@@ -2237,6 +2237,25 @@ config ARM64_GCS

 endmenu # "v9.4 architectural features"

+menu "ARMv9.5 architectural features"
+
+config ARM64_HDBSS
+	bool "Enable support for Hardware Dirty state tracking Structure (HDBSS)"
+	default y
+	depends on AS_HAS_ARMV9_5
+	help
+	  The Hardware Dirty state tracking Structure (HDBSS) enhances
+	  tracking of the dirty state of translation table descriptors,
+	  reducing the cost of surveying for dirtied granules.
+
+	  The feature introduces new system registers (HDBSSBR_EL2 and
+	  HDBSSPROD_EL2), which depend on AS_HAS_ARMV9_5.
+
+config AS_HAS_ARMV9_5
+	def_bool $(cc-option,-Wa$(comma)-march=armv9.5-a)
+
+endmenu # "ARMv9.5 architectural features"
+
 config ARM64_SVE
 	bool "ARM Scalable Vector Extension support"
 	default y

diff --git a/arch/arm64/Makefile b/arch/arm64/Makefile
index 2b25d671365f..f22507fb09b9 100644
--- a/arch/arm64/Makefile
+++ b/arch/arm64/Makefile
@@ -103,7 +103,9 @@ endif
 # freely generate instructions which are not supported by earlier architecture
 # versions, which would prevent a single kernel image from working on earlier
 # hardware.
-ifeq ($(CONFIG_AS_HAS_ARMV8_5), y)
+ifeq ($(CONFIG_AS_HAS_ARMV9_5), y)
+  asm-arch := armv9.5-a
+else ifeq ($(CONFIG_AS_HAS_ARMV8_5), y)
   asm-arch := armv8.5-a
 else ifeq ($(CONFIG_AS_HAS_ARMV8_4), y)
   asm-arch := armv8.4-a

diff --git a/arch/arm64/include/asm/cpufeature.h b/arch/arm64/include/asm/cpufeature.h
index c76d51506562..32e432827934 100644
--- a/arch/arm64/include/asm/cpufeature.h
+++ b/arch/arm64/include/asm/cpufeature.h
@@ -748,6 +748,9 @@ static inline bool system_supports_hdbss(void)
 	u64 mmfr1;
 	u32 val;

+	if (!IS_ENABLED(CONFIG_ARM64_HDBSS))
+		return false;
+
 	mmfr1 = read_sanitised_ftr_reg(SYS_ID_AA64MMFR1_EL1);
 	val = cpuid_feature_extract_unsigned_field(mmfr1,
						ID_AA64MMFR1_EL1_HAFDBS_SHIFT);

 	return val == ID_AA64MMFR1_EL1_HAFDBS_HDBSS;
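Putting the series together, a VMM's migration loop might look like the
following sketch (error handling and convergence logic elided; set_hdbss()
and fetch_dirty_log() are the hypothetical helpers sketched after patches 3
and 4, and the order value is an arbitrary choice):

#include <stdlib.h>
#include <sys/ioctl.h>
#include <linux/kvm.h>

/* Sketch: drive HDBSS across a live migration. */
static int migrate(int vm_fd, unsigned int slot, size_t nr_pages)
{
	void *bitmap;

	/* 1. Confirm kernel support before relying on HDBSS. */
	if (ioctl(vm_fd, KVM_CHECK_EXTENSION, KVM_CAP_ARM_HW_DIRTY_STATE_TRACK) <= 0)
		return -1;

	/* 2. Enable: order 1 = one 8KB buffer per vCPU (assuming 4KB pages). */
	if (set_hdbss(vm_fd, 1) < 0)
		return -1;

	/* 3. Iterate: each fetch kicks vCPUs and drains their buffers. */
	while (/* not yet converged */ 0) {
		bitmap = fetch_dirty_log(vm_fd, slot, nr_pages);
		/* ... transfer the pages marked dirty in bitmap ... */
		free(bitmap);
	}

	/* 4. Disable: size 0 frees the per-vCPU buffers. */
	return set_hdbss(vm_fd, 0);
}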