
[V3,03/14] kvm: x86/mmu: Check mmu->sync_page pointer in kvm_sync_page_check()

Message ID 20230216154115.710033-4-jiangshanlai@gmail.com (mailing list archive)
State New, archived
Series [V3,01/14] KVM: x86/mmu: Use 64-bit address to invalidate to fix a subtle bug

Commit Message

Lai Jiangshan Feb. 16, 2023, 3:41 p.m. UTC
From: Lai Jiangshan <jiangshan.ljs@antgroup.com>

Check the mmu->sync_page pointer before calling it, so that a stray NULL
pointer is caught by the WARN_ON_ONCE() rather than dereferenced.

Signed-off-by: Lai Jiangshan <jiangshan.ljs@antgroup.com>
---
 arch/x86/kvm/mmu/mmu.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

Patch

diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index ee2837ea18d4..69ab0d1bb0ec 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -1940,7 +1940,7 @@  static bool kvm_sync_page_check(struct kvm_vcpu *vcpu, struct kvm_mmu_page *sp)
 	 * differs then the memslot lookup (SMM vs. non-SMM) will be bogus, the
 	 * reserved bits checks will be wrong, etc...
 	 */
-	if (WARN_ON_ONCE(sp->role.direct ||
+	if (WARN_ON_ONCE(sp->role.direct || !vcpu->arch.mmu->sync_page ||
 			 (sp->role.word ^ root_role.word) & ~sync_role_ign.word))
 		return false;
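
For illustration only: below is a minimal, self-contained user-space sketch of
the warn-once-and-bail pattern this hunk applies, i.e. refusing to proceed when
the function pointer that would be invoked later is NULL. All names in the
sketch (struct mmu_stub, sync_page_check(), do_sync_page(), the simplified
WARN_ON_ONCE()) are hypothetical stand-ins, not the real KVM definitions.

/*
 * Sketch of the defensive check: treat a NULL callback as a warn-once
 * condition and bail out instead of dereferencing it.  GNU C (statement
 * expressions), as used by the kernel.
 */
#include <stdbool.h>
#include <stdio.h>

#define WARN_ON_ONCE(cond) ({						\
	static bool __warned;						\
	bool __c = (cond);						\
	if (__c && !__warned) {						\
		__warned = true;					\
		fprintf(stderr, "WARNING: %s\n", #cond);		\
	}								\
	__c;								\
})

struct mmu_stub {
	int (*sync_page)(void *vcpu, void *sp);
};

/* Mirrors the shape of kvm_sync_page_check(): refuse to proceed if the
 * callback that would be invoked later is missing. */
static bool sync_page_check(struct mmu_stub *mmu)
{
	if (WARN_ON_ONCE(!mmu->sync_page))
		return false;

	return true;
}

static int do_sync_page(struct mmu_stub *mmu, void *vcpu, void *sp)
{
	if (!sync_page_check(mmu))
		return -1;			/* caught by the check */

	return mmu->sync_page(vcpu, sp);	/* safe: pointer is non-NULL */
}

int main(void)
{
	struct mmu_stub broken = { .sync_page = NULL };

	/* Warns once and returns -1 instead of crashing on a NULL call. */
	printf("result: %d\n", do_sync_page(&broken, NULL, NULL));
	return 0;
}

Returning false from the check mirrors kvm_sync_page_check(): the caller bails
out instead of calling through a NULL pointer, and the warning fires only once,
so a buggy MMU configuration is flagged without flooding the log.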