
[3/5] KVM: MMU: add kvm_mmu_get_spte_hierarchy helper

Message ID 20090611140416.759106501@localhost.localdomain (mailing list archive)
State New, archived

Commit Message

Marcelo Tosatti June 11, 2009, 2:02 p.m. UTC
Required by the EPT misconfiguration handler.

Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>

Comments

Avi Kivity June 11, 2009, 2:31 p.m. UTC | #1
Marcelo Tosatti wrote:
> Required by the EPT misconfiguration handler.
>
> Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
>
> Index: kvm/arch/x86/kvm/mmu.c
> ===================================================================
> --- kvm.orig/arch/x86/kvm/mmu.c
> +++ kvm/arch/x86/kvm/mmu.c
> @@ -3013,6 +3013,24 @@ out:
>  	return r;
>  }
>  
> +int kvm_mmu_get_spte_hierarchy(struct kvm_vcpu *vcpu, u64 addr, u64 *sptes[4])
> +{
> +	struct kvm_shadow_walk_iterator iterator;
> +	int nr_sptes = 0;
> +
> +	spin_lock(&vcpu->kvm->mmu_lock);
> +	for_each_shadow_entry(vcpu, addr, iterator) {
> +		sptes[iterator.level-1] = iterator.sptep;
>   

Returning a pointer...

> +		nr_sptes++;
> +		if (!is_shadow_present_pte(*iterator.sptep))
> +			break;
> +	}
> +	spin_unlock(&vcpu->kvm->mmu_lock);
>   

... and unlocking the lock that protects it.

True, this is called in extreme cases, but I think you can dereference 
the pointer in the function just as easily.
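
For illustration, a minimal sketch of the change Avi is suggesting: dereference each sptep while mmu_lock is still held and return spte values, rather than handing pointers back to the caller. The value-array signature (u64 sptes[4] instead of u64 *sptes[4]) is an assumption inferred from the comment, not quoted from Marcelo's follow-up:

int kvm_mmu_get_spte_hierarchy(struct kvm_vcpu *vcpu, u64 addr, u64 sptes[4])
{
	struct kvm_shadow_walk_iterator iterator;
	int nr_sptes = 0;

	spin_lock(&vcpu->kvm->mmu_lock);
	for_each_shadow_entry(vcpu, addr, iterator) {
		/* Copy the value under the lock; the pointer is not
		 * safe to use once mmu_lock is dropped, the value is. */
		sptes[iterator.level-1] = *iterator.sptep;
		nr_sptes++;
		if (!is_shadow_present_pte(*iterator.sptep))
			break;
	}
	spin_unlock(&vcpu->kvm->mmu_lock);

	return nr_sptes;
}

The snapshot may be stale by the time the caller inspects it, but for a diagnostic path like the EPT misconfiguration handler a consistent stale snapshot is all that is needed.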
Marcelo Tosatti June 11, 2009, 3:07 p.m. UTC | #2
Addressing comments.
Avi Kivity June 14, 2009, 9:54 a.m. UTC | #3
Marcelo Tosatti wrote:
> Addressing comments.

Applied all, thanks.

Patch

Index: kvm/arch/x86/kvm/mmu.c
===================================================================
--- kvm.orig/arch/x86/kvm/mmu.c
+++ kvm/arch/x86/kvm/mmu.c
@@ -3013,6 +3013,24 @@ out:
 	return r;
 }
 
+int kvm_mmu_get_spte_hierarchy(struct kvm_vcpu *vcpu, u64 addr, u64 *sptes[4])
+{
+	struct kvm_shadow_walk_iterator iterator;
+	int nr_sptes = 0;
+
+	spin_lock(&vcpu->kvm->mmu_lock);
+	for_each_shadow_entry(vcpu, addr, iterator) {
+		sptes[iterator.level-1] = iterator.sptep;
+		nr_sptes++;
+		if (!is_shadow_present_pte(*iterator.sptep))
+			break;
+	}
+	spin_unlock(&vcpu->kvm->mmu_lock);
+
+	return nr_sptes;
+}
+EXPORT_SYMBOL_GPL(kvm_mmu_get_spte_hierarchy);
+
 #ifdef AUDIT
 
 static const char *audit_msg;
Index: kvm/arch/x86/kvm/mmu.h
===================================================================
--- kvm.orig/arch/x86/kvm/mmu.h
+++ kvm/arch/x86/kvm/mmu.h
@@ -37,6 +37,8 @@ 
 #define PT32_ROOT_LEVEL 2
 #define PT32E_ROOT_LEVEL 3
 
+int kvm_mmu_get_spte_hierarchy(struct kvm_vcpu *vcpu, u64 addr, u64 *sptes[4]);
+
 static inline void kvm_mmu_free_some_pages(struct kvm_vcpu *vcpu)
 {
 	if (unlikely(vcpu->kvm->arch.n_free_mmu_pages < KVM_MIN_FREE_MMU_PAGES))
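
For context, a sketch of the kind of caller the commit message refers to. This is illustrative only: the function name dump_spte_hierarchy is hypothetical, and it assumes the value-returning signature sketched in the comments above rather than the pointer-returning one in this posting:

static void dump_spte_hierarchy(struct kvm_vcpu *vcpu, u64 gpa)
{
	u64 sptes[4];
	int nr_sptes, i;

	nr_sptes = kvm_mmu_get_spte_hierarchy(vcpu, gpa, sptes);
	/* Print from the root level down to the last level walked. */
	for (i = PT64_ROOT_LEVEL; i > PT64_ROOT_LEVEL - nr_sptes; --i)
		printk(KERN_ERR "spte: level %d = 0x%llx\n", i, sptes[i - 1]);
}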