
[v2,4/5] KVM: MMU: introduce page_fault_start and page_fault_end

Message ID 5052FFEA.1040607@linux.vnet.ibm.com (mailing list archive)
State New, archived

Commit Message

Xiao Guangrong Sept. 14, 2012, 9:59 a.m. UTC
Wrap the common operations into these two functions

Signed-off-by: Xiao Guangrong <xiaoguangrong@linux.vnet.ibm.com>
---
 arch/x86/kvm/mmu.c         |   53 +++++++++++++++++++++++++++----------------
 arch/x86/kvm/paging_tmpl.h |   16 +++++--------
 2 files changed, 39 insertions(+), 30 deletions(-)

Comments

Marcelo Tosatti Sept. 15, 2012, 3:25 p.m. UTC | #1
On Fri, Sep 14, 2012 at 05:59:06PM +0800, Xiao Guangrong wrote:
> Wrap the common operations into these two functions
> 
> Signed-off-by: Xiao Guangrong <xiaoguangrong@linux.vnet.ibm.com>

Why? I think people are used to 

spin_lock(lock)
sequence
spin_unlock(lock)

So it's easy to verify whether accesses to data structures are protected.

Unrelated to this patch, one opportunity I see to simplify this
code is:

- error pfn / mmio pfn / invalid pfn relation

Have the meaning of these bits unified in a single function/helper; see the
comment on patch 1 (perhaps you can improve it further).

Xiao Guangrong Sept. 18, 2012, 8:15 a.m. UTC | #2
On 09/15/2012 11:25 PM, Marcelo Tosatti wrote:
> On Fri, Sep 14, 2012 at 05:59:06PM +0800, Xiao Guangrong wrote:
>> Wrap the common operations into these two functions
>>
>> Signed-off-by: Xiao Guangrong <xiaoguangrong@linux.vnet.ibm.com>
> 
> Why? I think people are used to 
> 
> spin_lock(lock)
> sequence
> spin_unlock(lock)

Marcelo,

There are many functions in the kernel that use this style, wrapping the
lock into _start and _end functions (e.g. cgroup_pidlist_start and
cgroup_pidlist_stop in kernel/cgroup.c).
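
For reference, that style looks roughly like this (an illustrative sketch
from memory, not the literal kernel/cgroup.c code; pidlist_element_at() is
a made-up stand-in for the real lookup):

	static void *cgroup_pidlist_start(struct seq_file *s, loff_t *pos)
	{
		struct cgroup_pidlist *l = s->private;

		/* the lock is taken in the _start function ... */
		down_read(&l->mutex);
		return pidlist_element_at(l, *pos);
	}

	static void cgroup_pidlist_stop(struct seq_file *s, void *v)
	{
		struct cgroup_pidlist *l = s->private;

		/* ... and released in the paired _stop function */
		up_read(&l->mutex);
	}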

Actually, I just wanted to remove the duplicated, ugly code below:

	if (!is_error_pfn(pfn))
		kvm_release_pfn_clean(pfn);

> 
> So it's easy to verify whether accesses to data structures are protected.
> 
> Unrelated to this patch, one opportunity I see to simplify this
> code is:
> 
> - error pfn / mmio pfn / invalid pfn relation
> 
> Have the meaning of these bits unified in a single function/helper; see the
> comment on patch 1 (perhaps you can improve it further).

Sorry, could you give more detail?

Marcelo Tosatti Sept. 18, 2012, 11:43 p.m. UTC | #3
On Tue, Sep 18, 2012 at 04:15:32PM +0800, Xiao Guangrong wrote:
> On 09/15/2012 11:25 PM, Marcelo Tosatti wrote:
> > On Fri, Sep 14, 2012 at 05:59:06PM +0800, Xiao Guangrong wrote:
> >> Wrap the common operations into these two functions
> >>
> >> Signed-off-by: Xiao Guangrong <xiaoguangrong@linux.vnet.ibm.com>
> > 
> > Why? I think people are used to 
> > 
> > spin_lock(lock)
> > sequence
> > spin_unlock(lock)
> 
> Marcelo,
> 
> There are many functions in the kernel that use this style, wrapping the
> lock into _start and _end functions (e.g. cgroup_pidlist_start and
> cgroup_pidlist_stop in kernel/cgroup.c).
> 
> Actually, I just wanted to remove the duplicated, ugly code below:
> 
> 	if (!is_error_pfn(pfn))
> 		kvm_release_pfn_clean(pfn);
> 
> > 
> > So it's easy to verify whether accesses to data structures are protected.
> > 
> > Unrelated to this patch, one opportunity I see to simplify this
> > code is:
> > 
> > - error pfn / mmio pfn / invalid pfn relation
> > 
> > Have the meaning of these bits unified in a single function/helper; see the
> > comment on patch 1 (perhaps you can improve it further).
> 
> Sorry, could you give more detail?

We should force the reader of the code to understand error pfn / mmio pfn /
invalid pfn through a single helper. That is, avoid using error pfn directly
at all.
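
For instance, something along these lines (a hypothetical sketch only: the
enum and helper names are made up here, and is_mmio_pfn()/is_invalid_pfn()
are assumed for illustration alongside the is_error_pfn() seen in the patch):

	enum pfn_class { PFN_OK, PFN_MMIO, PFN_INVALID, PFN_ERROR };

	/* The single place where the pfn special cases are told apart. */
	static inline enum pfn_class kvm_classify_pfn(pfn_t pfn)
	{
		if (is_error_pfn(pfn))
			return PFN_ERROR;
		if (is_invalid_pfn(pfn))
			return PFN_INVALID;
		if (is_mmio_pfn(pfn))
			return PFN_MMIO;
		return PFN_OK;
	}

Callers would then switch on kvm_classify_pfn() instead of open-coding
is_error_pfn() and friends at each site.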

Xiao Guangrong Sept. 20, 2012, 2:59 a.m. UTC | #4
On 09/19/2012 07:43 AM, Marcelo Tosatti wrote:

>>> - error pfn / mmio pfn / invalid pfn relation
>>>
>>> Have the meaning of these bits unified in a single function/helper; see the
>>> comment on patch 1 (perhaps you can improve it further).
>>
>> Sorry, could you give more detail?
> 
> We should force the reader of the code to understand error pfn / mmio pfn /
> invalid pfn through a single helper. That is, avoid using error pfn directly
> at all.

That's a reasonable suggestion; I will think about it more. Thank you, Marcelo!

Avi Kivity Sept. 20, 2012, 10:57 a.m. UTC | #5
On 09/14/2012 12:59 PM, Xiao Guangrong wrote:
> Wrap the common operations into these two functions
> 
> Signed-off-by: Xiao Guangrong <xiaoguangrong@linux.vnet.ibm.com>
> ---
>  arch/x86/kvm/mmu.c         |   53 +++++++++++++++++++++++++++----------------
>  arch/x86/kvm/paging_tmpl.h |   16 +++++--------
>  2 files changed, 39 insertions(+), 30 deletions(-)
> 
> diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
> index 29ce28b..7e7b8cd 100644
> --- a/arch/x86/kvm/mmu.c
> +++ b/arch/x86/kvm/mmu.c
> @@ -2825,6 +2825,29 @@ exit:
>  static bool try_async_pf(struct kvm_vcpu *vcpu, bool prefault, gfn_t gfn,
>  			 gva_t gva, pfn_t *pfn, bool write, bool *writable);
> 
> +static bool
> +page_fault_start(struct kvm_vcpu *vcpu, gfn_t *gfnp, pfn_t *pfnp, int *levelp,
> +		 bool force_pt_level, unsigned long mmu_seq)
> +{
> +	spin_lock(&vcpu->kvm->mmu_lock);
> +	if (mmu_notifier_retry(vcpu, mmu_seq))
> +		return false;
> +
> +	kvm_mmu_free_some_pages(vcpu);
> +	if (likely(!force_pt_level))
> +		transparent_hugepage_adjust(vcpu, gfnp, pfnp, levelp);
> +
> +	return true;
> +}
> +
> +static void page_fault_end(struct kvm_vcpu *vcpu, pfn_t pfn)
> +{
> +	spin_unlock(&vcpu->kvm->mmu_lock);
> +
> +	if (!is_error_pfn(pfn))
> +		kvm_release_pfn_clean(pfn);
> +}

Needs sparse annotations (__acquires, __releases).
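
Something like this, perhaps (a sketch only; page_fault_start() returns
with mmu_lock held on both the success and the retry path, since callers
always reach page_fault_end(), so unconditional annotations should be
accurate):

	static bool
	page_fault_start(struct kvm_vcpu *vcpu, gfn_t *gfnp, pfn_t *pfnp, int *levelp,
			 bool force_pt_level, unsigned long mmu_seq)
		__acquires(&vcpu->kvm->mmu_lock)
	{
		spin_lock(&vcpu->kvm->mmu_lock);
		if (mmu_notifier_retry(vcpu, mmu_seq))
			return false;	/* lock intentionally still held */

		kvm_mmu_free_some_pages(vcpu);
		if (likely(!force_pt_level))
			transparent_hugepage_adjust(vcpu, gfnp, pfnp, levelp);

		return true;
	}

	static void page_fault_end(struct kvm_vcpu *vcpu, pfn_t pfn)
		__releases(&vcpu->kvm->mmu_lock)
	{
		spin_unlock(&vcpu->kvm->mmu_lock);

		if (!is_error_pfn(pfn))
			kvm_release_pfn_clean(pfn);
	}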

These code blocks have nothing in common except for being shared.  Often
that's not good for maintainability because it means that further
changes can affect one path but not the other.  But we can try it out
and see.

Patch

diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
index 29ce28b..7e7b8cd 100644
--- a/arch/x86/kvm/mmu.c
+++ b/arch/x86/kvm/mmu.c
@@ -2825,6 +2825,29 @@  exit:
 static bool try_async_pf(struct kvm_vcpu *vcpu, bool prefault, gfn_t gfn,
 			 gva_t gva, pfn_t *pfn, bool write, bool *writable);

+static bool
+page_fault_start(struct kvm_vcpu *vcpu, gfn_t *gfnp, pfn_t *pfnp, int *levelp,
+		 bool force_pt_level, unsigned long mmu_seq)
+{
+	spin_lock(&vcpu->kvm->mmu_lock);
+	if (mmu_notifier_retry(vcpu, mmu_seq))
+		return false;
+
+	kvm_mmu_free_some_pages(vcpu);
+	if (likely(!force_pt_level))
+		transparent_hugepage_adjust(vcpu, gfnp, pfnp, levelp);
+
+	return true;
+}
+
+static void page_fault_end(struct kvm_vcpu *vcpu, pfn_t pfn)
+{
+	spin_unlock(&vcpu->kvm->mmu_lock);
+
+	if (!is_error_pfn(pfn))
+		kvm_release_pfn_clean(pfn);
+}
+
 static int nonpaging_map(struct kvm_vcpu *vcpu, gva_t v, u32 error_code,
 			 gfn_t gfn, bool prefault)
 {
@@ -2862,22 +2885,17 @@  static int nonpaging_map(struct kvm_vcpu *vcpu, gva_t v, u32 error_code,
 	if (handle_abnormal_pfn(vcpu, v, gfn, pfn, ACC_ALL, &r))
 		return r;

-	spin_lock(&vcpu->kvm->mmu_lock);
-	if (mmu_notifier_retry(vcpu, mmu_seq)) {
+	if (!page_fault_start(vcpu, &gfn, &pfn, &level, force_pt_level,
+	      mmu_seq)) {
 		r = 0;
-		goto out_unlock;
+		goto exit;
 	}

-	kvm_mmu_free_some_pages(vcpu);
-	if (likely(!force_pt_level))
-		transparent_hugepage_adjust(vcpu, &gfn, &pfn, &level);
 	r = __direct_map(vcpu, v, write, map_writable, level, gfn, pfn,
 			 prefault);

-out_unlock:
-	spin_unlock(&vcpu->kvm->mmu_lock);
-	if (!is_error_pfn(pfn))
-		kvm_release_pfn_clean(pfn);
+exit:
+	page_fault_end(vcpu, pfn);
 	return r;
 }

@@ -3331,22 +3349,17 @@  static int tdp_page_fault(struct kvm_vcpu *vcpu, gva_t gpa, u32 error_code,
 	if (handle_abnormal_pfn(vcpu, 0, gfn, pfn, ACC_ALL, &r))
 		return r;

-	spin_lock(&vcpu->kvm->mmu_lock);
-	if (mmu_notifier_retry(vcpu, mmu_seq)) {
+	if (!page_fault_start(vcpu, &gfn, &pfn, &level, force_pt_level,
+	      mmu_seq)) {
 		r = 0;
-		goto out_unlock;
+		goto exit;
 	}

-	kvm_mmu_free_some_pages(vcpu);
-	if (likely(!force_pt_level))
-		transparent_hugepage_adjust(vcpu, &gfn, &pfn, &level);
 	r = __direct_map(vcpu, gpa, write, map_writable,
 			 level, gfn, pfn, prefault);

-out_unlock:
-	spin_unlock(&vcpu->kvm->mmu_lock);
-	if (!is_error_pfn(pfn))
-		kvm_release_pfn_clean(pfn);
+exit:
+	page_fault_end(vcpu, pfn);
 	return r;
 }

diff --git a/arch/x86/kvm/paging_tmpl.h b/arch/x86/kvm/paging_tmpl.h
index 0adf376..1a738c5 100644
--- a/arch/x86/kvm/paging_tmpl.h
+++ b/arch/x86/kvm/paging_tmpl.h
@@ -624,26 +624,22 @@  static int FNAME(page_fault)(struct kvm_vcpu *vcpu, gva_t addr, u32 error_code,
 				walker.gfn, pfn, walker.pte_access, &r))
 		return r;

-	spin_lock(&vcpu->kvm->mmu_lock);
-	if (mmu_notifier_retry(vcpu, mmu_seq)) {
+	if (!page_fault_start(vcpu, &walker.gfn, &pfn, &level,
+	      force_pt_level, mmu_seq)) {
 		r = 0;
-		goto out_unlock;
+		goto exit;
 	}

 	kvm_mmu_audit(vcpu, AUDIT_PRE_PAGE_FAULT);
-	kvm_mmu_free_some_pages(vcpu);
-	if (!force_pt_level)
-		transparent_hugepage_adjust(vcpu, &walker.gfn, &pfn, &level);
+
 	r = FNAME(fetch)(vcpu, addr, &walker, user_fault, write_fault,
 			 level, pfn, map_writable, prefault);

 	++vcpu->stat.pf_fixed;
 	kvm_mmu_audit(vcpu, AUDIT_POST_PAGE_FAULT);

-out_unlock:
-	spin_unlock(&vcpu->kvm->mmu_lock);
-	if (!is_error_pfn(pfn))
-		kvm_release_pfn_clean(pfn);
+exit:
+	page_fault_end(vcpu, pfn);
 	return r;
 }