[3/3] KVM: MMU: Consolidate common code in mmu_free_roots()

Message ID 20130509154602.da528c3b.yoshikawa_takuya_b1@lab.ntt.co.jp (mailing list archive)
State New, archived

Commit Message

Takuya Yoshikawa May 9, 2013, 6:46 a.m. UTC
By making the last three statements common to both if/else cases, the
symmetry between the locking and unlocking becomes clearer. One note
here is that VCPU's root_hpa does not need to be protected by mmu_lock.

Signed-off-by: Takuya Yoshikawa <yoshikawa_takuya_b1@lab.ntt.co.jp>
---
 arch/x86/kvm/mmu.c |   39 +++++++++++++++++++--------------------
 1 files changed, 19 insertions(+), 20 deletions(-)

Comments

Xiao Guangrong May 9, 2013, 10:11 a.m. UTC | #1
On 05/09/2013 02:46 PM, Takuya Yoshikawa wrote:
> By making the last three statements common to both if/else cases, the
> symmetry between the locking and unlocking becomes clearer. One note
> here is that VCPU's root_hpa does not need to be protected by mmu_lock.
> 
> Signed-off-by: Takuya Yoshikawa <yoshikawa_takuya_b1@lab.ntt.co.jp>
> ---
>  arch/x86/kvm/mmu.c |   39 +++++++++++++++++++--------------------
>  1 files changed, 19 insertions(+), 20 deletions(-)

I do not think it makes anything better.


--
To unsubscribe from this list: send the line "unsubscribe kvm" in
the body of a message to majordomo@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Takuya Yoshikawa May 10, 2013, 1:05 a.m. UTC | #2
On Thu, 09 May 2013 18:11:31 +0800
Xiao Guangrong <xiaoguangrong@linux.vnet.ibm.com> wrote:

> On 05/09/2013 02:46 PM, Takuya Yoshikawa wrote:
> > By making the last three statements common to both if/else cases, the
> > symmetry between the locking and unlocking becomes clearer. One note
> > here is that VCPU's root_hpa does not need to be protected by mmu_lock.
> > 
> > Signed-off-by: Takuya Yoshikawa <yoshikawa_takuya_b1@lab.ntt.co.jp>
> > ---
> >  arch/x86/kvm/mmu.c |   39 +++++++++++++++++++--------------------
> >  1 files changed, 19 insertions(+), 20 deletions(-)
> 
> I do not think it makes anything better.
> 

Why do we need to do the same thing differently in two paths?
Especially, one path looks as if it is protecting root_hpa while the other does not.

This one may be a small issue, but these small issues make it difficult
to see what mmu_lock is protecting.

	Takuya
Xiao Guangrong May 10, 2013, 1:43 a.m. UTC | #3
On 05/10/2013 09:05 AM, Takuya Yoshikawa wrote:
> On Thu, 09 May 2013 18:11:31 +0800
> Xiao Guangrong <xiaoguangrong@linux.vnet.ibm.com> wrote:
> 
>> On 05/09/2013 02:46 PM, Takuya Yoshikawa wrote:
>>> By making the last three statements common to both if/else cases, the
>>> symmetry between the locking and unlocking becomes clearer. One note
>>> here is that VCPU's root_hpa does not need to be protected by mmu_lock.
>>>
>>> Signed-off-by: Takuya Yoshikawa <yoshikawa_takuya_b1@lab.ntt.co.jp>
>>> ---
>>>  arch/x86/kvm/mmu.c |   39 +++++++++++++++++++--------------------
>>>  1 files changed, 19 insertions(+), 20 deletions(-)
>>
>> I do not think it makes anything better.
>>
> 
> Why do we need to do the same thing differently in two paths?

They are different cases: one is long mode, the other is PAE mode.
Returning from one case and then handling the other is a commonly used
pattern; it reduces the indentation and makes the code easy to follow.

Anyway, this is a matter of code style; using if/else is not better than
if () return.

> Especially, one path looks as if it is protecting root_hpa while the other does not.

The simple statement, "vcpu->arch.mmu.root_hpa = INVALID_PAGE;", does no
harm under the lock. But moving it out of the lock is fine.

Patch

diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
index d01f340..bf80b46 100644
--- a/arch/x86/kvm/mmu.c
+++ b/arch/x86/kvm/mmu.c
@@ -2861,42 +2861,41 @@  out_unlock:
 static void mmu_free_roots(struct kvm_vcpu *vcpu)
 {
 	int i;
+	hpa_t root;
 	struct kvm_mmu_page *sp;
 	LIST_HEAD(invalid_list);
 
 	if (!VALID_PAGE(vcpu->arch.mmu.root_hpa))
 		return;
+
 	spin_lock(&vcpu->kvm->mmu_lock);
+
 	if (vcpu->arch.mmu.shadow_root_level == PT64_ROOT_LEVEL &&
 	    (vcpu->arch.mmu.root_level == PT64_ROOT_LEVEL ||
 	     vcpu->arch.mmu.direct_map)) {
-		hpa_t root = vcpu->arch.mmu.root_hpa;
-
+		root = vcpu->arch.mmu.root_hpa;
 		sp = page_header(root);
 		--sp->root_count;
-		if (!sp->root_count && sp->role.invalid) {
+		if (!sp->root_count && sp->role.invalid)
 			kvm_mmu_prepare_zap_page(vcpu->kvm, sp, &invalid_list);
-			kvm_mmu_commit_zap_page(vcpu->kvm, &invalid_list);
+	} else {
+		for (i = 0; i < 4; ++i) {
+			root = vcpu->arch.mmu.pae_root[i];
+			if (root) {
+				root &= PT64_BASE_ADDR_MASK;
+				sp = page_header(root);
+				--sp->root_count;
+				if (!sp->root_count && sp->role.invalid)
+					kvm_mmu_prepare_zap_page(vcpu->kvm, sp,
+								 &invalid_list);
+			}
+			vcpu->arch.mmu.pae_root[i] = INVALID_PAGE;
 		}
-		vcpu->arch.mmu.root_hpa = INVALID_PAGE;
-		spin_unlock(&vcpu->kvm->mmu_lock);
-		return;
 	}
-	for (i = 0; i < 4; ++i) {
-		hpa_t root = vcpu->arch.mmu.pae_root[i];
 
-		if (root) {
-			root &= PT64_BASE_ADDR_MASK;
-			sp = page_header(root);
-			--sp->root_count;
-			if (!sp->root_count && sp->role.invalid)
-				kvm_mmu_prepare_zap_page(vcpu->kvm, sp,
-							 &invalid_list);
-		}
-		vcpu->arch.mmu.pae_root[i] = INVALID_PAGE;
-	}
 	kvm_mmu_commit_zap_page(vcpu->kvm, &invalid_list);
 	spin_unlock(&vcpu->kvm->mmu_lock);
+
 	vcpu->arch.mmu.root_hpa = INVALID_PAGE;
 }