
[5/9] KVM: MMU: fast check write-protect for direct mmu

Message ID 50056E59.4090003@linux.vnet.ibm.com (mailing list archive)
State New, archived
Headers show

Commit Message

Xiao Guangrong July 17, 2012, 1:53 p.m. UTC
If there are no indirect shadow pages, we need not write-protect any gfn;
this is always true for the direct mmu without nesting.

Signed-off-by: Xiao Guangrong <xiaoguangrong@linux.vnet.ibm.com>
---
 arch/x86/kvm/mmu.c |    3 +++
 1 files changed, 3 insertions(+), 0 deletions(-)

Comments

Marcelo Tosatti July 20, 2012, 12:39 a.m. UTC | #1
On Tue, Jul 17, 2012 at 09:53:29PM +0800, Xiao Guangrong wrote:
> If it have no indirect shadow pages we need not protect any gfn,
> this is always true for direct mmu without nested
> 
> Signed-off-by: Xiao Guangrong <xiaoguangrong@linux.vnet.ibm.com>

Xiao,

What is the motivation? Numbers please.

In fact, what case was the original indirect_shadow_pages conditional in
kvm_mmu_pte_write optimizing again?


--
To unsubscribe from this list: send the line "unsubscribe kvm" in
the body of a message to majordomo@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Xiao Guangrong July 20, 2012, 2:34 a.m. UTC | #2
On 07/20/2012 08:39 AM, Marcelo Tosatti wrote:
> On Tue, Jul 17, 2012 at 09:53:29PM +0800, Xiao Guangrong wrote:
>> If it have no indirect shadow pages we need not protect any gfn,
>> this is always true for direct mmu without nested
>>
>> Signed-off-by: Xiao Guangrong <xiaoguangrong@linux.vnet.ibm.com>
> 
> Xiao,
> 
> What is the motivation? Numbers please.
> 

mmu_need_write_protect is the common path for both the soft mmu and the
hard mmu; checking indirect_shadow_pages lets us skip the hash-table walk
in the case where tdp is enabled without a nested guest.

I will post the numbers after I run the performance test.

> In fact, what case was the original indirect_shadow_pages conditional in
> kvm_mmu_pte_write optimizing again?
> 

They are different paths: mmu_need_write_protect is on the real
page-fault path, while kvm_mmu_pte_write is triggered by mmio emulation.

Xiao Guangrong July 20, 2012, 3:45 a.m. UTC | #3
BTW, there are some bug-fix patches on the -master branch that do not
exist on the -next branch:
commit: f411930442e01f9cf1bf4df41ff7e89476575c4d
commit: 85b7059169e128c57a3a8a3e588fb89cb2031da1

This causes code conflicts if we do the development on -next.

On 07/20/2012 08:39 AM, Marcelo Tosatti wrote:
> On Tue, Jul 17, 2012 at 09:53:29PM +0800, Xiao Guangrong wrote:
>> If it have no indirect shadow pages we need not protect any gfn,
>> this is always true for direct mmu without nested
>>
>> Signed-off-by: Xiao Guangrong <xiaoguangrong@linux.vnet.ibm.com>
> 
> Xiao,
> 
> What is the motivation? Numbers please.
> 
> In fact, what case was the original indirect_shadow_pages conditional in
> kvm_mmu_pte_write optimizing again?
> 
> 
> 


Marcelo Tosatti July 20, 2012, 11:09 a.m. UTC | #4
On Fri, Jul 20, 2012 at 10:34:28AM +0800, Xiao Guangrong wrote:
> On 07/20/2012 08:39 AM, Marcelo Tosatti wrote:
> > On Tue, Jul 17, 2012 at 09:53:29PM +0800, Xiao Guangrong wrote:
> >> If it have no indirect shadow pages we need not protect any gfn,
> >> this is always true for direct mmu without nested
> >>
> >> Signed-off-by: Xiao Guangrong <xiaoguangrong@linux.vnet.ibm.com>
> > 
> > Xiao,
> > 
> > What is the motivation? Numbers please.
> > 
> 
> mmu_need_write_protect is the common path for both soft-mmu and
> hard-mmu, checking indirect_shadow_pages can skip hash-table walking
> for the case which is tdp is enabled without nested guest.

By motivation I mean an observation that it is a bottleneck.

> I will post the Number after I do the performance test.
> 
> > In fact, what case was the original indirect_shadow_pages conditional in
> > kvm_mmu_pte_write optimizing again?
> > 
> 
> They are the different paths, mmu_need_write_protect is the real
> page fault path, and kvm_mmu_pte_write is caused by mmio emulation.

Sure. What I am asking is: what use case is the indirect_shadow_pages
check optimizing? What scenario, what workload?

See the "When to optimize" section of
http://en.wikipedia.org/wiki/Program_optimization.

Can't remember why indirect_shadow_pages was introduced in
kvm_mmu_pte_write.

Marcelo Tosatti July 20, 2012, 11:45 a.m. UTC | #5
On Fri, Jul 20, 2012 at 11:45:59AM +0800, Xiao Guangrong wrote:
> BTW, they are some bug fix patches on -master branch, but
> it is not existed on -next branch:
> commit: f411930442e01f9cf1bf4df41ff7e89476575c4d
> commit: 85b7059169e128c57a3a8a3e588fb89cb2031da1
> 
> It causes code conflict if we do the development on -next.

See auto-next branch.

http://www.linux-kvm.org/page/Kvm-Git-Workflow


Xiao Guangrong July 20, 2012, 1:33 p.m. UTC | #6
On 07/20/2012 07:09 PM, Marcelo Tosatti wrote:
> On Fri, Jul 20, 2012 at 10:34:28AM +0800, Xiao Guangrong wrote:
>> On 07/20/2012 08:39 AM, Marcelo Tosatti wrote:
>>> On Tue, Jul 17, 2012 at 09:53:29PM +0800, Xiao Guangrong wrote:
>>>> If it have no indirect shadow pages we need not protect any gfn,
>>>> this is always true for direct mmu without nested
>>>>
>>>> Signed-off-by: Xiao Guangrong <xiaoguangrong@linux.vnet.ibm.com>
>>>
>>> Xiao,
>>>
>>> What is the motivation? Numbers please.
>>>
>>
>> mmu_need_write_protect is the common path for both soft-mmu and
>> hard-mmu, checking indirect_shadow_pages can skip hash-table walking
>> for the case which is tdp is enabled without nested guest.
> 
> I mean motivation as observation that it is a bottleneck.
> 
>> I will post the Number after I do the performance test.
>>
>>> In fact, what case was the original indirect_shadow_pages conditional in
>>> kvm_mmu_pte_write optimizing again?
>>>
>>
>> They are the different paths, mmu_need_write_protect is the real
>> page fault path, and kvm_mmu_pte_write is caused by mmio emulation.
> 
> Sure. What i am asking is, what use case is the indirect_shadow_pages
> optimizing? What scenario, what workload? 
> 

Sorry, Marcelo, I do not know why I completely misunderstood your mail. :(

I am not sure whether this is a bottleneck; I just noticed it during
code review. I will measure it to see whether we can get a benefit
from it. :p

> See the "When to optimize" section of
> http://en.wikipedia.org/wiki/Program_optimization.
> 
> Can't remember why indirect_shadow_pages was introduced in
> kvm_mmu_pte_write.
> 

Please refer to:
	https://lkml.org/lkml/2011/5/18/174


Patch

diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
index 28b12e2..a846a9c 100644
--- a/arch/x86/kvm/mmu.c
+++ b/arch/x86/kvm/mmu.c
@@ -2294,6 +2294,9 @@  static int mmu_need_write_protect(struct kvm_vcpu *vcpu, gfn_t gfn,
 	struct hlist_node *node;
 	bool need_unsync = false;

+	if (!vcpu->kvm->arch.indirect_shadow_pages)
+		return 0;
+
 	for_each_gfn_indirect_valid_sp(vcpu->kvm, s, gfn, node) {
 		if (!can_unsync)
 			return 1;