[2/5] KVM: MMU: make for_each_shadow_entry aware of largepages

Message ID 20090609213312.750051328@amt.cnet (mailing list archive)
State New, archived

Commit Message

Marcelo Tosatti June 9, 2009, 9:30 p.m. UTC
This way there is no need to add explicit checks in every
for_each_shadow_entry user.

Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>



--
To unsubscribe from this list: send the line "unsubscribe kvm" in
the body of a message to majordomo@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html

Comments

Avi Kivity June 10, 2009, 9:15 a.m. UTC | #1
Marcelo Tosatti wrote:
> This way there is no need to add explicit checks in every
> for_each_shadow_entry user.
>
> Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
>
> Index: kvm/arch/x86/kvm/mmu.c
> ===================================================================
> --- kvm.orig/arch/x86/kvm/mmu.c
> +++ kvm/arch/x86/kvm/mmu.c
> @@ -1273,6 +1273,11 @@ static bool shadow_walk_okay(struct kvm_
>  {
>  	if (iterator->level < PT_PAGE_TABLE_LEVEL)
>  		return false;
> +
> +	if (iterator->level == PT_PAGE_TABLE_LEVEL)
> +		if (is_large_pte(*iterator->sptep))
> +			return false;
>
>   
s/==/>/?
Avi Kivity June 10, 2009, 9:21 a.m. UTC | #2
Avi Kivity wrote:
> Marcelo Tosatti wrote:
>> This way there is no need to add explicit checks in every
>> for_each_shadow_entry user.
>>
>> Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
>>
>> Index: kvm/arch/x86/kvm/mmu.c
>> ===================================================================
>> --- kvm.orig/arch/x86/kvm/mmu.c
>> +++ kvm/arch/x86/kvm/mmu.c
>> @@ -1273,6 +1273,11 @@ static bool shadow_walk_okay(struct kvm_
>>  {
>>      if (iterator->level < PT_PAGE_TABLE_LEVEL)
>>          return false;
>> +
>> +    if (iterator->level == PT_PAGE_TABLE_LEVEL)
>> +        if (is_large_pte(*iterator->sptep))
>> +            return false;
>>
>>   
> s/==/>/?
>

Ah, it's actually fine.  But changing == to >= will make it 1GB-page-ready.
Marcelo Tosatti June 11, 2009, 12:38 p.m. UTC | #3
On Wed, Jun 10, 2009 at 12:21:05PM +0300, Avi Kivity wrote:
> Avi Kivity wrote:
>> Marcelo Tosatti wrote:
>>> This way there is no need to add explicit checks in every
>>> for_each_shadow_entry user.
>>>
>>> Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
>>>
>>> Index: kvm/arch/x86/kvm/mmu.c
>>> ===================================================================
>>> --- kvm.orig/arch/x86/kvm/mmu.c
>>> +++ kvm/arch/x86/kvm/mmu.c
>>> @@ -1273,6 +1273,11 @@ static bool shadow_walk_okay(struct kvm_
>>>  {
>>>      if (iterator->level < PT_PAGE_TABLE_LEVEL)
>>>          return false;
>>> +
>>> +    if (iterator->level == PT_PAGE_TABLE_LEVEL)
>>> +        if (is_large_pte(*iterator->sptep))
>>> +            return false;
>>>
>>>   
>> s/==/>/?
>>
>
> Ah, it's actually fine.  But changing == to >= will make it 1GB-page-ready.

Humpf, better to check the level explicitly before interpreting bit 7, so let's 
skip this for 1GB pages.

Avi Kivity June 11, 2009, 2:17 p.m. UTC | #4
Marcelo Tosatti wrote:
>>>> @@ -1273,6 +1273,11 @@ static bool shadow_walk_okay(struct kvm_
>>>>  {
>>>>      if (iterator->level < PT_PAGE_TABLE_LEVEL)
>>>>          return false;
>>>> +
>>>> +    if (iterator->level == PT_PAGE_TABLE_LEVEL)
>>>> +        if (is_large_pte(*iterator->sptep))
>>>> +            return false;
>>>>
>>>>   
>>>>         
>>> s/==/>/?
>>>
>>>       
>> Ah, it's actually fine.  But changing == to >= will make it 1GB-page-ready.
>>     
>
> Humpf, better to check the level explicitly before interpreting bit 7, so let's 
> skip this for 1GB pages.
>
>   

Okay.  But I'm rewriting shadow_walk_* afterwards.
Patch

Index: kvm/arch/x86/kvm/mmu.c
===================================================================
--- kvm.orig/arch/x86/kvm/mmu.c
+++ kvm/arch/x86/kvm/mmu.c
@@ -1273,6 +1273,11 @@  static bool shadow_walk_okay(struct kvm_
 {
 	if (iterator->level < PT_PAGE_TABLE_LEVEL)
 		return false;
+
+	if (iterator->level == PT_PAGE_TABLE_LEVEL)
+		if (is_large_pte(*iterator->sptep))
+			return false;
+
 	iterator->index = SHADOW_PT_INDEX(iterator->addr, iterator->level);
 	iterator->sptep	= ((u64 *)__va(iterator->shadow_addr)) + iterator->index;
 	return true;