
delete PAGE_ORDER_1G in PoD

Message ID 1461655669-7815-1-git-send-email-zhangcy@cn.fujitsu.com (mailing list archive)
State New, archived

Commit Message

zhangcy April 26, 2016, 7:27 a.m. UTC
PoD does not have cache list for 1GB pages.

Signed-off-by: zhangcy <zhangcy@cn.fujitsu.com>
---
 xen/arch/x86/mm/p2m-pod.c | 4 ----
 1 file changed, 4 deletions(-)

Comments

Jan Beulich April 26, 2016, 8:33 a.m. UTC | #1
>>> On 26.04.16 at 09:27, <zhangcy@cn.fujitsu.com> wrote:
> PoD does not have cache list for 1GB pages.

But it might gain support in the future. Is there any harm in this code
being there?

Jan

> Signed-off-by: zhangcy <zhangcy@cn.fujitsu.com>
> ---
>  xen/arch/x86/mm/p2m-pod.c | 4 ----
>  1 file changed, 4 deletions(-)
> 
> diff --git a/xen/arch/x86/mm/p2m-pod.c b/xen/arch/x86/mm/p2m-pod.c
> index a931f2c..89a07ee 100644
> --- a/xen/arch/x86/mm/p2m-pod.c
> +++ b/xen/arch/x86/mm/p2m-pod.c
> @@ -122,10 +122,6 @@ p2m_pod_cache_add(struct p2m_domain *p2m,
>      /* Then add to the appropriate populate-on-demand list. */
>      switch ( order )
>      {
> -    case PAGE_ORDER_1G:
> -        for ( i = 0; i < (1UL << PAGE_ORDER_1G); i += 1UL << PAGE_ORDER_2M )
> -            page_list_add_tail(page + i, &p2m->pod.super);
> -        break;
>      case PAGE_ORDER_2M:
>          page_list_add_tail(page, &p2m->pod.super);
>          break;
> -- 
> 1.7.12.4
zhangcy April 26, 2016, 8:41 a.m. UTC | #2
>>>> On 26.04.16 at 09:27, <zhangcy@cn.fujitsu.com> wrote:
>> PoD does not have cache list for 1GB pages.
>
> But it might gain support in the future. Is there any harm in this code
> being there?

There is no harm in this code. But when I read it, I wondered why order
would ever be PAGE_ORDER_1G, which confused me, so I just deleted what
looked like unused code.

Thanks

> Jan
>
>> Signed-off-by: zhangcy <zhangcy@cn.fujitsu.com>
>> ---
>>  xen/arch/x86/mm/p2m-pod.c | 4 ----
>>  1 file changed, 4 deletions(-)
>>
>> diff --git a/xen/arch/x86/mm/p2m-pod.c b/xen/arch/x86/mm/p2m-pod.c
>> index a931f2c..89a07ee 100644
>> --- a/xen/arch/x86/mm/p2m-pod.c
>> +++ b/xen/arch/x86/mm/p2m-pod.c
>> @@ -122,10 +122,6 @@ p2m_pod_cache_add(struct p2m_domain *p2m,
>>      /* Then add to the appropriate populate-on-demand list. */
>>      switch ( order )
>>      {
>> -    case PAGE_ORDER_1G:
>> -        for ( i = 0; i < (1UL << PAGE_ORDER_1G); i += 1UL << PAGE_ORDER_2M )
>> -            page_list_add_tail(page + i, &p2m->pod.super);
>> -        break;
>>      case PAGE_ORDER_2M:
>>          page_list_add_tail(page, &p2m->pod.super);
>>          break;
>> --
>> 1.7.12.4
George Dunlap April 26, 2016, 10:23 a.m. UTC | #3
On 26/04/16 08:27, zhangcy wrote:
> PoD does not have cache list for 1GB pages.
> 
> Signed-off-by: zhangcy <zhangcy@cn.fujitsu.com>

Thanks for the patch.  FYI we normally tag the area in the title in a
structured way; I probably would have used something like the following:

xen/pod: Remove code handling PAGE_ORDER_1G from p2m_pod_cache_add

But with regards to the patch itself: The question isn't whether we have
a cache list for 1G pages; the question is whether p2m_pod_cache_add()
will ever be called with order == PAGE_ORDER_1G.

Taking a quick glance around, it looks like in theory if a guest called
decrease_reservation with order == PAGE_ORDER_1G, you could conceivably
get to p2m_pod_cache_add() with order == PAGE_ORDER_1G.

Even if the answer is "no", that may change in the future; which means
we need to at very least add an ASSERT(), and possibly add a more robust
failure case.  And at that point, since handling it properly only
requires 4 lines, you might as well just handle it.

Thanks,
 -George

> ---
>  xen/arch/x86/mm/p2m-pod.c | 4 ----
>  1 file changed, 4 deletions(-)
> 
> diff --git a/xen/arch/x86/mm/p2m-pod.c b/xen/arch/x86/mm/p2m-pod.c
> index a931f2c..89a07ee 100644
> --- a/xen/arch/x86/mm/p2m-pod.c
> +++ b/xen/arch/x86/mm/p2m-pod.c
> @@ -122,10 +122,6 @@ p2m_pod_cache_add(struct p2m_domain *p2m,
>      /* Then add to the appropriate populate-on-demand list. */
>      switch ( order )
>      {
> -    case PAGE_ORDER_1G:
> -        for ( i = 0; i < (1UL << PAGE_ORDER_1G); i += 1UL << PAGE_ORDER_2M )
> -            page_list_add_tail(page + i, &p2m->pod.super);
> -        break;
>      case PAGE_ORDER_2M:
>          page_list_add_tail(page, &p2m->pod.super);
>          break;
>
zhangcy April 26, 2016, 10:49 a.m. UTC | #4
> On 26/04/16 08:27, zhangcy wrote:
>> PoD does not have cache list for 1GB pages.
>>
>> Signed-off-by: zhangcy <zhangcy@cn.fujitsu.com>
>
> Thanks for the patch.  FYI we normally tag the area in the title in a
> structured way; I probably would have used something like the following:
>
> xen/pod: Remove code handling PAGE_ORDER_1G from p2m_pod_cache_add

Got it, thanks.

> But with regards to the patch itself: The question isn't whether we have
> a cache list for 1G pages; the question is whether p2m_pod_cache_add()
> will ever be called with order == PAGE_ORDER_1G.
>
> Taking a quick glance around, it looks like in theory if a guest called
> decrease_reservation with order == PAGE_ORDER_1G, you could conceivably
> get to p2m_pod_cache_add() with order == PAGE_ORDER_1G.

I was thinking of it like this:

p2m_pod_decrease_reservation
  - if ( steal_for_cache && p2m_is_ram(t) )
     - p2m_pod_cache_add(p2m, page, cur_order)

Since p2m_is_ram(t) holds, that RAM also came from the PoD cache, and the
PoD cache only holds 4K and 2M pages, so I assumed the RAM is only ever
4K or 2M.

But I am not sure about this...

> Even if the answer is "no", that may change in the future; which means
> we need to at very least add an ASSERT(), and possibly add a more robust
> failure case.  And at that point, since handling it properly only
> requires 4 lines, you might as well just handle it.

OK, thanks.

 - zhang

> Thanks,
>  -George
>
>> ---
>>  xen/arch/x86/mm/p2m-pod.c | 4 ----
>>  1 file changed, 4 deletions(-)
>>
>> diff --git a/xen/arch/x86/mm/p2m-pod.c b/xen/arch/x86/mm/p2m-pod.c
>> index a931f2c..89a07ee 100644
>> --- a/xen/arch/x86/mm/p2m-pod.c
>> +++ b/xen/arch/x86/mm/p2m-pod.c
>> @@ -122,10 +122,6 @@ p2m_pod_cache_add(struct p2m_domain *p2m,
>>      /* Then add to the appropriate populate-on-demand list. */
>>      switch ( order )
>>      {
>> -    case PAGE_ORDER_1G:
>> -        for ( i = 0; i < (1UL << PAGE_ORDER_1G); i += 1UL << PAGE_ORDER_2M )
>> -            page_list_add_tail(page + i, &p2m->pod.super);
>> -        break;
>>      case PAGE_ORDER_2M:
>>          page_list_add_tail(page, &p2m->pod.super);
>>          break;
George Dunlap April 26, 2016, 10:57 a.m. UTC | #5
On 26/04/16 11:49, Zhang, Chunyu wrote:
> 
>> On 26/04/16 08:27, zhangcy wrote:
>>> PoD does not have cache list for 1GB pages.
>>>
>>> Signed-off-by: zhangcy <zhangcy@cn.fujitsu.com>
>>
>> Thanks for the patch.  FYI we normally tag the area in the title in a
>> structured way; I probably would have used something like the following:
>>
>> xen/pod: Remove code handling PAGE_ORDER_1G from p2m_pod_cache_add
> got it, thanks.
>>
>> But with regards to the patch itself: The question isn't whether we have
>> a cache list for 1G pages; the question is whether p2m_pod_cache_add()
>> will ever be called with order == PAGE_ORDER_1G.
>>
>> Taking a quick glance around, it looks like in theory if a guest called
>> decrease_reservation with order == PAGE_ORDER_1G, you could conceivably
>> get to p2m_pod_cache_add() with order == PAGE_ORDER_1G.
> i just think like this:
> 
> p2m_pod_decrease_reservation 
>   - if ( steal_for_cache && p2m_is_ram(t) )
>      - p2m_pod_cache_add(p2m, page, cur_order)
> i think p2m_is_ram(t) , ram also from pod cache,

No, that's memory from the guest's p2m table.  The p2m table can have 1G
entries.

 -George
zhangcy April 26, 2016, 11:05 a.m. UTC | #6
> On 26/04/16 11:49, Zhang, Chunyu wrote:
>>
>>> On 26/04/16 08:27, zhangcy wrote:
>>>> PoD does not have cache list for 1GB pages.
>>>>
>>>> Signed-off-by: zhangcy <zhangcy@cn.fujitsu.com>
>>>
>>> Thanks for the patch.  FYI we normally tag the area in the title in a
>>> structured way; I probably would have used something like the following:
>>>
>>> xen/pod: Remove code handling PAGE_ORDER_1G from p2m_pod_cache_add
>> got it, thanks.
>>>
>>> But with regards to the patch itself: The question isn't whether we have
>>> a cache list for 1G pages; the question is whether p2m_pod_cache_add()
>>> will ever be called with order == PAGE_ORDER_1G.
>>>
>>> Taking a quick glance around, it looks like in theory if a guest called
>>> decrease_reservation with order == PAGE_ORDER_1G, you could conceivably
>>> get to p2m_pod_cache_add() with order == PAGE_ORDER_1G.
>> i just think like this:
>>
>> p2m_pod_decrease_reservation
>>   - if ( steal_for_cache && p2m_is_ram(t) )
>>      - p2m_pod_cache_add(p2m, page, cur_order)
>> i think p2m_is_ram(t) , ram also from pod cache,
>
> No, that's memory from the guest's p2m table.  The p2m table can have 1G
> entries.

Right...
Sorry, I did not write that clearly. I mean the RAM comes about like this:
- the PoD cache holds 4K or 2M pages
- RAM is taken from the PoD cache
- that RAM is then set in the p2m table
So I think the p2m table entries are only ever 4K or 2M.

I am not sure about this, though.

> -George
George Dunlap April 26, 2016, 11:12 a.m. UTC | #7
On 26/04/16 12:05, Zhang, Chunyu wrote:
> 
>> On 26/04/16 11:49, Zhang, Chunyu wrote:
>>>
>>>> On 26/04/16 08:27, zhangcy wrote:
>>>>> PoD does not have cache list for 1GB pages.
>>>>>
>>>>> Signed-off-by: zhangcy <zhangcy@cn.fujitsu.com>
>>>>
>>>> Thanks for the patch.  FYI we normally tag the area in the title in a
>>>> structured way; I probably would have used something like the following:
>>>>
>>>> xen/pod: Remove code handling PAGE_ORDER_1G from p2m_pod_cache_add
>>> got it, thanks.
>>>>
>>>> But with regards to the patch itself: The question isn't whether we have
>>>> a cache list for 1G pages; the question is whether p2m_pod_cache_add()
>>>> will ever be called with order == PAGE_ORDER_1G.
>>>>
>>>> Taking a quick glance around, it looks like in theory if a guest called
>>>> decrease_reservation with order == PAGE_ORDER_1G, you could conceivably
>>>> get to p2m_pod_cache_add() with order == PAGE_ORDER_1G.
>>> i just think like this:
>>>
>>> p2m_pod_decrease_reservation
>>>   - if ( steal_for_cache && p2m_is_ram(t) )
>>>      - p2m_pod_cache_add(p2m, page, cur_order)
>>> i think p2m_is_ram(t) , ram also from pod cache,
>>
>> No, that's memory from the guest's p2m table.  The p2m table can have 1G
> right..
> sorry , i did not write clearly.
> i mean: ram come like this:
> - pod cache is 4K or 2M
> - ram get from pod cache
> - set ram to p2m table.
> so i think p2m table is 4K or 2M.

Oh, right, I see -- a guest booted in PoD mode would normally only have
2M or 4k entries, since that's how they get filled in.

But there's nothing preventing someone coming up with a new domain
builder that comes with some 1G entries filled in already.  Nor is there
anything stopping a guest ballooning out a 1G region, then ballooning it
back in (hoping to get a full 1G entry), and then ballooning it out
again, causing Xen to potentially leak memory.

I haven't checked to see whether any of that is actually feasible or
not, but four lines of code is a small price to pay to not have to worry
about it. :-)

 -George
zhangcy April 26, 2016, 11:48 a.m. UTC | #8
[...]
>>>> i think p2m_is_ram(t) , ram also from pod cache,
>>>
>>> No, that's memory from the guest's p2m table.  The p2m table can have 1G
>>> entries.
>> right..
>> sorry , i did not write clearly.
>> i mean: ram come like this:
>> - pod cache is 4K or 2M
>> - ram get from pod cache
>> - set ram to p2m table.
>> so i think p2m table is 4K or 2M.
>
> Oh, right, I see -- a guest booted in PoD mode would normally only have
> 2M or 4k entries, since that's how they get filled in.
>
> But there's nothing preventing someone coming up with a new domain
> builder that comes with some 1G entries filled in already.  Nor is there

About PoD mode: my understanding is that
- a new domain builder populates no memory, and
- on an EPT violation, memory is populated from the PoD cache and the
  EPT entries are set,
so a newly built domain would have no 1G entries.

> anything stopping a guest ballooning out a 1G region, then ballooning it
> back in (hoping to get a full 1G entry), and then ballooning it out
> again, causing Xen to potentially leak memory.
>
> I haven't checked to see whether any of that is actually feasible or
> not, but four lines of code is a small price to pay to not have to worry
> about it. :-)

I am not worried about that. Right now I am studying how PoD, ballooning
and memory interact, so I want to know whether my understanding of them
is right.

> -George

Patch

diff --git a/xen/arch/x86/mm/p2m-pod.c b/xen/arch/x86/mm/p2m-pod.c
index a931f2c..89a07ee 100644
--- a/xen/arch/x86/mm/p2m-pod.c
+++ b/xen/arch/x86/mm/p2m-pod.c
@@ -122,10 +122,6 @@  p2m_pod_cache_add(struct p2m_domain *p2m,
     /* Then add to the appropriate populate-on-demand list. */
     switch ( order )
     {
-    case PAGE_ORDER_1G:
-        for ( i = 0; i < (1UL << PAGE_ORDER_1G); i += 1UL << PAGE_ORDER_2M )
-            page_list_add_tail(page + i, &p2m->pod.super);
-        break;
     case PAGE_ORDER_2M:
         page_list_add_tail(page, &p2m->pod.super);
         break;