diff mbox series

[v7,7/9] xen/arm: unpopulate memory when domain is static

Message ID 20220620024408.203797-8-Penny.Zheng@arm.com (mailing list archive)
State Superseded
Series populate/unpopulate memory when domain on static allocation

Commit Message

Penny Zheng June 20, 2022, 2:44 a.m. UTC
Today, when a domain unpopulates memory at runtime, the pages are always
handed back to the heap allocator. This is a problem if the domain is
static.

Pages serving as guest RAM for a static domain shall be reserved to that
domain only and never be used for any other purpose, so they shall never
go back to the heap allocator.

This commit puts reserved pages on the new list resv_page_list only after
having taken them off the "normal" list, once the last reference has been
dropped.

Signed-off-by: Penny Zheng <penny.zheng@arm.com>
---
v7 changes:
- Add the page on the resv_page_list *after* it has been freed
---
v6 changes:
- refine in-code comment
- move PGC_static !CONFIG_STATIC_MEMORY definition to common header
---
v5 changes:
- adapt this patch for PGC_staticmem
---
v4 changes:
- no changes
---
v3 changes:
- have page_list_del() just once out of the if()
- remove resv_pages counter
- make arch_free_heap_page be an expression, not a compound statement.
---
v2 changes:
- put reserved pages on resv_page_list after having taken them off
the "normal" list
---
 xen/common/domain.c     | 4 ++++
 xen/common/page_alloc.c | 4 ++++
 xen/include/xen/mm.h    | 9 +++++++++
 xen/include/xen/sched.h | 3 +++
 4 files changed, 20 insertions(+)

Comments

Jan Beulich June 22, 2022, 9:24 a.m. UTC | #1
On 20.06.2022 04:44, Penny Zheng wrote:
> --- a/xen/common/page_alloc.c
> +++ b/xen/common/page_alloc.c
> @@ -2498,6 +2498,10 @@ void free_domheap_pages(struct page_info *pg, unsigned int order)
>          }
>  
>          free_heap_pages(pg, order, scrub);
> +
> +        /* Add page on the resv_page_list *after* it has been freed. */
> +        if ( unlikely(pg->count_info & PGC_static) )
> +            put_static_pages(d, pg, order);

Unless I'm overlooking something the list addition done there / ...

> --- a/xen/include/xen/mm.h
> +++ b/xen/include/xen/mm.h
> @@ -90,6 +90,15 @@ void free_staticmem_pages(struct page_info *pg, unsigned long nr_mfns,
>                            bool need_scrub);
>  int acquire_domstatic_pages(struct domain *d, mfn_t smfn, unsigned int nr_mfns,
>                              unsigned int memflags);
> +#ifdef CONFIG_STATIC_MEMORY
> +#define put_static_pages(d, page, order) ({                 \
> +    unsigned int i;                                         \
> +    for ( i = 0; i < (1 << (order)); i++ )                  \
> +        page_list_add_tail((pg) + i, &(d)->resv_page_list); \
> +})

... here isn't guarded by any lock. Feels like we've been there before.
It's not really clear to me why the freeing of staticmem pages needs to
be split like this - if it wasn't split, the list addition would
"naturally" occur with the lock held, I think.

Furthermore careful with the local variable name used here. Consider
what would happen with an invocation of

    put_static_pages(d, page, i);

The common approach is to suffix an underscore to the variable name.
Such names are not supposed to be used outside of macro definitions,
and hence there's then no potential for such a conflict.

Finally I think you mean (1u << (order)) to be on the safe side against
UB if order could ever reach 31. Then again - is "order" as a parameter
needed here in the first place? Wasn't it that staticmem operations are
limited to order-0 regions?

Jan
Penny Zheng June 27, 2022, 10:03 a.m. UTC | #2
Hi Jan

> -----Original Message-----
> From: Jan Beulich <jbeulich@suse.com>
> Sent: Wednesday, June 22, 2022 5:24 PM
> To: Penny Zheng <Penny.Zheng@arm.com>
> Cc: Wei Chen <Wei.Chen@arm.com>; Andrew Cooper
> <andrew.cooper3@citrix.com>; George Dunlap <george.dunlap@citrix.com>;
> Julien Grall <julien@xen.org>; Stefano Stabellini <sstabellini@kernel.org>;
> Wei Liu <wl@xen.org>; xen-devel@lists.xenproject.org
> Subject: Re: [PATCH v7 7/9] xen/arm: unpopulate memory when domain is
> static
> 
> On 20.06.2022 04:44, Penny Zheng wrote:
> > --- a/xen/common/page_alloc.c
> > +++ b/xen/common/page_alloc.c
> > @@ -2498,6 +2498,10 @@ void free_domheap_pages(struct page_info *pg,
> unsigned int order)
> >          }
> >
> >          free_heap_pages(pg, order, scrub);
> > +
> > +        /* Add page on the resv_page_list *after* it has been freed. */
> > +        if ( unlikely(pg->count_info & PGC_static) )
> > +            put_static_pages(d, pg, order);
> 
> Unless I'm overlooking something the list addition done there / ...
> 
> > --- a/xen/include/xen/mm.h
> > +++ b/xen/include/xen/mm.h
> > @@ -90,6 +90,15 @@ void free_staticmem_pages(struct page_info *pg,
> unsigned long nr_mfns,
> >                            bool need_scrub);  int
> > acquire_domstatic_pages(struct domain *d, mfn_t smfn, unsigned int
> nr_mfns,
> >                              unsigned int memflags);
> > +#ifdef CONFIG_STATIC_MEMORY
> > +#define put_static_pages(d, page, order) ({                 \
> > +    unsigned int i;                                         \
> > +    for ( i = 0; i < (1 << (order)); i++ )                  \
> > +        page_list_add_tail((pg) + i, &(d)->resv_page_list); \
> > +})
> 
> ... here isn't guarded by any lock. Feels like we've been there before.
> It's not really clear to me why the freeing of staticmem pages needs to be
> split like this - if it wasn't split, the list addition would "naturally" occur with
> the lock held, I think.

As you and Julien reminded me, I need to add a lock for operations (free/allocation)
on resv_page_list: I'll guard put_static_pages with d->page_alloc_lock and bring
back the lock in acquire_reserved_page.

put_static_pages, that is, adding pages to the reserved list, is only for freeing static
pages at runtime. In the static page initialization stage I also use free_staticmem_pages,
and at that stage the domain has not been constructed at all. So I prefer that the
freeing of staticmem pages stays split into two parts: free_staticmem_pages and
put_static_pages

> 
> Furthermore careful with the local variable name used here. Consider what
> would happen with an invocation of
> 
>     put_static_pages(d, page, i);
> 
> To common approach is to suffix an underscore to the variable name.
> Such names are not supposed to be used outside of macros definitions, and
> hence there's then no potential for such a conflict.
> 

Understood!! I will change "unsigned int i" to "unsigned int _i";

> Finally I think you mean (1u << (order)) to be on the safe side against UB if
> order could ever reach 31. Then again - is "order" as a parameter needed
> here in the first place? Wasn't it that staticmem operations are limited to
> order-0 regions?

Yes, right now the actual usage is limited to order-0. How about I add an assertion
here and remove the order parameter:

        /* Add page on the resv_page_list *after* it has been freed. */
        if ( unlikely(pg->count_info & PGC_static) )
        {
            ASSERT(!order);
            put_static_pages(d, pg);
        }

> Jan
Julien Grall June 27, 2022, 10:19 a.m. UTC | #3
On 27/06/2022 11:03, Penny Zheng wrote:
> Hi jan
> 
>> -----Original Message-----
> put_static_pages, that is, adding pages to the reserved list, is only for freeing static
> pages on runtime. In static page initialization stage, I also use free_statimem_pages,
> and in which stage, I think the domain has not been constructed at all. So I prefer
> the freeing of staticmem pages is split into two parts: free_staticmem_pages and
> put_static_pages

AFAIU, all the pages would have to be allocated via
acquire_domstatic_pages(). This call requires the domain to be allocated
and set up for handling memory.

Therefore, I think the split is unnecessary. This would also have the
advantage of removing one loop. Admittedly, this is not important when the
order is 0, but it would become a problem for larger orders (you may have
to pull the struct page_info into the cache multiple times).

Cheers,
Jan Beulich June 27, 2022, 1:09 p.m. UTC | #4
On 27.06.2022 12:03, Penny Zheng wrote:
>> From: Jan Beulich <jbeulich@suse.com>
>> Sent: Wednesday, June 22, 2022 5:24 PM
>>
>> Furthermore careful with the local variable name used here. Consider what
>> would happen with an invocation of
>>
>>     put_static_pages(d, page, i);
>>
>> To common approach is to suffix an underscore to the variable name.
>> Such names are not supposed to be used outside of macros definitions, and
>> hence there's then no potential for such a conflict.
>>
> 
> Understood!! I will change "unsigned int i" to "unsigned int _i";

Note how I said "suffix", not "prefix".

>> Finally I think you mean (1u << (order)) to be on the safe side against UB if
>> order could ever reach 31. Then again - is "order" as a parameter needed
>> here in the first place? Wasn't it that staticmem operations are limited to
>> order-0 regions?
> 
> Yes, right now, the actual usage is limited to order-0, how about I add assertion here
> and remove order parameter:
> 
>         /* Add page on the resv_page_list *after* it has been freed. */
>         if ( unlikely(pg->count_info & PGC_static) )
>         {
>             ASSERT(!order);
>             put_static_pages(d, pg);
>         }

I don't mind an ASSERT() as long as upper layers indeed guarantee this.
What I'm worried about is that you might assert on user-controlled input.

Jan
Penny Zheng June 29, 2022, 3:12 a.m. UTC | #5
Hi Julien and Jan

> -----Original Message-----
> From: Julien Grall <julien@xen.org>
> Sent: Monday, June 27, 2022 6:19 PM
> To: Penny Zheng <Penny.Zheng@arm.com>; Jan Beulich <jbeulich@suse.com>
> Cc: Wei Chen <Wei.Chen@arm.com>; Andrew Cooper
> <andrew.cooper3@citrix.com>; George Dunlap <george.dunlap@citrix.com>;
> Stefano Stabellini <sstabellini@kernel.org>; Wei Liu <wl@xen.org>; xen-
> devel@lists.xenproject.org
> Subject: Re: [PATCH v7 7/9] xen/arm: unpopulate memory when domain is
> static
> 
> 
> 
> On 27/06/2022 11:03, Penny Zheng wrote:
> > Hi jan
> >
> >> -----Original Message-----
> > put_static_pages, that is, adding pages to the reserved list, is only
> > for freeing static pages on runtime. In static page initialization
> > stage, I also use free_statimem_pages, and in which stage, I think the
> > domain has not been constructed at all. So I prefer the freeing of
> > staticmem pages is split into two parts: free_staticmem_pages and
> > put_static_pages
> 
> AFAIU, all the pages would have to be allocated via
> acquire_domstatic_pages(). This call requires the domain to be allocated and
> setup for handling memory.
> 
> Therefore, I think the split is unnecessary. This would also have the
> advantage to remove one loop. Admittly, this is not important when the
> order 0, but it would become a problem for larger order (you may have to
> pull the struct page_info multiple time in the cache).
> 

How about this:
I create a new function, free_domstatic_page, which would look like:
"
static void free_domstatic_page(struct domain *d, struct page_info *page)
{
    bool drop_dom_ref, need_scrub;

    /* NB. May recursively lock from relinquish_memory(). */
    spin_lock_recursive(&d->page_alloc_lock);

    arch_free_heap_page(d, page);

    /*
     * Normally we expect a domain to clear pages before freeing them,
     * if it cares about the secrecy of their contents. However, after
     * a domain has died we assume responsibility for erasure. We do
     * scrub regardless if option scrub_domheap is set.
     */
    need_scrub = d->is_dying || scrub_debug || opt_scrub_domheap;

    free_staticmem_pages(page, 1, need_scrub);

    /* Add page on the resv_page_list *after* it has been freed. */
    put_static_page(d, page);

    drop_dom_ref = !domain_adjust_tot_pages(d, -1);

    spin_unlock_recursive(&d->page_alloc_lock);

    if ( drop_dom_ref )
        put_domain(d);
}
"

In free_domheap_pages, we just call free_domstatic_page:

"
@@ -2430,6 +2430,9 @@ void free_domheap_pages(struct page_info *pg, unsigned int order)

     ASSERT_ALLOC_CONTEXT();

+    if ( unlikely(pg->count_info & PGC_static) )
+        return free_domstatic_page(d, pg);
+
     if ( unlikely(is_xen_heap_page(pg)) )
     {
         /* NB. May recursively lock from relinquish_memory(). */
@@ -2673,6 +2676,38 @@ void free_staticmem_pages(struct page_info *pg, unsigned long nr_mfns,
"

Then the split could be avoided and the extra loop saved.
Any suggestions?

> Cheers,
> 
> --
> Julien Grall
Jan Beulich June 29, 2022, 5:55 a.m. UTC | #6
On 29.06.2022 05:12, Penny Zheng wrote:
> Hi Julien and Jan
> 
>> -----Original Message-----
>> From: Julien Grall <julien@xen.org>
>> Sent: Monday, June 27, 2022 6:19 PM
>> To: Penny Zheng <Penny.Zheng@arm.com>; Jan Beulich <jbeulich@suse.com>
>> Cc: Wei Chen <Wei.Chen@arm.com>; Andrew Cooper
>> <andrew.cooper3@citrix.com>; George Dunlap <george.dunlap@citrix.com>;
>> Stefano Stabellini <sstabellini@kernel.org>; Wei Liu <wl@xen.org>; xen-
>> devel@lists.xenproject.org
>> Subject: Re: [PATCH v7 7/9] xen/arm: unpopulate memory when domain is
>> static
>>
>>
>>
>> On 27/06/2022 11:03, Penny Zheng wrote:
>>> Hi jan
>>>
>>>> -----Original Message-----
>>> put_static_pages, that is, adding pages to the reserved list, is only
>>> for freeing static pages on runtime. In static page initialization
>>> stage, I also use free_statimem_pages, and in which stage, I think the
>>> domain has not been constructed at all. So I prefer the freeing of
>>> staticmem pages is split into two parts: free_staticmem_pages and
>>> put_static_pages
>>
>> AFAIU, all the pages would have to be allocated via
>> acquire_domstatic_pages(). This call requires the domain to be allocated and
>> setup for handling memory.
>>
>> Therefore, I think the split is unnecessary. This would also have the
>> advantage to remove one loop. Admittly, this is not important when the
>> order 0, but it would become a problem for larger order (you may have to
>> pull the struct page_info multiple time in the cache).
>>
> 
> How about this:
> I create a new func free_domstatic_page, and it will be like:
> "
> static void free_domstatic_page(struct domain *d, struct page_info *page)
> {
>     unsigned int i;
>     bool need_scrub;
> 
>     /* NB. May recursively lock from relinquish_memory(). */
>     spin_lock_recursive(&d->page_alloc_lock);
> 
>     arch_free_heap_page(d, page);
> 
>     /*
>      * Normally we expect a domain to clear pages before freeing them,
>      * if it cares about the secrecy of their contents. However, after
>      * a domain has died we assume responsibility for erasure. We do
>      * scrub regardless if option scrub_domheap is set.
>      */
>     need_scrub = d->is_dying || scrub_debug || opt_scrub_domheap;
> 
>     free_staticmem_pages(page, 1, need_scrub);
> 
>     /* Add page on the resv_page_list *after* it has been freed. */
>     put_static_page(d, page);
> 
>     drop_dom_ref = !domain_adjust_tot_pages(d, -1);
> 
>     spin_unlock_recursive(&d->page_alloc_lock);
> 
>     if ( drop_dom_ref )
>         put_domain(d);
> }
> "
> 
> In free_domheap_pages, we just call free_domstatic_page:
> 
> "
> @@ -2430,6 +2430,9 @@ void free_domheap_pages(struct page_info *pg, unsigned int order)
> 
>      ASSERT_ALLOC_CONTEXT();
> 
> +    if ( unlikely(pg->count_info & PGC_static) )
> +        return free_domstatic_page(d, pg);
> +
>      if ( unlikely(is_xen_heap_page(pg)) )
>      {
>          /* NB. May recursively lock from relinquish_memory(). */
> @@ -2673,6 +2676,38 @@ void free_staticmem_pages(struct page_info *pg, unsigned long nr_mfns,
> "
> 
> Then the split could be avoided and we could save the loop as much as possible.
> Any suggestion? 

Looks reasonable at first glance (will need to see it in proper
context for a final opinion), provided e.g. Xen heap pages can never
be static.

Jan
Penny Zheng June 29, 2022, 6:08 a.m. UTC | #7
Hi Jan

> -----Original Message-----
> From: Jan Beulich <jbeulich@suse.com>
> Sent: Wednesday, June 29, 2022 1:56 PM
> To: Penny Zheng <Penny.Zheng@arm.com>
> Cc: Wei Chen <Wei.Chen@arm.com>; Andrew Cooper
> <andrew.cooper3@citrix.com>; George Dunlap <george.dunlap@citrix.com>;
> Stefano Stabellini <sstabellini@kernel.org>; Wei Liu <wl@xen.org>; xen-
> devel@lists.xenproject.org; Julien Grall <julien@xen.org>
> Subject: Re: [PATCH v7 7/9] xen/arm: unpopulate memory when domain is
> static
> 
> On 29.06.2022 05:12, Penny Zheng wrote:
> > Hi Julien and Jan
> >
> >> -----Original Message-----
> >> From: Julien Grall <julien@xen.org>
> >> Sent: Monday, June 27, 2022 6:19 PM
> >> To: Penny Zheng <Penny.Zheng@arm.com>; Jan Beulich
> >> <jbeulich@suse.com>
> >> Cc: Wei Chen <Wei.Chen@arm.com>; Andrew Cooper
> >> <andrew.cooper3@citrix.com>; George Dunlap
> >> <george.dunlap@citrix.com>; Stefano Stabellini
> >> <sstabellini@kernel.org>; Wei Liu <wl@xen.org>; xen-
> >> devel@lists.xenproject.org
> >> Subject: Re: [PATCH v7 7/9] xen/arm: unpopulate memory when domain
> is
> >> static
> >>
> >>
> >>
> >> On 27/06/2022 11:03, Penny Zheng wrote:
> >>> Hi jan
> >>>
> >>>> -----Original Message-----
> >>> put_static_pages, that is, adding pages to the reserved list, is
> >>> only for freeing static pages on runtime. In static page
> >>> initialization stage, I also use free_statimem_pages, and in which
> >>> stage, I think the domain has not been constructed at all. So I
> >>> prefer the freeing of staticmem pages is split into two parts:
> >>> free_staticmem_pages and put_static_pages
> >>
> >> AFAIU, all the pages would have to be allocated via
> >> acquire_domstatic_pages(). This call requires the domain to be
> >> allocated and setup for handling memory.
> >>
> >> Therefore, I think the split is unnecessary. This would also have the
> >> advantage to remove one loop. Admittly, this is not important when
> >> the order 0, but it would become a problem for larger order (you may
> >> have to pull the struct page_info multiple time in the cache).
> >>
> >
> > How about this:
> > I create a new func free_domstatic_page, and it will be like:
> > "
> > static void free_domstatic_page(struct domain *d, struct page_info
> > *page) {
> >     unsigned int i;
> >     bool need_scrub;
> >
> >     /* NB. May recursively lock from relinquish_memory(). */
> >     spin_lock_recursive(&d->page_alloc_lock);
> >
> >     arch_free_heap_page(d, page);
> >
> >     /*
> >      * Normally we expect a domain to clear pages before freeing them,
> >      * if it cares about the secrecy of their contents. However, after
> >      * a domain has died we assume responsibility for erasure. We do
> >      * scrub regardless if option scrub_domheap is set.
> >      */
> >     need_scrub = d->is_dying || scrub_debug || opt_scrub_domheap;
> >
> >     free_staticmem_pages(page, 1, need_scrub);
> >
> >     /* Add page on the resv_page_list *after* it has been freed. */
> >     put_static_page(d, page);
> >
> >     drop_dom_ref = !domain_adjust_tot_pages(d, -1);
> >
> >     spin_unlock_recursive(&d->page_alloc_lock);
> >
> >     if ( drop_dom_ref )
> >         put_domain(d);
> > }
> > "
> >
> > In free_domheap_pages, we just call free_domstatic_page:
> >
> > "
> > @@ -2430,6 +2430,9 @@ void free_domheap_pages(struct page_info *pg,
> > unsigned int order)
> >
> >      ASSERT_ALLOC_CONTEXT();
> >
> > +    if ( unlikely(pg->count_info & PGC_static) )
> > +        return free_domstatic_page(d, pg);
> > +
> >      if ( unlikely(is_xen_heap_page(pg)) )
> >      {
> >          /* NB. May recursively lock from relinquish_memory(). */ @@
> > -2673,6 +2676,38 @@ void free_staticmem_pages(struct page_info *pg,
> > unsigned long nr_mfns, "
> >
> > Then the split could be avoided and we could save the loop as much as
> possible.
> > Any suggestion?
> 
> Looks reasonable at the first glance (will need to see it in proper context for a
> final opinion), provided e.g. Xen heap pages can never be static.

If you would prefer that free_domheap_pages not call free_domstatic_page, then
maybe the check should happen in put_page:
"
@@ -1622,6 +1622,8 @@ void put_page(struct page_info *page)

     if ( unlikely((nx & PGC_count_mask) == 0) )
     {
+        if ( unlikely(page->count_info & PGC_static) )
+            free_domstatic_page(page);
         free_domheap_page(page);
     }
 }
"
Wdyt now?
 
> 
> Jan
Jan Beulich June 29, 2022, 6:19 a.m. UTC | #8
On 29.06.2022 08:08, Penny Zheng wrote:
>> From: Jan Beulich <jbeulich@suse.com>
>> Sent: Wednesday, June 29, 2022 1:56 PM
>>
>> On 29.06.2022 05:12, Penny Zheng wrote:
>>>> From: Julien Grall <julien@xen.org>
>>>> Sent: Monday, June 27, 2022 6:19 PM
>>>>
>>>> On 27/06/2022 11:03, Penny Zheng wrote:
>>>>>> -----Original Message-----
>>>>> put_static_pages, that is, adding pages to the reserved list, is
>>>>> only for freeing static pages on runtime. In static page
>>>>> initialization stage, I also use free_statimem_pages, and in which
>>>>> stage, I think the domain has not been constructed at all. So I
>>>>> prefer the freeing of staticmem pages is split into two parts:
>>>>> free_staticmem_pages and put_static_pages
>>>>
>>>> AFAIU, all the pages would have to be allocated via
>>>> acquire_domstatic_pages(). This call requires the domain to be
>>>> allocated and setup for handling memory.
>>>>
>>>> Therefore, I think the split is unnecessary. This would also have the
>>>> advantage to remove one loop. Admittly, this is not important when
>>>> the order 0, but it would become a problem for larger order (you may
>>>> have to pull the struct page_info multiple time in the cache).
>>>>
>>>
>>> How about this:
>>> I create a new func free_domstatic_page, and it will be like:
>>> "
>>> static void free_domstatic_page(struct domain *d, struct page_info
>>> *page) {
>>>     unsigned int i;
>>>     bool need_scrub;
>>>
>>>     /* NB. May recursively lock from relinquish_memory(). */
>>>     spin_lock_recursive(&d->page_alloc_lock);
>>>
>>>     arch_free_heap_page(d, page);
>>>
>>>     /*
>>>      * Normally we expect a domain to clear pages before freeing them,
>>>      * if it cares about the secrecy of their contents. However, after
>>>      * a domain has died we assume responsibility for erasure. We do
>>>      * scrub regardless if option scrub_domheap is set.
>>>      */
>>>     need_scrub = d->is_dying || scrub_debug || opt_scrub_domheap;
>>>
>>>     free_staticmem_pages(page, 1, need_scrub);
>>>
>>>     /* Add page on the resv_page_list *after* it has been freed. */
>>>     put_static_page(d, page);
>>>
>>>     drop_dom_ref = !domain_adjust_tot_pages(d, -1);
>>>
>>>     spin_unlock_recursive(&d->page_alloc_lock);
>>>
>>>     if ( drop_dom_ref )
>>>         put_domain(d);
>>> }
>>> "
>>>
>>> In free_domheap_pages, we just call free_domstatic_page:
>>>
>>> "
>>> @@ -2430,6 +2430,9 @@ void free_domheap_pages(struct page_info *pg,
>>> unsigned int order)
>>>
>>>      ASSERT_ALLOC_CONTEXT();
>>>
>>> +    if ( unlikely(pg->count_info & PGC_static) )
>>> +        return free_domstatic_page(d, pg);
>>> +
>>>      if ( unlikely(is_xen_heap_page(pg)) )
>>>      {
>>>          /* NB. May recursively lock from relinquish_memory(). */ @@
>>> -2673,6 +2676,38 @@ void free_staticmem_pages(struct page_info *pg,
>>> unsigned long nr_mfns, "
>>>
>>> Then the split could be avoided and we could save the loop as much as
>> possible.
>>> Any suggestion?
>>
>> Looks reasonable at the first glance (will need to see it in proper context for a
>> final opinion), provided e.g. Xen heap pages can never be static.
> 
> If you don't prefer let free_domheap_pages to call free_domstatic_page, then, maybe
> the if-array should happen at put_page
> "
> @@ -1622,6 +1622,8 @@ void put_page(struct page_info *page)
> 
>      if ( unlikely((nx & PGC_count_mask) == 0) )
>      {
> +        if ( unlikely(page->count_info & PGC_static) )
> +            free_domstatic_page(page);
>          free_domheap_page(page);
>      }
>  }
> "
> Wdyt now?

Personally I'd prefer this variant, but we'll have to see what Julien or
the other Arm maintainers think.

Jan
Julien Grall June 29, 2022, 10:49 a.m. UTC | #9
Hi Jan,

On 29/06/2022 07:19, Jan Beulich wrote:
> On 29.06.2022 08:08, Penny Zheng wrote:
>>> From: Jan Beulich <jbeulich@suse.com>
>>> Sent: Wednesday, June 29, 2022 1:56 PM
>>>
>>> On 29.06.2022 05:12, Penny Zheng wrote:
>>>>> From: Julien Grall <julien@xen.org>
>>>>> Sent: Monday, June 27, 2022 6:19 PM
>>>>>
>>>>> On 27/06/2022 11:03, Penny Zheng wrote:
>>>>>>> -----Original Message-----
>>>>>> put_static_pages, that is, adding pages to the reserved list, is
>>>>>> only for freeing static pages on runtime. In static page
>>>>>> initialization stage, I also use free_statimem_pages, and in which
>>>>>> stage, I think the domain has not been constructed at all. So I
>>>>>> prefer the freeing of staticmem pages is split into two parts:
>>>>>> free_staticmem_pages and put_static_pages
>>>>>
>>>>> AFAIU, all the pages would have to be allocated via
>>>>> acquire_domstatic_pages(). This call requires the domain to be
>>>>> allocated and setup for handling memory.
>>>>>
>>>>> Therefore, I think the split is unnecessary. This would also have the
>>>>> advantage to remove one loop. Admittly, this is not important when
>>>>> the order 0, but it would become a problem for larger order (you may
>>>>> have to pull the struct page_info multiple time in the cache).
>>>>>
>>>>
>>>> How about this:
>>>> I create a new func free_domstatic_page, and it will be like:
>>>> "
>>>> static void free_domstatic_page(struct domain *d, struct page_info
>>>> *page) {
>>>>      unsigned int i;
>>>>      bool need_scrub;
>>>>
>>>>      /* NB. May recursively lock from relinquish_memory(). */
>>>>      spin_lock_recursive(&d->page_alloc_lock);
>>>>
>>>>      arch_free_heap_page(d, page);
>>>>
>>>>      /*
>>>>       * Normally we expect a domain to clear pages before freeing them,
>>>>       * if it cares about the secrecy of their contents. However, after
>>>>       * a domain has died we assume responsibility for erasure. We do
>>>>       * scrub regardless if option scrub_domheap is set.
>>>>       */
>>>>      need_scrub = d->is_dying || scrub_debug || opt_scrub_domheap;
>>>>
>>>>      free_staticmem_pages(page, 1, need_scrub);
>>>>
>>>>      /* Add page on the resv_page_list *after* it has been freed. */
>>>>      put_static_page(d, page);
>>>>
>>>>      drop_dom_ref = !domain_adjust_tot_pages(d, -1);
>>>>
>>>>      spin_unlock_recursive(&d->page_alloc_lock);
>>>>
>>>>      if ( drop_dom_ref )
>>>>          put_domain(d);
>>>> }
>>>> "
>>>>
>>>> In free_domheap_pages, we just call free_domstatic_page:
>>>>
>>>> "
>>>> @@ -2430,6 +2430,9 @@ void free_domheap_pages(struct page_info *pg,
>>>> unsigned int order)
>>>>
>>>>       ASSERT_ALLOC_CONTEXT();
>>>>
>>>> +    if ( unlikely(pg->count_info & PGC_static) )
>>>> +        return free_domstatic_page(d, pg);
>>>> +
>>>>       if ( unlikely(is_xen_heap_page(pg)) )
>>>>       {
>>>>           /* NB. May recursively lock from relinquish_memory(). */ @@
>>>> -2673,6 +2676,38 @@ void free_staticmem_pages(struct page_info *pg,
>>>> unsigned long nr_mfns, "
>>>>
>>>> Then the split could be avoided and we could save the loop as much as
>>> possible.
>>>> Any suggestion?
>>>
>>> Looks reasonable at the first glance (will need to see it in proper context for a
>>> final opinion), provided e.g. Xen heap pages can never be static.
>>
>> If you don't prefer let free_domheap_pages to call free_domstatic_page, then, maybe
>> the if-array should happen at put_page
>> "
>> @@ -1622,6 +1622,8 @@ void put_page(struct page_info *page)
>>
>>       if ( unlikely((nx & PGC_count_mask) == 0) )
>>       {
>> +        if ( unlikely(page->count_info & PGC_static) )

At first glance, this would likely need to be tested against 'nx'.

>> +            free_domstatic_page(page);
>>           free_domheap_page(page);
>>       }
>>   }
>> "
>> Wdyt now?
> 
> Personally I'd prefer this variant, but we'll have to see what Julien or
> the other Arm maintainers think.

I think this is fine so long as we are not expecting more places where 
free_domheap_page() may need to be replaced with free_domstatic_page().

I can't think of any at the moment, but I would like Penny to confirm 
what Arm plans to do with static memory.

Cheers,

Patch

diff --git a/xen/common/domain.c b/xen/common/domain.c
index a3ef991bd1..a49574fa24 100644
--- a/xen/common/domain.c
+++ b/xen/common/domain.c
@@ -604,6 +604,10 @@  struct domain *domain_create(domid_t domid,
     INIT_PAGE_LIST_HEAD(&d->page_list);
     INIT_PAGE_LIST_HEAD(&d->extra_page_list);
     INIT_PAGE_LIST_HEAD(&d->xenpage_list);
+#ifdef CONFIG_STATIC_MEMORY
+    INIT_PAGE_LIST_HEAD(&d->resv_page_list);
+#endif
+
 
     spin_lock_init(&d->node_affinity_lock);
     d->node_affinity = NODE_MASK_ALL;
diff --git a/xen/common/page_alloc.c b/xen/common/page_alloc.c
index d9253df270..7d223087c0 100644
--- a/xen/common/page_alloc.c
+++ b/xen/common/page_alloc.c
@@ -2498,6 +2498,10 @@  void free_domheap_pages(struct page_info *pg, unsigned int order)
         }
 
         free_heap_pages(pg, order, scrub);
+
+        /* Add page on the resv_page_list *after* it has been freed. */
+        if ( unlikely(pg->count_info & PGC_static) )
+            put_static_pages(d, pg, order);
     }
 
     if ( drop_dom_ref )
diff --git a/xen/include/xen/mm.h b/xen/include/xen/mm.h
index 1c4ddb336b..68a647ceae 100644
--- a/xen/include/xen/mm.h
+++ b/xen/include/xen/mm.h
@@ -90,6 +90,15 @@  void free_staticmem_pages(struct page_info *pg, unsigned long nr_mfns,
                           bool need_scrub);
 int acquire_domstatic_pages(struct domain *d, mfn_t smfn, unsigned int nr_mfns,
                             unsigned int memflags);
+#ifdef CONFIG_STATIC_MEMORY
+#define put_static_pages(d, page, order) ({                 \
+    unsigned int i;                                         \
+    for ( i = 0; i < (1 << (order)); i++ )                  \
+        page_list_add_tail((page) + i, &(d)->resv_page_list); \
+})
+#else
+#define put_static_pages(d, page, order) ((void)(d), (void)(page), (void)(order))
+#endif
 
 /* Map machine page range in Xen virtual address space. */
 int map_pages_to_xen(
diff --git a/xen/include/xen/sched.h b/xen/include/xen/sched.h
index 5191853c18..bd2782b3c5 100644
--- a/xen/include/xen/sched.h
+++ b/xen/include/xen/sched.h
@@ -381,6 +381,9 @@  struct domain
     struct page_list_head page_list;  /* linked list */
     struct page_list_head extra_page_list; /* linked list (size extra_pages) */
     struct page_list_head xenpage_list; /* linked list (size xenheap_pages) */
+#ifdef CONFIG_STATIC_MEMORY
+    struct page_list_head resv_page_list; /* linked list */
+#endif
 
     /*
      * This field should only be directly accessed by domain_adjust_tot_pages()