
[04/10] xen/arm: static memory initialization

Message ID 20210518052113.725808-5-penny.zheng@arm.com (mailing list archive)
State Superseded
Series Domain on Static Allocation

Commit Message

Penny Zheng May 18, 2021, 5:21 a.m. UTC
This patch introduces static memory initialization during system RAM boot-up.

The new function init_staticmem_pages is the equivalent of init_heap_pages, responsible
for static memory initialization.

The helper function free_staticmem_pages is the equivalent of free_heap_pages; it frees
nr_pfns pages of static memory.
For each page, it performs the following steps:
1. change the page state from in-use (also the initialization state) to free, and
set PGC_reserved.
2. set its owner to NULL and make sure the page is no longer a guest frame
3. follow the same cache coherency policy as free_heap_pages
4. scrub the page if needed
Signed-off-by: Penny Zheng <penny.zheng@arm.com>
---
 xen/arch/arm/setup.c    |  2 ++
 xen/common/page_alloc.c | 70 +++++++++++++++++++++++++++++++++++++++++
 xen/include/xen/mm.h    |  3 ++
 3 files changed, 75 insertions(+)

Comments

Jan Beulich May 18, 2021, 7:15 a.m. UTC | #1
On 18.05.2021 07:21, Penny Zheng wrote:
> This patch introduces static memory initialization, during system RAM boot up.
> 
> New func init_staticmem_pages is the equivalent of init_heap_pages, responsible
> for static memory initialization.
> 
> Helper func free_staticmem_pages is the equivalent of free_heap_pages, to free
> nr_pfns pages of static memory.
> For each page, it includes the following steps:
> 1. change page state from in-use(also initialization state) to free state and
> grant PGC_reserved.
> 2. set its owner NULL and make sure this page is not a guest frame any more

But isn't the goal (as per the previous patch) to associate such pages
with a _specific_ domain?

> --- a/xen/common/page_alloc.c
> +++ b/xen/common/page_alloc.c
> @@ -150,6 +150,9 @@
>  #define p2m_pod_offline_or_broken_hit(pg) 0
>  #define p2m_pod_offline_or_broken_replace(pg) BUG_ON(pg != NULL)
>  #endif
> +#ifdef CONFIG_ARM_64
> +#include <asm/setup.h>
> +#endif

Whatever it is that's needed from this header suggests the code won't
build for other architectures. I think init_staticmem_pages() in its
current shape shouldn't live in this (common) file.

> @@ -1512,6 +1515,49 @@ static void free_heap_pages(
>      spin_unlock(&heap_lock);
>  }
>  
> +/* Equivalent of free_heap_pages to free nr_pfns pages of static memory. */
> +static void free_staticmem_pages(struct page_info *pg, unsigned long nr_pfns,
> +                                 bool need_scrub)

Right now this function gets called only from an __init one. Unless
it is intended to gain further callers, it should be marked __init
itself then. Otherwise it should be made sure that other
architectures don't include this (dead there) code.

> +{
> +    mfn_t mfn = page_to_mfn(pg);
> +    int i;

This type doesn't fit nr_pfns'es.

Jan
Penny Zheng May 18, 2021, 9:51 a.m. UTC | #2
Hi Jan

> -----Original Message-----
> From: Jan Beulich <jbeulich@suse.com>
> Sent: Tuesday, May 18, 2021 3:16 PM
> To: Penny Zheng <Penny.Zheng@arm.com>
> Cc: Bertrand Marquis <Bertrand.Marquis@arm.com>; Wei Chen
> <Wei.Chen@arm.com>; nd <nd@arm.com>; xen-devel@lists.xenproject.org;
> sstabellini@kernel.org; julien@xen.org
> Subject: Re: [PATCH 04/10] xen/arm: static memory initialization
> 
> On 18.05.2021 07:21, Penny Zheng wrote:
> > This patch introduces static memory initialization, during system RAM boot
> up.
> >
> > New func init_staticmem_pages is the equivalent of init_heap_pages,
> > responsible for static memory initialization.
> >
> > Helper func free_staticmem_pages is the equivalent of free_heap_pages,
> > to free nr_pfns pages of static memory.
> > For each page, it includes the following steps:
> > 1. change page state from in-use(also initialization state) to free
> > state and grant PGC_reserved.
> > 2. set its owner NULL and make sure this page is not a guest frame any
> > more
> 
> But isn't the goal (as per the previous patch) to associate such pages with a
> _specific_ domain?
> 

free_staticmem_pages is like free_heap_pages: it is not used only for initialization.
Freeing used pages back to an unused state is also covered.
Here, setting the owner to NULL means clearing the owner stored in the in-use field.
Still, I need to add more explanation here.

> > --- a/xen/common/page_alloc.c
> > +++ b/xen/common/page_alloc.c
> > @@ -150,6 +150,9 @@
> >  #define p2m_pod_offline_or_broken_hit(pg) 0  #define
> > p2m_pod_offline_or_broken_replace(pg) BUG_ON(pg != NULL)  #endif
> > +#ifdef CONFIG_ARM_64
> > +#include <asm/setup.h>
> > +#endif
> 
> Whatever it is that's needed from this header suggests the code won't build
> for other architectures. I think init_staticmem_pages() in its current shape
> shouldn't live in this (common) file.
> 

Yes, I should put them all under one specific config option, maybe something like
CONFIG_STATIC_MEM, and make this config arm-specific.
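A minimal sketch of what such a guard could look like; the option name CONFIG_STATIC_MEM and its placement are assumptions at this point, not something already in the series:

```
config STATIC_MEM
	bool "Static memory allocation support"
	depends on ARM_64
```

Common code would then wrap init_staticmem_pages()/free_staticmem_pages() in #ifdef CONFIG_STATIC_MEM rather than #ifdef CONFIG_ARM_64, so other architectures simply never compile it.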

> > @@ -1512,6 +1515,49 @@ static void free_heap_pages(
> >      spin_unlock(&heap_lock);
> >  }
> >
> > +/* Equivalent of free_heap_pages to free nr_pfns pages of static
> > +memory. */ static void free_staticmem_pages(struct page_info *pg,
> unsigned long nr_pfns,
> > +                                 bool need_scrub)
> 
> Right now this function gets called only from an __init one. Unless it is
> intended to gain further callers, it should be marked __init itself then.
> Otherwise it should be made sure that other architectures don't include this
> (dead there) code.
> 

Sure, I'll add __init. Thx.

> > +{
> > +    mfn_t mfn = page_to_mfn(pg);
> > +    int i;
> 
> This type doesn't fit nr_pfns'es.
> 

Sure, nr_mfns is better; the same applies in many other places too.

> Jan
Julien Grall May 18, 2021, 10 a.m. UTC | #3
Hi Penny,

On 18/05/2021 06:21, Penny Zheng wrote:
> This patch introduces static memory initialization, during system RAM boot up.
> 
> New func init_staticmem_pages is the equivalent of init_heap_pages, responsible
> for static memory initialization.
> 
> Helper func free_staticmem_pages is the equivalent of free_heap_pages, to free
> nr_pfns pages of static memory.
> For each page, it includes the following steps:
> 1. change page state from in-use(also initialization state) to free state and
> grant PGC_reserved.
> 2. set its owner NULL and make sure this page is not a guest frame any more
> 3. follow the same cache coherency policy in free_heap_pages
> 4. scrub the page in need
> 
> Signed-off-by: Penny Zheng <penny.zheng@arm.com>
> ---
>   xen/arch/arm/setup.c    |  2 ++
>   xen/common/page_alloc.c | 70 +++++++++++++++++++++++++++++++++++++++++
>   xen/include/xen/mm.h    |  3 ++
>   3 files changed, 75 insertions(+)
> 
> diff --git a/xen/arch/arm/setup.c b/xen/arch/arm/setup.c
> index 444dbbd676..f80162c478 100644
> --- a/xen/arch/arm/setup.c
> +++ b/xen/arch/arm/setup.c
> @@ -818,6 +818,8 @@ static void __init setup_mm(void)
>   
>       setup_frametable_mappings(ram_start, ram_end);
>       max_page = PFN_DOWN(ram_end);
> +
> +    init_staticmem_pages();
>   }
>   #endif
>   
> diff --git a/xen/common/page_alloc.c b/xen/common/page_alloc.c
> index ace6333c18..58b53c6ac2 100644
> --- a/xen/common/page_alloc.c
> +++ b/xen/common/page_alloc.c
> @@ -150,6 +150,9 @@
>   #define p2m_pod_offline_or_broken_hit(pg) 0
>   #define p2m_pod_offline_or_broken_replace(pg) BUG_ON(pg != NULL)
>   #endif
> +#ifdef CONFIG_ARM_64
> +#include <asm/setup.h>
> +#endif
>   
>   /*
>    * Comma-separated list of hexadecimal page numbers containing bad bytes.
> @@ -1512,6 +1515,49 @@ static void free_heap_pages(
>       spin_unlock(&heap_lock);
>   }
>   
> +/* Equivalent of free_heap_pages to free nr_pfns pages of static memory. */
> +static void free_staticmem_pages(struct page_info *pg, unsigned long nr_pfns,

This function is dealing with MFNs, so the second parameter should be 
called nr_mfns.

> +                                 bool need_scrub)
> +{
> +    mfn_t mfn = page_to_mfn(pg);
> +    int i;
> +
> +    for ( i = 0; i < nr_pfns; i++ )
> +    {
> +        switch ( pg[i].count_info & PGC_state )
> +        {
> +        case PGC_state_inuse:
> +            BUG_ON(pg[i].count_info & PGC_broken);
> +            /* Make it free and reserved. */
> +            pg[i].count_info = PGC_state_free | PGC_reserved;
> +            break;
> +
> +        default:
> +            printk(XENLOG_ERR
> +                   "Page state shall be only in PGC_state_inuse. "
> +                   "pg[%u] MFN %"PRI_mfn" count_info=%#lx tlbflush_timestamp=%#x.\n",
> +                   i, mfn_x(mfn) + i,
> +                   pg[i].count_info,
> +                   pg[i].tlbflush_timestamp);
> +            BUG();
> +        }
> +
> +        /*
> +         * Follow the same cache coherence scheme in free_heap_pages.
> +         * If a page has no owner it will need no safety TLB flush.
> +         */
> +        pg[i].u.free.need_tlbflush = (page_get_owner(&pg[i]) != NULL);
> +        if ( pg[i].u.free.need_tlbflush )
> +            page_set_tlbflush_timestamp(&pg[i]);
> +
> +        /* This page is not a guest frame any more. */
> +        page_set_owner(&pg[i], NULL);
> +        set_gpfn_from_mfn(mfn_x(mfn) + i, INVALID_M2P_ENTRY);

The code looks quite similar to free_heap_pages(). Could we possibly 
create a helper which can be called from both?

> +
> +        if ( need_scrub )
> +            scrub_one_page(&pg[i]);

So the scrubbing will be synchronous. Is that what we want?

You also seem to be missing the call to flush_page_to_ram().

> +    }
> +}
>   
>   /*
>    * Following rules applied for page offline:
> @@ -1828,6 +1874,30 @@ static void init_heap_pages(
>       }
>   }
>   
> +/* Equivalent of init_heap_pages to do static memory initialization */
> +void __init init_staticmem_pages(void)
> +{
> +    int bank;
> +
> +    /*
> +     * TODO: Considering NUMA-support scenario.
> +     */
> +    for ( bank = 0 ; bank < bootinfo.static_mem.nr_banks; bank++ )

bootinfo is arm specific, so this code should live in arch/arm rather 
than common/.

> +    {
> +        paddr_t bank_start = bootinfo.static_mem.bank[bank].start;
> +        paddr_t bank_size = bootinfo.static_mem.bank[bank].size;
> +        paddr_t bank_end = bank_start + bank_size;
> +
> +        bank_start = round_pgup(bank_start);
> +        bank_end = round_pgdown(bank_end);
> +        if ( bank_end <= bank_start )
> +            return;
> +
> +        free_staticmem_pages(maddr_to_page(bank_start),
> +                            (bank_end - bank_start) >> PAGE_SHIFT, false);
> +    }
> +}
> +
>   static unsigned long avail_heap_pages(
>       unsigned int zone_lo, unsigned int zone_hi, unsigned int node)
>   {
> diff --git a/xen/include/xen/mm.h b/xen/include/xen/mm.h
> index 667f9dac83..8b1a2207b2 100644
> --- a/xen/include/xen/mm.h
> +++ b/xen/include/xen/mm.h
> @@ -85,6 +85,9 @@ bool scrub_free_pages(void);
>   } while ( false )
>   #define FREE_XENHEAP_PAGE(p) FREE_XENHEAP_PAGES(p, 0)
>   
> +/* Static Memory */
> +void init_staticmem_pages(void);
> +
>   /* Map machine page range in Xen virtual address space. */
>   int map_pages_to_xen(
>       unsigned long virt,
> 

Cheers,
Julien Grall May 18, 2021, 10:01 a.m. UTC | #4
On 18/05/2021 11:00, Julien Grall wrote:
> Hi Penny,
> 
> On 18/05/2021 06:21, Penny Zheng wrote:
>> This patch introduces static memory initialization, during system RAM 
>> boot up.
>>
>> New func init_staticmem_pages is the equivalent of init_heap_pages, 
>> responsible
>> for static memory initialization.
>>
>> Helper func free_staticmem_pages is the equivalent of free_heap_pages, 
>> to free
>> nr_pfns pages of static memory.
>> For each page, it includes the following steps:
>> 1. change page state from in-use(also initialization state) to free 
>> state and
>> grant PGC_reserved.
>> 2. set its owner NULL and make sure this page is not a guest frame any 
>> more
>> 3. follow the same cache coherency policy in free_heap_pages
>> 4. scrub the page in need
>>
>> Signed-off-by: Penny Zheng <penny.zheng@arm.com>
>> ---
>>   xen/arch/arm/setup.c    |  2 ++
>>   xen/common/page_alloc.c | 70 +++++++++++++++++++++++++++++++++++++++++
>>   xen/include/xen/mm.h    |  3 ++
>>   3 files changed, 75 insertions(+)
>>
>> diff --git a/xen/arch/arm/setup.c b/xen/arch/arm/setup.c
>> index 444dbbd676..f80162c478 100644
>> --- a/xen/arch/arm/setup.c
>> +++ b/xen/arch/arm/setup.c
>> @@ -818,6 +818,8 @@ static void __init setup_mm(void)
>>       setup_frametable_mappings(ram_start, ram_end);
>>       max_page = PFN_DOWN(ram_end);
>> +
>> +    init_staticmem_pages();
>>   }
>>   #endif
>> diff --git a/xen/common/page_alloc.c b/xen/common/page_alloc.c
>> index ace6333c18..58b53c6ac2 100644
>> --- a/xen/common/page_alloc.c
>> +++ b/xen/common/page_alloc.c
>> @@ -150,6 +150,9 @@
>>   #define p2m_pod_offline_or_broken_hit(pg) 0
>>   #define p2m_pod_offline_or_broken_replace(pg) BUG_ON(pg != NULL)
>>   #endif
>> +#ifdef CONFIG_ARM_64
>> +#include <asm/setup.h>
>> +#endif
>>   /*
>>    * Comma-separated list of hexadecimal page numbers containing bad 
>> bytes.
>> @@ -1512,6 +1515,49 @@ static void free_heap_pages(
>>       spin_unlock(&heap_lock);
>>   }
>> +/* Equivalent of free_heap_pages to free nr_pfns pages of static 
>> memory. */
>> +static void free_staticmem_pages(struct page_info *pg, unsigned long 
>> nr_pfns,
> 
> This function is dealing with MFNs, so the second parameter should be 
> called nr_mfns.
> 
>> +                                 bool need_scrub)
>> +{
>> +    mfn_t mfn = page_to_mfn(pg);
>> +    int i;
>> +
>> +    for ( i = 0; i < nr_pfns; i++ )
>> +    {
>> +        switch ( pg[i].count_info & PGC_state )
>> +        {
>> +        case PGC_state_inuse:
>> +            BUG_ON(pg[i].count_info & PGC_broken);
>> +            /* Make it free and reserved. */
>> +            pg[i].count_info = PGC_state_free | PGC_reserved;
>> +            break;
>> +
>> +        default:
>> +            printk(XENLOG_ERR
>> +                   "Page state shall be only in PGC_state_inuse. "
>> +                   "pg[%u] MFN %"PRI_mfn" count_info=%#lx 
>> tlbflush_timestamp=%#x.\n",
>> +                   i, mfn_x(mfn) + i,
>> +                   pg[i].count_info,
>> +                   pg[i].tlbflush_timestamp);
>> +            BUG();
>> +        }
>> +
>> +        /*
>> +         * Follow the same cache coherence scheme in free_heap_pages.
>> +         * If a page has no owner it will need no safety TLB flush.
>> +         */
>> +        pg[i].u.free.need_tlbflush = (page_get_owner(&pg[i]) != NULL);
>> +        if ( pg[i].u.free.need_tlbflush )
>> +            page_set_tlbflush_timestamp(&pg[i]);
>> +
>> +        /* This page is not a guest frame any more. */
>> +        page_set_owner(&pg[i], NULL);
>> +        set_gpfn_from_mfn(mfn_x(mfn) + i, INVALID_M2P_ENTRY);
> 
> The code looks quite similar to free_heap_pages(). Could we possibly 
> create an helper which can be called from both?
> 
>> +
>> +        if ( need_scrub )
>> +            scrub_one_page(&pg[i]);
> 
> So the scrubbing will be synchronous. Is it what we want?
> 
> You also seem to miss the flush the call to flush_page_to_ram().

Hmmmm... Sorry I looked at the wrong function. This is not necessary for 
the free part.

> 
>> +    }
>> +}
>>   /*
>>    * Following rules applied for page offline:
>> @@ -1828,6 +1874,30 @@ static void init_heap_pages(
>>       }
>>   }
>> +/* Equivalent of init_heap_pages to do static memory initialization */
>> +void __init init_staticmem_pages(void)
>> +{
>> +    int bank;
>> +
>> +    /*
>> +     * TODO: Considering NUMA-support scenario.
>> +     */
>> +    for ( bank = 0 ; bank < bootinfo.static_mem.nr_banks; bank++ )
> 
> bootinfo is arm specific, so this code should live in arch/arm rather 
> than common/.
> 
>> +    {
>> +        paddr_t bank_start = bootinfo.static_mem.bank[bank].start;
>> +        paddr_t bank_size = bootinfo.static_mem.bank[bank].size;
>> +        paddr_t bank_end = bank_start + bank_size;
>> +
>> +        bank_start = round_pgup(bank_start);
>> +        bank_end = round_pgdown(bank_end);
>> +        if ( bank_end <= bank_start )
>> +            return;
>> +
>> +        free_staticmem_pages(maddr_to_page(bank_start),
>> +                            (bank_end - bank_start) >> PAGE_SHIFT, 
>> false);
>> +    }
>> +}
>> +
>>   static unsigned long avail_heap_pages(
>>       unsigned int zone_lo, unsigned int zone_hi, unsigned int node)
>>   {
>> diff --git a/xen/include/xen/mm.h b/xen/include/xen/mm.h
>> index 667f9dac83..8b1a2207b2 100644
>> --- a/xen/include/xen/mm.h
>> +++ b/xen/include/xen/mm.h
>> @@ -85,6 +85,9 @@ bool scrub_free_pages(void);
>>   } while ( false )
>>   #define FREE_XENHEAP_PAGE(p) FREE_XENHEAP_PAGES(p, 0)
>> +/* Static Memory */
>> +void init_staticmem_pages(void);
>> +
>>   /* Map machine page range in Xen virtual address space. */
>>   int map_pages_to_xen(
>>       unsigned long virt,
>>
> 
> Cheers,
>
Jan Beulich May 18, 2021, 10:43 a.m. UTC | #5
On 18.05.2021 11:51, Penny Zheng wrote:
>> From: Jan Beulich <jbeulich@suse.com>
>> Sent: Tuesday, May 18, 2021 3:16 PM
>>
>> On 18.05.2021 07:21, Penny Zheng wrote:
>>> This patch introduces static memory initialization, during system RAM boot
>> up.
>>>
>>> New func init_staticmem_pages is the equivalent of init_heap_pages,
>>> responsible for static memory initialization.
>>>
>>> Helper func free_staticmem_pages is the equivalent of free_heap_pages,
>>> to free nr_pfns pages of static memory.
>>> For each page, it includes the following steps:
>>> 1. change page state from in-use(also initialization state) to free
>>> state and grant PGC_reserved.
>>> 2. set its owner NULL and make sure this page is not a guest frame any
>>> more
>>
>> But isn't the goal (as per the previous patch) to associate such pages with a
>> _specific_ domain?
>>
> 
> Free_staticmem_pages are alike free_heap_pages, are not used only for initialization.
> Freeing used pages to unused are also included.
> Here, setting its owner NULL is to set owner in used field NULL.

I'm afraid I still don't understand.

> Still, I need to add more explanation here.

Yes please.

>>> @@ -1512,6 +1515,49 @@ static void free_heap_pages(
>>>      spin_unlock(&heap_lock);
>>>  }
>>>
>>> +/* Equivalent of free_heap_pages to free nr_pfns pages of static
>>> +memory. */ static void free_staticmem_pages(struct page_info *pg,
>> unsigned long nr_pfns,
>>> +                                 bool need_scrub)
>>
>> Right now this function gets called only from an __init one. Unless it is
>> intended to gain further callers, it should be marked __init itself then.
>> Otherwise it should be made sure that other architectures don't include this
>> (dead there) code.
>>
> 
> Sure, I'll add __init. Thx.

Didn't I see a 2nd call to the function later in the series? That
one doesn't permit the function to be __init, iirc.

Jan
Penny Zheng May 19, 2021, 5:02 a.m. UTC | #6
Hi Julien

> -----Original Message-----
> From: Julien Grall <julien@xen.org>
> Sent: Tuesday, May 18, 2021 6:01 PM
> To: Penny Zheng <Penny.Zheng@arm.com>; xen-devel@lists.xenproject.org;
> sstabellini@kernel.org
> Cc: Bertrand Marquis <Bertrand.Marquis@arm.com>; Wei Chen
> <Wei.Chen@arm.com>; nd <nd@arm.com>
> Subject: Re: [PATCH 04/10] xen/arm: static memory initialization
> 
> Hi Penny,
> 
> On 18/05/2021 06:21, Penny Zheng wrote:
> > This patch introduces static memory initialization, during system RAM boot
> up.
> >
> > New func init_staticmem_pages is the equivalent of init_heap_pages,
> > responsible for static memory initialization.
> >
> > Helper func free_staticmem_pages is the equivalent of free_heap_pages,
> > to free nr_pfns pages of static memory.
> > For each page, it includes the following steps:
> > 1. change page state from in-use(also initialization state) to free
> > state and grant PGC_reserved.
> > 2. set its owner NULL and make sure this page is not a guest frame any
> > more 3. follow the same cache coherency policy in free_heap_pages 4.
> > scrub the page in need
> >
> > Signed-off-by: Penny Zheng <penny.zheng@arm.com>
> > ---
> >   xen/arch/arm/setup.c    |  2 ++
> >   xen/common/page_alloc.c | 70
> +++++++++++++++++++++++++++++++++++++++++
> >   xen/include/xen/mm.h    |  3 ++
> >   3 files changed, 75 insertions(+)
> >
> > diff --git a/xen/arch/arm/setup.c b/xen/arch/arm/setup.c index
> > 444dbbd676..f80162c478 100644
> > --- a/xen/arch/arm/setup.c
> > +++ b/xen/arch/arm/setup.c
> > @@ -818,6 +818,8 @@ static void __init setup_mm(void)
> >
> >       setup_frametable_mappings(ram_start, ram_end);
> >       max_page = PFN_DOWN(ram_end);
> > +
> > +    init_staticmem_pages();
> >   }
> >   #endif
> >
> > diff --git a/xen/common/page_alloc.c b/xen/common/page_alloc.c index
> > ace6333c18..58b53c6ac2 100644
> > --- a/xen/common/page_alloc.c
> > +++ b/xen/common/page_alloc.c
> > @@ -150,6 +150,9 @@
> >   #define p2m_pod_offline_or_broken_hit(pg) 0
> >   #define p2m_pod_offline_or_broken_replace(pg) BUG_ON(pg != NULL)
> >   #endif
> > +#ifdef CONFIG_ARM_64
> > +#include <asm/setup.h>
> > +#endif
> >
> >   /*
> >    * Comma-separated list of hexadecimal page numbers containing bad
> bytes.
> > @@ -1512,6 +1515,49 @@ static void free_heap_pages(
> >       spin_unlock(&heap_lock);
> >   }
> >
> > +/* Equivalent of free_heap_pages to free nr_pfns pages of static
> > +memory. */ static void free_staticmem_pages(struct page_info *pg,
> > +unsigned long nr_pfns,
> 
> This function is dealing with MFNs, so the second parameter should be called
> nr_mfns.
> 

Agreed, thx.

> > +                                 bool need_scrub) {
> > +    mfn_t mfn = page_to_mfn(pg);
> > +    int i;
> > +
> > +    for ( i = 0; i < nr_pfns; i++ )
> > +    {
> > +        switch ( pg[i].count_info & PGC_state )
> > +        {
> > +        case PGC_state_inuse:
> > +            BUG_ON(pg[i].count_info & PGC_broken);
> > +            /* Make it free and reserved. */
> > +            pg[i].count_info = PGC_state_free | PGC_reserved;
> > +            break;
> > +
> > +        default:
> > +            printk(XENLOG_ERR
> > +                   "Page state shall be only in PGC_state_inuse. "
> > +                   "pg[%u] MFN %"PRI_mfn" count_info=%#lx
> tlbflush_timestamp=%#x.\n",
> > +                   i, mfn_x(mfn) + i,
> > +                   pg[i].count_info,
> > +                   pg[i].tlbflush_timestamp);
> > +            BUG();
> > +        }
> > +
> > +        /*
> > +         * Follow the same cache coherence scheme in free_heap_pages.
> > +         * If a page has no owner it will need no safety TLB flush.
> > +         */
> > +        pg[i].u.free.need_tlbflush = (page_get_owner(&pg[i]) != NULL);
> > +        if ( pg[i].u.free.need_tlbflush )
> > +            page_set_tlbflush_timestamp(&pg[i]);
> > +
> > +        /* This page is not a guest frame any more. */
> > +        page_set_owner(&pg[i], NULL);
> > +        set_gpfn_from_mfn(mfn_x(mfn) + i, INVALID_M2P_ENTRY);
> 
> The code looks quite similar to free_heap_pages(). Could we possibly create
> an helper which can be called from both?
> 

Ok, I will extract the common code here and there.
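As a rough, self-contained illustration of what such a shared helper could look like: the names below (mark_page_free, free_one_staticmem_page) and the simplified struct fields are hypothetical stand-ins for Xen's real struct page_info, not the actual Xen API.

```c
/*
 * Self-contained sketch (NOT Xen code): a hypothetical helper extracting the
 * per-page bookkeeping that free_heap_pages() and free_staticmem_pages()
 * would share. Types and field names are simplified stand-ins for Xen's
 * struct page_info.
 */
#include <stdbool.h>
#include <stddef.h>

#define PGC_state_inuse  0x1UL
#define PGC_state_free   0x2UL
#define PGC_reserved     0x4UL

struct domain { int id; };           /* stand-in for Xen's struct domain */

struct page_info {
    unsigned long count_info;
    struct domain *owner;            /* simplification of v.inuse.domain */
    bool need_tlbflush;              /* simplification of u.free.need_tlbflush */
};

/* Common part: TLB-flush bookkeeping and dropping the guest-frame owner. */
static void mark_page_free(struct page_info *pg)
{
    /* If the page still had an owner, a safety TLB flush may be needed. */
    pg->need_tlbflush = (pg->owner != NULL);
    /* This page is not a guest frame any more. */
    pg->owner = NULL;
}

/* Static-memory variant: move to the free state but keep PGC_reserved set. */
static void free_one_staticmem_page(struct page_info *pg)
{
    pg->count_info = PGC_state_free | PGC_reserved;
    mark_page_free(pg);
}
```

The heap variant would call the same mark_page_free() and then continue with its buddy-merging logic.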
 
> > +
> > +        if ( need_scrub )
> > +            scrub_one_page(&pg[i]);
> 
> So the scrubbing will be synchronous. Is it what we want?
> 
> You also seem to miss the flush the call to flush_page_to_ram().
> 

Yeah, right now it is synchronous.
I guess it is not an optimal choice, only a workable one for now.
I'm planning to borrow some asynchronous scrubbing ideas from the heap allocator in the future.

Yes! If we are doing synchronous scrubbing, we need to call flush_page_to_ram(). Thx.
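For illustration, a minimal stand-alone mock of the synchronous path under discussion. This is not Xen's implementation: in Xen the real scrub_one_page() plus an Arm cache clean via flush_page_to_ram() would be used, and 0xc2 is assumed here as the debug scrub pattern byte.

```c
/* Stand-alone mock of synchronous page scrubbing (not Xen's implementation). */
#include <string.h>

#define PAGE_SIZE_SKETCH 4096
#define SCRUB_BYTE 0xc2   /* assumed debug-build scrub pattern byte */

static void scrub_page_sketch(unsigned char *page)
{
    /* Overwrite the page contents synchronously, as scrub_one_page() would. */
    memset(page, SCRUB_BYTE, PAGE_SIZE_SKETCH);
    /*
     * On Arm, a flush_page_to_ram()-style cache clean would follow here so
     * the scrubbed bytes actually reach RAM before the page is reused.
     */
}
```

The cost of doing this inline per page is what makes the synchronous choice a workable but non-optimal one.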

> > +    }
> > +}
> >
> >   /*
> >    * Following rules applied for page offline:
> > @@ -1828,6 +1874,30 @@ static void init_heap_pages(
> >       }
> >   }
> >
> > +/* Equivalent of init_heap_pages to do static memory initialization
> > +*/ void __init init_staticmem_pages(void) {
> > +    int bank;
> > +
> > +    /*
> > +     * TODO: Considering NUMA-support scenario.
> > +     */
> > +    for ( bank = 0 ; bank < bootinfo.static_mem.nr_banks; bank++ )
> 
> bootinfo is arm specific, so this code should live in arch/arm rather than
> common/.
> 

Yes, I'm considering creating a new CONFIG_STATIC_MEM option to cover all
static-memory-related functions.

> > +    {
> > +        paddr_t bank_start = bootinfo.static_mem.bank[bank].start;
> > +        paddr_t bank_size = bootinfo.static_mem.bank[bank].size;
> > +        paddr_t bank_end = bank_start + bank_size;
> > +
> > +        bank_start = round_pgup(bank_start);
> > +        bank_end = round_pgdown(bank_end);
> > +        if ( bank_end <= bank_start )
> > +            return;
> > +
> > +        free_staticmem_pages(maddr_to_page(bank_start),
> > +                            (bank_end - bank_start) >> PAGE_SHIFT, false);
> > +    }
> > +}
> > +
> >   static unsigned long avail_heap_pages(
> >       unsigned int zone_lo, unsigned int zone_hi, unsigned int node)
> >   {
> > diff --git a/xen/include/xen/mm.h b/xen/include/xen/mm.h index
> > 667f9dac83..8b1a2207b2 100644
> > --- a/xen/include/xen/mm.h
> > +++ b/xen/include/xen/mm.h
> > @@ -85,6 +85,9 @@ bool scrub_free_pages(void);
> >   } while ( false )
> >   #define FREE_XENHEAP_PAGE(p) FREE_XENHEAP_PAGES(p, 0)
> >
> > +/* Static Memory */
> > +void init_staticmem_pages(void);
> > +
> >   /* Map machine page range in Xen virtual address space. */
> >   int map_pages_to_xen(
> >       unsigned long virt,
> >
> 
> Cheers,
> 
> --
> Julien Grall

Cheers

Penny
Penny Zheng May 20, 2021, 9:04 a.m. UTC | #7
Hi Jan

> -----Original Message-----
> From: Jan Beulich <jbeulich@suse.com>
> Sent: Tuesday, May 18, 2021 6:43 PM
> To: Penny Zheng <Penny.Zheng@arm.com>
> Cc: Bertrand Marquis <Bertrand.Marquis@arm.com>; Wei Chen
> <Wei.Chen@arm.com>; nd <nd@arm.com>; xen-devel@lists.xenproject.org;
> sstabellini@kernel.org; julien@xen.org
> Subject: Re: [PATCH 04/10] xen/arm: static memory initialization
> 
> On 18.05.2021 11:51, Penny Zheng wrote:
> >> From: Jan Beulich <jbeulich@suse.com>
> >> Sent: Tuesday, May 18, 2021 3:16 PM
> >>
> >> On 18.05.2021 07:21, Penny Zheng wrote:
> >>> This patch introduces static memory initialization, during system
> >>> RAM boot
> >> up.
> >>>
> >>> New func init_staticmem_pages is the equivalent of init_heap_pages,
> >>> responsible for static memory initialization.
> >>>
> >>> Helper func free_staticmem_pages is the equivalent of
> >>> free_heap_pages, to free nr_pfns pages of static memory.
> >>> For each page, it includes the following steps:
> >>> 1. change page state from in-use(also initialization state) to free
> >>> state and grant PGC_reserved.
> >>> 2. set its owner NULL and make sure this page is not a guest frame
> >>> any more
> >>
> >> But isn't the goal (as per the previous patch) to associate such
> >> pages with a _specific_ domain?
> >>
> >
> > Free_staticmem_pages are alike free_heap_pages, are not used only for
> initialization.
> > Freeing used pages to unused are also included.
> > Here, setting its owner NULL is to set owner in used field NULL.
> 
> I'm afraid I still don't understand.
> 

When initializing the heap, Xen uses free_heap_pages to do the initialization.
And when a normal domain gets destroyed/rebooted, Xen uses free_domheap_pages,
which calls free_heap_pages to free the pages.

So here, since free_staticmem_pages is the equivalent of free_heap_pages for static
memory, I'm considering both scenarios as well. If a domain gets destroyed/rebooted,
the page state is in-use (PGC_state_inuse), and page_info.v.inuse.domain holds the
domain owner.
When freeing, we then need to switch the page state to free (PGC_state_free) and set its in-use owner to NULL.

I'll clarify this in the commit message.

> > Still, I need to add more explanation here.
> 
> Yes please.
> 
> >>> @@ -1512,6 +1515,49 @@ static void free_heap_pages(
> >>>      spin_unlock(&heap_lock);
> >>>  }
> >>>
> >>> +/* Equivalent of free_heap_pages to free nr_pfns pages of static
> >>> +memory. */ static void free_staticmem_pages(struct page_info *pg,
> >> unsigned long nr_pfns,
> >>> +                                 bool need_scrub)
> >>
> >> Right now this function gets called only from an __init one. Unless
> >> it is intended to gain further callers, it should be marked __init itself then.
> >> Otherwise it should be made sure that other architectures don't
> >> include this (dead there) code.
> >>
> >
> > Sure, I'll add __init. Thx.
> 
> Didn't I see a 2nd call to the function later in the series? That one doesn't
> permit the function to be __init, iirc.
> 
> Jan

Cheers

Penny
Jan Beulich May 20, 2021, 9:32 a.m. UTC | #8
On 20.05.2021 11:04, Penny Zheng wrote:
> Hi Jan
> 
>> -----Original Message-----
>> From: Jan Beulich <jbeulich@suse.com>
>> Sent: Tuesday, May 18, 2021 6:43 PM
>> To: Penny Zheng <Penny.Zheng@arm.com>
>> Cc: Bertrand Marquis <Bertrand.Marquis@arm.com>; Wei Chen
>> <Wei.Chen@arm.com>; nd <nd@arm.com>; xen-devel@lists.xenproject.org;
>> sstabellini@kernel.org; julien@xen.org
>> Subject: Re: [PATCH 04/10] xen/arm: static memory initialization
>>
>> On 18.05.2021 11:51, Penny Zheng wrote:
>>>> From: Jan Beulich <jbeulich@suse.com>
>>>> Sent: Tuesday, May 18, 2021 3:16 PM
>>>>
>>>> On 18.05.2021 07:21, Penny Zheng wrote:
>>>>> This patch introduces static memory initialization, during system
>>>>> RAM boot
>>>> up.
>>>>>
>>>>> New func init_staticmem_pages is the equivalent of init_heap_pages,
>>>>> responsible for static memory initialization.
>>>>>
>>>>> Helper func free_staticmem_pages is the equivalent of
>>>>> free_heap_pages, to free nr_pfns pages of static memory.
>>>>> For each page, it includes the following steps:
>>>>> 1. change page state from in-use(also initialization state) to free
>>>>> state and grant PGC_reserved.
>>>>> 2. set its owner NULL and make sure this page is not a guest frame
>>>>> any more
>>>>
>>>> But isn't the goal (as per the previous patch) to associate such
>>>> pages with a _specific_ domain?
>>>>
>>>
>>> Free_staticmem_pages are alike free_heap_pages, are not used only for
>> initialization.
>>> Freeing used pages to unused are also included.
>>> Here, setting its owner NULL is to set owner in used field NULL.
>>
>> I'm afraid I still don't understand.
>>
> 
> When initializing heap, xen is using free_heap_pages to do the initialization.
> And also when normal domain get destroyed/rebooted, xen is using free_domheap_pages,
> which calls free_heap_pages to free the pages.
> 
> So here, since free_staticmem_pages is the equivalent of the free_heap_pages for static
> memory, I'm also considering both two scenarios here. And if it is domain get destroyed/rebooted,
> Page state is in inuse state(PGC_inuse), and the page_info.v.inuse.domain is holding the
> domain owner.
> When freeing then, we need to switch the page state to free state(PGC_free) and set its inuse owner to NULL.

Perhaps my confusion comes from your earlier outline missing

3. re-associate the page with the domain (as represented in free
   pages)

The property of "designated for Dom<N>" should never go away, if I
understand the overall proposal correctly.

Jan

Patch

diff --git a/xen/arch/arm/setup.c b/xen/arch/arm/setup.c
index 444dbbd676..f80162c478 100644
--- a/xen/arch/arm/setup.c
+++ b/xen/arch/arm/setup.c
@@ -818,6 +818,8 @@  static void __init setup_mm(void)
 
     setup_frametable_mappings(ram_start, ram_end);
     max_page = PFN_DOWN(ram_end);
+
+    init_staticmem_pages();
 }
 #endif
 
diff --git a/xen/common/page_alloc.c b/xen/common/page_alloc.c
index ace6333c18..58b53c6ac2 100644
--- a/xen/common/page_alloc.c
+++ b/xen/common/page_alloc.c
@@ -150,6 +150,9 @@ 
 #define p2m_pod_offline_or_broken_hit(pg) 0
 #define p2m_pod_offline_or_broken_replace(pg) BUG_ON(pg != NULL)
 #endif
+#ifdef CONFIG_ARM_64
+#include <asm/setup.h>
+#endif
 
 /*
  * Comma-separated list of hexadecimal page numbers containing bad bytes.
@@ -1512,6 +1515,49 @@  static void free_heap_pages(
     spin_unlock(&heap_lock);
 }
 
+/* Equivalent of free_heap_pages to free nr_pfns pages of static memory. */
+static void free_staticmem_pages(struct page_info *pg, unsigned long nr_pfns,
+                                 bool need_scrub)
+{
+    mfn_t mfn = page_to_mfn(pg);
+    int i;
+
+    for ( i = 0; i < nr_pfns; i++ )
+    {
+        switch ( pg[i].count_info & PGC_state )
+        {
+        case PGC_state_inuse:
+            BUG_ON(pg[i].count_info & PGC_broken);
+            /* Make it free and reserved. */
+            pg[i].count_info = PGC_state_free | PGC_reserved;
+            break;
+
+        default:
+            printk(XENLOG_ERR
+                   "Page state shall be only in PGC_state_inuse. "
+                   "pg[%u] MFN %"PRI_mfn" count_info=%#lx tlbflush_timestamp=%#x.\n",
+                   i, mfn_x(mfn) + i,
+                   pg[i].count_info,
+                   pg[i].tlbflush_timestamp);
+            BUG();
+        }
+
+        /*
+         * Follow the same cache coherence scheme in free_heap_pages.
+         * If a page has no owner it will need no safety TLB flush.
+         */
+        pg[i].u.free.need_tlbflush = (page_get_owner(&pg[i]) != NULL);
+        if ( pg[i].u.free.need_tlbflush )
+            page_set_tlbflush_timestamp(&pg[i]);
+
+        /* This page is not a guest frame any more. */
+        page_set_owner(&pg[i], NULL);
+        set_gpfn_from_mfn(mfn_x(mfn) + i, INVALID_M2P_ENTRY);
+
+        if ( need_scrub )
+            scrub_one_page(&pg[i]);
+    }
+}
 
 /*
  * Following rules applied for page offline:
@@ -1828,6 +1874,30 @@  static void init_heap_pages(
     }
 }
 
+/* Equivalent of init_heap_pages to do static memory initialization */
+void __init init_staticmem_pages(void)
+{
+    int bank;
+
+    /*
+     * TODO: Considering NUMA-support scenario.
+     */
+    for ( bank = 0 ; bank < bootinfo.static_mem.nr_banks; bank++ )
+    {
+        paddr_t bank_start = bootinfo.static_mem.bank[bank].start;
+        paddr_t bank_size = bootinfo.static_mem.bank[bank].size;
+        paddr_t bank_end = bank_start + bank_size;
+
+        bank_start = round_pgup(bank_start);
+        bank_end = round_pgdown(bank_end);
+        if ( bank_end <= bank_start )
+            return;
+
+        free_staticmem_pages(maddr_to_page(bank_start),
+                            (bank_end - bank_start) >> PAGE_SHIFT, false);
+    }
+}
+
 static unsigned long avail_heap_pages(
     unsigned int zone_lo, unsigned int zone_hi, unsigned int node)
 {
diff --git a/xen/include/xen/mm.h b/xen/include/xen/mm.h
index 667f9dac83..8b1a2207b2 100644
--- a/xen/include/xen/mm.h
+++ b/xen/include/xen/mm.h
@@ -85,6 +85,9 @@  bool scrub_free_pages(void);
 } while ( false )
 #define FREE_XENHEAP_PAGE(p) FREE_XENHEAP_PAGES(p, 0)
 
+/* Static Memory */
+void init_staticmem_pages(void);
+
 /* Map machine page range in Xen virtual address space. */
 int map_pages_to_xen(
     unsigned long virt,