diff mbox series

[05/10] xen/arm: introduce alloc_staticmem_pages

Message ID 20210518052113.725808-6-penny.zheng@arm.com (mailing list archive)
State Superseded
Series Domain on Static Allocation

Commit Message

Penny Zheng May 18, 2021, 5:21 a.m. UTC
alloc_staticmem_pages allocates nr_pfns contiguous pages of static
memory; it is the equivalent of alloc_heap_pages for static memory.
This commit only covers allocating at a specified starting address.

For each page, it checks that the page is reserved (PGC_reserved)
and free. It also performs the necessary initialization, largely the
same as in alloc_heap_pages: following the same cache-coherency
policy, turning the page state into PGC_state_inuse, etc.

Signed-off-by: Penny Zheng <penny.zheng@arm.com>
---
 xen/common/page_alloc.c | 64 +++++++++++++++++++++++++++++++++++++++++
 1 file changed, 64 insertions(+)

Comments

Jan Beulich May 18, 2021, 7:24 a.m. UTC | #1
On 18.05.2021 07:21, Penny Zheng wrote:
> alloc_staticmem_pages is designated to allocate nr_pfns contiguous
> pages of static memory. And it is the equivalent of alloc_heap_pages
> for static memory.
> This commit only covers allocating at specified starting address.
> 
> For each page, it shall check if the page is reserved
> (PGC_reserved) and free. It shall also do a set of necessary
> initialization, which are mostly the same ones in alloc_heap_pages,
> like, following the same cache-coherency policy and turning page
> status into PGC_state_used, etc.
> 
> Signed-off-by: Penny Zheng <penny.zheng@arm.com>
> ---
>  xen/common/page_alloc.c | 64 +++++++++++++++++++++++++++++++++++++++++
>  1 file changed, 64 insertions(+)
> 
> diff --git a/xen/common/page_alloc.c b/xen/common/page_alloc.c
> index 58b53c6ac2..adf2889e76 100644
> --- a/xen/common/page_alloc.c
> +++ b/xen/common/page_alloc.c
> @@ -1068,6 +1068,70 @@ static struct page_info *alloc_heap_pages(
>      return pg;
>  }
>  
> +/*
> + * Allocate nr_pfns contiguous pages, starting at #start, of static memory.
> + * It is the equivalent of alloc_heap_pages for static memory
> + */
> +static struct page_info *alloc_staticmem_pages(unsigned long nr_pfns,
> +                                                paddr_t start,
> +                                                unsigned int memflags)

This is surely breaking the build (at this point in the series -
recall that a series should build fine at every patch boundary),
for introducing an unused static function, which most compilers
will warn about.

Also again - please avoid introducing code that's always dead for
certain architectures. Quite likely you want a Kconfig option to
put a suitable #ifdef around such functions.
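For illustration, such a guard could take roughly the shape below. The option name STATIC_MEMORY, its prompt, and its placement are assumptions for the sketch, not something this thread settles on:

```
# xen/arch/arm/Kconfig (sketch)
config STATIC_MEMORY
	bool "Statically allocated domain memory" if EXPERT
	help
	  Allow reserving host memory for domains at boot, described
	  via device tree.
```

page_alloc.c would then wrap alloc_staticmem_pages (and its future callers) in `#ifdef CONFIG_STATIC_MEMORY ... #endif`, so architectures without the feature never compile the dead code.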

And a nit: Please correct the apparently off-by-one indentation.

> +{
> +    bool need_tlbflush = false;
> +    uint32_t tlbflush_timestamp = 0;
> +    unsigned int i;

This variable's type should (again) match nr_pfns'es (albeit I
think that parameter really wants to be nr_mfns).

> +    struct page_info *pg;
> +    mfn_t s_mfn;
> +
> +    /* For now, it only supports allocating at specified address. */
> +    s_mfn = maddr_to_mfn(start);
> +    pg = mfn_to_page(s_mfn);
> +    if ( !pg )
> +        return NULL;

Under what conditions would mfn_to_page() return NULL?

> +    for ( i = 0; i < nr_pfns; i++)
> +    {
> +        /*
> +         * Reference count must continuously be zero for free pages
> +         * of static memory(PGC_reserved).
> +         */
> +        ASSERT(pg[i].count_info & PGC_reserved);
> +        if ( (pg[i].count_info & ~PGC_reserved) != PGC_state_free )
> +        {
> +            printk(XENLOG_ERR
> +                    "Reference count must continuously be zero for free pages"
> +                    "pg[%u] MFN %"PRI_mfn" c=%#lx t=%#x\n",
> +                    i, mfn_x(page_to_mfn(pg + i)),
> +                    pg[i].count_info, pg[i].tlbflush_timestamp);

Nit: Indentation again.

> +            BUG();
> +        }
> +
> +        if ( !(memflags & MEMF_no_tlbflush) )
> +            accumulate_tlbflush(&need_tlbflush, &pg[i],
> +                                &tlbflush_timestamp);
> +
> +        /*
> +         * Reserve flag PGC_reserved and change page state

DYM "Preserve ..."?

> +         * to PGC_state_inuse.
> +         */
> +        pg[i].count_info = (pg[i].count_info & PGC_reserved) | PGC_state_inuse;
> +        /* Initialise fields which have other uses for free pages. */
> +        pg[i].u.inuse.type_info = 0;
> +        page_set_owner(&pg[i], NULL);
> +
> +        /*
> +         * Ensure cache and RAM are consistent for platforms where the
> +         * guest can control its own visibility of/through the cache.
> +         */
> +        flush_page_to_ram(mfn_x(page_to_mfn(&pg[i])),
> +                            !(memflags & MEMF_no_icache_flush));
> +    }
> +
> +    if ( need_tlbflush )
> +        filtered_flush_tlb_mask(tlbflush_timestamp);

With reserved pages dedicated to a specific domain, in how far is it
possible that stale mappings from a prior use can still be around,
making such TLB flushing necessary?

Jan
Penny Zheng May 18, 2021, 9:30 a.m. UTC | #2
Hi Jan

> -----Original Message-----
> From: Jan Beulich <jbeulich@suse.com>
> Sent: Tuesday, May 18, 2021 3:24 PM
> To: Penny Zheng <Penny.Zheng@arm.com>
> Cc: Bertrand Marquis <Bertrand.Marquis@arm.com>; Wei Chen
> <Wei.Chen@arm.com>; nd <nd@arm.com>; xen-devel@lists.xenproject.org;
> sstabellini@kernel.org; julien@xen.org
> Subject: Re: [PATCH 05/10] xen/arm: introduce alloc_staticmem_pages
> 
> On 18.05.2021 07:21, Penny Zheng wrote:
> > alloc_staticmem_pages is designated to allocate nr_pfns contiguous
> > pages of static memory. And it is the equivalent of alloc_heap_pages
> > for static memory.
> > This commit only covers allocating at specified starting address.
> >
> > For each page, it shall check if the page is reserved
> > (PGC_reserved) and free. It shall also do a set of necessary
> > initialization, which are mostly the same ones in alloc_heap_pages,
> > like, following the same cache-coherency policy and turning page
> > status into PGC_state_used, etc.
> >
> > Signed-off-by: Penny Zheng <penny.zheng@arm.com>
> > ---
> >  xen/common/page_alloc.c | 64
> > +++++++++++++++++++++++++++++++++++++++++
> >  1 file changed, 64 insertions(+)
> >
> > diff --git a/xen/common/page_alloc.c b/xen/common/page_alloc.c index
> > 58b53c6ac2..adf2889e76 100644
> > --- a/xen/common/page_alloc.c
> > +++ b/xen/common/page_alloc.c
> > @@ -1068,6 +1068,70 @@ static struct page_info *alloc_heap_pages(
> >      return pg;
> >  }
> >
> > +/*
> > + * Allocate nr_pfns contiguous pages, starting at #start, of static memory.
> > + * It is the equivalent of alloc_heap_pages for static memory  */
> > +static struct page_info *alloc_staticmem_pages(unsigned long nr_pfns,
> > +                                                paddr_t start,
> > +                                                unsigned int
> > +memflags)
> 
> This is surely breaking the build (at this point in the series - recall that a series
> should build fine at every patch boundary), for introducing an unused static
> function, which most compilers will warn about.
>

Sure, I'll combine it with other commits

> Also again - please avoid introducing code that's always dead for certain
> architectures. Quite likely you want a Kconfig option to put a suitable #ifdef
> around such functions.
> 

Sure, sorry for all the missing #ifdefs.

> And a nit: Please correct the apparently off-by-one indentation.
>

Sure, I'll check through the code more carefully.

> > +{
> > +    bool need_tlbflush = false;
> > +    uint32_t tlbflush_timestamp = 0;
> > +    unsigned int i;
> 
> This variable's type should (again) match nr_pfns'es (albeit I think that
> parameter really wants to be nr_mfns).
> 

Correct me if I've misunderstood: you mean the parameter in alloc_staticmem_pages
would be better named nr_mfns (unsigned long), right?

> > +    struct page_info *pg;
> > +    mfn_t s_mfn;
> > +
> > +    /* For now, it only supports allocating at specified address. */
> > +    s_mfn = maddr_to_mfn(start);
> > +    pg = mfn_to_page(s_mfn);
> > +    if ( !pg )
> > +        return NULL;
> 
> Under what conditions would mfn_to_page() return NULL?

Right, my mistake.

>
> > +    for ( i = 0; i < nr_pfns; i++)
> > +    {
> > +        /*
> > +         * Reference count must continuously be zero for free pages
> > +         * of static memory(PGC_reserved).
> > +         */
> > +        ASSERT(pg[i].count_info & PGC_reserved);
> > +        if ( (pg[i].count_info & ~PGC_reserved) != PGC_state_free )
> > +        {
> > +            printk(XENLOG_ERR
> > +                    "Reference count must continuously be zero for free pages"
> > +                    "pg[%u] MFN %"PRI_mfn" c=%#lx t=%#x\n",
> > +                    i, mfn_x(page_to_mfn(pg + i)),
> > +                    pg[i].count_info, pg[i].tlbflush_timestamp);
> 
> Nit: Indentation again.
>
 
Thx

> > +            BUG();
> > +        }
> > +
> > +        if ( !(memflags & MEMF_no_tlbflush) )
> > +            accumulate_tlbflush(&need_tlbflush, &pg[i],
> > +                                &tlbflush_timestamp);
> > +
> > +        /*
> > +         * Reserve flag PGC_reserved and change page state
> 
> DYM "Preserve ..."?
> 

Sure, thx

> > +         * to PGC_state_inuse.
> > +         */
> > +        pg[i].count_info = (pg[i].count_info & PGC_reserved) |
> PGC_state_inuse;
> > +        /* Initialise fields which have other uses for free pages. */
> > +        pg[i].u.inuse.type_info = 0;
> > +        page_set_owner(&pg[i], NULL);
> > +
> > +        /*
> > +         * Ensure cache and RAM are consistent for platforms where the
> > +         * guest can control its own visibility of/through the cache.
> > +         */
> > +        flush_page_to_ram(mfn_x(page_to_mfn(&pg[i])),
> > +                            !(memflags & MEMF_no_icache_flush));
> > +    }
> > +
> > +    if ( need_tlbflush )
> > +        filtered_flush_tlb_mask(tlbflush_timestamp);
> 
> With reserved pages dedicated to a specific domain, in how far is it possible
> that stale mappings from a prior use can still be around, making such TLB
> flushing necessary?
> 

Yes, you're right.

> Jan
Julien Grall May 18, 2021, 10:09 a.m. UTC | #3
Hi Jan,

On 18/05/2021 08:24, Jan Beulich wrote:
> On 18.05.2021 07:21, Penny Zheng wrote:
>> +         * to PGC_state_inuse.
>> +         */
>> +        pg[i].count_info = (pg[i].count_info & PGC_reserved) | PGC_state_inuse;
>> +        /* Initialise fields which have other uses for free pages. */
>> +        pg[i].u.inuse.type_info = 0;
>> +        page_set_owner(&pg[i], NULL);
>> +
>> +        /*
>> +         * Ensure cache and RAM are consistent for platforms where the
>> +         * guest can control its own visibility of/through the cache.
>> +         */
>> +        flush_page_to_ram(mfn_x(page_to_mfn(&pg[i])),
>> +                            !(memflags & MEMF_no_icache_flush));
>> +    }
>> +
>> +    if ( need_tlbflush )
>> +        filtered_flush_tlb_mask(tlbflush_timestamp);
> 
> With reserved pages dedicated to a specific domain, in how far is it
> possible that stale mappings from a prior use can still be around,
> making such TLB flushing necessary?

I would rather not make that assumption. I can see a future where we just 
want to allocate memory from a static pool that may be shared with 
multiple domains.

Cheers,
Julien Grall May 18, 2021, 10:15 a.m. UTC | #4
Hi Penny,

On 18/05/2021 06:21, Penny Zheng wrote:
> alloc_staticmem_pages is designated to allocate nr_pfns contiguous
> pages of static memory. And it is the equivalent of alloc_heap_pages
> for static memory.
> This commit only covers allocating at specified starting address.
> 
> For each page, it shall check if the page is reserved
> (PGC_reserved) and free. It shall also do a set of necessary
> initialization, which are mostly the same ones in alloc_heap_pages,
> like, following the same cache-coherency policy and turning page
> status into PGC_state_used, etc.
> 
> Signed-off-by: Penny Zheng <penny.zheng@arm.com>
> ---
>   xen/common/page_alloc.c | 64 +++++++++++++++++++++++++++++++++++++++++
>   1 file changed, 64 insertions(+)
> 
> diff --git a/xen/common/page_alloc.c b/xen/common/page_alloc.c
> index 58b53c6ac2..adf2889e76 100644
> --- a/xen/common/page_alloc.c
> +++ b/xen/common/page_alloc.c
> @@ -1068,6 +1068,70 @@ static struct page_info *alloc_heap_pages(
>       return pg;
>   }
>   
> +/*
> + * Allocate nr_pfns contiguous pages, starting at #start, of static memory.
> + * It is the equivalent of alloc_heap_pages for static memory
> + */
> +static struct page_info *alloc_staticmem_pages(unsigned long nr_pfns,

This wants to be nr_mfns.

> +                                                paddr_t start,

I would prefer if this helper takes an mfn_t in parameter.

> +                                                unsigned int memflags)
> +{
> +    bool need_tlbflush = false;
> +    uint32_t tlbflush_timestamp = 0;
> +    unsigned int i;
> +    struct page_info *pg;
> +    mfn_t s_mfn;
> +
> +    /* For now, it only supports allocating at specified address. */
> +    s_mfn = maddr_to_mfn(start);
> +    pg = mfn_to_page(s_mfn);

We should avoid making the assumption that the start address will be 
valid, so you want to call mfn_valid() first.

At the same time, there is no guarantee that if the first page is valid, 
then the next nr_pfns pages will be. So the check should be performed for 
all of them.

> +    if ( !pg )
> +        return NULL;
> +
> +    for ( i = 0; i < nr_pfns; i++)
> +    {
> +        /*
> +         * Reference count must continuously be zero for free pages
> +         * of static memory(PGC_reserved).
> +         */
> +        ASSERT(pg[i].count_info & PGC_reserved);
> +        if ( (pg[i].count_info & ~PGC_reserved) != PGC_state_free )
> +        {
> +            printk(XENLOG_ERR
> +                    "Reference count must continuously be zero for free pages"
> +                    "pg[%u] MFN %"PRI_mfn" c=%#lx t=%#x\n",
> +                    i, mfn_x(page_to_mfn(pg + i)),
> +                    pg[i].count_info, pg[i].tlbflush_timestamp);
> +            BUG();

So we would crash Xen if the caller passes a wrong range. Is that what we want?

Also, who is going to prevent concurrent access?

> +        }
> +
> +        if ( !(memflags & MEMF_no_tlbflush) )
> +            accumulate_tlbflush(&need_tlbflush, &pg[i],
> +                                &tlbflush_timestamp);
> +
> +        /*
> +         * Reserve flag PGC_reserved and change page state
> +         * to PGC_state_inuse.
> +         */
> +        pg[i].count_info = (pg[i].count_info & PGC_reserved) | PGC_state_inuse;
> +        /* Initialise fields which have other uses for free pages. */
> +        pg[i].u.inuse.type_info = 0;
> +        page_set_owner(&pg[i], NULL);
> +
> +        /*
> +         * Ensure cache and RAM are consistent for platforms where the
> +         * guest can control its own visibility of/through the cache.
> +         */
> +        flush_page_to_ram(mfn_x(page_to_mfn(&pg[i])),
> +                            !(memflags & MEMF_no_icache_flush));
> +    }
> +
> +    if ( need_tlbflush )
> +        filtered_flush_tlb_mask(tlbflush_timestamp);
> +
> +    return pg;
> +}
> +
>   /* Remove any offlined page in the buddy pointed to by head. */
>   static int reserve_offlined_page(struct page_info *head)
>   {
> 

Cheers,
Penny Zheng May 19, 2021, 5:23 a.m. UTC | #5
Hi Julien

> -----Original Message-----
> From: Julien Grall <julien@xen.org>
> Sent: Tuesday, May 18, 2021 6:15 PM
> To: Penny Zheng <Penny.Zheng@arm.com>; xen-devel@lists.xenproject.org;
> sstabellini@kernel.org
> Cc: Bertrand Marquis <Bertrand.Marquis@arm.com>; Wei Chen
> <Wei.Chen@arm.com>; nd <nd@arm.com>
> Subject: Re: [PATCH 05/10] xen/arm: introduce alloc_staticmem_pages
> 
> Hi Penny,
> 
> On 18/05/2021 06:21, Penny Zheng wrote:
> > alloc_staticmem_pages is designated to allocate nr_pfns contiguous
> > pages of static memory. And it is the equivalent of alloc_heap_pages
> > for static memory.
> > This commit only covers allocating at specified starting address.
> >
> > For each page, it shall check if the page is reserved
> > (PGC_reserved) and free. It shall also do a set of necessary
> > initialization, which are mostly the same ones in alloc_heap_pages,
> > like, following the same cache-coherency policy and turning page
> > status into PGC_state_used, etc.
> >
> > Signed-off-by: Penny Zheng <penny.zheng@arm.com>
> > ---
> >   xen/common/page_alloc.c | 64
> +++++++++++++++++++++++++++++++++++++++++
> >   1 file changed, 64 insertions(+)
> >
> > diff --git a/xen/common/page_alloc.c b/xen/common/page_alloc.c index
> > 58b53c6ac2..adf2889e76 100644
> > --- a/xen/common/page_alloc.c
> > +++ b/xen/common/page_alloc.c
> > @@ -1068,6 +1068,70 @@ static struct page_info *alloc_heap_pages(
> >       return pg;
> >   }
> >
> > +/*
> > + * Allocate nr_pfns contiguous pages, starting at #start, of static memory.
> > + * It is the equivalent of alloc_heap_pages for static memory  */
> > +static struct page_info *alloc_staticmem_pages(unsigned long nr_pfns,
> 
> This wants to be nr_mfns.
> 
> > +                                                paddr_t start,
> 
> I would prefer if this helper takes an mfn_t in parameter.
> 

Sure, I will change both.

> > +                                                unsigned int
> > +memflags) {
> > +    bool need_tlbflush = false;
> > +    uint32_t tlbflush_timestamp = 0;
> > +    unsigned int i;
> > +    struct page_info *pg;
> > +    mfn_t s_mfn;
> > +
> > +    /* For now, it only supports allocating at specified address. */
> > +    s_mfn = maddr_to_mfn(start);
> > +    pg = mfn_to_page(s_mfn);
> 
> We should avoid to make the assumption the start address will be valid.
> So you want to call mfn_valid() first.
> 
> At the same time, there is no guarantee that if the first page is valid, then the
> next nr_pfns will be. So the check should be performed for all of them.
> 

Ok. I'll do the validation check on both of them.

> > +    if ( !pg )
> > +        return NULL;
> > +
> > +    for ( i = 0; i < nr_pfns; i++)
> > +    {
> > +        /*
> > +         * Reference count must continuously be zero for free pages
> > +         * of static memory(PGC_reserved).
> > +         */
> > +        ASSERT(pg[i].count_info & PGC_reserved);
> > +        if ( (pg[i].count_info & ~PGC_reserved) != PGC_state_free )
> > +        {
> > +            printk(XENLOG_ERR
> > +                    "Reference count must continuously be zero for free pages"
> > +                    "pg[%u] MFN %"PRI_mfn" c=%#lx t=%#x\n",
> > +                    i, mfn_x(page_to_mfn(pg + i)),
> > +                    pg[i].count_info, pg[i].tlbflush_timestamp);
> > +            BUG();
> 
> So we would crash Xen if the caller pass a wrong range. Is it what we want?
> 
> Also, who is going to prevent concurrent access?
> 

Sure, to fix the concurrency issue, I may need to add a spinlock like
`static DEFINE_SPINLOCK(staticmem_lock);`

The current alloc_heap_pages does a similar check: pages in the free state MUST
have a zero reference count. I guess, if the condition is not met, there is no need to proceed.

> > +        }
> > +
> > +        if ( !(memflags & MEMF_no_tlbflush) )
> > +            accumulate_tlbflush(&need_tlbflush, &pg[i],
> > +                                &tlbflush_timestamp);
> > +
> > +        /*
> > +         * Reserve flag PGC_reserved and change page state
> > +         * to PGC_state_inuse.
> > +         */
> > +        pg[i].count_info = (pg[i].count_info & PGC_reserved) |
> PGC_state_inuse;
> > +        /* Initialise fields which have other uses for free pages. */
> > +        pg[i].u.inuse.type_info = 0;
> > +        page_set_owner(&pg[i], NULL);
> > +
> > +        /*
> > +         * Ensure cache and RAM are consistent for platforms where the
> > +         * guest can control its own visibility of/through the cache.
> > +         */
> > +        flush_page_to_ram(mfn_x(page_to_mfn(&pg[i])),
> > +                            !(memflags & MEMF_no_icache_flush));
> > +    }
> > +
> > +    if ( need_tlbflush )
> > +        filtered_flush_tlb_mask(tlbflush_timestamp);
> > +
> > +    return pg;
> > +}
> > +
> >   /* Remove any offlined page in the buddy pointed to by head. */
> >   static int reserve_offlined_page(struct page_info *head)
> >   {
> >
> 
> Cheers,
> 
> --
> Julien Grall

Cheers,

Penny Zheng
Penny Zheng May 24, 2021, 10:10 a.m. UTC | #6
Hi Julien

> -----Original Message-----
> From: Penny Zheng
> Sent: Wednesday, May 19, 2021 1:24 PM
> To: Julien Grall <julien@xen.org>; xen-devel@lists.xenproject.org;
> sstabellini@kernel.org
> Cc: Bertrand Marquis <Bertrand.Marquis@arm.com>; Wei Chen
> <Wei.Chen@arm.com>; nd <nd@arm.com>
> Subject: RE: [PATCH 05/10] xen/arm: introduce alloc_staticmem_pages
> 
> Hi Julien
> 
> > -----Original Message-----
> > From: Julien Grall <julien@xen.org>
> > Sent: Tuesday, May 18, 2021 6:15 PM
> > To: Penny Zheng <Penny.Zheng@arm.com>; xen-devel@lists.xenproject.org;
> > sstabellini@kernel.org
> > Cc: Bertrand Marquis <Bertrand.Marquis@arm.com>; Wei Chen
> > <Wei.Chen@arm.com>; nd <nd@arm.com>
> > Subject: Re: [PATCH 05/10] xen/arm: introduce alloc_staticmem_pages
> >
> > Hi Penny,
> >
> > On 18/05/2021 06:21, Penny Zheng wrote:
> > > alloc_staticmem_pages is designated to allocate nr_pfns contiguous
> > > pages of static memory. And it is the equivalent of alloc_heap_pages
> > > for static memory.
> > > This commit only covers allocating at specified starting address.
> > >
> > > For each page, it shall check if the page is reserved
> > > (PGC_reserved) and free. It shall also do a set of necessary
> > > initialization, which are mostly the same ones in alloc_heap_pages,
> > > like, following the same cache-coherency policy and turning page
> > > status into PGC_state_used, etc.
> > >
> > > Signed-off-by: Penny Zheng <penny.zheng@arm.com>
> > > ---
> > >   xen/common/page_alloc.c | 64
> > +++++++++++++++++++++++++++++++++++++++++
> > >   1 file changed, 64 insertions(+)
> > >
> > > diff --git a/xen/common/page_alloc.c b/xen/common/page_alloc.c index
> > > 58b53c6ac2..adf2889e76 100644
> > > --- a/xen/common/page_alloc.c
> > > +++ b/xen/common/page_alloc.c
> > > @@ -1068,6 +1068,70 @@ static struct page_info *alloc_heap_pages(
> > >       return pg;
> > >   }
> > >
> > > +/*
> > > + * Allocate nr_pfns contiguous pages, starting at #start, of static memory.
> > > + * It is the equivalent of alloc_heap_pages for static memory  */
> > > +static struct page_info *alloc_staticmem_pages(unsigned long
> > > +nr_pfns,
> >
> > This wants to be nr_mfns.
> >
> > > +                                                paddr_t start,
> >
> > I would prefer if this helper takes an mfn_t in parameter.
> >
> 
> Sure, I will change both.
> 
> > > +                                                unsigned int
> > > +memflags) {
> > > +    bool need_tlbflush = false;
> > > +    uint32_t tlbflush_timestamp = 0;
> > > +    unsigned int i;
> > > +    struct page_info *pg;
> > > +    mfn_t s_mfn;
> > > +
> > > +    /* For now, it only supports allocating at specified address. */
> > > +    s_mfn = maddr_to_mfn(start);
> > > +    pg = mfn_to_page(s_mfn);
> >
> > We should avoid to make the assumption the start address will be valid.
> > So you want to call mfn_valid() first.
> >
> > At the same time, there is no guarantee that if the first page is
> > valid, then the next nr_pfns will be. So the check should be performed for all
> of them.
> >
> 
> Ok. I'll do validation check on both of them.
> 
> > > +    if ( !pg )
> > > +        return NULL;
> > > +
> > > +    for ( i = 0; i < nr_pfns; i++)
> > > +    {
> > > +        /*
> > > +         * Reference count must continuously be zero for free pages
> > > +         * of static memory(PGC_reserved).
> > > +         */
> > > +        ASSERT(pg[i].count_info & PGC_reserved);
> > > +        if ( (pg[i].count_info & ~PGC_reserved) != PGC_state_free )
> > > +        {
> > > +            printk(XENLOG_ERR
> > > +                    "Reference count must continuously be zero for free pages"
> > > +                    "pg[%u] MFN %"PRI_mfn" c=%#lx t=%#x\n",
> > > +                    i, mfn_x(page_to_mfn(pg + i)),
> > > +                    pg[i].count_info, pg[i].tlbflush_timestamp);
> > > +            BUG();
> >
> > So we would crash Xen if the caller pass a wrong range. Is it what we want?
> >
> > Also, who is going to prevent concurrent access?
> >
> 
> Sure, to fix concurrency issue, I may need to add one spinlock like `static
> DEFINE_SPINLOCK(staticmem_lock);`
> 
> In current alloc_heap_pages, it will do similar check, that pages in free state
> MUST have zero reference count. I guess, if condition not met, there is no need
> to proceed.
> 

Another thought on the concurrency problem: when constructing patch v2, do we need to
consider concurrency here?
heap_lock takes care of concurrent allocation on the one heap, but static memory is
always reserved for only one specific domain.

> > > +        }
> > > +
> > > +        if ( !(memflags & MEMF_no_tlbflush) )
> > > +            accumulate_tlbflush(&need_tlbflush, &pg[i],
> > > +                                &tlbflush_timestamp);
> > > +
> > > +        /*
> > > +         * Reserve flag PGC_reserved and change page state
> > > +         * to PGC_state_inuse.
> > > +         */
> > > +        pg[i].count_info = (pg[i].count_info & PGC_reserved) |
> > PGC_state_inuse;
> > > +        /* Initialise fields which have other uses for free pages. */
> > > +        pg[i].u.inuse.type_info = 0;
> > > +        page_set_owner(&pg[i], NULL);
> > > +
> > > +        /*
> > > +         * Ensure cache and RAM are consistent for platforms where the
> > > +         * guest can control its own visibility of/through the cache.
> > > +         */
> > > +        flush_page_to_ram(mfn_x(page_to_mfn(&pg[i])),
> > > +                            !(memflags & MEMF_no_icache_flush));
> > > +    }
> > > +
> > > +    if ( need_tlbflush )
> > > +        filtered_flush_tlb_mask(tlbflush_timestamp);
> > > +
> > > +    return pg;
> > > +}
> > > +
> > >   /* Remove any offlined page in the buddy pointed to by head. */
> > >   static int reserve_offlined_page(struct page_info *head)
> > >   {
> > >
> >
> > Cheers,
> >
> > --
> > Julien Grall
> 
> Cheers,
> 
> Penny Zheng

Cheers

Penny
Julien Grall May 24, 2021, 10:24 a.m. UTC | #7
On 24/05/2021 11:10, Penny Zheng wrote:
> Hi Julien

Hi Penny,

>>>> +    if ( !pg )
>>>> +        return NULL;
>>>> +
>>>> +    for ( i = 0; i < nr_pfns; i++)
>>>> +    {
>>>> +        /*
>>>> +         * Reference count must continuously be zero for free pages
>>>> +         * of static memory(PGC_reserved).
>>>> +         */
>>>> +        ASSERT(pg[i].count_info & PGC_reserved);
>>>> +        if ( (pg[i].count_info & ~PGC_reserved) != PGC_state_free )
>>>> +        {
>>>> +            printk(XENLOG_ERR
>>>> +                    "Reference count must continuously be zero for free pages"
>>>> +                    "pg[%u] MFN %"PRI_mfn" c=%#lx t=%#x\n",
>>>> +                    i, mfn_x(page_to_mfn(pg + i)),
>>>> +                    pg[i].count_info, pg[i].tlbflush_timestamp);
>>>> +            BUG();
>>>
>>> So we would crash Xen if the caller pass a wrong range. Is it what we want?
>>>
>>> Also, who is going to prevent concurrent access?
>>>
>>
>> Sure, to fix concurrency issue, I may need to add one spinlock like `static
>> DEFINE_SPINLOCK(staticmem_lock);`
>>
>> In current alloc_heap_pages, it will do similar check, that pages in free state
>> MUST have zero reference count. I guess, if condition not met, there is no need
>> to proceed.
>>
> 
> Another thought on concurrency problem, when constructing patch v2, do we need to
> consider concurrency here?
> heap_lock is to take care concurrent allocation on the one heap, but static memory is
> always reserved for only one specific domain.
In theory yes, but you are relying on the admin to correctly write the 
device-tree nodes.

You are probably not going to hit the problem today because the domains 
are created one by one. But, as you may want to allocate memory at 
runtime, it is quite important to protect this code from concurrent 
access.

Here, you will likely want to use the heap_lock rather than a new lock. 
That way you are also protected against concurrent access to count_info 
from other parts of Xen.


Cheers,

Patch

diff --git a/xen/common/page_alloc.c b/xen/common/page_alloc.c
index 58b53c6ac2..adf2889e76 100644
--- a/xen/common/page_alloc.c
+++ b/xen/common/page_alloc.c
@@ -1068,6 +1068,70 @@  static struct page_info *alloc_heap_pages(
     return pg;
 }
 
+/*
+ * Allocate nr_pfns contiguous pages, starting at #start, of static memory.
+ * It is the equivalent of alloc_heap_pages for static memory
+ */
+static struct page_info *alloc_staticmem_pages(unsigned long nr_pfns,
+                                                paddr_t start,
+                                                unsigned int memflags)
+{
+    bool need_tlbflush = false;
+    uint32_t tlbflush_timestamp = 0;
+    unsigned int i;
+    struct page_info *pg;
+    mfn_t s_mfn;
+
+    /* For now, it only supports allocating at specified address. */
+    s_mfn = maddr_to_mfn(start);
+    pg = mfn_to_page(s_mfn);
+    if ( !pg )
+        return NULL;
+
+    for ( i = 0; i < nr_pfns; i++)
+    {
+        /*
+         * Reference count must continuously be zero for free pages
+         * of static memory(PGC_reserved).
+         */
+        ASSERT(pg[i].count_info & PGC_reserved);
+        if ( (pg[i].count_info & ~PGC_reserved) != PGC_state_free )
+        {
+            printk(XENLOG_ERR
+                    "Reference count must continuously be zero for free pages"
+                    "pg[%u] MFN %"PRI_mfn" c=%#lx t=%#x\n",
+                    i, mfn_x(page_to_mfn(pg + i)),
+                    pg[i].count_info, pg[i].tlbflush_timestamp);
+            BUG();
+        }
+
+        if ( !(memflags & MEMF_no_tlbflush) )
+            accumulate_tlbflush(&need_tlbflush, &pg[i],
+                                &tlbflush_timestamp);
+
+        /*
+         * Reserve flag PGC_reserved and change page state
+         * to PGC_state_inuse.
+         */
+        pg[i].count_info = (pg[i].count_info & PGC_reserved) | PGC_state_inuse;
+        /* Initialise fields which have other uses for free pages. */
+        pg[i].u.inuse.type_info = 0;
+        page_set_owner(&pg[i], NULL);
+
+        /*
+         * Ensure cache and RAM are consistent for platforms where the
+         * guest can control its own visibility of/through the cache.
+         */
+        flush_page_to_ram(mfn_x(page_to_mfn(&pg[i])),
+                            !(memflags & MEMF_no_icache_flush));
+    }
+
+    if ( need_tlbflush )
+        filtered_flush_tlb_mask(tlbflush_timestamp);
+
+    return pg;
+}
+
 /* Remove any offlined page in the buddy pointed to by head. */
 static int reserve_offlined_page(struct page_info *head)
 {