diff mbox series

[v5,9/9] xen: retrieve reserved pages on populate_physmap

Message ID 20220531031241.90374-10-Penny.Zheng@arm.com (mailing list archive)
State Superseded
Series populate/unpopulate memory when domain on static allocation

Commit Message

Penny Zheng May 31, 2022, 3:12 a.m. UTC
When a static domain populates memory through populate_physmap at runtime,
it must retrieve its pages from resv_page_list, so that guest RAM stays
restricted to the statically configured memory regions.
This commit also introduces a new helper, acquire_reserved_page, to do this.

Signed-off-by: Penny Zheng <penny.zheng@arm.com>
---
v5 changes:
- extract common code for assigning pages into a helper, assign_domstatic_pages
- refine commit message
- remove stub function acquire_reserved_page
- Alloc/free of memory can happen concurrently, so access to resv_page_list
needs to be protected with a spinlock
---
v4 changes:
- drop __init from acquire_domstatic_pages (missed previously)
- add the page back to the reserved list in case of error
- remove redundant printk
- refine log message and make it warn level
---
v3 changes:
- move is_domain_using_staticmem to the common header file
- remove the #ifdef CONFIG_STATIC_MEMORY ifdeffery
- remove meaningless page_to_mfn(page) in error log
---
v2 changes:
- introduce acquire_reserved_page to retrieve reserved pages from
resv_page_list
- forbid non-zero-order requests in populate_physmap
- let is_domain_static return ((void)(d), false) on x86
---
 xen/common/memory.c     | 23 ++++++++++++++
 xen/common/page_alloc.c | 70 +++++++++++++++++++++++++++++++----------
 xen/include/xen/mm.h    |  1 +
 3 files changed, 77 insertions(+), 17 deletions(-)

Comments

Jan Beulich May 31, 2022, 8:54 a.m. UTC | #1
On 31.05.2022 05:12, Penny Zheng wrote:
> --- a/xen/common/memory.c
> +++ b/xen/common/memory.c
> @@ -245,6 +245,29 @@ static void populate_physmap(struct memop_args *a)
>  
>                  mfn = _mfn(gpfn);
>              }
> +            else if ( is_domain_using_staticmem(d) )
> +            {
> +                /*
> +                 * No easy way to guarantee the retrieved pages are contiguous,
> +                 * so forbid non-zero-order requests here.
> +                 */
> +                if ( a->extent_order != 0 )
> +                {
> +                    gdprintk(XENLOG_WARNING,
> +                             "Cannot allocate static order-%u pages for static %pd\n",
> +                             a->extent_order, d);
> +                    goto out;
> +                }
> +
> +                mfn = acquire_reserved_page(d, a->memflags);
> +                if ( mfn_eq(mfn, INVALID_MFN) )
> +                {
> +                    gdprintk(XENLOG_WARNING,
> +                             "%pd: failed to retrieve a reserved page\n",
> +                             d);
> +                    goto out;
> +                }
> +            }

I'm not convinced of having these gdprintk()s here. The adjacent
is_domain_direct_mapped() code is somewhat different - iirc only Dom0
can be direct-mapped, and Dom0 having a problem can certainly be worth
a log message.

> +/*
> + * Acquire a page from the reserved page list (resv_page_list) when
> + * populating memory for a static domain at runtime.
> + */
> +mfn_t acquire_reserved_page(struct domain *d, unsigned int memflags)
> +{
> +    struct page_info *page;
> +
> +    spin_lock(&d->page_alloc_lock);
> +    /* Acquire a page from reserved page list(resv_page_list). */
> +    page = page_list_remove_head(&d->resv_page_list);
> +    if ( unlikely(!page) )
> +        return INVALID_MFN;
> +    spin_unlock(&d->page_alloc_lock);

You want to drop the lock before returning.
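In standalone form, the rule being pointed out here (every exit path must release the lock) can be sketched as follows. This is a toy example: a pthread mutex and a singly linked list stand in for d->page_alloc_lock and d->resv_page_list, and all names are illustrative rather than Xen's.

```c
#include <pthread.h>
#include <stddef.h>

/* Toy stand-ins for d->page_alloc_lock and d->resv_page_list,
 * used only to illustrate the locking pattern under discussion. */
struct page { struct page *next; };

static pthread_mutex_t page_alloc_lock = PTHREAD_MUTEX_INITIALIZER;
static struct page *resv_page_list;

/*
 * Pop the head of the reserved list. Every exit path releases the
 * lock: in particular, the empty-list case unlocks before returning
 * NULL, instead of returning with the lock still held.
 */
static struct page *take_reserved_page(void)
{
    struct page *pg;

    pthread_mutex_lock(&page_alloc_lock);
    pg = resv_page_list;
    if ( pg )
        resv_page_list = pg->next;
    pthread_mutex_unlock(&page_alloc_lock);

    return pg; /* NULL when the list was empty */
}
```

The patch as posted returns INVALID_MFN from inside the locked region, which is exactly the shape this sketch avoids.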

Jan
Julien Grall May 31, 2022, 9:35 a.m. UTC | #2
Hi Jan,

On 31/05/2022 09:54, Jan Beulich wrote:
> On 31.05.2022 05:12, Penny Zheng wrote:
>> --- a/xen/common/memory.c
>> +++ b/xen/common/memory.c
>> @@ -245,6 +245,29 @@ static void populate_physmap(struct memop_args *a)
>>   
>>                   mfn = _mfn(gpfn);
>>               }
>> +            else if ( is_domain_using_staticmem(d) )
>> +            {
>> +                /*
>> +                 * No easy way to guarantee the retrieved pages are contiguous,
>> +                 * so forbid non-zero-order requests here.
>> +                 */
>> +                if ( a->extent_order != 0 )
>> +                {
>> +                    gdprintk(XENLOG_WARNING,
>> +                             "Cannot allocate static order-%u pages for static %pd\n",
>> +                             a->extent_order, d);
>> +                    goto out;
>> +                }
>> +
>> +                mfn = acquire_reserved_page(d, a->memflags);
>> +                if ( mfn_eq(mfn, INVALID_MFN) )
>> +                {
>> +                    gdprintk(XENLOG_WARNING,
>> +                             "%pd: failed to retrieve a reserved page\n",
>> +                             d);
>> +                    goto out;
>> +                }
>> +            }
> 
> I'm not convinced of having these gdprintk()s here. 

There are a number of times where I wished some error paths would contain
a debug printk(). Instead, I often end up adding them myself when I
struggle to find the reason for a failure.

They are debug printk()s and therefore have no impact on the
production build. So I would leave them in.
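The claim that these messages cost nothing in production rests on debug-only logging compiling away outside debug builds. A standalone sketch of that pattern follows; the macro name debug_log and the check_order function are made up for illustration, while Xen's actual gdprintk() is compiled out in a similar spirit for non-debug builds.

```c
#include <stdio.h>

/*
 * Debug-only logging: the message is emitted in debug builds and
 * compiles to nothing when NDEBUG is defined, so release builds
 * pay no cost for it.
 */
#ifndef NDEBUG
#define debug_log(fmt, ...) fprintf(stderr, fmt, ##__VA_ARGS__)
#else
#define debug_log(fmt, ...) ((void)0)
#endif

/* Reject non-zero-order requests, logging only in debug builds. */
static int check_order(unsigned int order)
{
    if ( order != 0 )
    {
        debug_log("cannot allocate order-%u pages here\n", order);
        return -1;
    }
    return 0;
}
```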

> The adjacent
> is_domain_direct_mapped() code is somewhat different - iirc only Dom0
> can be direct-mapped, and Dom0 having a problem can certainly be worth
> a log message.

There are plans to use direct-mapped domUs.

Cheers,
Jan Beulich May 31, 2022, 9:40 a.m. UTC | #3
On 31.05.2022 11:35, Julien Grall wrote:
> On 31/05/2022 09:54, Jan Beulich wrote:
>> On 31.05.2022 05:12, Penny Zheng wrote:
>>> --- a/xen/common/memory.c
>>> +++ b/xen/common/memory.c
>>> @@ -245,6 +245,29 @@ static void populate_physmap(struct memop_args *a)
>>>   
>>>                   mfn = _mfn(gpfn);
>>>               }
>>> +            else if ( is_domain_using_staticmem(d) )
>>> +            {
>>> +                /*
>>> +                 * No easy way to guarantee the retrieved pages are contiguous,
>>> +                 * so forbid non-zero-order requests here.
>>> +                 */
>>> +                if ( a->extent_order != 0 )
>>> +                {
>>> +                    gdprintk(XENLOG_WARNING,
>>> +                             "Cannot allocate static order-%u pages for static %pd\n",
>>> +                             a->extent_order, d);
>>> +                    goto out;
>>> +                }
>>> +
>>> +                mfn = acquire_reserved_page(d, a->memflags);
>>> +                if ( mfn_eq(mfn, INVALID_MFN) )
>>> +                {
>>> +                    gdprintk(XENLOG_WARNING,
>>> +                             "%pd: failed to retrieve a reserved page\n",
>>> +                             d);
>>> +                    goto out;
>>> +                }
>>> +            }
>>
>> I'm not convinced of having these gdprintk()s here. 
> 
> There are a number of times where I wished some error paths would contain
> a debug printk(). Instead, I often end up adding them myself when I
> struggle to find the reason for a failure.

But this model doesn't scale: we don't want to have log messages on
each and every error path. I agree with having such messages for very
unlikely errors, but order != 0 is clearly a call-site mistake, and
memory allocation requests failing also ought not to be entirely
unexpected.

Jan
Julien Grall May 31, 2022, 9:50 a.m. UTC | #4
Hi,

On 31/05/2022 10:40, Jan Beulich wrote:
> On 31.05.2022 11:35, Julien Grall wrote:
>> On 31/05/2022 09:54, Jan Beulich wrote:
>>> On 31.05.2022 05:12, Penny Zheng wrote:
>>>> --- a/xen/common/memory.c
>>>> +++ b/xen/common/memory.c
>>>> @@ -245,6 +245,29 @@ static void populate_physmap(struct memop_args *a)
>>>>    
>>>>                    mfn = _mfn(gpfn);
>>>>                }
>>>> +            else if ( is_domain_using_staticmem(d) )
>>>> +            {
>>>> +                /*
>>>> +                 * No easy way to guarantee the retrieved pages are contiguous,
>>>> +                 * so forbid non-zero-order requests here.
>>>> +                 */
>>>> +                if ( a->extent_order != 0 )
>>>> +                {
>>>> +                    gdprintk(XENLOG_WARNING,
>>>> +                             "Cannot allocate static order-%u pages for static %pd\n",
>>>> +                             a->extent_order, d);
>>>> +                    goto out;
>>>> +                }
>>>> +
>>>> +                mfn = acquire_reserved_page(d, a->memflags);
>>>> +                if ( mfn_eq(mfn, INVALID_MFN) )
>>>> +                {
>>>> +                    gdprintk(XENLOG_WARNING,
>>>> +                             "%pd: failed to retrieve a reserved page\n",
>>>> +                             d);
>>>> +                    goto out;
>>>> +                }
>>>> +            }
>>>
>>> I'm not convinced of having these gdprintk()s here.
>>
>> There are a number of times where I wished some error paths would contain
>> a debug printk(). Instead, I often end up adding them myself when I
>> struggle to find the reason for a failure.
> 
> But this model doesn't scale: we don't want to have log messages on
> each and every error path. I agree with having such messages for very
> unlikely errors, but order != 0 is clearly a call-site mistake, and
> memory allocation requests failing also ought not to be entirely
> unexpected.
The problem is that, from the guest PoV, the error for both is the same. So
it would be difficult (not impossible) for the developer to know what the
exact problem is.

But note that we already have a gdprintk() for allocation failure in the
non-direct-mapped case. So I think they should be here for consistency.

If you want to drop the existing one, then this is a separate 
discussion. And, just so you know, I would strongly argue against 
removing them for the reason I stated above.

Cheers,

Patch

diff --git a/xen/common/memory.c b/xen/common/memory.c
index f2d009843a..cb330ce877 100644
--- a/xen/common/memory.c
+++ b/xen/common/memory.c
@@ -245,6 +245,29 @@  static void populate_physmap(struct memop_args *a)
 
                 mfn = _mfn(gpfn);
             }
+            else if ( is_domain_using_staticmem(d) )
+            {
+                /*
+                 * No easy way to guarantee the retrieved pages are contiguous,
+                 * so forbid non-zero-order requests here.
+                 */
+                if ( a->extent_order != 0 )
+                {
+                    gdprintk(XENLOG_WARNING,
+                             "Cannot allocate static order-%u pages for static %pd\n",
+                             a->extent_order, d);
+                    goto out;
+                }
+
+                mfn = acquire_reserved_page(d, a->memflags);
+                if ( mfn_eq(mfn, INVALID_MFN) )
+                {
+                    gdprintk(XENLOG_WARNING,
+                             "%pd: failed to retrieve a reserved page\n",
+                             d);
+                    goto out;
+                }
+            }
             else
             {
                 page = alloc_domheap_pages(d, a->extent_order, a->memflags);
diff --git a/xen/common/page_alloc.c b/xen/common/page_alloc.c
index bdd2e62865..9448552bab 100644
--- a/xen/common/page_alloc.c
+++ b/xen/common/page_alloc.c
@@ -2661,9 +2661,8 @@  void free_staticmem_pages(struct page_info *pg, unsigned long nr_mfns,
     }
 }
 
-static bool __init prepare_staticmem_pages(struct page_info *pg,
-                                           unsigned long nr_mfns,
-                                           unsigned int memflags)
+static bool prepare_staticmem_pages(struct page_info *pg, unsigned long nr_mfns,
+                                    unsigned int memflags)
 {
     bool need_tlbflush = false;
     uint32_t tlbflush_timestamp = 0;
@@ -2744,21 +2743,9 @@  static struct page_info * __init acquire_staticmem_pages(mfn_t smfn,
     return pg;
 }
 
-/*
- * Acquire nr_mfns contiguous pages, starting at #smfn, of static memory,
- * then assign them to one specific domain #d.
- */
-int __init acquire_domstatic_pages(struct domain *d, mfn_t smfn,
-                                   unsigned int nr_mfns, unsigned int memflags)
+static int assign_domstatic_pages(struct domain *d, struct page_info *pg,
+                                  unsigned int nr_mfns, unsigned int memflags)
 {
-    struct page_info *pg;
-
-    ASSERT(!in_irq());
-
-    pg = acquire_staticmem_pages(smfn, nr_mfns, memflags);
-    if ( !pg )
-        return -ENOENT;
-
     if ( !d || (memflags & (MEMF_no_owner | MEMF_no_refcount)) )
     {
         /*
@@ -2777,6 +2764,55 @@  int __init acquire_domstatic_pages(struct domain *d, mfn_t smfn,
 
     return 0;
 }
+
+/*
+ * Acquire nr_mfns contiguous pages, starting at #smfn, of static memory,
+ * then assign them to one specific domain #d.
+ */
+int __init acquire_domstatic_pages(struct domain *d, mfn_t smfn,
+                                   unsigned int nr_mfns, unsigned int memflags)
+{
+    struct page_info *pg;
+
+    ASSERT(!in_irq());
+
+    pg = acquire_staticmem_pages(smfn, nr_mfns, memflags);
+    if ( !pg )
+        return -ENOENT;
+
+    if ( assign_domstatic_pages(d, pg, nr_mfns, memflags) )
+        return -EINVAL;
+
+    return 0;
+}
+
+/*
+ * Acquire a page from the reserved page list (resv_page_list) when
+ * populating memory for a static domain at runtime.
+ */
+mfn_t acquire_reserved_page(struct domain *d, unsigned int memflags)
+{
+    struct page_info *page;
+
+    spin_lock(&d->page_alloc_lock);
+    /* Acquire a page from reserved page list(resv_page_list). */
+    page = page_list_remove_head(&d->resv_page_list);
+    if ( unlikely(!page) )
+        return INVALID_MFN;
+    spin_unlock(&d->page_alloc_lock);
+
+    if ( !prepare_staticmem_pages(page, 1, memflags) )
+        goto fail;
+
+    if ( assign_domstatic_pages(d, page, 1, memflags) )
+        goto fail;
+
+    return page_to_mfn(page);
+
+ fail:
+    page_list_add_tail(page, &d->resv_page_list);
+    return INVALID_MFN;
+}
 #endif
 
 /*
diff --git a/xen/include/xen/mm.h b/xen/include/xen/mm.h
index ca2c6f033e..4665bcdd25 100644
--- a/xen/include/xen/mm.h
+++ b/xen/include/xen/mm.h
@@ -92,6 +92,7 @@  void free_staticmem_pages(struct page_info *pg, unsigned long nr_mfns,
 int acquire_domstatic_pages(struct domain *d, mfn_t smfn, unsigned int nr_mfns,
                             unsigned int memflags);
 #endif
+mfn_t acquire_reserved_page(struct domain *d, unsigned int memflags);
 
 /* Map machine page range in Xen virtual address space. */
 int map_pages_to_xen(