
[v7,9/9] xen: retrieve reserved pages on populate_physmap

Message ID 20220620024408.203797-10-Penny.Zheng@arm.com (mailing list archive)
State Superseded
Series populate/unpopulate memory when domain on static allocation

Commit Message

Penny Zheng June 20, 2022, 2:44 a.m. UTC
When a static domain populates memory through populate_physmap at runtime,
it shall retrieve reserved pages from resv_page_list to make sure that
guest RAM is still restricted in statically configured memory regions.
This commit also introduces a new helper acquire_reserved_page to make it work.

Signed-off-by: Penny Zheng <penny.zheng@arm.com>
---
v7 changes:
- remove the lock, since we add the page to rsv_page_list after it has
been totally freed.
---
v6 changes:
- drop the lock before returning
---
v5 changes:
- extract common code for assigning pages into a helper assign_domstatic_pages
- refine commit message
- remove stub function acquire_reserved_page
- Alloc/free of memory can happen concurrently. So access to rsv_page_list
needs to be protected with a spinlock
---
v4 changes:
- drop __init from acquire_domstatic_pages (missed previously)
- add the page back to the reserved list in case of error
- remove redundant printk
- refine log message and make it warn level
---
v3 changes:
- move is_domain_using_staticmem to the common header file
- remove #ifdef CONFIG_STATIC_MEMORY-ary
- remove meaningless page_to_mfn(page) in error log
---
v2 changes:
- introduce acquire_reserved_page to retrieve reserved pages from
resv_page_list
- forbid non-zero-order requests in populate_physmap
- let is_domain_static return ((void)(d), false) on x86
---
 xen/common/memory.c     | 23 ++++++++++++++
 xen/common/page_alloc.c | 68 ++++++++++++++++++++++++++++++-----------
 xen/include/xen/mm.h    |  1 +
 3 files changed, 75 insertions(+), 17 deletions(-)

Comments

Jan Beulich June 22, 2022, 9:26 a.m. UTC | #1
On 20.06.2022 04:44, Penny Zheng wrote:
> When a static domain populates memory through populate_physmap at runtime,
> it shall retrieve reserved pages from resv_page_list to make sure that
> guest RAM is still restricted in statically configured memory regions.
> This commit also introduces a new helper acquire_reserved_page to make it work.
> 
> Signed-off-by: Penny Zheng <penny.zheng@arm.com>
> ---
> v7 changes:
> - remove the lock, since we add the page to rsv_page_list after it has
> been totally freed.

Was this meant to go into another patch? I can't seem to locate the respective
code here, and the remark also doesn't provide enough context.

Jan
Julien Grall June 24, 2022, 7:50 p.m. UTC | #2
Hi Penny,

On 20/06/2022 03:44, Penny Zheng wrote:
> When a static domain populates memory through populate_physmap at runtime,
> it shall retrieve reserved pages from resv_page_list to make sure that
> guest RAM is still restricted in statically configured memory regions.
> This commit also introduces a new helper acquire_reserved_page to make it work.
> 
> Signed-off-by: Penny Zheng <penny.zheng@arm.com>
> ---
> v7 changes:
> - remove the lock, since we add the page to rsv_page_list after it has
> been totally freed.

Hmmm... Adding the page after it has been totally freed doesn't mean you
can get away without the lock. AFAICT you can still have concurrent
free/allocate operations that modify the list.

Therefore, if you add/remove pages without the lock, you could end up
corrupting the list.

If you disagree, then please point out which lock (or mechanism) will 
prevent concurrent access.

Cheers,
Penny Zheng June 27, 2022, 10:11 a.m. UTC | #3
Hi Julien

> -----Original Message-----
> From: Julien Grall <julien@xen.org>
> Sent: Saturday, June 25, 2022 3:51 AM
> To: Penny Zheng <Penny.Zheng@arm.com>; xen-devel@lists.xenproject.org
> Cc: Wei Chen <Wei.Chen@arm.com>; Andrew Cooper
> <andrew.cooper3@citrix.com>; George Dunlap <george.dunlap@citrix.com>;
> Jan Beulich <jbeulich@suse.com>; Stefano Stabellini <sstabellini@kernel.org>;
> Wei Liu <wl@xen.org>
> Subject: Re: [PATCH v7 9/9] xen: retrieve reserved pages on
> populate_physmap
> 
> Hi Penny,
> 
> On 20/06/2022 03:44, Penny Zheng wrote:
> > When a static domain populates memory through populate_physmap at
> > runtime, it shall retrieve reserved pages from resv_page_list to make
> > sure that guest RAM is still restricted in statically configured memory
> regions.
> > This commit also introduces a new helper acquire_reserved_page to make
> it work.
> >
> > Signed-off-by: Penny Zheng <penny.zheng@arm.com>
> > ---
> > v7 changes:
> > - remove the lock, since we add the page to rsv_page_list after it has
> > been totally freed.
> 
> Hmmm... Adding the page after it has been totally freed doesn't mean you
> can get away without the lock. AFAICT you can still have concurrent free/allocate
> operations that modify the list.
> 
> Therefore, if you add/remove pages without the lock, you could end up
> corrupting the list.
> 
> If you disagree, then please point out which lock (or mechanism) will prevent
> concurrent access.
> 

Ok. Combined with the comments on the last series, you are suggesting that we
need to add two locks, right?

One is the lock for concurrent free/allocation on page_info, for which we will
use the heap_lock: one use stays in free_staticmem_pages, and one in its reverse
function prepare_staticmem_pages.

The other is for concurrent free/allocation on resv_page_list, and we will use
d->page_alloc_lock to guard it: one use stays in put_static_page, and another
in its reverse function acquire_reserved_page.

> Cheers,
> 
> --
> Julien Grall
Julien Grall June 27, 2022, 10:54 a.m. UTC | #4
Hi Penny,

On 27/06/2022 11:11, Penny Zheng wrote:
>> -----Original Message-----
>> From: Julien Grall <julien@xen.org>
>> Sent: Saturday, June 25, 2022 3:51 AM
>> To: Penny Zheng <Penny.Zheng@arm.com>; xen-devel@lists.xenproject.org
>> Cc: Wei Chen <Wei.Chen@arm.com>; Andrew Cooper
>> <andrew.cooper3@citrix.com>; George Dunlap <george.dunlap@citrix.com>;
>> Jan Beulich <jbeulich@suse.com>; Stefano Stabellini <sstabellini@kernel.org>;
>> Wei Liu <wl@xen.org>
>> Subject: Re: [PATCH v7 9/9] xen: retrieve reserved pages on
>> populate_physmap
>>
>> Hi Penny,
>>
>> On 20/06/2022 03:44, Penny Zheng wrote:
>>> When a static domain populates memory through populate_physmap at
>>> runtime, it shall retrieve reserved pages from resv_page_list to make
>>> sure that guest RAM is still restricted in statically configured memory
>> regions.
>>> This commit also introduces a new helper acquire_reserved_page to make
>> it work.
>>>
>>> Signed-off-by: Penny Zheng <penny.zheng@arm.com>
>>> ---
>>> v7 changes:
>>> - remove the lock, since we add the page to rsv_page_list after it has
>>> been totally freed.
>>
>> Hmmm... Adding the page after it has been totally freed doesn't mean you
>> can get away without the lock. AFAICT you can still have concurrent free/allocate
>> operations that modify the list.
>>
>> Therefore, if you add/remove pages without the lock, you could end up
>> corrupting the list.
>>
>> If you disagree, then please point out which lock (or mechanism) will prevent
>> concurrent access.
>>
> 
> Ok. Combined with the comments on the last series, you are suggesting that we
> need to add two locks, right?

We at least need the second lock (i.e. d->page_alloc_lock). The first
lock (i.e. the heap_lock) may not be necessary if all the static memory is
allocated to a domain, since ordering is then guaranteed by the point at which
pages are added to the resv_page_list.

Unless there is an ordering issue between the locks, I would for now
suggest keeping both. We can refine this afterwards.

Cheers,
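
For reference, a minimal sketch of the locking discussed above, assuming the
d->page_alloc_lock approach Julien suggests is adopted for resv_page_list. This
is not part of the v7 patch below; the matching put_static_page would take the
same lock before re-adding a page to the list, and the heap_lock use mentioned
in the thread would stay inside prepare_staticmem_pages/free_staticmem_pages.

mfn_t acquire_reserved_page(struct domain *d, unsigned int memflags)
{
    struct page_info *page;

    /* d->page_alloc_lock serialises add/remove on d->resv_page_list. */
    spin_lock(&d->page_alloc_lock);
    page = page_list_remove_head(&d->resv_page_list);
    spin_unlock(&d->page_alloc_lock);

    if ( unlikely(!page) )
        return INVALID_MFN;

    /* Page-state changes are covered by the heap_lock taken in the helpers. */
    if ( !prepare_staticmem_pages(page, 1, memflags) )
        goto fail;

    if ( assign_domstatic_pages(d, page, 1, memflags) )
        goto fail;

    return page_to_mfn(page);

 fail:
    /* Put the page back on the reserved list, again under the lock. */
    spin_lock(&d->page_alloc_lock);
    page_list_add_tail(page, &d->resv_page_list);
    spin_unlock(&d->page_alloc_lock);
    return INVALID_MFN;
}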

Patch

diff --git a/xen/common/memory.c b/xen/common/memory.c
index f2d009843a..cb330ce877 100644
--- a/xen/common/memory.c
+++ b/xen/common/memory.c
@@ -245,6 +245,29 @@  static void populate_physmap(struct memop_args *a)
 
                 mfn = _mfn(gpfn);
             }
+            else if ( is_domain_using_staticmem(d) )
+            {
+                /*
+                 * No easy way to guarantee the retrieved pages are contiguous,
+                 * so forbid non-zero-order requests here.
+                 */
+                if ( a->extent_order != 0 )
+                {
+                    gdprintk(XENLOG_WARNING,
+                             "Cannot allocate static order-%u pages for static %pd\n",
+                             a->extent_order, d);
+                    goto out;
+                }
+
+                mfn = acquire_reserved_page(d, a->memflags);
+                if ( mfn_eq(mfn, INVALID_MFN) )
+                {
+                    gdprintk(XENLOG_WARNING,
+                             "%pd: failed to retrieve a reserved page\n",
+                             d);
+                    goto out;
+                }
+            }
             else
             {
                 page = alloc_domheap_pages(d, a->extent_order, a->memflags);
diff --git a/xen/common/page_alloc.c b/xen/common/page_alloc.c
index fee396a92d..74628889ea 100644
--- a/xen/common/page_alloc.c
+++ b/xen/common/page_alloc.c
@@ -2673,9 +2673,8 @@  void free_staticmem_pages(struct page_info *pg, unsigned long nr_mfns,
     spin_unlock(&heap_lock);
 }
 
-static bool __init prepare_staticmem_pages(struct page_info *pg,
-                                           unsigned long nr_mfns,
-                                           unsigned int memflags)
+static bool prepare_staticmem_pages(struct page_info *pg, unsigned long nr_mfns,
+                                    unsigned int memflags)
 {
     bool need_tlbflush = false;
     uint32_t tlbflush_timestamp = 0;
@@ -2756,21 +2755,9 @@  static struct page_info * __init acquire_staticmem_pages(mfn_t smfn,
     return pg;
 }
 
-/*
- * Acquire nr_mfns contiguous pages, starting at #smfn, of static memory,
- * then assign them to one specific domain #d.
- */
-int __init acquire_domstatic_pages(struct domain *d, mfn_t smfn,
-                                   unsigned int nr_mfns, unsigned int memflags)
+static int assign_domstatic_pages(struct domain *d, struct page_info *pg,
+                                  unsigned int nr_mfns, unsigned int memflags)
 {
-    struct page_info *pg;
-
-    ASSERT_ALLOC_CONTEXT();
-
-    pg = acquire_staticmem_pages(smfn, nr_mfns, memflags);
-    if ( !pg )
-        return -ENOENT;
-
     if ( !d || (memflags & (MEMF_no_owner | MEMF_no_refcount)) )
     {
         /*
@@ -2789,6 +2776,53 @@  int __init acquire_domstatic_pages(struct domain *d, mfn_t smfn,
 
     return 0;
 }
+
+/*
+ * Acquire nr_mfns contiguous pages, starting at #smfn, of static memory,
+ * then assign them to one specific domain #d.
+ */
+int __init acquire_domstatic_pages(struct domain *d, mfn_t smfn,
+                                   unsigned int nr_mfns, unsigned int memflags)
+{
+    struct page_info *pg;
+
+    ASSERT_ALLOC_CONTEXT();
+
+    pg = acquire_staticmem_pages(smfn, nr_mfns, memflags);
+    if ( !pg )
+        return -ENOENT;
+
+    if ( assign_domstatic_pages(d, pg, nr_mfns, memflags) )
+        return -EINVAL;
+
+    return 0;
+}
+
+/*
+ * Acquire a page from reserved page list(resv_page_list), when populating
+ * memory for static domain on runtime.
+ */
+mfn_t acquire_reserved_page(struct domain *d, unsigned int memflags)
+{
+    struct page_info *page;
+
+    /* Acquire a page from reserved page list(resv_page_list). */
+    page = page_list_remove_head(&d->resv_page_list);
+    if ( unlikely(!page) )
+        return INVALID_MFN;
+
+    if ( !prepare_staticmem_pages(page, 1, memflags) )
+        goto fail;
+
+    if ( assign_domstatic_pages(d, page, 1, memflags) )
+        goto fail;
+
+    return page_to_mfn(page);
+
+ fail:
+    page_list_add_tail(page, &d->resv_page_list);
+    return INVALID_MFN;
+}
 #endif
 
 /*
diff --git a/xen/include/xen/mm.h b/xen/include/xen/mm.h
index 68a647ceae..e6803fd8a2 100644
--- a/xen/include/xen/mm.h
+++ b/xen/include/xen/mm.h
@@ -99,6 +99,7 @@  int acquire_domstatic_pages(struct domain *d, mfn_t smfn, unsigned int nr_mfns,
 #else
 #define put_static_pages(d, page, order) ((void)(d), (void)(page), (void)(order))
 #endif
+mfn_t acquire_reserved_page(struct domain *d, unsigned int memflags);
 
 /* Map machine page range in Xen virtual address space. */
 int map_pages_to_xen(