diff mbox series

[RFC,69/84] page_alloc: comments on (un)mapping pages in xenheap allocations.

Message ID 71e585138508d7d46c5349f72e1dfd3df8f2b595.1569489002.git.hongyax@amazon.com (mailing list archive)
State New, archived
Series Remove direct map from Xen

Commit Message

Xia, Hongyan Sept. 26, 2019, 9:46 a.m. UTC
From: Hongyan Xia <hongyax@amazon.com>

Signed-off-by: Hongyan Xia <hongyax@amazon.com>
---
 xen/common/page_alloc.c | 5 +++++
 1 file changed, 5 insertions(+)

Comments

Julien Grall Sept. 26, 2019, 10:42 a.m. UTC | #1
Hi,

On 9/26/19 10:46 AM, hongyax@amazon.com wrote:
> From: Hongyan Xia <hongyax@amazon.com>
> 

This patch should be squashed into the previous patch (#68). That would 
also help the review, as it gives more insight into why you need to 
map/unmap.

Cheers,
Patch

diff --git a/xen/common/page_alloc.c b/xen/common/page_alloc.c
index 4ec6299ba8..a00db4c0d9 100644
--- a/xen/common/page_alloc.c
+++ b/xen/common/page_alloc.c
@@ -2212,6 +2212,10 @@  void *alloc_xenheap_pages(unsigned int order, unsigned int memflags)
         pg[i].count_info |= PGC_xen_heap;
 
     ret = page_to_virt(pg);
+    /*
+     * The direct map is no longer always fully mapped. Populate the
+     * direct map region for this allocation on demand, for security.
+     */
     map_pages_to_xen((unsigned long)ret, page_to_mfn(pg),
                      1UL << order, PAGE_HYPERVISOR);
 
@@ -2234,6 +2238,7 @@  void free_xenheap_pages(void *v, unsigned int order)
         pg[i].count_info &= ~PGC_xen_heap;
 
     ASSERT((unsigned long)v >= DIRECTMAP_VIRT_START);
+    /* Tear down the 1:1 mapping in this region for memory safety. */
     map_pages_to_xen((unsigned long)v, INVALID_MFN, 1UL << order, _PAGE_NONE);
 
     free_heap_pages(pg, order, true);