[RFC,V2] xen/gnttab: Store frame GFN in struct page_info on Arm

Message ID 1631652245-30746-1-git-send-email-olekstysh@gmail.com (mailing list archive)
State Superseded
Series [RFC,V2] xen/gnttab: Store frame GFN in struct page_info on Arm

Commit Message

Oleksandr Tyshchenko Sept. 14, 2021, 8:44 p.m. UTC
From: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>

Rework the Arm implementation to store the grant table frame GFN
directly in struct page_info instead of keeping it in standalone
status/shared arrays.

To cover a 64-bit/40-bit IPA on Arm64/Arm32 we need space to hold a
52-bit/28-bit value respectively. In order not to grow the size of
struct page_info, borrow the required number of bits from type_info's
count portion, which the current code can tolerate (currently only
1 bit of the count is used on Arm).
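
For clarity, the required widths follow directly from the IPA sizes,
assuming 4KiB pages (PAGE_SHIFT == 12):

  Arm64: 64-bit IPA -> GFN needs 64 - 12 = 52 bits
  Arm32: 40-bit IPA -> GFN needs 40 - 12 = 28 bits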

Introduce the corresponding PGT_* constructs and access macros.
Update the existing gnttab macros to deal with the GFN value in its
new location. Also update the use of the count portion on Arm in
share_xen_page_with_guest().

Update the P2M code to clear said GFN portion when putting a
reference on the grant table page in p2m_put_l3_page(). The added
check is based on the assumption that a grant table page is a
xenheap page whose entry has the p2m_ram_rw type, which is correct.
However, the check is not entirely precise and we might end up
clearing the GFN portion for other xenheap pages with the same p2m
type. This is considered harmless, since only grant table pages
actually use that portion.

For everything to work correctly, introduce arch-specific macros to
be called from alloc_xenheap_pages()/free_xenheap_pages(). On Arm
these clear the GFN portion before use and make sure the portion is
still clear after use; on x86 they are just stubs.

This patch is intended to fix a potential issue on Arm which might
happen when remapping a grant-table frame. A guest (or the toolstack)
will unmap the grant-table frame using XENMEM_remove_physmap. This is
a generic hypercall, so on x86 we rely on the fact that the M2P entry
will be cleared on removal. For architectures without an M2P, the GFN
would still be present in the grant frame/status array. So on the
next call to map the page, we would end up asking the P2M to remove
whatever mapping is at the given GFN. This could well be another
mapping.
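
Concretely, the problematic sequence looks roughly like this
(illustrative, not taken from an actual trace):
 1. A grant-table frame is mapped at GFN X; X gets recorded in the
    shared/status array.
 2. The guest (or toolstack) unmaps it via XENMEM_remove_physmap;
    with no M2P on Arm, the recorded X is never cleared.
 3. GFN X is later reused for an unrelated mapping.
 4. On the next request to map the grant-table frame, the P2M is
    first asked to remove whatever is mapped at the stale GFN X,
    destroying that unrelated mapping.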

Besides that, this patch simplifies the arch code on Arm by removing
the arrays and the corresponding management code; as a result, the
gnttab_init_arch/gnttab_destroy_arch helpers and struct
grant_table_arch become useless and can be dropped globally.

Suggested-by: Julien Grall <jgrall@amazon.com>
Signed-off-by: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>
---
You can find the related discussions at:
https://lore.kernel.org/xen-devel/93d0df14-2c8a-c2e3-8c51-54412190171c@xen.org/
https://lore.kernel.org/xen-devel/1628890077-12545-1-git-send-email-olekstysh@gmail.com/
https://lore.kernel.org/xen-devel/1631228688-30347-1-git-send-email-olekstysh@gmail.com/

! Please note, there is still an unresolved locking question here for
which I have not yet found a suitable solution !

According to the internal conversation:
the GFN field in struct page_info is now accessed from
gnttab_set_frame_gfn() in the grant table code and from
page_set_frame_gfn() in the P2M code (the former uses the latter).

We need to prevent concurrent access to this field. But we cannot
grab the grant table lock from the P2M code because that would
introduce a lock inversion: page_set_frame_gfn() would be called from
the P2M code with the p2m lock held and would then need to acquire
the grant table lock, while gnttab_map_frame() would do the inverse.
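
To make the inversion explicit, a rough sketch of the two orderings
described above (illustrative only, not actual code):

  P2M path:          p2m write lock
                       -> p2m_put_l3_page() -> page_set_frame_gfn()
                          /* would additionally need the grant table lock */

  Grant table path:  grant table write lock
                       -> gnttab_map_frame() -> gnttab_set_frame_gfn()
                          /* the subsequent P2M update takes the p2m lock */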

Changes RFC1 -> RFC2:
 - update patch description
 - add/update comments in code
 - clarify check in p2m_put_l3_page()
 - introduce arch_alloc_xenheap_page() and arch_free_xenheap_page()
   and drop page_arch_init()
 - add ASSERT to gnttab_shared_page() and gnttab_status_page()
 - rework changes to Arm's struct page_info: do not split type_info,
   allocate GFN portion by reducing count portion, create corresponding
   PGT_* construct, etc
 - update page_get_frame_gfn() and page_set_frame_gfn()
 - update the use of count portion on Arm
 - drop the leading underscore in the macro parameter names
---
 xen/arch/arm/mm.c                 |  8 ++++--
 xen/arch/arm/p2m.c                | 21 ++++++++++++---
 xen/common/grant_table.c          |  9 -------
 xen/common/page_alloc.c           |  6 +++++
 xen/include/asm-arm/grant_table.h | 57 +++++++++++++++------------------------
 xen/include/asm-arm/mm.h          | 36 ++++++++++++++++++++++---
 xen/include/asm-x86/grant_table.h |  5 ----
 xen/include/asm-x86/mm.h          |  4 +++
 8 files changed, 87 insertions(+), 59 deletions(-)

Comments

Jan Beulich Sept. 15, 2021, 10:06 a.m. UTC | #1
On 14.09.2021 22:44, Oleksandr Tyshchenko wrote:
> --- a/xen/arch/arm/mm.c
> +++ b/xen/arch/arm/mm.c
> @@ -1376,14 +1376,18 @@ unsigned long domain_get_maximum_gpfn(struct domain *d)
>  void share_xen_page_with_guest(struct page_info *page, struct domain *d,
>                                 enum XENSHARE_flags flags)
>  {
> +    unsigned long type_info;
> +
>      if ( page_get_owner(page) == d )
>          return;
>  
>      spin_lock(&d->page_alloc_lock);
>  
>      /* The incremented type count pins as writable or read-only. */
> -    page->u.inuse.type_info =
> -        (flags == SHARE_ro ? PGT_none : PGT_writable_page) | 1;
> +    type_info = page->u.inuse.type_info & ~(PGT_type_mask | PGT_count_mask);
> +    page->u.inuse.type_info = type_info |
> +        (flags == SHARE_ro ? PGT_none : PGT_writable_page) |
> +        (1UL << PGT_count_base);

Just as a note: If this were x86 code, I'd request the redundant
PGT_count_base to be dropped. Constructs like the above are what we
have MASK_INSR() for.
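
For illustration, the last statement could then read (a sketch of the
suggestion, not part of the patch; MASK_INSR() inserts a value into
the field described by a mask):

    page->u.inuse.type_info = type_info |
        (flags == SHARE_ro ? PGT_none : PGT_writable_page) |
        MASK_INSR(1, PGT_count_mask);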

> --- a/xen/common/page_alloc.c
> +++ b/xen/common/page_alloc.c
> @@ -2204,7 +2204,10 @@ void *alloc_xenheap_pages(unsigned int order, unsigned int memflags)
>          return NULL;
>  
>      for ( i = 0; i < (1u << order); i++ )
> +    {
>          pg[i].count_info |= PGC_xen_heap;
> +        arch_alloc_xenheap_page(&pg[i]);
> +    }
>  
>      return page_to_virt(pg);
>  }
> @@ -2222,7 +2225,10 @@ void free_xenheap_pages(void *v, unsigned int order)
>      pg = virt_to_page(v);
>  
>      for ( i = 0; i < (1u << order); i++ )
> +    {
>          pg[i].count_info &= ~PGC_xen_heap;
> +        arch_free_xenheap_page(&pg[i]);
> +    }
>  
>      free_heap_pages(pg, order, true);
>  }

You appear to be adjusting only the !SEPARATE_XENHEAP instances of the
functions. Isn't 32-bit Arm using SEPARATE_XENHEAP?
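
For reference, the same per-page hooks would presumably need adding to
the SEPARATE_XENHEAP variants as well, along the lines of (sketch
only, the existing function bodies are omitted here):

    for ( i = 0; i < (1u << order); i++ )
        arch_alloc_xenheap_page(&pg[i]);   /* in alloc_xenheap_pages() */

    for ( i = 0; i < (1u << order); i++ )
        arch_free_xenheap_page(&pg[i]);    /* in free_xenheap_pages() */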

> --- a/xen/include/asm-arm/mm.h
> +++ b/xen/include/asm-arm/mm.h
> @@ -98,9 +98,18 @@ struct page_info
>  #define PGT_writable_page PG_mask(1, 1)  /* has writable mappings?         */
>  #define PGT_type_mask     PG_mask(1, 1)  /* Bits 31 or 63.                 */
>  
> - /* Count of uses of this frame as its current type. */
> -#define PGT_count_width   PG_shift(2)
> -#define PGT_count_mask    ((1UL<<PGT_count_width)-1)
> + /* 3-bit count of uses of this frame as its current type. */
> +#define PGT_count_base    PG_shift(4)
> +#define PGT_count_mask    PG_mask(7, 4)
> +
> +/*
> + * Stored in bits [27:0] or [59:0] GFN if page is used for grant table frame.

I don't know enough Arm details to tell whether this is properly
one bit more than the maximum number of physical address bits.
Without the extra bit you wouldn't be able to tell apart a
guest specified GFN matching the value of PGT_INVALID_FRAME_GFN
from an entry which was set from INVALID_GFN.

> + * This only valid for the xenheap pages.
> + */
> +#define PGT_gfn_width     PG_shift(4)
> +#define PGT_gfn_mask      ((1UL<<PGT_gfn_width)-1)

Any reason you don't use PG_mask() here? Any open-coding is prone
to people later making mistakes.

Jan
Oleksandr Tyshchenko Sept. 15, 2021, 10:13 p.m. UTC | #2
On 15.09.21 13:06, Jan Beulich wrote:

Hi Jan

> On 14.09.2021 22:44, Oleksandr Tyshchenko wrote:
>> --- a/xen/arch/arm/mm.c
>> +++ b/xen/arch/arm/mm.c
>> @@ -1376,14 +1376,18 @@ unsigned long domain_get_maximum_gpfn(struct domain *d)
>>   void share_xen_page_with_guest(struct page_info *page, struct domain *d,
>>                                  enum XENSHARE_flags flags)
>>   {
>> +    unsigned long type_info;
>> +
>>       if ( page_get_owner(page) == d )
>>           return;
>>   
>>       spin_lock(&d->page_alloc_lock);
>>   
>>       /* The incremented type count pins as writable or read-only. */
>> -    page->u.inuse.type_info =
>> -        (flags == SHARE_ro ? PGT_none : PGT_writable_page) | 1;
>> +    type_info = page->u.inuse.type_info & ~(PGT_type_mask | PGT_count_mask);
>> +    page->u.inuse.type_info = type_info |
>> +        (flags == SHARE_ro ? PGT_none : PGT_writable_page) |
>> +        (1UL << PGT_count_base);
> Just as a note: If this was x86 code, I'd request the redundant
> PGT_count_base to be dropped. Constructs like the above is what we
> have MASK_INSR() for.

I got it, I will look at it.


>
>> --- a/xen/common/page_alloc.c
>> +++ b/xen/common/page_alloc.c
>> @@ -2204,7 +2204,10 @@ void *alloc_xenheap_pages(unsigned int order, unsigned int memflags)
>>           return NULL;
>>   
>>       for ( i = 0; i < (1u << order); i++ )
>> +    {
>>           pg[i].count_info |= PGC_xen_heap;
>> +        arch_alloc_xenheap_page(&pg[i]);
>> +    }
>>   
>>       return page_to_virt(pg);
>>   }
>> @@ -2222,7 +2225,10 @@ void free_xenheap_pages(void *v, unsigned int order)
>>       pg = virt_to_page(v);
>>   
>>       for ( i = 0; i < (1u << order); i++ )
>> +    {
>>           pg[i].count_info &= ~PGC_xen_heap;
>> +        arch_free_xenheap_page(&pg[i]);
>> +    }
>>   
>>       free_heap_pages(pg, order, true);
>>   }
> You look to only be adjusting the !SEPARATE_XENHEAP instances of the
> functions. Isn't 32-bit Arm using SEPARATE_XENHEAP?

Hmm, looks like yes, thank you. At least config.h defines that on Arm32. 
I will update the instances.


>
>> --- a/xen/include/asm-arm/mm.h
>> +++ b/xen/include/asm-arm/mm.h
>> @@ -98,9 +98,18 @@ struct page_info
>>   #define PGT_writable_page PG_mask(1, 1)  /* has writable mappings?         */
>>   #define PGT_type_mask     PG_mask(1, 1)  /* Bits 31 or 63.                 */
>>   
>> - /* Count of uses of this frame as its current type. */
>> -#define PGT_count_width   PG_shift(2)
>> -#define PGT_count_mask    ((1UL<<PGT_count_width)-1)
>> + /* 3-bit count of uses of this frame as its current type. */
>> +#define PGT_count_base    PG_shift(4)
>> +#define PGT_count_mask    PG_mask(7, 4)
>> +
>> +/*
>> + * Stored in bits [27:0] or [59:0] GFN if page is used for grant table frame.
> I don't know enough Arm details to tell whether this is properly
> one bit more than the maximum number of physical address bits.
> Without the extra bit you wouldn't be able to tell apart a
> guest specified GFN matching the value of PGT_INVALID_FRAME_GFN
> from an entry which was set from INVALID_GFN.
Really good point.

1. On Arm64 p2m_ipa_bits could (theoretically) be 64, which, I assume,
corresponds to the maximum guest physical address (1 << 64) - 1 =
0xFFFFFFFFFFFFFFFF.
To store that GFN we need 52 bits, but we provide a 60-bit field,
which is more than enough, I think. In practice, the maximum
supported p2m_ipa_bits is 48, so the maximum supported GFN will only
occupy 36 bits. Everything is ok here.
2. On Arm32 p2m_ipa_bits is 40, which, I assume, corresponds to the
maximum guest physical address (1 << 40) - 1 = 0xFFFFFFFFFF. To store
that GFN we need 28 bits. If I did the calculation correctly, what we
have on Arm32 is that PGT_INVALID_FRAME_GFN == the maximum guest GFN,
and it looks like we need an extra bit on Arm32. Do you think we need
to borrow one more bit from the count portion to stay on the safe
side?
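
Spelling the Arm32 numbers out (4KiB pages assumed):

    maximum GFN             = (1 << (40 - 12)) - 1 = (1 << 28) - 1 = 0xfffffff
    PGT_gfn_mask (28 bits)  = (1 << 28) - 1                        = 0xfffffff
    PGT_INVALID_FRAME_GFN   = PGT_gfn_mask                         = 0xfffffff

i.e. the sentinel collides with the last valid GFN, hence the question
about an extra bit.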


>
>> + * This only valid for the xenheap pages.
>> + */
>> +#define PGT_gfn_width     PG_shift(4)
>> +#define PGT_gfn_mask      ((1UL<<PGT_gfn_width)-1)
> Any reason you don't use PG_mask() here? Any open-coding is prone
> to people later making mistakes.
I failed to come up with an idea how to do that without an #ifdef. As
the GFN starts at bit 0, a different first parameter would be needed
for PG_mask() on 32-bit and 64-bit systems.
I wonder whether PGC_count_mask/PGT_count_mask are open-coded on
Arm/x86 for the same reason.
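
One possible shape of a PG_mask()-based definition, needing an #ifdef
(illustrative only; the exact literals and idx values are assumptions):

    #ifdef CONFIG_ARM_64
    #define PGT_gfn_mask      PG_mask(0xfffffffffffffff, 64) /* bits [59:0] */
    #else
    #define PGT_gfn_mask      PG_mask(0xfffffff, 32)         /* bits [27:0] */
    #endif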
Jan Beulich Sept. 16, 2021, 6:38 a.m. UTC | #3
On 16.09.2021 00:13, Oleksandr wrote:
> On 15.09.21 13:06, Jan Beulich wrote:
>> On 14.09.2021 22:44, Oleksandr Tyshchenko wrote:
>>> --- a/xen/include/asm-arm/mm.h
>>> +++ b/xen/include/asm-arm/mm.h
>>> @@ -98,9 +98,18 @@ struct page_info
>>>   #define PGT_writable_page PG_mask(1, 1)  /* has writable mappings?         */
>>>   #define PGT_type_mask     PG_mask(1, 1)  /* Bits 31 or 63.                 */
>>>   
>>> - /* Count of uses of this frame as its current type. */
>>> -#define PGT_count_width   PG_shift(2)
>>> -#define PGT_count_mask    ((1UL<<PGT_count_width)-1)
>>> + /* 3-bit count of uses of this frame as its current type. */
>>> +#define PGT_count_base    PG_shift(4)
>>> +#define PGT_count_mask    PG_mask(7, 4)
>>> +
>>> +/*
>>> + * Stored in bits [27:0] or [59:0] GFN if page is used for grant table frame.
>> I don't know enough Arm details to tell whether this is properly
>> one bit more than the maximum number of physical address bits.
>> Without the extra bit you wouldn't be able to tell apart a
>> guest specified GFN matching the value of PGT_INVALID_FRAME_GFN
>> from an entry which was set from INVALID_GFN.
> Really good point.
> 
> 1. On Arm64 the p2m_ipa_bits could (theoretically) be 64-bit which, I 
> assume, corresponds to the maximum guest physical address (1 << 64) - 1 
> = 0xFFFFFFFFFFFFFFFF.
> To store that GFN we need 52-bit. But, we provide 60-bit field which is 
> more than enough, I think. Practically, the maximum supported 
> p2m_ipa_bits is 48-bit, so the maximum supported GFN will occupy 36-bit 
> only. Everything is ok here.
> 2. On Arm32 the p2m_ipa_bits is 40-bit which, I assume, corresponds to 
> the maximum guest physical address (1 << 40) - 1 = 0xFFFFFFFFFF. To 
> store that GFN we need 28-bit. If I did the calculation correctly, what 
> we have on Arm32 is that PGT_INVALID_FRAME_GFN == maximum guest physical 
> address and it looks like we need and extra bit on Arm32. Do you think 
> we need to borrow one more bit from the count portion to stay on the 
> safe side?

I think so, unless there are further restrictions on the GFN range
that I'm unaware of.

For 64-bit, if you only need 52 bits, why do you make the field 60
bits wide? I'd recommend against "wasting" bits. Better keep the
count field as wide as possible.

>>> + * This only valid for the xenheap pages.
>>> + */
>>> +#define PGT_gfn_width     PG_shift(4)
>>> +#define PGT_gfn_mask      ((1UL<<PGT_gfn_width)-1)
>> Any reason you don't use PG_mask() here? Any open-coding is prone
>> to people later making mistakes.
> I failed to come up with idea how to do that without #ifdef. As GFN 
> starts at bit 0 different first parameter would be needed for PG_mask on 
> 32-bit and 64-bit systems.
> I wonder whether PGC_count_mask/PGT_count_mask are open-coded on Arm/x86 
> because of the same reason.

Hmm, that pre-existing open-coding isn't nice, but is perhaps a
good enough reason to keep what you have. (Personally I wouldn't
be afraid of adding an #ifdef here, but I realize views there may
differ.)

Jan
Oleksandr Tyshchenko Sept. 16, 2021, 1:06 p.m. UTC | #4
On 16.09.21 09:38, Jan Beulich wrote:

Hi Jan

> On 16.09.2021 00:13, Oleksandr wrote:
>> On 15.09.21 13:06, Jan Beulich wrote:
>>> On 14.09.2021 22:44, Oleksandr Tyshchenko wrote:
>>>> --- a/xen/include/asm-arm/mm.h
>>>> +++ b/xen/include/asm-arm/mm.h
>>>> @@ -98,9 +98,18 @@ struct page_info
>>>>    #define PGT_writable_page PG_mask(1, 1)  /* has writable mappings?         */
>>>>    #define PGT_type_mask     PG_mask(1, 1)  /* Bits 31 or 63.                 */
>>>>    
>>>> - /* Count of uses of this frame as its current type. */
>>>> -#define PGT_count_width   PG_shift(2)
>>>> -#define PGT_count_mask    ((1UL<<PGT_count_width)-1)
>>>> + /* 3-bit count of uses of this frame as its current type. */
>>>> +#define PGT_count_base    PG_shift(4)
>>>> +#define PGT_count_mask    PG_mask(7, 4)
>>>> +
>>>> +/*
>>>> + * Stored in bits [27:0] or [59:0] GFN if page is used for grant table frame.
>>> I don't know enough Arm details to tell whether this is properly
>>> one bit more than the maximum number of physical address bits.
>>> Without the extra bit you wouldn't be able to tell apart a
>>> guest specified GFN matching the value of PGT_INVALID_FRAME_GFN
>>> from an entry which was set from INVALID_GFN.
>> Really good point.
>>
>> 1. On Arm64 the p2m_ipa_bits could (theoretically) be 64-bit which, I
>> assume, corresponds to the maximum guest physical address (1 << 64) - 1
>> = 0xFFFFFFFFFFFFFFFF.
>> To store that GFN we need 52-bit. But, we provide 60-bit field which is
>> more than enough, I think. Practically, the maximum supported
>> p2m_ipa_bits is 48-bit, so the maximum supported GFN will occupy 36-bit
>> only. Everything is ok here.
>> 2. On Arm32 the p2m_ipa_bits is 40-bit which, I assume, corresponds to
>> the maximum guest physical address (1 << 40) - 1 = 0xFFFFFFFFFF. To
>> store that GFN we need 28-bit. If I did the calculation correctly, what
>> we have on Arm32 is that PGT_INVALID_FRAME_GFN == maximum guest physical
>> address and it looks like we need and extra bit on Arm32. Do you think
>> we need to borrow one more bit from the count portion to stay on the
>> safe side?
> I think so, unless there are further restrictions on the GFN range
> that I'm unaware of.

ok, thank you.


>
> For 64-bit, if you only need 52 bits, why do you make the field 60
> bits wide? I'd recommend against "wasting" bits. Better keep the
> count field as wide as possible.
Well, the reason is almost the same as the one I already gave for not
using PG_mask() for PGT_gfn_mask.
Although we waste some bits on Arm64, having the same number of bits
for the count on Arm32 and Arm64 lets us avoid introducing an extra
#ifdef to that header (if we went with the maximum possible number of
bits for the count on each configuration we would need separate sets
of PGT_count_*/PGT_gfn_*).
I was thinking that if there was indeed a lack of bits for the count
then an extra #ifdef wouldn't be an argument at all. If I am not
mistaken, the code which deals with the count (where only 1 bit is
used) is the same on both Arm32 and Arm64, so what is the point of
diverging here (I mean providing more bits for the count on Arm64
simply because there is a reserve)?
But, if the reason I provided is weak, I will update the header to
keep the count as wide as possible.


>
>>>> + * This only valid for the xenheap pages.
>>>> + */
>>>> +#define PGT_gfn_width     PG_shift(4)
>>>> +#define PGT_gfn_mask      ((1UL<<PGT_gfn_width)-1)
>>> Any reason you don't use PG_mask() here? Any open-coding is prone
>>> to people later making mistakes.
>> I failed to come up with idea how to do that without #ifdef. As GFN
>> starts at bit 0 different first parameter would be needed for PG_mask on
>> 32-bit and 64-bit systems.
>> I wonder whether PGC_count_mask/PGT_count_mask are open-coded on Arm/x86
>> because of the same reason.
> Hmm, that pre-existing open-coding isn't nice, but is perhaps a
> good enough reason to keep what you have. (Personally I wouldn't
> be afraid of adding an #ifdef here, but I realize views there may
> differ.)
>
> Jan
>

Patch

diff --git a/xen/arch/arm/mm.c b/xen/arch/arm/mm.c
index eea926d..1857af4 100644
--- a/xen/arch/arm/mm.c
+++ b/xen/arch/arm/mm.c
@@ -1376,14 +1376,18 @@  unsigned long domain_get_maximum_gpfn(struct domain *d)
 void share_xen_page_with_guest(struct page_info *page, struct domain *d,
                                enum XENSHARE_flags flags)
 {
+    unsigned long type_info;
+
     if ( page_get_owner(page) == d )
         return;
 
     spin_lock(&d->page_alloc_lock);
 
     /* The incremented type count pins as writable or read-only. */
-    page->u.inuse.type_info =
-        (flags == SHARE_ro ? PGT_none : PGT_writable_page) | 1;
+    type_info = page->u.inuse.type_info & ~(PGT_type_mask | PGT_count_mask);
+    page->u.inuse.type_info = type_info |
+        (flags == SHARE_ro ? PGT_none : PGT_writable_page) |
+        (1UL << PGT_count_base);
 
     page_set_owner(page, d);
     smp_wmb(); /* install valid domain ptr before updating refcnt. */
diff --git a/xen/arch/arm/p2m.c b/xen/arch/arm/p2m.c
index eff9a10..a2b5597 100644
--- a/xen/arch/arm/p2m.c
+++ b/xen/arch/arm/p2m.c
@@ -718,8 +718,10 @@  static int p2m_mem_access_radix_set(struct p2m_domain *p2m, gfn_t gfn,
  * TODO: Handle superpages, for now we only take special references for leaf
  * pages (specifically foreign ones, which can't be super mapped today).
  */
-static void p2m_put_l3_page(const lpae_t pte)
+static void p2m_put_l3_page(struct p2m_domain *p2m, const lpae_t pte)
 {
+    mfn_t mfn = lpae_get_mfn(pte);
+
     ASSERT(p2m_is_valid(pte));
 
     /*
@@ -731,11 +733,22 @@  static void p2m_put_l3_page(const lpae_t pte)
      */
     if ( p2m_is_foreign(pte.p2m.type) )
     {
-        mfn_t mfn = lpae_get_mfn(pte);
-
         ASSERT(mfn_valid(mfn));
         put_page(mfn_to_page(mfn));
     }
+
+#ifdef CONFIG_GRANT_TABLE
+    /*
+     * Check whether we deal with grant table page. As the grant table page
+     * is xen_heap page and its entry has known p2m type, detect it and mark
+     * the stored GFN as invalid. Although this check is not precise and we
+     * might end up updating this for other xen_heap pages, this action is
+     * harmless to these pages since only grant table pages have this field
+     * in use. So, at worst, unnecessary action might be performed.
+     */
+    if ( (pte.p2m.type == p2m_ram_rw) && is_xen_heap_mfn(mfn) )
+        page_set_frame_gfn(mfn_to_page(mfn), INVALID_GFN);
+#endif
 }
 
 /* Free lpae sub-tree behind an entry */
@@ -768,7 +781,7 @@  static void p2m_free_entry(struct p2m_domain *p2m,
         p2m->stats.mappings[level]--;
         /* Nothing to do if the entry is a super-page. */
         if ( level == 3 )
-            p2m_put_l3_page(entry);
+            p2m_put_l3_page(p2m, entry);
         return;
     }
 
diff --git a/xen/common/grant_table.c b/xen/common/grant_table.c
index e80f8d0..fd8d7e3 100644
--- a/xen/common/grant_table.c
+++ b/xen/common/grant_table.c
@@ -93,8 +93,6 @@  struct grant_table {
 
     /* Domain to which this struct grant_table belongs. */
     const struct domain *domain;
-
-    struct grant_table_arch arch;
 };
 
 unsigned int __read_mostly opt_max_grant_frames = 64;
@@ -1981,14 +1979,9 @@  int grant_table_init(struct domain *d, int max_grant_frames,
 
     grant_write_lock(gt);
 
-    ret = gnttab_init_arch(gt);
-    if ( ret )
-        goto unlock;
-
     /* gnttab_grow_table() allocates a min number of frames, so 0 is okay. */
     ret = gnttab_grow_table(d, 0);
 
- unlock:
     grant_write_unlock(gt);
 
  out:
@@ -3894,8 +3887,6 @@  grant_table_destroy(
     if ( t == NULL )
         return;
 
-    gnttab_destroy_arch(t);
-
     for ( i = 0; i < nr_grant_frames(t); i++ )
         free_xenheap_page(t->shared_raw[i]);
     xfree(t->shared_raw);
diff --git a/xen/common/page_alloc.c b/xen/common/page_alloc.c
index 958ba0c..00371e7 100644
--- a/xen/common/page_alloc.c
+++ b/xen/common/page_alloc.c
@@ -2204,7 +2204,10 @@  void *alloc_xenheap_pages(unsigned int order, unsigned int memflags)
         return NULL;
 
     for ( i = 0; i < (1u << order); i++ )
+    {
         pg[i].count_info |= PGC_xen_heap;
+        arch_alloc_xenheap_page(&pg[i]);
+    }
 
     return page_to_virt(pg);
 }
@@ -2222,7 +2225,10 @@  void free_xenheap_pages(void *v, unsigned int order)
     pg = virt_to_page(v);
 
     for ( i = 0; i < (1u << order); i++ )
+    {
         pg[i].count_info &= ~PGC_xen_heap;
+        arch_free_xenheap_page(&pg[i]);
+    }
 
     free_heap_pages(pg, order, true);
 }
diff --git a/xen/include/asm-arm/grant_table.h b/xen/include/asm-arm/grant_table.h
index 0ce77f9..479339d 100644
--- a/xen/include/asm-arm/grant_table.h
+++ b/xen/include/asm-arm/grant_table.h
@@ -11,11 +11,6 @@ 
 #define INITIAL_NR_GRANT_FRAMES 1U
 #define GNTTAB_MAX_VERSION 1
 
-struct grant_table_arch {
-    gfn_t *shared_gfn;
-    gfn_t *status_gfn;
-};
-
 static inline void gnttab_clear_flags(struct domain *d,
                                       unsigned int mask, uint16_t *addr)
 {
@@ -46,35 +41,11 @@  int replace_grant_host_mapping(unsigned long gpaddr, mfn_t mfn,
 #define gnttab_dom0_frames()                                             \
     min_t(unsigned int, opt_max_grant_frames, PFN_DOWN(_etext - _stext))
 
-#define gnttab_init_arch(gt)                                             \
-({                                                                       \
-    unsigned int ngf_ = (gt)->max_grant_frames;                          \
-    unsigned int nsf_ = grant_to_status_frames(ngf_);                    \
-                                                                         \
-    (gt)->arch.shared_gfn = xmalloc_array(gfn_t, ngf_);                  \
-    (gt)->arch.status_gfn = xmalloc_array(gfn_t, nsf_);                  \
-    if ( (gt)->arch.shared_gfn && (gt)->arch.status_gfn )                \
-    {                                                                    \
-        while ( ngf_-- )                                                 \
-            (gt)->arch.shared_gfn[ngf_] = INVALID_GFN;                   \
-        while ( nsf_-- )                                                 \
-            (gt)->arch.status_gfn[nsf_] = INVALID_GFN;                   \
-    }                                                                    \
-    else                                                                 \
-        gnttab_destroy_arch(gt);                                         \
-    (gt)->arch.shared_gfn ? 0 : -ENOMEM;                                 \
-})
-
-#define gnttab_destroy_arch(gt)                                          \
-    do {                                                                 \
-        XFREE((gt)->arch.shared_gfn);                                    \
-        XFREE((gt)->arch.status_gfn);                                    \
-    } while ( 0 )
-
 #define gnttab_set_frame_gfn(gt, st, idx, gfn)                           \
     do {                                                                 \
-        ((st) ? (gt)->arch.status_gfn : (gt)->arch.shared_gfn)[idx] =    \
-            (gfn);                                                       \
+        struct page_info *pg_ = (st) ? gnttab_status_page(gt, idx)       \
+                                     : gnttab_shared_page(gt, idx);      \
+        page_set_frame_gfn(pg_, gfn);                                    \
     } while ( 0 )
 
 #define gnttab_get_frame_gfn(gt, st, idx) ({                             \
@@ -82,11 +53,25 @@  int replace_grant_host_mapping(unsigned long gpaddr, mfn_t mfn,
         : gnttab_shared_gfn(NULL, gt, idx);                              \
 })
 
-#define gnttab_shared_gfn(d, t, i)                                       \
-    (((i) >= nr_grant_frames(t)) ? INVALID_GFN : (t)->arch.shared_gfn[i])
+#define gnttab_shared_page(t, i) ({                                      \
+    ASSERT((t)->shared_raw[i]);                                          \
+    mfn_to_page(_mfn(__virt_to_mfn((t)->shared_raw[i])));                \
+})
+
+#define gnttab_status_page(t, i) ({                                      \
+    ASSERT((t)->status[i]);                                              \
+    mfn_to_page(_mfn(__virt_to_mfn((t)->status[i])));                    \
+})
 
-#define gnttab_status_gfn(d, t, i)                                       \
-    (((i) >= nr_status_frames(t)) ? INVALID_GFN : (t)->arch.status_gfn[i])
+#define gnttab_shared_gfn(d, t, i) ({                                    \
+    struct page_info *pg_ = gnttab_shared_page(t, i);                    \
+    page_get_frame_gfn(pg_);                                             \
+})
+
+#define gnttab_status_gfn(d, t, i) ({                                    \
+    struct page_info *pg_ = gnttab_status_page(t, i);                    \
+    page_get_frame_gfn(pg_);                                             \
+})
 
 #define gnttab_need_iommu_mapping(d)                    \
     (is_domain_direct_mapped(d) && is_iommu_enabled(d))
diff --git a/xen/include/asm-arm/mm.h b/xen/include/asm-arm/mm.h
index ded74d2..dd425d6 100644
--- a/xen/include/asm-arm/mm.h
+++ b/xen/include/asm-arm/mm.h
@@ -98,9 +98,18 @@  struct page_info
 #define PGT_writable_page PG_mask(1, 1)  /* has writable mappings?         */
 #define PGT_type_mask     PG_mask(1, 1)  /* Bits 31 or 63.                 */
 
- /* Count of uses of this frame as its current type. */
-#define PGT_count_width   PG_shift(2)
-#define PGT_count_mask    ((1UL<<PGT_count_width)-1)
+ /* 3-bit count of uses of this frame as its current type. */
+#define PGT_count_base    PG_shift(4)
+#define PGT_count_mask    PG_mask(7, 4)
+
+/*
+ * Stored in bits [27:0] or [59:0] GFN if page is used for grant table frame.
+ * This only valid for the xenheap pages.
+ */
+#define PGT_gfn_width     PG_shift(4)
+#define PGT_gfn_mask      ((1UL<<PGT_gfn_width)-1)
+
+#define PGT_INVALID_FRAME_GFN   _gfn(PGT_gfn_mask)
 
  /* Cleared when the owning guest 'frees' this page. */
 #define _PGC_allocated    PG_shift(1)
@@ -163,6 +172,27 @@  extern unsigned long xenheap_base_pdx;
 
 #define maddr_get_owner(ma)   (page_get_owner(maddr_to_page((ma))))
 
+#define page_get_frame_gfn(p) ({                                \
+    gfn_t gfn_ = _gfn((p)->u.inuse.type_info & PGT_gfn_mask);   \
+    gfn_eq(gfn_, PGT_INVALID_FRAME_GFN) ? INVALID_GFN : gfn_;   \
+})
+
+#define page_set_frame_gfn(p, gfn) ({                           \
+    gfn_t gfn_ = gfn_eq(gfn, INVALID_GFN) ?                     \
+                 PGT_INVALID_FRAME_GFN : gfn;                   \
+    (p)->u.inuse.type_info &= ~PGT_gfn_mask;                    \
+    (p)->u.inuse.type_info |= gfn_x(gfn_);                      \
+})
+
+/*
+ * As the struct page_info representing the xen_heap page can contain
+ * the grant table frame GFN on Arm we need to clear it beforehand and
+ * make sure it is not still set when freeing a page.
+ */
+#define arch_alloc_xenheap_page(p)   page_set_frame_gfn(p, INVALID_GFN)
+#define arch_free_xenheap_page(p) \
+    BUG_ON(!gfn_eq(page_get_frame_gfn(p), INVALID_GFN))
+
 #define frame_table ((struct page_info *)FRAMETABLE_VIRT_START)
 /* PDX of the first page in the frame table. */
 extern unsigned long frametable_base_pdx;
diff --git a/xen/include/asm-x86/grant_table.h b/xen/include/asm-x86/grant_table.h
index 84e3296..0eb018f 100644
--- a/xen/include/asm-x86/grant_table.h
+++ b/xen/include/asm-x86/grant_table.h
@@ -14,9 +14,6 @@ 
 
 #define INITIAL_NR_GRANT_FRAMES 1U
 
-struct grant_table_arch {
-};
-
 static inline int create_grant_host_mapping(uint64_t addr, mfn_t frame,
                                             unsigned int flags,
                                             unsigned int cache_flags)
@@ -35,8 +32,6 @@  static inline int replace_grant_host_mapping(uint64_t addr, mfn_t frame,
     return replace_grant_pv_mapping(addr, frame, new_addr, flags);
 }
 
-#define gnttab_init_arch(gt) 0
-#define gnttab_destroy_arch(gt) do {} while ( 0 )
 #define gnttab_set_frame_gfn(gt, st, idx, gfn) do {} while ( 0 )
 #define gnttab_get_frame_gfn(gt, st, idx) ({                             \
     mfn_t mfn_ = (st) ? gnttab_status_mfn(gt, idx)                       \
diff --git a/xen/include/asm-x86/mm.h b/xen/include/asm-x86/mm.h
index cb90527..04d8704 100644
--- a/xen/include/asm-x86/mm.h
+++ b/xen/include/asm-x86/mm.h
@@ -327,6 +327,10 @@  struct page_info
 
 #define maddr_get_owner(ma)   (page_get_owner(maddr_to_page((ma))))
 
+/* No arch-specific actions are needed for the xen_heap page */
+#define arch_alloc_xenheap_page(p)   do {} while ( 0 )
+#define arch_free_xenheap_page(p)    do {} while ( 0 )
+
 #define frame_table ((struct page_info *)FRAMETABLE_VIRT_START)
 extern unsigned long max_page;
 extern unsigned long total_pages;