
[1/4] xen: Introduce non-broken hypercalls for the p2m pool size

Message ID 20221026102018.4144-2-andrew.cooper3@citrix.com (mailing list archive)
State Superseded
Series XSA-409 fixes

Commit Message

Andrew Cooper Oct. 26, 2022, 10:20 a.m. UTC
The existing XEN_DOMCTL_SHADOW_OP_{GET,SET}_ALLOCATION have problems:

 * All set_allocation() flavours have an overflow-before-widen bug when
   calculating "sc->mb << (20 - PAGE_SHIFT)" (sketched below).
 * All flavours have a granularity of 1M.  This was tolerable when the size
   of the pool could only be set at the same granularity, but is broken now
   that ARM has a 16-page stopgap allocation in use.
 * All get_allocation() flavours round up, and in particular turn 0 into 1,
   meaning the get op returns junk before a successful set op.
 * The x86 flavours reject the hypercalls before the VM has vCPUs allocated,
   despite the pool size being a domain property.
 * Even the hypercall names are long-obsolete.
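
For concreteness, a minimal standalone sketch of the overflow-before-widen
bug from the first bullet (not Xen code; it assumes a 32-bit "mb" field and
a PAGE_SHIFT of 12, matching the bug description):

#include <inttypes.h>
#include <stdio.h>

#define PAGE_SHIFT 12

int main(void)
{
    uint32_t mb = 1u << 24;  /* a 16 TiB pool request, expressed in MiB */

    /* Buggy: the shift is evaluated in 32-bit arithmetic, so the high
     * bits are lost *before* the result is widened by the assignment. */
    uint64_t pages_buggy = mb << (20 - PAGE_SHIFT);

    /* Fixed: widen first, then shift. */
    uint64_t pages_fixed = (uint64_t)mb << (20 - PAGE_SHIFT);

    /* Prints "buggy: 0, fixed: 4294967296". */
    printf("buggy: %" PRIu64 ", fixed: %" PRIu64 "\n",
           pages_buggy, pages_fixed);
    return 0;
}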

Implement an interface that doesn't suck, which can be first used to unit test
the behaviour, and subsequently correct a broken implementation.  The old
interface will be retired in due course.

This is part of XSA-409 / CVE-2022-33747.

Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
---
CC: Xen Security Team <security@xen.org>
CC: Jan Beulich <JBeulich@suse.com>
CC: Roger Pau Monné <roger.pau@citrix.com>
CC: Wei Liu <wl@xen.org>
CC: Stefano Stabellini <sstabellini@kernel.org>
CC: Julien Grall <julien@xen.org>
CC: Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
CC: Bertrand Marquis <bertrand.marquis@arm.com>
CC: Henry Wang <Henry.Wang@arm.com>
CC: Anthony PERARD <anthony.perard@citrix.com>

Name subject to improvement.  ABI not.  This is the first of many tools ABI
changes required to cleanly separate the logical operation from Xen's choice
of pagetable size.

Future TODOs:
 * x86 shadow still rounds up.  This is buggy as it's a simultaneous equation
   with tot_pages which varies over time with ballooning.
 * x86 PV is weird.  There is no toolstack interaction with the shadow pool
   size, but the "shadow" pool does come into existence when logdirty (or
   pv-l1tf) is first enabled.
 * The shadow+hap logic is in desperate need of deduping.
---
 tools/include/xenctrl.h           |  3 +++
 tools/libs/ctrl/xc_domain.c       | 29 +++++++++++++++++++++++++++++
 xen/arch/arm/p2m.c                | 27 +++++++++++++++++++++++++++
 xen/arch/x86/include/asm/hap.h    |  1 +
 xen/arch/x86/include/asm/shadow.h |  4 ++++
 xen/arch/x86/mm/hap/hap.c         | 10 ++++++++++
 xen/arch/x86/mm/paging.c          | 39 +++++++++++++++++++++++++++++++++++++++
 xen/arch/x86/mm/shadow/common.c   | 10 ++++++++++
 xen/common/domctl.c               | 14 ++++++++++++++
 xen/include/public/domctl.h       | 26 +++++++++++++++++++++++++-
 xen/include/xen/domain.h          |  3 +++
 11 files changed, 165 insertions(+), 1 deletion(-)

Comments

Jan Beulich Oct. 26, 2022, 1:42 p.m. UTC | #1
On 26.10.2022 12:20, Andrew Cooper wrote:
> The existing XEN_DOMCTL_SHADOW_OP_{GET,SET}_ALLOCATION have problems:
> 
>  * All set_allocation() flavours have an overflow-before-widen bug when
>    calculating "sc->mb << (20 - PAGE_SHIFT)".
>  * All flavours have a granularity of 1M.  This was tolerable when the size
>    of the pool could only be set at the same granularity, but is broken now
>    that ARM has a 16-page stopgap allocation in use.
>  * All get_allocation() flavours round up, and in particular turn 0 into 1,
>    meaning the get op returns junk before a successful set op.
>  * The x86 flavours reject the hypercalls before the VM has vCPUs allocated,
>    despite the pool size being a domain property.

I guess this is merely a remnant and could easily be dropped there.

>  * Even the hypercall names are long-obsolete.
> 
> Implement an interface that doesn't suck, which can be first used to unit test
> the behaviour, and subsequently correct a broken implementation.  The old
> interface will be retired in due course.
> 
> This is part of XSA-409 / CVE-2022-33747.
> 
> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
> ---
> CC: Xen Security Team <security@xen.org>
> CC: Jan Beulich <JBeulich@suse.com>
> CC: Roger Pau Monné <roger.pau@citrix.com>
> CC: Wei Liu <wl@xen.org>
> CC: Stefano Stabellini <sstabellini@kernel.org>
> CC: Julien Grall <julien@xen.org>
> CC: Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
> CC: Bertrand Marquis <bertrand.marquis@arm.com>
> CC: Henry Wang <Henry.Wang@arm.com>
> CC: Anthony PERARD <anthony.perard@citrix.com>
> 
> Name subject to improvement.

paging_{get,set}_mempool_size() for the arch helpers (in particular
fitting better with them living in paging.c as well as its multi-purpose use
on x86) and XEN_DOMCTL_{get,set}_paging_mempool_size? Perhaps even the
"mem" could be dropped?

>  ABI not.

With the comment in the public header saying "Users of this interface are
required to identify the granularity by other means" I wonder why the
interface needs to be byte-granular. If the caller needs to know page size
by whatever means, it can as well pass in a page count.

> Future TODOs:
>  * x86 shadow still rounds up.  This is buggy as it's a simultaneous equation
>    with tot_pages which varies over time with ballooning.
>  * x86 PV is weird.  There is no toolstack interaction with the shadow pool
>    size, but the "shadow" pool does come into existence when logdirty (or
>    pv-l1tf) is first enabled.
>  * The shadow+hap logic is in desperate need of deduping.

I have a tiny step towards this queued as post-XSA-410 work, folding HAP's
and shadow's freelist, total_pages, free_pages, and p2m_pages. Here this
would mean {hap,shadow}_get_allocation_bytes() could be done away with,
having the logic exclusively in paging.c.

> --- a/xen/arch/arm/p2m.c
> +++ b/xen/arch/arm/p2m.c
> @@ -100,6 +100,14 @@ unsigned int p2m_get_allocation(struct domain *d)
>      return ROUNDUP(nr_pages, 1 << (20 - PAGE_SHIFT)) >> (20 - PAGE_SHIFT);
>  }
>  
> +/* Return the size of the pool, in bytes. */
> +int arch_get_p2m_mempool_size(struct domain *d, uint64_t *size)
> +{
> +    *size = ACCESS_ONCE(d->arch.paging.p2m_total_pages) << PAGE_SHIFT;

This may overflow for Arm32.

> @@ -157,6 +165,25 @@ int p2m_set_allocation(struct domain *d, unsigned long pages, bool *preempted)
>      return 0;
>  }
>  
> +int arch_set_p2m_mempool_size(struct domain *d, uint64_t size)
> +{
> +    unsigned long pages = size >> PAGE_SHIFT;
> +    bool preempted = false;
> +    int rc;
> +
> +    if ( (size & ~PAGE_MASK) ||          /* Non page-sized request? */
> +         pages != (size >> PAGE_SHIFT) ) /* 32-bit overflow? */
> +        return -EINVAL;

Simply "(pages << PAGE_SHIFT) != size"? And then move the check into
common code?

> --- a/xen/arch/x86/mm/hap/hap.c
> +++ b/xen/arch/x86/mm/hap/hap.c
> @@ -345,6 +345,16 @@ unsigned int hap_get_allocation(struct domain *d)
>              + ((pg & ((1 << (20 - PAGE_SHIFT)) - 1)) ? 1 : 0));
>  }
>  
> +int hap_get_allocation_bytes(struct domain *d, uint64_t *size)
> +{
> +    unsigned long pages = (d->arch.paging.hap.total_pages +
> +                           d->arch.paging.hap.p2m_pages);

Unlike for Arm no ACCESS_ONCE() here? Also the addition can in
principle overflow, because being done only in 32 bits.

> --- a/xen/arch/x86/mm/paging.c
> +++ b/xen/arch/x86/mm/paging.c
> @@ -977,6 +977,45 @@ int __init paging_set_allocation(struct domain *d, unsigned int pages,
>  }
>  #endif
>  
> +int arch_get_p2m_mempool_size(struct domain *d, uint64_t *size)
> +{
> +    int rc;
> +
> +    if ( is_pv_domain(d) )
> +        return -EOPNOTSUPP;
> +
> +    if ( hap_enabled(d) )
> +        rc = hap_get_allocation_bytes(d, size);
> +    else
> +        rc = shadow_get_allocation_bytes(d, size);
> +
> +    return rc;
> +}
> +
> +int arch_set_p2m_mempool_size(struct domain *d, uint64_t size)
> +{
> +    unsigned long pages = size >> PAGE_SHIFT;
> +    bool preempted = false;
> +    int rc;
> +
> +    if ( is_pv_domain(d) )
> +        return -EOPNOTSUPP;

Why? You do say "PV is weird" in a post-commit-message remark, but why
do you want to retain this weirdness? Even if today the tool stack
doesn't set the size when enabling log-dirty mode, I'd view this as a
bug which could be addressed purely in the tool stack if this check
wasn't there.

> +    if ( size & ~PAGE_MASK )             /* Non page-sized request? */
> +        return -EINVAL;
> +
> +    ASSERT(paging_mode_enabled(d));

Not only with the PV aspect in mind - why? It looks reasonable to me
to set the pool size before enabling any paging mode.

> +    paging_lock(d);
> +    if ( hap_enabled(d) )
> +        rc = hap_set_allocation(d, pages, &preempted);
> +    else
> +        rc = shadow_set_allocation(d, pages, &preempted);

Potential truncation from the "unsigned long" -> "unsigned int"
conversions.

Jan
Andrew Cooper Oct. 26, 2022, 7:22 p.m. UTC | #2
On 26/10/2022 14:42, Jan Beulich wrote:
> On 26.10.2022 12:20, Andrew Cooper wrote:
>> The existing XEN_DOMCTL_SHADOW_OP_{GET,SET}_ALLOCATION have problems:
>>
>>  * All set_allocation() flavours have an overflow-before-widen bug when
>>    calculating "sc->mb << (20 - PAGE_SHIFT)".
>>  * All flavours have a granularity of 1M.  This was tolerable when the size
>>    of the pool could only be set at the same granularity, but is broken now
>>    that ARM has a 16-page stopgap allocation in use.
>>  * All get_allocation() flavours round up, and in particular turn 0 into 1,
>>    meaning the get op returns junk before a successful set op.
>>  * The x86 flavours reject the hypercalls before the VM has vCPUs allocated,
>>    despite the pool size being a domain property.
> I guess this is merely a remnant and could easily be dropped there.

It's intermixed with the other shadow operations.  It wasn't trivially-safe
enough to do here, and needs coming back to in future work.

>
>>  * Even the hypercall names are long-obsolete.
>>
>> Implement an interface that doesn't suck, which can be first used to unit test
>> the behaviour, and subsequently correct a broken implementation.  The old
>> interface will be retired in due course.
>>
>> This is part of XSA-409 / CVE-2022-33747.
>>
>> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
>> ---
>> CC: Xen Security Team <security@xen.org>
>> CC: Jan Beulich <JBeulich@suse.com>
>> CC: Roger Pau Monné <roger.pau@citrix.com>
>> CC: Wei Liu <wl@xen.org>
>> CC: Stefano Stabellini <sstabellini@kernel.org>
>> CC: Julien Grall <julien@xen.org>
>> CC: Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
>> CC: Bertrand Marquis <bertrand.marquis@arm.com>
>> CC: Henry Wang <Henry.Wang@arm.com>
>> CC: Anthony PERARD <anthony.perard@citrix.com>
>>
>> Name subject to improvement.
> paging_{get,set}_mempool_size() for the arch helpers (in particular
> fitting better with them living in paging.c as well as its multi-purpose use
> on x86) and XEN_DOMCTL_{get,set}_paging_mempool_size? Perhaps even the
> "mem" could be dropped?

Yeah, this was a placeholder for "what are we actually going to call it
in Xen".

I went with mempool over just simply pool because pool has a very
different meaning slightly higher in the toolstack where you talk about
pools of servers.  Admittedly, that's code outside of xen.git, but the
hypercall names do percolate up into those codebases.

paging isn't a great name.  While it's what we call the infrastructure
in x86, it has nothing to do with paging things out to disk (the thing
everyone associates the name with), nor the xenpaging infrastructure
(Xen's version of what OS paging supposedly means).

>
>>  ABI not.
> With the comment in the public header saying "Users of this interface are
> required to identify the granularity by other means" I wonder why the
> interface needs to be byte-granular. If the caller needs to know page size
> by whatever means, it can as well pass in a page count.

Not all architectures have pagetable levels of uniform size.  Not all
architectures have the mapping granularity equal to the pagetable size. 
x86 has examples of both of these (and in a rogue move, one x86 hardware
vendor is trying to add even more pagetable asymmetry).  Other
architectures have substantially more variety.

Even on x86, there are performance advantages from using 8k or 16k
arrangements, which could cause us to insist upon >4k requirements here.
(TBH, not actually for this usecase, but the principle is still valid.)


The reason is that this is a size.  Sizes are in bytes, and that's
how everyone thinks about them.  It's how the value is already specified
in an xl cfg file, and it is entirely unambiguous at all levels of the stack.

Every translation of the value in the software stack risks breaking
things, even stuff as simple as debugging.  As proof, count the number
of translation errors I've already identified in this patch alone.

This ABI does not require any changes at all (not even recompiling
userspace) for ARM to decide to use 16k or 64k pagetables in Xen, or for
x86 to decide that 8k or 16k is beneficial enough to actually require.

Attempting to compress this uint64_t into something smaller by any means
will create bugs, or at best increased complexity and a high risk of bugs.
There isn't enough money on earth right now to afford a 128bit processor
with enough ram for this current ABI to need changing.


This is going to be a recurring theme through fixing the ABIs.  It's
one of several areas where there is objectively one right answer, both
in terms of ease of use and compatibility with future circumstances.



>
>> Future TODOs:
>>  * x86 shadow still rounds up.  This is buggy as it's a simultaneous equation
>>    with tot_pages which varies over time with ballooning.
>>  * x86 PV is weird.  There is no toolstack interaction with the shadow pool
>>    size, but the "shadow" pool does come into existence when logdirty (or
>>    pv-l1tf) is first enabled.
>>  * The shadow+hap logic is in desperate need of deduping.
> I have a tiny step towards this queued as post-XSA-410 work, folding HAP's
> and shadow's freelist, total_pages, free_pages, and p2m_pages. Here this
> would mean {hap,shadow}_get_allocation_bytes() could be done away with,
> having the logic exclusively in paging.c.

Thanks.  I'll drop that task from my todo list.

But really, it needs to be fully common, because RISC-V is going to need
it too.  (I'm told development on RISC-V will start back up any time now.)

>
>> --- a/xen/arch/arm/p2m.c
>> +++ b/xen/arch/arm/p2m.c
>> @@ -100,6 +100,14 @@ unsigned int p2m_get_allocation(struct domain *d)
>>      return ROUNDUP(nr_pages, 1 << (20 - PAGE_SHIFT)) >> (20 - PAGE_SHIFT);
>>  }
>>  
>> +/* Return the size of the pool, in bytes. */
>> +int arch_get_p2m_mempool_size(struct domain *d, uint64_t *size)
>> +{
>> +    *size = ACCESS_ONCE(d->arch.paging.p2m_total_pages) << PAGE_SHIFT;
> This may overflow for Arm32.

So it will.  I'll widen first.
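
As a sketch of what "widen first" means here (illustrative only, not the
final patch), the Arm getter would become:

    *size = (uint64_t)ACCESS_ONCE(d->arch.paging.p2m_total_pages) << PAGE_SHIFT;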

>
>> @@ -157,6 +165,25 @@ int p2m_set_allocation(struct domain *d, unsigned long pages, bool *preempted)
>>      return 0;
>>  }
>>  
>> +int arch_set_p2m_mempool_size(struct domain *d, uint64_t size)
>> +{
>> +    unsigned long pages = size >> PAGE_SHIFT;
>> +    bool preempted = false;
>> +    int rc;
>> +
>> +    if ( (size & ~PAGE_MASK) ||          /* Non page-sized request? */
>> +         pages != (size >> PAGE_SHIFT) ) /* 32-bit overflow? */
>> +        return -EINVAL;
> Simply "(pages << PAGE_SHIFT) != size"? And then move the check into
> common code?

These checks are deliberately not in common code.  That's just creating
work that someone will need to undo in due course.

>
>> --- a/xen/arch/x86/mm/hap/hap.c
>> +++ b/xen/arch/x86/mm/hap/hap.c
>> @@ -345,6 +345,16 @@ unsigned int hap_get_allocation(struct domain *d)
>>              + ((pg & ((1 << (20 - PAGE_SHIFT)) - 1)) ? 1 : 0));
>>  }
>>  
>> +int hap_get_allocation_bytes(struct domain *d, uint64_t *size)
>> +{
>> +    unsigned long pages = (d->arch.paging.hap.total_pages +
>> +                           d->arch.paging.hap.p2m_pages);
> Unlike for Arm no ACCESS_ONCE() here? Also the addition can in
> principle overflow, because being done only in 32 bits.

I'm not actually convinced ARM needs ACCESS_ONCE() to begin with.  I
can't see any legal transformation of that logic which could result in a
torn load.

Both examples were written to match the existing code, because this
needs backporting to all security trees.

I forgot to mention the overflow on x86 in the future todo section. 
This code is rife with them.

>
>> --- a/xen/arch/x86/mm/paging.c
>> +++ b/xen/arch/x86/mm/paging.c
>> @@ -977,6 +977,45 @@ int __init paging_set_allocation(struct domain *d, unsigned int pages,
>>  }
>>  #endif
>>  
>> +int arch_get_p2m_mempool_size(struct domain *d, uint64_t *size)
>> +{
>> +    int rc;
>> +
>> +    if ( is_pv_domain(d) )
>> +        return -EOPNOTSUPP;
>> +
>> +    if ( hap_enabled(d) )
>> +        rc = hap_get_allocation_bytes(d, size);
>> +    else
>> +        rc = shadow_get_allocation_bytes(d, size);
>> +
>> +    return rc;
>> +}
>> +
>> +int arch_set_p2m_mempool_size(struct domain *d, uint64_t size)
>> +{
>> +    unsigned long pages = size >> PAGE_SHIFT;
>> +    bool preempted = false;
>> +    int rc;
>> +
>> +    if ( is_pv_domain(d) )
>> +        return -EOPNOTSUPP;
> Why? You do say "PV is weird" in a post-commit-message remark, but why
> do you want to retain this weirdness? Even if today the tool stack
> doesn't set the size when enabling log-dirty mode, I'd view this as a
> bug which could be addressed purely in the tool stack if this check
> wasn't there.

I want to clean up PV, but again, it wasn't sufficiently trivially-safe
to do right now.

PV is weird because it is neither hap_enabled() (fundamentally), nor
shadow_enabled() when logdirty isn't active.  While the freelist is
suitably constructed, the get/set operations were previously rejected
and cleanup is local to the disable op, not domain shutdown.

I could put in a /* TODO: relax in due course */ if you'd prefer?

>> +    if ( size & ~PAGE_MASK )             /* Non page-sized request? */
>> +        return -EINVAL;
>> +
>> +    ASSERT(paging_mode_enabled(d));
> Not only with the PV aspect in mind - why? It looks reasonable to me
> to set the pool size before enabling any paging mode.

Because this is how all the existing logic is expressed, and this patch
wants backporting.

There is sooo much to clean up...

>
>> +    paging_lock(d);
>> +    if ( hap_enabled(d) )
>> +        rc = hap_set_allocation(d, pages, &preempted);
>> +    else
>> +        rc = shadow_set_allocation(d, pages, &preempted);
> Potential truncation from the "unsigned long" -> "unsigned int"
> conversions.

I'd not even spotted that ARM and x86 were different in this regard.

More short term hacks, it seems.

~Andrew
Julien Grall Oct. 26, 2022, 9:24 p.m. UTC | #3
Hi Andrew,

On 26/10/2022 20:22, Andrew Cooper wrote:
>>> --- a/xen/arch/x86/mm/hap/hap.c
>>> +++ b/xen/arch/x86/mm/hap/hap.c
>>> @@ -345,6 +345,16 @@ unsigned int hap_get_allocation(struct domain *d)
>>>               + ((pg & ((1 << (20 - PAGE_SHIFT)) - 1)) ? 1 : 0));
>>>   }
>>>   
>>> +int hap_get_allocation_bytes(struct domain *d, uint64_t *size)
>>> +{
>>> +    unsigned long pages = (d->arch.paging.hap.total_pages +
>>> +                           d->arch.paging.hap.p2m_pages);
>> Unlike for Arm no ACCESS_ONCE() here? Also the addition can in
>> principle overflow, because being done only in 32 bits.
> 
> I'm not actually convinced ARM needs ACCESS_ONCE() to begin with.  I
> can't see any legal transformation of that logic which could result in a
> torn load.

AFAIU, ACCESS_ONCE() is not only about torn loads but also making sure
that the compiler will only read the value once.

When LTO is enabled (not yet supported) in Xen, can we guarantee the 
compiler will not try to access total_pages twice (obviously it would be 
caller dependent)?

With that in mind, when LTO is enabled on Linux arm64, the 
implementation of READ_ONCE() is not a simple (volatile *) to prevent
the compiler from doing harmful conversions. Possibly something we will need
to consider in Xen in the future if we enable LTO. In this context, the 
ACCESS_ONCE() would make sense because we don't know (or should not 
assume) how the caller will use it.

Regardless of that, I think using ACCESS_ONCE() helps to document how the
variable should be used. This will reduce the risk that someone decides
to add a new use of total_pages like below:

val = d->arch.paging.total_pages;

if ( val == 0 )
   return ...

/* use val */

AFAIU, a compiler would be allowed to read total_pages twice here. Which
is not what we would want. I am ready to bet this will be missed.

So consistency here is IMO much better. An alternative would be to 
document why we think the compiler would not be naughty.
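
To make that concrete, a sketch of the completed pattern (ACCESS_ONCE() is
commonly defined as a volatile cast, (*(volatile typeof(x) *)&(x)), which
forces exactly one load per expansion; the surrounding code is made up):

    unsigned long val = ACCESS_ONCE(d->arch.paging.total_pages);

    if ( val == 0 )
        return 0;

    /* 'val' still holds the value that was tested above; without
     * ACCESS_ONCE() the compiler may re-load total_pages here and
     * observe a different value. */
    return val;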

Cheers,
Jan Beulich Oct. 27, 2022, 6:56 a.m. UTC | #4
On 26.10.2022 23:24, Julien Grall wrote:
> On 26/10/2022 20:22, Andrew Cooper wrote:
>>>> --- a/xen/arch/x86/mm/hap/hap.c
>>>> +++ b/xen/arch/x86/mm/hap/hap.c
>>>> @@ -345,6 +345,16 @@ unsigned int hap_get_allocation(struct domain *d)
>>>>               + ((pg & ((1 << (20 - PAGE_SHIFT)) - 1)) ? 1 : 0));
>>>>   }
>>>>   
>>>> +int hap_get_allocation_bytes(struct domain *d, uint64_t *size)
>>>> +{
>>>> +    unsigned long pages = (d->arch.paging.hap.total_pages +
>>>> +                           d->arch.paging.hap.p2m_pages);
>>> Unlike for Arm no ACCESS_ONCE() here? Also the addition can in
>>> principle overflow, because being done only in 32 bits.
>>
>> I'm not actually convinced ARM needs ACCESS_ONCE() to begin with.  I
>> can't see any legal transformation of that logic which could result in a
>> torn load.
> 
>> AFAIU, ACCESS_ONCE() is not only about torn loads but also making sure
> that the compiler will only read the value once.
> 
> When LTO is enabled (not yet supported) in Xen, can we guarantee the 
> compiler will not try to access total_pages twice (obviously it would be 
> caller dependent)?

Aren't all accesses (supposed to be) under paging lock? At which point
there's no issue with multiple (or torn) accesses?

Jan
Jan Beulich Oct. 27, 2022, 7:11 a.m. UTC | #5
On 26.10.2022 21:22, Andrew Cooper wrote:
> On 26/10/2022 14:42, Jan Beulich wrote:
>> On 26.10.2022 12:20, Andrew Cooper wrote:
>>> The existing XEN_DOMCTL_SHADOW_OP_{GET,SET}_ALLOCATION have problems:
>>>
>>>  * All set_allocation() flavours have an overflow-before-widen bug when
>>>    calculating "sc->mb << (20 - PAGE_SHIFT)".
>>>  * All flavours have a granularity of 1M.  This was tolerable when the size
>>>    of the pool could only be set at the same granularity, but is broken now
>>>    that ARM has a 16-page stopgap allocation in use.
>>>  * All get_allocation() flavours round up, and in particular turn 0 into 1,
>>>    meaning the get op returns junk before a successful set op.
>>>  * The x86 flavours reject the hypercalls before the VM has vCPUs allocated,
>>>    despite the pool size being a domain property.
>> I guess this is merely a remnant and could easily be dropped there.
> 
> It's intermixed with the other shadow operations.  It wasn't trivially-safe
> enough to do here, and needs coming back to in future work.

Right, and I should have said that this is merely a remark, not a request
for any change here.

>>> Name subject to improvement.
>> paging_{get,set}_mempool_size() for the arch helpers (in particular
>> fitting better with them living in paging.c as well as its multi-purpose use
>> on x86) and XEN_DOMCTL_{get,set}_paging_mempool_size? Perhaps even the
>> "mem" could be dropped?
> 
> Yeah, this was a placeholder for "what are we actually going to call it
> in Xen".
> 
> I went with mempool over just simply pool because pool has a very
> different meaning slightly higher in the toolstack where you talk about
> pools of servers.  Admittedly, that's code outside of xen.git, but the
> hypercall names do percolate up into those codebases.
> 
> paging isn't a great name.  While it's what we call the infrastructure
> in x86, it has nothing to do with paging things out to disk (the thing
> everyone associates the name with), nor the xenpaging infrastructure
> (Xen's version of what OS paging supposedly means).

Okay, "paging" can be somewhat misleading. But "p2m" also doesn't fit
the use(s) on x86. Yet we'd like to use a name clearly better than the
previous (and yet more wrong/misleading) "shadow". I have to admit that
I can't think of any other sensible name, and among the ones discussed
I still think "paging" is the one coming closest despite the
generally different meaning of the word elsewhere.

>>>  ABI not.
>> With the comment in the public header saying "Users of this interface are
>> required to identify the granularity by other means" I wonder why the
>> interface needs to be byte-granular. If the caller needs to know page size
>> by whatever means, it can as well pass in a page count.
> 
> Not all architectures have pagetable levels of uniform size.  Not all
> architectures have the mapping granularity equal to the pagetable size. 
> x86 has examples of both of these (and in a rogue move, one x86 hardware
> vendor is trying to add even more pagetable asymmetry).  Other
> architectures have substantially more variety.
> 
> Even on x86, there are performance advantages from using 8k or 16k
> arrangements, which could cause us to insist upon >4k requirements here.
> (TBH, not actually for this usecase, but the principle is still valid.)

Perhaps, but that doesn't change the picture: The tool stack still needs
to know how many of the low bits in the request need to be clear (unless
you would accept to go back to rounding an unaligned input value). And
once it knows this value, it can still convert to a count of that-unit-
sized blocks of memory.

> The reason is that this is a size.  Sizes are in bytes, and that's
> how everyone thinks about them.  It's how the value is already specified
> in an xl cfg file, and it is entirely unambiguous at all levels of the stack.
> 
> Every translation of the value in the software stack risks breaking
> things, even stuff as simple as debugging.  As proof, count the number
> of translation errors I've already identified in this patch alone.
> 
> This ABI does not require any changes at all (not even recompiling
> userspace) for ARM to decide to use 16k or 64k pagetables in Xen, or for
> x86 to decide that 8k or 16k is beneficial enough to actually require.
> 
> Attempting to compress this uint64_t into something smaller by any means
> will create bugs, or at best increased complexity and a high risk of bugs.
> There isn't enough money on earth right now to afford a 128bit processor
> with enough ram for this current ABI to need changing.

I didn't suggest to use a type other than uint64_t. I'm merely puzzled
by your insistence on byte granularity while at the same time requiring
inputs to be suitable multiples of a base granularity, obtaining of
which is not even specified alongside this new interface.

> This is going to be a recurring theme through fixing the ABIs.  It's
> one of several areas where there is objectively one right answer, both
> in terms of ease of use and compatibility with future circumstances.

Well, I wouldn't say using whatever base granularity as a unit is
"objectively" less right.

>>> @@ -157,6 +165,25 @@ int p2m_set_allocation(struct domain *d, unsigned long pages, bool *preempted)
>>>      return 0;
>>>  }
>>>  
>>> +int arch_set_p2m_mempool_size(struct domain *d, uint64_t size)
>>> +{
>>> +    unsigned long pages = size >> PAGE_SHIFT;
>>> +    bool preempted = false;
>>> +    int rc;
>>> +
>>> +    if ( (size & ~PAGE_MASK) ||          /* Non page-sized request? */
>>> +         pages != (size >> PAGE_SHIFT) ) /* 32-bit overflow? */
>>> +        return -EINVAL;
>> Simply "(pages << PAGE_SHIFT) != size"? And then move the check into
>> common code?
> 
> These checks are deliberately not in common code.  That's just creating
> work that someone will need to undo in due course.

Would you mind clarifying why you think so? If the base unit isn't PAGE_SIZE
then all it takes is to introduce a suitable #define and/or global
specifying the intended per-arch value. Even if you expected this to become
a domain-dependent property, the corresponding value could still be a field
in (common) struct domain.

>>> --- a/xen/arch/x86/mm/paging.c
>>> +++ b/xen/arch/x86/mm/paging.c
>>> @@ -977,6 +977,45 @@ int __init paging_set_allocation(struct domain *d, unsigned int pages,
>>>  }
>>>  #endif
>>>  
>>> +int arch_get_p2m_mempool_size(struct domain *d, uint64_t *size)
>>> +{
>>> +    int rc;
>>> +
>>> +    if ( is_pv_domain(d) )
>>> +        return -EOPNOTSUPP;
>>> +
>>> +    if ( hap_enabled(d) )
>>> +        rc = hap_get_allocation_bytes(d, size);
>>> +    else
>>> +        rc = shadow_get_allocation_bytes(d, size);
>>> +
>>> +    return rc;
>>> +}
>>> +
>>> +int arch_set_p2m_mempool_size(struct domain *d, uint64_t size)
>>> +{
>>> +    unsigned long pages = size >> PAGE_SHIFT;
>>> +    bool preempted = false;
>>> +    int rc;
>>> +
>>> +    if ( is_pv_domain(d) )
>>> +        return -EOPNOTSUPP;
>> Why? You do say "PV is weird" in a post-commit-message remark, but why
>> do you want to retain this weirdness? Even if today the tool stack
>> doesn't set the size when enabling log-dirty mode, I'd view this as a
>> bug which could be addressed purely in the tool stack if this check
>> wasn't there.
> 
> I want to clean up PV, but again, it wasn't sufficiently trivially-safe
> to do right now.
> 
> PV is weird because it is neither hap_enabled() (fundamentally), nor
> shadow_enabled() when logdirty isn't active.  While the freelist is
> suitably constructed, the get/set operations were previously rejected
> and cleanup is local to the disable op, not domain shutdown.
> 
> I could put in a /* TODO: relax in due course */ if you'd prefer?

Yes please - that would clarify this isn't a hard requirement.

>>> +    if ( size & ~PAGE_MASK )             /* Non page-sized request? */
>>> +        return -EINVAL;
>>> +
>>> +    ASSERT(paging_mode_enabled(d));
>> Not only with the PV aspect in mind - why? It looks reasonable to me
>> to set the pool size before enabling any paging mode.
> 
> Because this is how all the existing logic is expressed, and this patch
> wants backporting.

What do you mean by "is expressed"? I can't seem to be able to find a
similar check on the existing code paths. But given that yesterday I
almost overlooked the d->vcpu check in paging_domctl(), I can easily
accept that I might be overlooking something somewhere.

Jan
Jan Beulich Oct. 27, 2022, 7:42 a.m. UTC | #6
On 26.10.2022 12:20, Andrew Cooper wrote:
> +int arch_set_p2m_mempool_size(struct domain *d, uint64_t size)
> +{
> +    unsigned long pages = size >> PAGE_SHIFT;
> +    bool preempted = false;
> +    int rc;
> +
> +    if ( is_pv_domain(d) )
> +        return -EOPNOTSUPP;
> +
> +    if ( size & ~PAGE_MASK )             /* Non page-sized request? */
> +        return -EINVAL;
> +
> +    ASSERT(paging_mode_enabled(d));
> +
> +    paging_lock(d);
> +    if ( hap_enabled(d) )
> +        rc = hap_set_allocation(d, pages, &preempted);
> +    else
> +        rc = shadow_set_allocation(d, pages, &preempted);
> +    paging_unlock(d);
> +
> +    return preempted ? -ERESTART : rc;
> +}

There's a further difference between HAP and shadow which may want/need
reflecting here: shadow's handling of XEN_DOMCTL_SHADOW_OP_SET_ALLOCATION
rejects 0 as an input when shadow mode is still enabled. On one hand
that's reasonable from an abstract pov, while otoh it may be viewed as
questionable when at the same time setting to a very small value (which
will then be upped to the minimum acceptable one) is permitted. At the
very least this guards against emptying of the pool where active shadows
would be allocated from (which isn't a problem on HAP as there apart
from the allocations through hap_alloc_p2m_page() the only thing coming
from the pool are the monitor tables of each vCPU, which set-allocation
wouldn't attempt to free).

Jan
Julien Grall Oct. 27, 2022, 9:27 a.m. UTC | #7
Hi Jan,

On 27/10/2022 07:56, Jan Beulich wrote:
> On 26.10.2022 23:24, Julien Grall wrote:
>> On 26/10/2022 20:22, Andrew Cooper wrote:
>>>>> --- a/xen/arch/x86/mm/hap/hap.c
>>>>> +++ b/xen/arch/x86/mm/hap/hap.c
>>>>> @@ -345,6 +345,16 @@ unsigned int hap_get_allocation(struct domain *d)
>>>>>                + ((pg & ((1 << (20 - PAGE_SHIFT)) - 1)) ? 1 : 0));
>>>>>    }
>>>>>    
>>>>> +int hap_get_allocation_bytes(struct domain *d, uint64_t *size)
>>>>> +{
>>>>> +    unsigned long pages = (d->arch.paging.hap.total_pages +
>>>>> +                           d->arch.paging.hap.p2m_pages);
>>>> Unlike for Arm no ACCESS_ONCE() here? Also the addition can in
>>>> principle overflow, because being done only in 32 bits.
>>>
>>> I'm not actually convinced ARM needs ACCESS_ONCE() to begin with.  I
>>> can't see any legal transformation of that logic which could result in a
>>> torn load.
>>
>> AFAIU, ACCESS_ONCE() is not only about torn loads but also making sure
>> that the compiler will only read the value once.
>>
>> When LTO is enabled (not yet supported) in Xen, can we guarantee the
>> compiler will not try to access total_pages twice (obviously it would be
>> caller dependent)?
> 
> Aren't all accesses (supposed to be) under paging lock? At which point
> there's no issue with multiple (or torn) accesses?

Not in the current code base for Arm. I haven't checked whether this is 
the case with the new version.

If it is suitably locked, then I think we should remove all the 
ACCESS_ONCE() and add an ASSERT(spin_is_locked(...)) to make clear this 
should be called with the lock held.
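
Something along these lines, as a sketch of that alternative (the helper
name is hypothetical; the lock is the one this patch already takes in
arch_set_p2m_mempool_size() on Arm):

    static unsigned long p2m_pool_pages(struct domain *d)
    {
        /* Caller must hold the paging lock; no ACCESS_ONCE() needed. */
        ASSERT(spin_is_locked(&d->arch.paging.lock));

        return d->arch.paging.p2m_total_pages;
    }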

Cheers,
George Dunlap Oct. 28, 2022, 3:27 p.m. UTC | #8
On Thu, Oct 27, 2022 at 8:12 AM Jan Beulich <jbeulich@suse.com> wrote:

> On 26.10.2022 21:22, Andrew Cooper wrote:
> > On 26/10/2022 14:42, Jan Beulich wrote:
>


> > paging isn't a great name.  While it's what we call the infrastructure
> > in x86, it has nothing to do with paging things out to disk (the thing
> > everyone associates the name with), nor the xenpaging infrastructure
> > (Xen's version of what OS paging supposedly means).
>
> Okay, "paging" can be somewhat misleading. But "p2m" also doesn't fit
> the use(s) on x86. Yet we'd like to use a name clearly better than the
> previous (and yet more wrong/misleading) "shadow". I have to admit that
> I can't think of any other sensible name, and among the ones discussed
> I still think "paging" is the one coming closest despite the
> generally different meaning of the word elsewhere.
>

Inside the world of operating systems / hypervisors, "paging" has always
meant "things related to a pagetable"; this includes "paging out to disk".
In fact, the latter already has a perfectly good name -- "swap" (e.g., swap
file, swappiness, hypervisor swap).

Grep for "paging" inside of Xen.  We have the paging lock, paging modes,
nested paging, and so on.  There's absolutely no reason to start thinking
of "paging" as exclusively meaning "hypervisor swap".

[ A bunch of stuff about using bytes as a unit size]

> > This is going to be a recurring theme through fixing the ABIs.  It's
> > one of several areas where there is objectively one right answer, both
> > in terms of ease of use and compatibility with future circumstances.
>
> Well, I wouldn't say using whatever base granularity as a unit is
> "objectively" less right.
>

Personally I don't think bytes or pages either have a particular advantage:

* Using bytes
 - Advantage: Can always use the same number regardless of the underlying
page size
 - Disadvantage: "Trap" where if you forget to check the page size, you
might accidentally pass an invalid input.  Or to put it differently, most
"reasonable-looking" numbers are actually invalid (since most numbers
aren't page-aligned).
* Using pages
 - Advantage: No need to check page alignment in HV, no accidentally
invalid input
 - Disadvantage: Caller must check page size and do a shift on every call

What would personally tip me one way or the other is consistency with other
hypercalls.  If most of our hypercalls (or even most of our MM hypercalls)
use bytes, then I'd lean towards bytes.  Whereas if most of our hypercalls
use pages, I'd lean towards pages.

 -George
Jan Beulich Oct. 31, 2022, 9:26 a.m. UTC | #9
On 28.10.2022 17:27, George Dunlap wrote:
> On Thu, Oct 27, 2022 at 8:12 AM Jan Beulich <jbeulich@suse.com> wrote:
> 
>> On 26.10.2022 21:22, Andrew Cooper wrote:
>>> On 26/10/2022 14:42, Jan Beulich wrote:
>>
> 
> 
>>> paging isn't a great name.  While it's what we call the infrastructure
>>> in x86, it has nothing to do with paging things out to disk (the thing
>>> everyone associates the name with), nor the xenpaging infrastructure
>>> (Xen's version of what OS paging supposedly means).
>>
>> Okay, "paging" can be somewhat misleading. But "p2m" also doesn't fit
>> the use(s) on x86. Yet we'd like to use a name clearly better than the
>> previous (and yet more wrong/misleading) "shadow". I have to admit that
>> I can't think of any other sensible name, and among the ones discussed
>> I still think "paging" is the one coming closest despite the
>> generally different meaning of the word elsewhere.
>>
> 
> Inside the world of operating systems / hypervisors, "paging" has always
> meant "things related to a pagetable"; this includes "paging out to disk".
> In fact, the latter already has a perfectly good name -- "swap" (e.g., swap
> file, swappiness, hypervisor swap).
> 
> Grep for "paging" inside of Xen.  We have the paging lock, paging modes,
> nested paging, and so on.  There's absolutely no reason to start thinking
> of "paging" as exclusively meaning "hypervisor swap".

Just to clarify: You actually support my thinking that "paging" is an okay
term to use here? I ask because, perhaps merely because of not being a
native speaker, to me content and wording suggest different things: The
former appears to support my response to Andrew, while the latter reads to
me as if you were objecting.

Jan
George Dunlap Oct. 31, 2022, 10:12 a.m. UTC | #10
> On 31 Oct 2022, at 09:26, Jan Beulich <jbeulich@suse.com> wrote:
> 
> On 28.10.2022 17:27, George Dunlap wrote:
>> On Thu, Oct 27, 2022 at 8:12 AM Jan Beulich <jbeulich@suse.com> wrote:
>> 
>>> On 26.10.2022 21:22, Andrew Cooper wrote:
>>>> On 26/10/2022 14:42, Jan Beulich wrote:
>>> 
>> 
>> 
>>>> paging isn't a great name. While it's what we call the infrastructure
>>>> in x86, it has nothing to do with paging things out to disk (the thing
>>>> everyone associates the name with), nor the xenpaging infrastructure
>>>> (Xen's version of what OS paging supposedly means).
>>> 
>>> Okay, "paging" can be somewhat misleading. But "p2m" also doesn't fit
>>> the use(s) on x86. Yet we'd like to use a name clearly better than the
>>> previous (and yet more wrong/misleading) "shadow". I have to admit that
>>> I can't think of any other sensible name, and among the ones discussed
>>> I still think "paging" is the one coming closest despite the
>>> generally different meaning of the word elsewhere.
>>> 
>> 
>> Inside the world of operating systems / hypervisors, "paging" has always
>> meant "things related to a pagetable"; this includes "paging out to disk".
>> In fact, the latter already has a perfectly good name -- "swap" (e.g., swap
>> file, swappiness, hypervisor swap).
>> 
>> Grep for "paging" inside of Xen. We have the paging lock, paging modes,
>> nested paging, and so on. There's absolutely no reason to start thinking
>> of "paging" as exclusively meaning "hypervisor swap".
> 
> Just to clarify: You actually support my thinking that "paging" is an okay
> term to use here? I ask because, perhaps merely because of not being a
> native speaker, to me content and wording suggest different things: The
> former appears to support my response to Andrew, while the latter reads to
> me as if you were objecting.

Sorry, the tone was “objecting” because it was directed mainly at Andrew’s arguments.  I thought about replying only to his mail, but it seemed like since I was clearly “joining the discussion”, it would make more sense to quote you too.  I could probably have made it more clear by leading with something like, “I tend to agree with Jan here. …”

 -George
Stefano Stabellini Nov. 16, 2022, 1:19 a.m. UTC | #11
On Fri, 28 Oct 2022, George Dunlap wrote:
> On Thu, Oct 27, 2022 at 8:12 AM Jan Beulich <jbeulich@suse.com> wrote:
>       On 26.10.2022 21:22, Andrew Cooper wrote:
>       > On 26/10/2022 14:42, Jan Beulich wrote:
> 
>  
>       > paging isn't a great name.  While it's what we call the infrastructure
>       > in x86, it has nothing to do with paging things out to disk (the thing
>       > everyone associates the name with), nor the xenpaging infrastructure
>       > (Xen's version of what OS paging supposedly means).
> 
>       Okay, "paging" can be somewhat misleading. But "p2m" also doesn't fit
>       the use(s) on x86. Yet we'd like to use a name clearly better than the
>       previous (and yet more wrong/misleading) "shadow". I have to admit that
>       I can't think of any other sensible name, and among the ones discussed
>       I still think "paging" is the one coming closest despite the
>       generally different meaning of the word elsewhere.
> 
> 
> Inside the world of operating systems / hypervisors, "paging" has always meant "things related to a pagetable"; this includes "paging out
> to disk".  In fact, the latter already has a perfectly good name -- "swap" (e.g., swap file, swappiness, hypervisor swap).
> 
> Grep for "paging" inside of Xen.  We have the paging lock, paging modes, nested paging, and so on.  There's absolutely no reason to start
> thinking of "paging" as exclusively meaning "hypervisor swap".
>  
> [ A bunch of stuff about using bytes as a unit size]
> 
>       > This is going to be a recurring theme through fixing the ABIs.  It's
>       > one of several areas where there is objectively one right answer, both
>       > in terms of ease of use and compatibility with future circumstances.
> 
>       Well, I wouldn't say using whatever base granularity as a unit is
>       "objectively" less right.
> 
> 
> Personally I don't think bytes or pages either have a particular advantage:
> 
> * Using bytes
>  - Advantage: Can always use the same number regardless of the underlying page size
>  - Disadvantage: "Trap" where if you forget to check the page size, you might accidentally pass an invalid input.  Or to put it
> differently, most "reasonable-looking" numbers are actually invalid (since most numbers aren't page-aligned).
> * Using pages
>  - Advantage: No need to check page alignment in HV, no accidentally invalid input
>  - Disadvantage: Caller must check page size and do a shift on every call
> 
> What would personally tip me one way or the other is consistency with other hypercalls.  If most of our hypercalls (or even most of our MM
> hypercalls) use bytes, then I'd lean towards bytes.  Whereas if most of our hypercalls use pages, I'd lean towards pages.


Joining the discussion late to try to move things forward.

Let me premise that I don't have a strong feeling either way, but I
think it would be clearer to use "bytes" instead of "pages" as argument.
The reason is that with pages you are never sure of the actual
granularity. Is it 4K? 16K? 64K? Especially considering that hypervisor
pages can be of different size than guest pages. In theory you could
have a situation where Xen uses 4K, Dom0 uses 16K and domU uses 64K, or
any combination of the three. With bytes, at least you know the actual
size.

If we use "bytes" as argument, then it also makes sense not to use the
word "pages" in the hypercall name.

That said, any name would work and both bytes and pages would work, so
I would leave it to the contributor who is doing the work to choose.
Jan Beulich Nov. 16, 2022, 8:26 a.m. UTC | #12
On 16.11.2022 02:19, Stefano Stabellini wrote:
> On Fri, 28 Oct 2022, George Dunlap wrote:
>> On Thu, Oct 27, 2022 at 8:12 AM Jan Beulich <jbeulich@suse.com> wrote:
>>       On 26.10.2022 21:22, Andrew Cooper wrote:
>>       > On 26/10/2022 14:42, Jan Beulich wrote:
>>
>>  
>>       > paging isn't a great name.  While it's what we call the infrastructure
>>       > in x86, it has nothing to do with paging things out to disk (the thing
>>       > everyone associates the name with), nor the xenpaging infrastructure
>>       > (Xen's version of what OS paging supposedly means).
>>
>>       Okay, "paging" can be somewhat misleading. But "p2m" also doesn't fit
>>       the use(s) on x86. Yet we'd like to use a name clearly better than the
>>       previous (and yet more wrong/misleading) "shadow". I have to admit that
>>       I can't think of any other sensible name, and among the ones discussed
>>       I still think "paging" is the one coming closest despite the
>>       generally different meaning of the word elsewhere.
>>
>>
>> Inside the world of operating systems / hypervisors, "paging" has always meant "things related to a pagetable"; this includes "paging out
>> to disk".  In fact, the latter already has a perfectly good name -- "swap" (e.g., swap file, swappiness, hypervisor swap).
>>
>> Grep for "paging" inside of Xen.  We have the paging lock, paging modes, nested paging, and so on.  There's absolutely no reason to start
>> thinking of "paging" as exclusively meaning "hypervisor swap".
>>  
>> [ A bunch of stuff about using bytes as a unit size]
>>
>>       > This is going to be a recurring theme through fixing the ABIs.  It's
>>       > one of several areas where there is objectively one right answer, both
>>       > in terms of ease of use and compatibility with future circumstances.
>>
>>       Well, I wouldn't say using whatever base granularity as a unit is
>>       "objectively" less right.
>>
>>
>> Personally I don't think bytes or pages either have a particular advantage:
>>
>> * Using bytes
>>  - Advantage: Can always use the same number regardless of the underlying page size
>>  - Disadvantage: "Trap" where if you forget to check the page size, you might accidentally pass an invalid input.  Or to put it
>> differently, most "reasonable-looking" numbers are actually invalid (since most numbers aren't page-aligned).
>> * Using pages
>>  - Advantage: No need to check page alignment in HV, no accidentally invalid input
>>  - Disadvantage: Caller must check page size and do a shift on every call
>>
>> What would personally tip me one way or the other is consistency with other hypercalls.  If most of our hypercalls (or even most of our MM
>> hypercalls) use bytes, then I'd lean towards bytes.  Whereas if most of our hypercalls use pages, I'd lean towards pages.
> 
> 
> Joining the discussion late to try to move things forward.
> 
> Let me premise that I don't have a strong feeling either way, but I
> think it would be clearer to use "bytes" instead of "pages" as argument.
> The reason is that with pages you are never sure of the actual
> granularity. Is it 4K? 16K? 64K? Especially considering that hypervisor
> pages can be of different size than guest pages. In theory you could
> have a situation where Xen uses 4K, Dom0 uses 16K and domU uses 64K, or
> any combination of the three. With bytes, at least you know the actual
> size.
> 
> If we use "bytes" as argument, then it also makes sense not to use the
> word "pages" in the hypercall name.
> 
> That said, any name would work and both bytes and pages would work, so
> I would leave it to the contributor who is doing the work to choose.

FAOD: There was no suggestion to use "pages" in the name; it was "paging"
which was suggested.

Jan

Patch

diff --git a/tools/include/xenctrl.h b/tools/include/xenctrl.h
index 0c8b4c3aa7a5..f503f03a3927 100644
--- a/tools/include/xenctrl.h
+++ b/tools/include/xenctrl.h
@@ -893,6 +893,9 @@  long long xc_logdirty_control(xc_interface *xch,
                               unsigned int mode,
                               xc_shadow_op_stats_t *stats);
 
+int xc_get_p2m_mempool_size(xc_interface *xch, uint32_t domid, uint64_t *size);
+int xc_set_p2m_mempool_size(xc_interface *xch, uint32_t domid, uint64_t size);
+
 int xc_sched_credit_domain_set(xc_interface *xch,
                                uint32_t domid,
                                struct xen_domctl_sched_credit *sdom);
diff --git a/tools/libs/ctrl/xc_domain.c b/tools/libs/ctrl/xc_domain.c
index 14c0420c35be..9ac09cfab036 100644
--- a/tools/libs/ctrl/xc_domain.c
+++ b/tools/libs/ctrl/xc_domain.c
@@ -706,6 +706,35 @@  long long xc_logdirty_control(xc_interface *xch,
     return (rc == 0) ? domctl.u.shadow_op.pages : rc;
 }
 
+int xc_get_p2m_mempool_size(xc_interface *xch, uint32_t domid, uint64_t *size)
+{
+    int rc;
+    struct xen_domctl domctl = {
+        .cmd         = XEN_DOMCTL_get_p2m_mempool_size,
+        .domain      = domid,
+    };
+
+    rc = do_domctl(xch, &domctl);
+    if ( rc )
+        return rc;
+
+    *size = domctl.u.p2m_mempool.size;
+    return 0;
+}
+
+int xc_set_p2m_mempool_size(xc_interface *xch, uint32_t domid, uint64_t size)
+{
+    struct xen_domctl domctl = {
+        .cmd         = XEN_DOMCTL_set_p2m_mempool_size,
+        .domain      = domid,
+        .u.p2m_mempool = {
+            .size = size,
+        },
+    };
+
+    return do_domctl(xch, &domctl);
+}
+
 int xc_domain_setmaxmem(xc_interface *xch,
                         uint32_t domid,
                         uint64_t max_memkb)
diff --git a/xen/arch/arm/p2m.c b/xen/arch/arm/p2m.c
index 94d3b60b1387..4607cde6f0b8 100644
--- a/xen/arch/arm/p2m.c
+++ b/xen/arch/arm/p2m.c
@@ -100,6 +100,14 @@  unsigned int p2m_get_allocation(struct domain *d)
     return ROUNDUP(nr_pages, 1 << (20 - PAGE_SHIFT)) >> (20 - PAGE_SHIFT);
 }
 
+/* Return the size of the pool, in bytes. */
+int arch_get_p2m_mempool_size(struct domain *d, uint64_t *size)
+{
+    *size = ACCESS_ONCE(d->arch.paging.p2m_total_pages) << PAGE_SHIFT;
+
+    return 0;
+}
+
 /*
  * Set the pool of pages to the required number of pages.
  * Returns 0 for success, non-zero for failure.
@@ -157,6 +165,25 @@  int p2m_set_allocation(struct domain *d, unsigned long pages, bool *preempted)
     return 0;
 }
 
+int arch_set_p2m_mempool_size(struct domain *d, uint64_t size)
+{
+    unsigned long pages = size >> PAGE_SHIFT;
+    bool preempted = false;
+    int rc;
+
+    if ( (size & ~PAGE_MASK) ||          /* Non page-sized request? */
+         pages != (size >> PAGE_SHIFT) ) /* 32-bit overflow? */
+        return -EINVAL;
+
+    spin_lock(&d->arch.paging.lock);
+    rc = p2m_set_allocation(d, pages, &preempted);
+    spin_unlock(&d->arch.paging.lock);
+
+    ASSERT(preempted == (rc == -ERESTART));
+
+    return rc;
+}
+
 int p2m_teardown_allocation(struct domain *d)
 {
     int ret = 0;
diff --git a/xen/arch/x86/include/asm/hap.h b/xen/arch/x86/include/asm/hap.h
index 90dece29deca..14d2f212dab9 100644
--- a/xen/arch/x86/include/asm/hap.h
+++ b/xen/arch/x86/include/asm/hap.h
@@ -47,6 +47,7 @@  int   hap_track_dirty_vram(struct domain *d,
 extern const struct paging_mode *hap_paging_get_mode(struct vcpu *);
 int hap_set_allocation(struct domain *d, unsigned int pages, bool *preempted);
 unsigned int hap_get_allocation(struct domain *d);
+int hap_get_allocation_bytes(struct domain *d, uint64_t *size);
 
 #endif /* XEN_HAP_H */
 
diff --git a/xen/arch/x86/include/asm/shadow.h b/xen/arch/x86/include/asm/shadow.h
index 1365fe480518..dad876d29499 100644
--- a/xen/arch/x86/include/asm/shadow.h
+++ b/xen/arch/x86/include/asm/shadow.h
@@ -97,6 +97,8 @@  void shadow_blow_tables_per_domain(struct domain *d);
 int shadow_set_allocation(struct domain *d, unsigned int pages,
                           bool *preempted);
 
+int shadow_get_allocation_bytes(struct domain *d, uint64_t *size);
+
 #else /* !CONFIG_SHADOW_PAGING */
 
 #define shadow_vcpu_teardown(v) ASSERT(is_pv_vcpu(v))
@@ -108,6 +110,8 @@  int shadow_set_allocation(struct domain *d, unsigned int pages,
     ({ ASSERT_UNREACHABLE(); -EOPNOTSUPP; })
 #define shadow_set_allocation(d, pages, preempted) \
     ({ ASSERT_UNREACHABLE(); -EOPNOTSUPP; })
+#define shadow_get_allocation_bytes(d, size) \
+    ({ ASSERT_UNREACHABLE(); -EOPNOTSUPP; })
 
 static inline void sh_remove_shadows(struct domain *d, mfn_t gmfn,
                                      int fast, int all) {}
diff --git a/xen/arch/x86/mm/hap/hap.c b/xen/arch/x86/mm/hap/hap.c
index f809ea9aa6ae..50c3d6e63fa5 100644
--- a/xen/arch/x86/mm/hap/hap.c
+++ b/xen/arch/x86/mm/hap/hap.c
@@ -345,6 +345,16 @@  unsigned int hap_get_allocation(struct domain *d)
             + ((pg & ((1 << (20 - PAGE_SHIFT)) - 1)) ? 1 : 0));
 }
 
+int hap_get_allocation_bytes(struct domain *d, uint64_t *size)
+{
+    unsigned long pages = (d->arch.paging.hap.total_pages +
+                           d->arch.paging.hap.p2m_pages);
+
+    *size = pages << PAGE_SHIFT;
+
+    return 0;
+}
+
 /* Set the pool of pages to the required number of pages.
  * Returns 0 for success, non-zero for failure. */
 int hap_set_allocation(struct domain *d, unsigned int pages, bool *preempted)
diff --git a/xen/arch/x86/mm/paging.c b/xen/arch/x86/mm/paging.c
index 3a355eee9ca3..b3f7c46e1dfd 100644
--- a/xen/arch/x86/mm/paging.c
+++ b/xen/arch/x86/mm/paging.c
@@ -977,6 +977,45 @@  int __init paging_set_allocation(struct domain *d, unsigned int pages,
 }
 #endif
 
+int arch_get_p2m_mempool_size(struct domain *d, uint64_t *size)
+{
+    int rc;
+
+    if ( is_pv_domain(d) )
+        return -EOPNOTSUPP;
+
+    if ( hap_enabled(d) )
+        rc = hap_get_allocation_bytes(d, size);
+    else
+        rc = shadow_get_allocation_bytes(d, size);
+
+    return rc;
+}
+
+int arch_set_p2m_mempool_size(struct domain *d, uint64_t size)
+{
+    unsigned long pages = size >> PAGE_SHIFT;
+    bool preempted = false;
+    int rc;
+
+    if ( is_pv_domain(d) )
+        return -EOPNOTSUPP;
+
+    if ( size & ~PAGE_MASK )             /* Non page-sized request? */
+        return -EINVAL;
+
+    ASSERT(paging_mode_enabled(d));
+
+    paging_lock(d);
+    if ( hap_enabled(d) )
+        rc = hap_set_allocation(d, pages, &preempted);
+    else
+        rc = shadow_set_allocation(d, pages, &preempted);
+    paging_unlock(d);
+
+    return preempted ? -ERESTART : rc;
+}
+
 /*
  * Local variables:
  * mode: C
diff --git a/xen/arch/x86/mm/shadow/common.c b/xen/arch/x86/mm/shadow/common.c
index badfd53c6b23..d190601c4424 100644
--- a/xen/arch/x86/mm/shadow/common.c
+++ b/xen/arch/x86/mm/shadow/common.c
@@ -1427,6 +1427,16 @@  static unsigned int shadow_get_allocation(struct domain *d)
             + ((pg & ((1 << (20 - PAGE_SHIFT)) - 1)) ? 1 : 0));
 }
 
+int shadow_get_allocation_bytes(struct domain *d, uint64_t *size)
+{
+    unsigned long pages = (d->arch.paging.shadow.total_pages +
+                           d->arch.paging.shadow.p2m_pages);
+
+    *size = pages << PAGE_SHIFT;
+
+    return 0;
+}
+
 /**************************************************************************/
 /* Hash table for storing the guest->shadow mappings.
  * The table itself is an array of pointers to shadows; the shadows are then
diff --git a/xen/common/domctl.c b/xen/common/domctl.c
index 69fb9abd346f..8f318b830185 100644
--- a/xen/common/domctl.c
+++ b/xen/common/domctl.c
@@ -874,6 +874,20 @@  long do_domctl(XEN_GUEST_HANDLE_PARAM(xen_domctl_t) u_domctl)
         ret = iommu_do_domctl(op, d, u_domctl);
         break;
 
+    case XEN_DOMCTL_get_p2m_mempool_size:
+        ret = arch_get_p2m_mempool_size(d, &op->u.p2m_mempool.size);
+        if ( !ret )
+            copyback = 1;
+        break;
+
+    case XEN_DOMCTL_set_p2m_mempool_size:
+        ret = arch_set_p2m_mempool_size(d, op->u.p2m_mempool.size);
+
+        if ( ret == -ERESTART )
+            ret = hypercall_create_continuation(
+                __HYPERVISOR_domctl, "h", u_domctl);
+        break;
+
     default:
         ret = arch_do_domctl(op, d, u_domctl);
         break;
diff --git a/xen/include/public/domctl.h b/xen/include/public/domctl.h
index b2ae839c3632..7da09d5925c8 100644
--- a/xen/include/public/domctl.h
+++ b/xen/include/public/domctl.h
@@ -214,7 +214,10 @@  struct xen_domctl_getpageframeinfo3 {
  /* Return the bitmap but do not modify internal copy. */
 #define XEN_DOMCTL_SHADOW_OP_PEEK        12
 
-/* Memory allocation accessors. */
+/*
+ * Memory allocation accessors.  These APIs are broken and will be removed.
+ * Use XEN_DOMCTL_{get,set}_p2m_mempool_size instead.
+ */
 #define XEN_DOMCTL_SHADOW_OP_GET_ALLOCATION   30
 #define XEN_DOMCTL_SHADOW_OP_SET_ALLOCATION   31
 
@@ -946,6 +949,24 @@  struct xen_domctl_cacheflush {
     xen_pfn_t start_pfn, nr_pfns;
 };
 
+/*
+ * XEN_DOMCTL_get_p2m_mempool_size / XEN_DOMCTL_set_p2m_mempool_size.
+ *
+ * Get or set the P2M memory pool size.  The size is in bytes.
+ *
+ * The P2M memory pool is a dedicated pool of memory for managing the guest
+ * physical -> host physical mappings, usually containing pagetables.
+ * Implementation details cause there to be a minimum granularity, usually the
+ * size of pagetables used by Xen.  Users of this interface are required to
+ * identify the granularity by other means.
+ *
+ * The set operation can fail midway through the request (e.g. Xen running out
+ * of memory, no free memory to reclaim from the pool, etc.).
+ */
+struct xen_domctl_p2m_mempool {
+    uint64_aligned_t size; /* IN/OUT.  Size in bytes. */
+};
+
 #if defined(__i386__) || defined(__x86_64__)
 struct xen_domctl_vcpu_msr {
     uint32_t         index;
@@ -1274,6 +1295,8 @@  struct xen_domctl {
 #define XEN_DOMCTL_get_cpu_policy                82
 #define XEN_DOMCTL_set_cpu_policy                83
 #define XEN_DOMCTL_vmtrace_op                    84
+#define XEN_DOMCTL_get_p2m_mempool_size          85
+#define XEN_DOMCTL_set_p2m_mempool_size          86
 #define XEN_DOMCTL_gdbsx_guestmemio            1000
 #define XEN_DOMCTL_gdbsx_pausevcpu             1001
 #define XEN_DOMCTL_gdbsx_unpausevcpu           1002
@@ -1335,6 +1358,7 @@  struct xen_domctl {
         struct xen_domctl_psr_alloc         psr_alloc;
         struct xen_domctl_vuart_op          vuart_op;
         struct xen_domctl_vmtrace_op        vmtrace_op;
+        struct xen_domctl_p2m_mempool       p2m_mempool;
         uint8_t                             pad[128];
     } u;
 };
diff --git a/xen/include/xen/domain.h b/xen/include/xen/domain.h
index 2c8116afba27..01aaf4dedbe8 100644
--- a/xen/include/xen/domain.h
+++ b/xen/include/xen/domain.h
@@ -98,6 +98,9 @@  void arch_get_info_guest(struct vcpu *, vcpu_guest_context_u);
 int arch_initialise_vcpu(struct vcpu *v, XEN_GUEST_HANDLE_PARAM(void) arg);
 int default_initialise_vcpu(struct vcpu *v, XEN_GUEST_HANDLE_PARAM(void) arg);
 
+int arch_get_p2m_mempool_size(struct domain *d, uint64_t *size /* bytes */);
+int arch_set_p2m_mempool_size(struct domain *d, uint64_t size /* bytes */);
+
 int domain_relinquish_resources(struct domain *d);
 
 void dump_pageframe_info(struct domain *d);
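
For illustration, a hedged sketch of how a toolstack caller might use the
two new libxc wrappers (the 64 MiB figure and error handling are
illustrative; XC_PAGE_SIZE stands in for the "other means" of identifying
the granularity mentioned in the public header comment):

#include <inttypes.h>
#include <stdio.h>
#include <xenctrl.h>

static int resize_p2m_pool(xc_interface *xch, uint32_t domid)
{
    uint64_t size;
    int rc = xc_get_p2m_mempool_size(xch, domid, &size);

    if ( rc )
        return rc;

    printf("dom%" PRIu32 " p2m pool: %" PRIu64 " bytes\n", domid, size);

    /* Requests must be granularity-aligned; 64 MiB is a multiple of
     * XC_PAGE_SIZE, so no rounding is needed here. */
    return xc_set_p2m_mempool_size(xch, domid, 64ULL << 20);
}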