
[v3,3/3] x86 / vmx: use a 'normal' domheap page for APIC_DEFAULT_PHYS_BASE

Message ID 20200123140305.21050-4-pdurrant@amazon.com (mailing list archive)
State Superseded
Series purge free_shared_domheap_page()

Commit Message

Paul Durrant Jan. 23, 2020, 2:03 p.m. UTC
vmx_alloc_vlapic_mapping() currently contains some very odd-looking code
that allocates a MEMF_no_owner domheap page and then shares it with the guest
as if it were a xenheap page. This then requires vmx_free_vlapic_mapping()
to call a special function in the mm code: free_shared_domheap_page().

By using a 'normal' domheap page (i.e. by not passing MEMF_no_owner to
alloc_domheap_page()), the odd-looking code in vmx_alloc_vlapic_mapping()
can simply use get_page_and_type() to set up a writable mapping before
insertion in the P2M and vmx_free_vlapic_mapping() can simply release the
page using put_page_alloc_ref() followed by put_page_and_type(). This
then allows free_shared_domheap_page() to be purged.

There is, however, some fall-out from this simplification:

- alloc_domheap_page() will now call assign_pages() and run into the fact
  that 'max_pages' is not set until some time after domain_create(). To
  avoid an allocation failure, domain_create() is modified to set
  max_pages to an initial value, sufficient to cover any domheap
  allocations required to complete domain creation. The value will be
  set to the 'real' max_pages when the tool-stack later performs the
  XEN_DOMCTL_max_mem operation, thus allowing the rest of the domain's
  memory to be allocated.

- Because the domheap page is no longer a pseudo-xenheap page, the
  reference counting will prevent the domain from being destroyed. Thus
  the call to vmx_free_vlapic_mapping() is moved from the
  domain_destroy() method into the domain_relinquish_resources() method.
  Whilst in the area, make the domain_destroy() method an optional
  alternative_vcall() (since it will no longer perform any function in VMX
  and is stubbed in SVM anyway).

Signed-off-by: Paul Durrant <pdurrant@amazon.com>
---
Cc: Jan Beulich <jbeulich@suse.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: Wei Liu <wl@xen.org>
Cc: "Roger Pau Monné" <roger.pau@citrix.com>
Cc: George Dunlap <George.Dunlap@eu.citrix.com>
Cc: Ian Jackson <ian.jackson@eu.citrix.com>
Cc: Julien Grall <julien@xen.org>
Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Cc: Stefano Stabellini <sstabellini@kernel.org>
Cc: Jun Nakajima <jun.nakajima@intel.com>
Cc: Kevin Tian <kevin.tian@intel.com>

v2:
 - Set an initial value for max_pages rather than avoiding the check in
   assign_pages()
 - Make domain_destroy() optional
---
 xen/arch/x86/hvm/hvm.c     |  4 +++-
 xen/arch/x86/hvm/svm/svm.c |  5 -----
 xen/arch/x86/hvm/vmx/vmx.c | 25 ++++++++++++++++++++-----
 xen/arch/x86/mm.c          | 10 ----------
 xen/common/domain.c        |  8 ++++++++
 xen/include/asm-x86/mm.h   |  2 --
 6 files changed, 31 insertions(+), 23 deletions(-)

Comments

George Dunlap Jan. 23, 2020, 2:45 p.m. UTC | #1
On 1/23/20 2:03 PM, Paul Durrant wrote:
> vmx_alloc_vlapic_mapping() currently contains some very odd looking code
> that allocates a MEMF_no_owner domheap page and then shares with the guest
> as if it were a xenheap page. This then requires vmx_free_vlapic_mapping()
> to call a special function in the mm code: free_shared_domheap_page().
> 
> By using a 'normal' domheap page (i.e. by not passing MEMF_no_owner to
> alloc_domheap_page()), the odd looking code in vmx_alloc_vlapic_mapping()
> can simply use get_page_and_type() to set up a writable mapping before
> insertion in the P2M and vmx_free_vlapic_mapping() can simply release the
> page using put_page_alloc_ref() followed by put_page_and_type(). This
> then allows free_shared_domheap_page() to be purged.
> 
> There is, however, some fall-out from this simplification:
> 
> - alloc_domheap_page() will now call assign_pages() and run into the fact
>   that 'max_pages' is not set until some time after domain_create(). To
>   avoid an allocation failure, domain_create() is modified to set
>   max_pages to an initial value, sufficient to cover any domheap
>   allocations required to complete domain creation. The value will be
>   set to the 'real' max_pages when the tool-stack later performs the
>   XEN_DOMCTL_max_mem operation, thus allowing the rest of the domain's
>   memory to be allocated.
> 
> - Because the domheap page is no longer a pseudo-xenheap page, the
>   reference counting will prevent the domain from being destroyed. Thus
>   the call to vmx_free_vlapic_mapping() is moved from the
>   domain_destroy() method into the domain_relinquish_resources() method.
>   Whilst in the area, make the domain_destroy() method an optional
>   alternative_vcall() (since it will no longer perform any function in VMX
>   and is stubbed in SVM anyway).
> 
> Signed-off-by: Paul Durrant <pdurrant@amazon.com>

This is an excellent change, thank you:

Reviewed-by: George Dunlap <george.dunlap@citrix.com>

My only comment is that this would have been a bit easier to review
broken down into probably three patches: 1) making domain_destroy
optional, 2) moving vmx teardown to a relinquish_resources call, and 3) using
"normal" pages.  But I don't think it's worth a re-send just for that.

 -George
Julien Grall Jan. 23, 2020, 3:26 p.m. UTC | #2
On 23/01/2020 14:03, Paul Durrant wrote:
> diff --git a/xen/common/domain.c b/xen/common/domain.c
> index ee3f9ffd3e..30c777acb8 100644
> --- a/xen/common/domain.c
> +++ b/xen/common/domain.c
> @@ -339,6 +339,8 @@ static int sanitise_domain_config(struct xen_domctl_createdomain *config)
>       return arch_sanitise_domain_config(config);
>   }
>   
> +#define DOMAIN_INIT_PAGES 1

Would it make sense to make this a per-arch define? This would allow 
each arch to define a different number of init pages (and catch any misuse).

> +
>   struct domain *domain_create(domid_t domid,
>                                struct xen_domctl_createdomain *config,
>                                bool is_priv)
> @@ -441,6 +443,12 @@ struct domain *domain_create(domid_t domid,
>           radix_tree_init(&d->pirq_tree);
>       }
>   
> +    /*
> +     * Allow a limited number of special pages to be allocated for the
> +     * domain
> +     */
> +    d->max_pages = DOMAIN_INIT_PAGES;
> +
>       if ( (err = arch_domain_create(d, config)) != 0 )
>           goto fail;
>       init_status |= INIT_arch;
> diff --git a/xen/include/asm-x86/mm.h b/xen/include/asm-x86/mm.h
> index 2ca8882ad0..e429f38228 100644
> --- a/xen/include/asm-x86/mm.h
> +++ b/xen/include/asm-x86/mm.h
> @@ -317,8 +317,6 @@ struct page_info
>   
>   #define maddr_get_owner(ma)   (page_get_owner(maddr_to_page((ma))))
>   
> -extern void free_shared_domheap_page(struct page_info *page);
> -
>   #define frame_table ((struct page_info *)FRAMETABLE_VIRT_START)
>   extern unsigned long max_page;
>   extern unsigned long total_pages;
> 

Cheers,
Jan Beulich Jan. 23, 2020, 3:32 p.m. UTC | #3
On 23.01.2020 15:03, Paul Durrant wrote:
> vmx_alloc_vlapic_mapping() currently contains some very odd looking code
> that allocates a MEMF_no_owner domheap page and then shares with the guest
> as if it were a xenheap page. This then requires vmx_free_vlapic_mapping()
> to call a special function in the mm code: free_shared_domheap_page().
> 
> By using a 'normal' domheap page (i.e. by not passing MEMF_no_owner to
> alloc_domheap_page()), the odd looking code in vmx_alloc_vlapic_mapping()
> can simply use get_page_and_type() to set up a writable mapping before
> insertion in the P2M and vmx_free_vlapic_mapping() can simply release the
> page using put_page_alloc_ref() followed by put_page_and_type(). This
> then allows free_shared_domheap_page() to be purged.
> 
> There is, however, some fall-out from this simplification:
> 
> - alloc_domheap_page() will now call assign_pages() and run into the fact
>   that 'max_pages' is not set until some time after domain_create(). To
>   avoid an allocation failure, domain_create() is modified to set
>   max_pages to an initial value, sufficient to cover any domheap
>   allocations required to complete domain creation. The value will be
>   set to the 'real' max_pages when the tool-stack later performs the
>   XEN_DOMCTL_max_mem operation, thus allowing the rest of the domain's
>   memory to be allocated.

I continue to disagree with this approach, and I don't think I've
heard back on the alternative suggestion of using MEMF_no_refcount
instead of MEMF_no_owner. As said before, the page (and any other
ones up to DOMAIN_INIT_PAGES, which I find rather fragile to be
set to 1) will now get accounted against the amount allowed for
the domain to allocate.

It also looks to me as if this will break e.g.
p2m_pod_set_mem_target(), which at the very top has

    /* P == B: Nothing to do (unless the guest is being created). */
    populated = d->tot_pages - p2m->pod.count;
    if ( populated > 0 && p2m->pod.entry_count == 0 )
        goto out;

This code assumes that during domain creation all pages recorded
in ->tot_pages have also been recorded in ->pod.count.

There may be other uses of ->tot_pages which this change conflicts
with.
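
To make the concern concrete (hypothetical values, not code from the patch):
the new APIC access page is a normal, refcounted allocation, so during
creation of a PoD guest it would show up in ->tot_pages with no counterpart
in ->pod.count, and the early exit above could then be taken while the guest
is still being created:

    /* Illustrative state only -- assumed, not taken from the patch. */
    d->tot_pages   = 1;    /* the refcounted APIC access page        */
    p2m->pod.count = 0;    /* it is not a PoD cache page             */

    populated = d->tot_pages - p2m->pod.count;    /* == 1            */
    if ( populated > 0 && p2m->pod.entry_count == 0 )
        goto out;   /* taken, defeating the "guest is being created"
                     * exception described in the comment above      */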

> - Because the domheap page is no longer a pseudo-xenheap page, the
>   reference counting will prevent the domain from being destroyed. Thus
>   the call to vmx_free_vlapic_mapping() is moved from the
>   domain_destroy() method into the domain_relinquish_resources() method.
>   Whilst in the area, make the domain_destroy() method an optional
>   alternative_vcall() (since it will no longer perform any function in VMX
>   and is stubbed in SVM anyway).

All of this would, afaict, become irrelevant when using MEMF_no_refcount.

There looks to be one issue (which I think we have been discussing
before): By not bumping ->tot_pages, its decrementing upon freeing
the page will be a problem. I can see two possible solutions to this:
- Introduce a flag indicating there should also be no accounting
  upon freeing of the page (a rough sketch follows below).
- Instead of avoiding to increment ->tot_pages, _also_ increment
  ->max_pages. The interaction with XEN_DOMCTL_max_mem will then of
  course be, well, interesting.
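
A minimal sketch of what the first of these options might look like; the
flag name is invented here and the free-path code is only approximate, not
the actual page_alloc.c logic:

    /* Hypothetical: a page allocated with a "no accounting" flag was never
     * charged to ->tot_pages, so freeing it must not credit the count back
     * (nor drop the domain reference tied to it).  In free_domheap_pages()
     * the accounting might then become, roughly: */
    if ( !(pg->count_info & PGC_no_accounting) )
        drop_dom_ref = !domain_adjust_tot_pages(d, -(1 << order));
    else
        drop_dom_ref = false;   /* the allocation never took that ref */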

Jan
George Dunlap Jan. 23, 2020, 3:41 p.m. UTC | #4
On 1/23/20 3:32 PM, Jan Beulich wrote:
> On 23.01.2020 15:03, Paul Durrant wrote:
>> vmx_alloc_vlapic_mapping() currently contains some very odd looking code
>> that allocates a MEMF_no_owner domheap page and then shares with the guest
>> as if it were a xenheap page. This then requires vmx_free_vlapic_mapping()
>> to call a special function in the mm code: free_shared_domheap_page().
>>
>> By using a 'normal' domheap page (i.e. by not passing MEMF_no_owner to
>> alloc_domheap_page()), the odd looking code in vmx_alloc_vlapic_mapping()
>> can simply use get_page_and_type() to set up a writable mapping before
>> insertion in the P2M and vmx_free_vlapic_mapping() can simply release the
>> page using put_page_alloc_ref() followed by put_page_and_type(). This
>> then allows free_shared_domheap_page() to be purged.
>>
>> There is, however, some fall-out from this simplification:
>>
>> - alloc_domheap_page() will now call assign_pages() and run into the fact
>>   that 'max_pages' is not set until some time after domain_create(). To
>>   avoid an allocation failure, domain_create() is modified to set
>>   max_pages to an initial value, sufficient to cover any domheap
>>   allocations required to complete domain creation. The value will be
>>   set to the 'real' max_pages when the tool-stack later performs the
>>   XEN_DOMCTL_max_mem operation, thus allowing the rest of the domain's
>>   memory to be allocated.
> 
> I continue to disagree with this approach, and I don't think I've
> heard back on the alternative suggestion of using MEMF_no_refcount
> instead of MEMF_no_owner. As said before, the page (and any other
> ones up to DOMAIN_INIT_PAGES, which I find rather fragile to be
> set to 1) will now get accounted against the amount allowed for
> the domain to allocate.
> 
> It also looks to me as if this will break e.g.
> p2m_pod_set_mem_target(), which at the very top has
> 
>     /* P == B: Nothing to do (unless the guest is being created). */
>     populated = d->tot_pages - p2m->pod.count;
>     if ( populated > 0 && p2m->pod.entry_count == 0 )
>         goto out;
> 
> This code assumes that during domain creation all pages recorded
> in ->tot_pages have also been recorded in ->pod.count.
> 
> There may be other uses of ->tot_pages which this change conflicts
> with.

This basically boils down to the "memory accounting swamp", where
various random features need to allocate domain pages to function.

>> - Because the domheap page is no longer a pseudo-xenheap page, the
>>   reference counting will prevent the domain from being destroyed. Thus
>>   the call to vmx_free_vlapic_mapping() is moved from the
>>   domain_destroy() method into the domain_relinquish_resources() method.
>>   Whilst in the area, make the domain_destroy() method an optional
>>   alternative_vcall() (since it will no longer perform any function in VMX
>>   and is stubbed in SVM anyway).
> 
> All of this would, afaict, become irrelevant when using MEMF_no_refcount.
> 
> There looks to be one issue (which I think we have been discussing
> before): By not bumping ->tot_pages, its decrementing upon freeing
> the page will be a problem. I can see two possible solutions to this:
> - Introduce a flag indicating there should also be no accounting
>   upon freeing of the page.

This seems like a reasonable approach to look into.

 -George
Durrant, Paul Jan. 23, 2020, 3:46 p.m. UTC | #5
> -----Original Message-----
> From: Julien Grall <julien@xen.org>
> Sent: 23 January 2020 15:26
> To: Durrant, Paul <pdurrant@amazon.co.uk>; xen-devel@lists.xenproject.org
> Cc: Jan Beulich <jbeulich@suse.com>; Andrew Cooper
> <andrew.cooper3@citrix.com>; Wei Liu <wl@xen.org>; Roger Pau Monné
> <roger.pau@citrix.com>; George Dunlap <George.Dunlap@eu.citrix.com>; Ian
> Jackson <ian.jackson@eu.citrix.com>; Konrad Rzeszutek Wilk
> <konrad.wilk@oracle.com>; Stefano Stabellini <sstabellini@kernel.org>; Jun
> Nakajima <jun.nakajima@intel.com>; Kevin Tian <kevin.tian@intel.com>
> Subject: Re: [PATCH v3 3/3] x86 / vmx: use a 'normal' domheap page for
> APIC_DEFAULT_PHYS_BASE
> 
> 
> 
> On 23/01/2020 14:03, Paul Durrant wrote:
> > diff --git a/xen/common/domain.c b/xen/common/domain.c
> > index ee3f9ffd3e..30c777acb8 100644
> > --- a/xen/common/domain.c
> > +++ b/xen/common/domain.c
> > @@ -339,6 +339,8 @@ static int sanitise_domain_config(struct
> xen_domctl_createdomain *config)
> >       return arch_sanitise_domain_config(config);
> >   }
> >
> > +#define DOMAIN_INIT_PAGES 1
> 
> Would it make sense to make this a per-arch define? This would allow
> each arch to define a different number of init pages (and catch any
> misuse).
>

I thought about that and decided against it. The arch code may want to increase it (which may be a bad idea), but I think it should be set early. Ultimately it should come in from the toolstack via the domctl anyway.

  Paul
George Dunlap Jan. 23, 2020, 3:48 p.m. UTC | #6
On 1/23/20 3:46 PM, Durrant, Paul wrote:
>> -----Original Message-----
>> From: Julien Grall <julien@xen.org>
>> Sent: 23 January 2020 15:26
>> To: Durrant, Paul <pdurrant@amazon.co.uk>; xen-devel@lists.xenproject.org
>> Cc: Jan Beulich <jbeulich@suse.com>; Andrew Cooper
>> <andrew.cooper3@citrix.com>; Wei Liu <wl@xen.org>; Roger Pau Monné
>> <roger.pau@citrix.com>; George Dunlap <George.Dunlap@eu.citrix.com>; Ian
>> Jackson <ian.jackson@eu.citrix.com>; Konrad Rzeszutek Wilk
>> <konrad.wilk@oracle.com>; Stefano Stabellini <sstabellini@kernel.org>; Jun
>> Nakajima <jun.nakajima@intel.com>; Kevin Tian <kevin.tian@intel.com>
>> Subject: Re: [PATCH v3 3/3] x86 / vmx: use a 'normal' domheap page for
>> APIC_DEFAULT_PHYS_BASE
>>
>>
>>
>> On 23/01/2020 14:03, Paul Durrant wrote:
>>> diff --git a/xen/common/domain.c b/xen/common/domain.c
>>> index ee3f9ffd3e..30c777acb8 100644
>>> --- a/xen/common/domain.c
>>> +++ b/xen/common/domain.c
>>> @@ -339,6 +339,8 @@ static int sanitise_domain_config(struct
>> xen_domctl_createdomain *config)
>>>       return arch_sanitise_domain_config(config);
>>>   }
>>>
>>> +#define DOMAIN_INIT_PAGES 1
>>
>> Would it make sense to make this a per-arch define? This would allow
>> each arch to define a different number of init pages (and catch any
>> misuse).
>>
> 
> I thought about that and decided against it. The arch code may want to increase it (which may be a bad idea), but I think it should be set early. Ultimately it should come in from the toolstack via the domctl anyway.

I understood Julien to be saying that maybe Arm might want this to be
hard-coded to 5 (or 0) rather than 1.

 -George
Julien Grall Jan. 23, 2020, 3:48 p.m. UTC | #7
On 23/01/2020 15:46, Durrant, Paul wrote:
>> -----Original Message-----
>> From: Julien Grall <julien@xen.org>
>> Sent: 23 January 2020 15:26
>> To: Durrant, Paul <pdurrant@amazon.co.uk>; xen-devel@lists.xenproject.org
>> Cc: Jan Beulich <jbeulich@suse.com>; Andrew Cooper
>> <andrew.cooper3@citrix.com>; Wei Liu <wl@xen.org>; Roger Pau Monné
>> <roger.pau@citrix.com>; George Dunlap <George.Dunlap@eu.citrix.com>; Ian
>> Jackson <ian.jackson@eu.citrix.com>; Konrad Rzeszutek Wilk
>> <konrad.wilk@oracle.com>; Stefano Stabellini <sstabellini@kernel.org>; Jun
>> Nakajima <jun.nakajima@intel.com>; Kevin Tian <kevin.tian@intel.com>
>> Subject: Re: [PATCH v3 3/3] x86 / vmx: use a 'normal' domheap page for
>> APIC_DEFAULT_PHYS_BASE
>>
>>
>>
>> On 23/01/2020 14:03, Paul Durrant wrote:
>>> diff --git a/xen/common/domain.c b/xen/common/domain.c
>>> index ee3f9ffd3e..30c777acb8 100644
>>> --- a/xen/common/domain.c
>>> +++ b/xen/common/domain.c
>>> @@ -339,6 +339,8 @@ static int sanitise_domain_config(struct
>> xen_domctl_createdomain *config)
>>>        return arch_sanitise_domain_config(config);
>>>    }
>>>
>>> +#define DOMAIN_INIT_PAGES 1
>>
>> Would it make sense to make this a per-arch define? This would allow
>> each arch to define a different number of init pages (and catch any
>> misuse).
>>
> 
> I thought about that and decided against it. The arch code may want to increase it (which may be a bad idea), but I think it should be set early. Ultimately it should come in from the toolstack via the domctl anyway.
The value is already arch-specific because on Arm no pages require this 
trick yet.

While I agree it should be set early, there is nothing to prevent this 
from being in an arch header. Am I correct?
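
Something along these lines, perhaps? (Purely illustrative -- the header
locations and the Arm value are my assumption, not something the patch
proposes.)

    /* xen/include/asm-x86/domain.h (hypothetical placement) */
    #define DOMAIN_INIT_PAGES 1   /* the APIC access page is allocated early */

    /* xen/include/asm-arm/domain.h (hypothetical placement) */
    #define DOMAIN_INIT_PAGES 0   /* no early domheap allocations needed yet */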

Cheers,
Durrant, Paul Jan. 23, 2020, 3:56 p.m. UTC | #8
> -----Original Message-----
> From: Jan Beulich <jbeulich@suse.com>
> Sent: 23 January 2020 15:32
> To: Durrant, Paul <pdurrant@amazon.co.uk>
> Cc: xen-devel@lists.xenproject.org; Andrew Cooper
> <andrew.cooper3@citrix.com>; Wei Liu <wl@xen.org>; Roger Pau Monné
> <roger.pau@citrix.com>; George Dunlap <George.Dunlap@eu.citrix.com>; Ian
> Jackson <ian.jackson@eu.citrix.com>; Julien Grall <julien@xen.org>; Konrad
> Rzeszutek Wilk <konrad.wilk@oracle.com>; Stefano Stabellini
> <sstabellini@kernel.org>; Jun Nakajima <jun.nakajima@intel.com>; Kevin
> Tian <kevin.tian@intel.com>
> Subject: Re: [PATCH v3 3/3] x86 / vmx: use a 'normal' domheap page for
> APIC_DEFAULT_PHYS_BASE
> 
> On 23.01.2020 15:03, Paul Durrant wrote:
> > vmx_alloc_vlapic_mapping() currently contains some very odd looking code
> > that allocates a MEMF_no_owner domheap page and then shares with the
> guest
> > as if it were a xenheap page. This then requires
> vmx_free_vlapic_mapping()
> > to call a special function in the mm code: free_shared_domheap_page().
> >
> > By using a 'normal' domheap page (i.e. by not passing MEMF_no_owner to
> > alloc_domheap_page()), the odd looking code in
> vmx_alloc_vlapic_mapping()
> > can simply use get_page_and_type() to set up a writable mapping before
> > insertion in the P2M and vmx_free_vlapic_mapping() can simply release
> the
> > page using put_page_alloc_ref() followed by put_page_and_type(). This
> > then allows free_shared_domheap_page() to be purged.
> >
> > There is, however, some fall-out from this simplification:
> >
> > - alloc_domheap_page() will now call assign_pages() and run into the
> fact
> >   that 'max_pages' is not set until some time after domain_create(). To
> >   avoid an allocation failure, domain_create() is modified to set
> >   max_pages to an initial value, sufficient to cover any domheap
> >   allocations required to complete domain creation. The value will be
> >   set to the 'real' max_pages when the tool-stack later performs the
> >   XEN_DOMCTL_max_mem operation, thus allowing the rest of the domain's
> >   memory to be allocated.
> 
> I continue to disagree with this approach, and I don't think I've
> heard back on the alternative suggestion of using MEMF_no_refcount
> instead of MEMF_no_owner.

I responded in v1:

"
> Did you consider passing MEMF_no_refcount here, to avoid the
> fiddling with assign_pages()? That'll in particular also
> avoid ...
> 

You remember what happened last time we did that (with the ioreq server page), right? That's why assign_pages() vetoes non-refcounted pages.
"

> As said before, the page (and any other
> ones up to DOMAIN_INIT_PAGES, which I find rather fragile to be
> set to 1) will now get accounted against the amount allowed for
> the domain to allocate.
> 
> It also looks to me as if this will break e.g.
> p2m_pod_set_mem_target(), which at the very top has
> 
>     /* P == B: Nothing to do (unless the guest is being created). */
>     populated = d->tot_pages - p2m->pod.count;
>     if ( populated > 0 && p2m->pod.entry_count == 0 )
>         goto out;
> 
> This code assumes that during domain creation all pages recorded
> in ->tot_pages have also been recorded in ->pod.count.
> 
> There may be other uses of ->tot_pages which this change conflicts
> with.
> 
> > - Because the domheap page is no longer a pseudo-xenheap page, the
> >   reference counting will prevent the domain from being destroyed. Thus
> >   the call to vmx_free_vlapic_mapping() is moved from the
> >   domain_destroy() method into the domain_relinquish_resources() method.
> >   Whilst in the area, make the domain_destroy() method an optional
> >   alternative_vcall() (since it will no longer perform any function in
> VMX
> >   and is stubbed in SVM anyway).
> 
> All of this would, afaict, become irrelevant when using MEMF_no_refcount.
> 
> There looks to be one issue (which I think we have been discussing
> before): By not bumping ->tot_pages, its decrementing upon freeing
> the page will be a problem.

Yes, that's exactly the problem with assigning MEMF_no_refcount pages, which is why it is outlawed.

> I can see two possible solutions to this:
> - Introduce a flag indicating there should also be no accounting
>   upon freeing of the page.

What sort of flag did you have in mind? Do we have space anywhere in type-info or count-info to put it? If we can make assigning non-refcounted pages safe then it's certainly an attractive option.

> - Instead of avoiding to increment ->tot_pages, _also_ increment
>   ->max_pages. The interaction with XEN_DOMCTL_max_mem will then of
>   course be, well, interesting.
> 

Indeed, which is why I opted for the simple approach. As I've said in other responses, max_pages ought to be set by the toolstack when the domain is created, so I wanted to come up with something that's not too invasive until that change is made. If the pages do need to be ref-counted then I guess I'll have to figure out how to make the initial allocation co-exist with PoD.

  Paul
Jan Beulich Jan. 23, 2020, 4:07 p.m. UTC | #9
On 23.01.2020 16:56, Durrant, Paul wrote:
>> From: Jan Beulich <jbeulich@suse.com>
>> Sent: 23 January 2020 15:32
>>
>> On 23.01.2020 15:03, Paul Durrant wrote:
>>> vmx_alloc_vlapic_mapping() currently contains some very odd looking code
>>> that allocates a MEMF_no_owner domheap page and then shares with the
>> guest
>>> as if it were a xenheap page. This then requires
>> vmx_free_vlapic_mapping()
>>> to call a special function in the mm code: free_shared_domheap_page().
>>>
>>> By using a 'normal' domheap page (i.e. by not passing MEMF_no_owner to
>>> alloc_domheap_page()), the odd looking code in
>> vmx_alloc_vlapic_mapping()
>>> can simply use get_page_and_type() to set up a writable mapping before
>>> insertion in the P2M and vmx_free_vlapic_mapping() can simply release
>> the
>>> page using put_page_alloc_ref() followed by put_page_and_type(). This
>>> then allows free_shared_domheap_page() to be purged.
>>>
>>> There is, however, some fall-out from this simplification:
>>>
>>> - alloc_domheap_page() will now call assign_pages() and run into the
>> fact
>>>   that 'max_pages' is not set until some time after domain_create(). To
>>>   avoid an allocation failure, domain_create() is modified to set
>>>   max_pages to an initial value, sufficient to cover any domheap
>>>   allocations required to complete domain creation. The value will be
>>>   set to the 'real' max_pages when the tool-stack later performs the
>>>   XEN_DOMCTL_max_mem operation, thus allowing the rest of the domain's
>>>   memory to be allocated.
>>
>> I continue to disagree with this approach, and I don't think I've
>> heard back on the alternative suggestion of using MEMF_no_refcount
>> instead of MEMF_no_owner.
> 
> I responded in v1:
> 
> "
>> Did you consider passing MEMF_no_refcount here, to avoid the
>> fiddling with assign_pages()? That'll in particular also
>> avoid ...
>>
> 
> You remember what happened last time we did that (with the ioreq
> server page), right? That's why assign_pages() vetoes non-refcounted
> pages.
> "

Interesting - this mail appears to never have reached me. I'm
sorry for implying you didn't reply.

>> As said before, the page (and any other
>> ones up to DOMAIN_INIT_PAGES, which I find rather fragile to be
>> set to 1) will now get accounted against the amount allowed for
>> the domain to allocate.
>>
>> It also looks to me as if this will break e.g.
>> p2m_pod_set_mem_target(), which at the very top has
>>
>>     /* P == B: Nothing to do (unless the guest is being created). */
>>     populated = d->tot_pages - p2m->pod.count;
>>     if ( populated > 0 && p2m->pod.entry_count == 0 )
>>         goto out;
>>
>> This code assumes that during domain creation all pages recorded
>> in ->tot_pages have also been recorded in ->pod.count.
>>
>> There may be other uses of ->tot_pages which this change conflicts
>> with.
>>
>>> - Because the domheap page is no longer a pseudo-xenheap page, the
>>>   reference counting will prevent the domain from being destroyed. Thus
>>>   the call to vmx_free_vlapic_mapping() is moved from the
>>>   domain_destroy() method into the domain_relinquish_resources() method.
>>>   Whilst in the area, make the domain_destroy() method an optional
>>>   alternative_vcall() (since it will no longer perform any function in
>> VMX
>>>   and is stubbed in SVM anyway).
>>
>> All of this would, afaict, become irrelevant when using MEMF_no_refcount.
>>
>> There looks to be one issue (which I think we have been discussing
>> before): By not bumping ->tot_pages, its decrementing upon freeing
>> the page will be a problem.
> 
> Yes, that's exactly the problem with assigning MEMF_no_refcount
> pages, which is why it is outlawed.

Outlawed? It's being specially treated, but nothing more/worse.

>> I can see two possible solutions to this:
>> - Introduce a flag indicating there should also be no accounting
>>   upon freeing of the page.
> 
> What sort of flag did you have in mind? Do we have space anywhere
> in type-info or count-info to put it? If we can make assigning
> non-refcounted pages safe then it's certainly an attractive option.

Stealing a bit from PGC_count_mask would likely be the way to go,
unless we can figure out a PGC_* bit which can safely be overloaded.
Type-info wouldn't be the right place, I guess.
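
For illustration, that might look roughly like the following (the bit index
and the flag name are made up and would need checking against the real
PGC_* layout in asm-x86/mm.h):

    /* Take one bit away from the reference count for the new flag.  The
     * PGC_* flags live in the top bits and PGC_count_mask covers whatever
     * remains below them, so the count shrinks by one bit. */
    #define _PGC_no_accounting  PG_shift(10)
    #define PGC_no_accounting   PG_mask(1, 10)
    #define PGC_count_mask      ((1UL << PG_shift(10)) - 1)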

>> - Instead of avoiding to increment ->tot_pages, _also_ increment
>>   ->max_pages. The interaction with XEN_DOMCTL_max_mem will then of
>>   course be, well, interesting.
>>
> 
> Indeed, which is why I opted for the simple approach. As I've said in
> other responses, max_pages ought be set by the toolstack when the
> domain is created so I wanted to come up with something that's not
> too invasive until that change is made so if the pages do need to be
> ref-counted then I guess I'll have to figure out how to make the
> initial allocation co-exist with PoD.

I'd suggest you don't even try to. The interaction with PoD was just
the example I could easily think of because of having touched PoD
code recently. The general accounting issue that I've also explained
in my previous reply is enough reason to not want to go this route:
The amount of memory given / givable to a domain should not change
as a (silent) side effect of this patch.

And let's face it - the reason why the VMX code does what it does
is because there are no (obvious) easy alternatives.

Jan
Durrant, Paul Jan. 24, 2020, 11:02 a.m. UTC | #10
> -----Original Message-----
> From: George Dunlap [mailto:george.dunlap@citrix.com]
> Sent: 23 January 2020 14:46
> To: Durrant, Paul <pdurrant@amazon.co.uk>; xen-devel@lists.xenproject.org
> Cc: Jan Beulich <jbeulich@suse.com>; Andrew Cooper
> <andrew.cooper3@citrix.com>; Wei Liu <wl@xen.org>; Roger Pau Monné
> <roger.pau@citrix.com>; George Dunlap <George.Dunlap@eu.citrix.com>; Ian
> Jackson <ian.jackson@eu.citrix.com>; Julien Grall <julien@xen.org>; Konrad
> Rzeszutek Wilk <konrad.wilk@oracle.com>; Stefano Stabellini
> <sstabellini@kernel.org>; Jun Nakajima <jun.nakajima@intel.com>; Kevin
> Tian <kevin.tian@intel.com>
> Subject: Re: [PATCH v3 3/3] x86 / vmx: use a 'normal' domheap page for
> APIC_DEFAULT_PHYS_BASE
> 
> On 1/23/20 2:03 PM, Paul Durrant wrote:
> > vmx_alloc_vlapic_mapping() currently contains some very odd looking
> > code that allocates a MEMF_no_owner domheap page and then shares with
> > the guest as if it were a xenheap page. This then requires
> > vmx_free_vlapic_mapping() to call a special function in the mm code:
> free_shared_domheap_page().
> >
> > By using a 'normal' domheap page (i.e. by not passing MEMF_no_owner to
> > alloc_domheap_page()), the odd looking code in
> > vmx_alloc_vlapic_mapping() can simply use get_page_and_type() to set
> > up a writable mapping before insertion in the P2M and
> > vmx_free_vlapic_mapping() can simply release the page using
> > put_page_alloc_ref() followed by put_page_and_type(). This then allows
> free_shared_domheap_page() to be purged.
> >
> > There is, however, some fall-out from this simplification:
> >
> > - alloc_domheap_page() will now call assign_pages() and run into the
> fact
> >   that 'max_pages' is not set until some time after domain_create(). To
> >   avoid an allocation failure, domain_create() is modified to set
> >   max_pages to an initial value, sufficient to cover any domheap
> >   allocations required to complete domain creation. The value will be
> >   set to the 'real' max_pages when the tool-stack later performs the
> >   XEN_DOMCTL_max_mem operation, thus allowing the rest of the domain's
> >   memory to be allocated.
> >
> > - Because the domheap page is no longer a pseudo-xenheap page, the
> >   reference counting will prevent the domain from being destroyed. Thus
> >   the call to vmx_free_vlapic_mapping() is moved from the
> >   domain_destroy() method into the domain_relinquish_resources() method.
> >   Whilst in the area, make the domain_destroy() method an optional
> >   alternative_vcall() (since it will no longer perform any function in
> VMX
> >   and is stubbed in SVM anyway).
> >
> > Signed-off-by: Paul Durrant <pdurrant@amazon.com>
> 
> This is an excellent change, thank you:
> 
> Reviewed-by: George Dunlap <george.dunlap@citrix.com>
> 
> My only comment is that this would have been a bit easier to review broken
> down into probably three patches: 1) making domain_destroy optional, 2)
> moving vmx teardown to a relinquish_resources call 3) using "normal"
> pages".  But I don't think it's worth a re-send just for that.

Since I appear to need to do a v4 to re-work the allocation (assuming I can steal another PGC bit), I'll split things as you suggest.

  Paul

> 
>  -George

Patch

diff --git a/xen/arch/x86/hvm/hvm.c b/xen/arch/x86/hvm/hvm.c
index e51c077269..d2610f5f01 100644
--- a/xen/arch/x86/hvm/hvm.c
+++ b/xen/arch/x86/hvm/hvm.c
@@ -746,7 +746,9 @@  void hvm_domain_destroy(struct domain *d)
 
     hvm_destroy_cacheattr_region_list(d);
 
-    hvm_funcs.domain_destroy(d);
+    if ( hvm_funcs.domain_destroy )
+        alternative_vcall(hvm_funcs.domain_destroy, d);
+
     rtc_deinit(d);
     stdvga_deinit(d);
     vioapic_deinit(d);
diff --git a/xen/arch/x86/hvm/svm/svm.c b/xen/arch/x86/hvm/svm/svm.c
index b1c376d455..b7f67f9f03 100644
--- a/xen/arch/x86/hvm/svm/svm.c
+++ b/xen/arch/x86/hvm/svm/svm.c
@@ -1155,10 +1155,6 @@  static int svm_domain_initialise(struct domain *d)
     return 0;
 }
 
-static void svm_domain_destroy(struct domain *d)
-{
-}
-
 static int svm_vcpu_initialise(struct vcpu *v)
 {
     int rc;
@@ -2425,7 +2421,6 @@  static struct hvm_function_table __initdata svm_function_table = {
     .cpu_up               = svm_cpu_up,
     .cpu_down             = svm_cpu_down,
     .domain_initialise    = svm_domain_initialise,
-    .domain_destroy       = svm_domain_destroy,
     .vcpu_initialise      = svm_vcpu_initialise,
     .vcpu_destroy         = svm_vcpu_destroy,
     .save_cpu_ctxt        = svm_save_vmcb_ctxt,
diff --git a/xen/arch/x86/hvm/vmx/vmx.c b/xen/arch/x86/hvm/vmx/vmx.c
index b262d38a7c..c3a06b54a8 100644
--- a/xen/arch/x86/hvm/vmx/vmx.c
+++ b/xen/arch/x86/hvm/vmx/vmx.c
@@ -419,7 +419,7 @@  static int vmx_domain_initialise(struct domain *d)
     return 0;
 }
 
-static void vmx_domain_destroy(struct domain *d)
+static void vmx_domain_relinquish_resources(struct domain *d)
 {
     if ( !has_vlapic(d) )
         return;
@@ -2240,7 +2240,7 @@  static struct hvm_function_table __initdata vmx_function_table = {
     .cpu_up_prepare       = vmx_cpu_up_prepare,
     .cpu_dead             = vmx_cpu_dead,
     .domain_initialise    = vmx_domain_initialise,
-    .domain_destroy       = vmx_domain_destroy,
+    .domain_relinquish_resources = vmx_domain_relinquish_resources,
     .vcpu_initialise      = vmx_vcpu_initialise,
     .vcpu_destroy         = vmx_vcpu_destroy,
     .save_cpu_ctxt        = vmx_save_vmcs_ctxt,
@@ -3028,12 +3028,22 @@  static int vmx_alloc_vlapic_mapping(struct domain *d)
     if ( !cpu_has_vmx_virtualize_apic_accesses )
         return 0;
 
-    pg = alloc_domheap_page(d, MEMF_no_owner);
+    pg = alloc_domheap_page(d, 0);
     if ( !pg )
         return -ENOMEM;
+
+    if ( !get_page_and_type(pg, d, PGT_writable_page) )
+    {
+        /*
+         * The domain can't possibly know about this page yet, so failure
+         * here is a clear indication of something fishy going on.
+         */
+        domain_crash(d);
+        return -ENODATA;
+    }
+
     mfn = page_to_mfn(pg);
     clear_domain_page(mfn);
-    share_xen_page_with_guest(pg, d, SHARE_rw);
     d->arch.hvm.vmx.apic_access_mfn = mfn;
 
     return set_mmio_p2m_entry(d, paddr_to_pfn(APIC_DEFAULT_PHYS_BASE), mfn,
@@ -3047,7 +3057,12 @@  static void vmx_free_vlapic_mapping(struct domain *d)
 
     d->arch.hvm.vmx.apic_access_mfn = _mfn(0);
     if ( !mfn_eq(mfn, _mfn(0)) )
-        free_shared_domheap_page(mfn_to_page(mfn));
+    {
+        struct page_info *pg = mfn_to_page(mfn);
+
+        put_page_alloc_ref(pg);
+        put_page_and_type(pg);
+    }
 }
 
 static void vmx_install_vlapic_mapping(struct vcpu *v)
diff --git a/xen/arch/x86/mm.c b/xen/arch/x86/mm.c
index 654190e9e9..2a6d2e8af9 100644
--- a/xen/arch/x86/mm.c
+++ b/xen/arch/x86/mm.c
@@ -496,16 +496,6 @@  void share_xen_page_with_guest(struct page_info *page, struct domain *d,
     spin_unlock(&d->page_alloc_lock);
 }
 
-void free_shared_domheap_page(struct page_info *page)
-{
-    put_page_alloc_ref(page);
-    if ( !test_and_clear_bit(_PGC_xen_heap, &page->count_info) )
-        ASSERT_UNREACHABLE();
-    page->u.inuse.type_info = 0;
-    page_set_owner(page, NULL);
-    free_domheap_page(page);
-}
-
 void make_cr3(struct vcpu *v, mfn_t mfn)
 {
     struct domain *d = v->domain;
diff --git a/xen/common/domain.c b/xen/common/domain.c
index ee3f9ffd3e..30c777acb8 100644
--- a/xen/common/domain.c
+++ b/xen/common/domain.c
@@ -339,6 +339,8 @@  static int sanitise_domain_config(struct xen_domctl_createdomain *config)
     return arch_sanitise_domain_config(config);
 }
 
+#define DOMAIN_INIT_PAGES 1
+
 struct domain *domain_create(domid_t domid,
                              struct xen_domctl_createdomain *config,
                              bool is_priv)
@@ -441,6 +443,12 @@  struct domain *domain_create(domid_t domid,
         radix_tree_init(&d->pirq_tree);
     }
 
+    /*
+     * Allow a limited number of special pages to be allocated for the
+     * domain
+     */
+    d->max_pages = DOMAIN_INIT_PAGES;
+
     if ( (err = arch_domain_create(d, config)) != 0 )
         goto fail;
     init_status |= INIT_arch;
diff --git a/xen/include/asm-x86/mm.h b/xen/include/asm-x86/mm.h
index 2ca8882ad0..e429f38228 100644
--- a/xen/include/asm-x86/mm.h
+++ b/xen/include/asm-x86/mm.h
@@ -317,8 +317,6 @@  struct page_info
 
 #define maddr_get_owner(ma)   (page_get_owner(maddr_to_page((ma))))
 
-extern void free_shared_domheap_page(struct page_info *page);
-
 #define frame_table ((struct page_info *)FRAMETABLE_VIRT_START)
 extern unsigned long max_page;
 extern unsigned long total_pages;