
x86/altp2m: cleanup p2m_altp2m_lazy_copy

Message ID: 20190412163932.2087-1-tamas@tklengyel.com
State: Superseded
Series: x86/altp2m: cleanup p2m_altp2m_lazy_copy

Commit Message

Tamas K Lengyel April 12, 2019, 4:39 p.m. UTC
The p2m_altp2m_lazy_copy function is responsible for lazily populating an
altp2m view when the guest traps out due to no EPT entry being present in
the active view. Currently the function takes several inputs that it
doesn't use and also locks/unlocks gfns when it doesn't need to.

Signed-off-by: Tamas K Lengyel <tamas@tklengyel.com>
Cc: Jan Beulich <jbeulich@suse.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: Wei Liu <wei.liu2@citrix.com>
Cc: Roger Pau Monne <roger.pau@citrix.com>
Cc: George Dunlap <george.dunlap@eu.citrix.com>
---
 xen/arch/x86/hvm/hvm.c    |  7 ++++--
 xen/arch/x86/mm/p2m.c     | 52 +++++++++++++++++----------------------
 xen/include/asm-x86/p2m.h |  5 ++--
 3 files changed, 30 insertions(+), 34 deletions(-)

Comments

Andrew Cooper April 12, 2019, 4:48 p.m. UTC | #1
On 12/04/2019 17:39, Tamas K Lengyel wrote:
> The p2m_altp2m_lazy_copy function is responsible for lazily populating an
> altp2m view when the guest traps out due to no EPT entry being present in
> the active view. Currently the function takes several inputs that it
> doesn't use and also locks/unlocks gfns when it doesn't need to.

I've got a series, "[PATCH 00/14] XSA-277 followup", which is still
waiting for mm review. It cleans some of this up, but comes to a
different conclusion about the locking safety of the GFNs.

Specifically,

[PATCH 04/14] x86/p2m: Fix locking in p2m_altp2m_lazy_copy()

(and some other cleanup changes in patches 5 and 6), which extends the
duration of the gfn locks.

I admit that I hadn't spotted that gpa/gla/npfec were unnecessary, and
that does look like an improvement.

~Andrew
Tamas K Lengyel April 12, 2019, 5:18 p.m. UTC | #2
On Fri, Apr 12, 2019 at 10:48 AM Andrew Cooper
<andrew.cooper3@citrix.com> wrote:
>
> On 12/04/2019 17:39, Tamas K Lengyel wrote:
> > The p2m_altp2m_lazy_copy function is responsible for lazily populating an
> > altp2m view when the guest traps out due to no EPT entry being present in
> > the active view. Currently the function takes several inputs that it
> > doesn't use and also locks/unlocks gfns when it doesn't need to.
>
> I've got a series, "[PATCH 00/14] XSA-277 followup", which is still
> waiting for mm review. It cleans some of this up, but comes to a
> different conclusion about the locking safety of the GFNs.
>
> Specifically,
>
> [PATCH 04/14] x86/p2m: Fix locking in p2m_altp2m_lazy_copy()
>
> (and some other cleanup changes in patches 5 and 6), which extends the
> duration of the gfn locks.

Thanks, I remember seeing that, but I think my approach here is more
straightforward. The caller of this function has already gathered all the
info this function needs from the hostp2m (and holds the lock), so there
is no need to duplicate that work within the function. Additionally,
taking the p2m_lock for the altp2m while also holding a gfn lock from the
altp2m doesn't really accomplish anything, since the gfn_lock is really
just the p2m_lock in disguise. So let's just take the p2m_lock once and
be done with it.
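
For reference, the aliasing being described — a sketch of the relevant
definitions as they appear in xen/arch/x86/mm/mm-locks.h (exact form may
vary by tree):

/* The per-gfn "lock" is currently just the coarse per-p2m lock. */
#define gfn_lock(p2m, g, o)    p2m_lock(p2m)
#define gfn_unlock(p2m, g, o)  p2m_unlock(p2m)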

Tamas
Andrew Cooper April 12, 2019, 7:43 p.m. UTC | #3
On 12/04/2019 17:39, Tamas K Lengyel wrote:
> The p2m_altp2m_lazy_copy function is responsible for lazily populating an
> altp2m view when the guest traps out due to no EPT entry being present in
> the active view. Currently the function takes several inputs that it
> doesn't use and also locks/unlocks gfns when it doesn't need to.
>
> Signed-off-by: Tamas K Lengyel <tamas@tklengyel.com>

Re-reviewing in light of the discussion.

> diff --git a/xen/arch/x86/mm/p2m.c b/xen/arch/x86/mm/p2m.c
> index b9bbb8f485..140c707348 100644
> --- a/xen/arch/x86/mm/p2m.c
> +++ b/xen/arch/x86/mm/p2m.c
> @@ -2375,54 +2375,46 @@ bool_t p2m_switch_vcpu_altp2m_by_id(struct vcpu *v, unsigned int idx)
>   *     indicate that outer handler should handle fault
>   */
>  
> -bool_t p2m_altp2m_lazy_copy(struct vcpu *v, paddr_t gpa,
> -                            unsigned long gla, struct npfec npfec,
> -                            struct p2m_domain **ap2m)
> +bool_t p2m_altp2m_lazy_copy(struct p2m_domain *ap2m, unsigned long gfn_l,
> +                            mfn_t hmfn, p2m_type_t hp2mt, p2m_access_t hp2ma,
> +                            unsigned int page_order)
>  {
> -    struct p2m_domain *hp2m = p2m_get_hostp2m(v->domain);
> -    p2m_type_t p2mt;
> -    p2m_access_t p2ma;
> -    unsigned int page_order;
> -    gfn_t gfn = _gfn(paddr_to_pfn(gpa));
> +    p2m_type_t ap2mt;
> +    p2m_access_t ap2ma;
>      unsigned long mask;
> -    mfn_t mfn;
> +    gfn_t gfn;
> +    mfn_t amfn;
>      int rv;
>  
> -    *ap2m = p2m_get_altp2m(v);
> -
> -    mfn = get_gfn_type_access(*ap2m, gfn_x(gfn), &p2mt, &p2ma,
> -                              0, &page_order);
> -    __put_gfn(*ap2m, gfn_x(gfn));
> -
> -    if ( !mfn_eq(mfn, INVALID_MFN) )
> -        return 0;

With the rearranged logic, and feeding in some of my cleanup series, you
want something like this:

/* Entry not present in hp2m?  Caller should handle the fault. */
if ( mfn_eq(hmfn, INVALID_MFN) )
    return 0;

There is no point doing the ap2m lookup if you're going to bail because
of the hp2m miss anyway.
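
Putting those pieces together, the opening of the function would look
something like this (a sketch under the suggestions above, not the final
patch):

/* Entry not present in hp2m?  Caller should handle the fault. */
if ( mfn_eq(hmfn, INVALID_MFN) )
    return 0;

p2m_lock(ap2m);

amfn = __get_gfn_type_access(ap2m, gfn_l, &ap2mt, &ap2ma, 0, NULL, false);

/* Entry already present in ap2m?  Caller should handle the fault. */
if ( !mfn_eq(amfn, INVALID_MFN) )
{
    p2m_unlock(ap2m);
    return 0;
}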

> +    p2m_lock(ap2m);
>  
> -    mfn = get_gfn_type_access(hp2m, gfn_x(gfn), &p2mt, &p2ma,
> -                              P2M_ALLOC, &page_order);
> -    __put_gfn(hp2m, gfn_x(gfn));
> +    amfn = __get_gfn_type_access(ap2m, gfn_l, &ap2mt, &ap2ma,
> +                                 0, NULL, false);
>  
> -    if ( mfn_eq(mfn, INVALID_MFN) )
> +    /* Bail if entry is already in altp2m or there is no entry in hostp2m */

This comment then reduces to:

/* Entry already present in ap2m?  Caller should handle the fault. */

> +    if ( !mfn_eq(amfn, INVALID_MFN) || mfn_eq(hmfn, INVALID_MFN) )
> +    {
> +        p2m_unlock(ap2m);
>          return 0;
> -
> -    p2m_lock(*ap2m);
> +    }
>  
>      /*
>       * If this is a superpage mapping, round down both frame numbers
>       * to the start of the superpage.
>       */
>      mask = ~((1UL << page_order) - 1);
> -    mfn = _mfn(mfn_x(mfn) & mask);
> -    gfn = _gfn(gfn_x(gfn) & mask);
> +    hmfn = _mfn(mfn_x(hmfn) & mask);
> +    gfn = _gfn(gfn_l & mask);
>  
> -    rv = p2m_set_entry(*ap2m, gfn, mfn, page_order, p2mt, p2ma);
> -    p2m_unlock(*ap2m);
> +    rv = p2m_set_entry(ap2m, gfn, hmfn, page_order, hp2mt, hp2ma);
> +    p2m_unlock(ap2m);
>  
>      if ( rv )
>      {
>          gdprintk(XENLOG_ERR,
> -	    "failed to set entry for %#"PRIx64" -> %#"PRIx64" p2m %#"PRIx64"\n",
> -	    gfn_x(gfn), mfn_x(mfn), (unsigned long)*ap2m);
> -        domain_crash(hp2m->domain);
> +        "failed to set entry for %#"PRIx64" -> %#"PRIx64" p2m %#"PRIx64"\n",
> +        gfn_l, mfn_x(hmfn), (unsigned long)ap2m);

This should be a gprintk(), not a gdprintk(), because there is nothing
more infuriating in release builds than a domain_crash() with no
qualification.

Printing the virtual address of the ap2m is also useless as far as
diagnostics go.  How about something like:

"Failed to lazy copy gfn %"PRI_gfn" -> %"PRI_mfn" (type %u, access %u,
order %u) for view %u\n"

if there is an easy way to access the view id.  This at least stands a
chance of being useful when debugging.
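
For illustration, a sketch of what that could look like; the view-id
lookup below (scanning d->arch.altp2m_p2m for the matching pointer) is an
assumption, not an existing helper:

struct domain *d = ap2m->domain;
unsigned int i, view = ~0U;

/* Illustrative lookup: find this altp2m's slot in the domain's array. */
for ( i = 0; i < MAX_ALTP2M; i++ )
    if ( d->arch.altp2m_p2m[i] == ap2m )
    {
        view = i;
        break;
    }

gprintk(XENLOG_ERR,
        "Failed to lazy copy gfn %"PRI_gfn" -> %"PRI_mfn
        " (type %u, access %u, order %u) for view %u\n",
        gfn_l, mfn_x(hmfn), hp2mt, hp2ma, page_order, view);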

> +        domain_crash(ap2m->domain);
>      }
>  
>      return 1;
> diff --git a/xen/include/asm-x86/p2m.h b/xen/include/asm-x86/p2m.h
> index 2801a8ccca..c25e2a3cd8 100644
> --- a/xen/include/asm-x86/p2m.h
> +++ b/xen/include/asm-x86/p2m.h
> @@ -867,8 +867,9 @@ void p2m_altp2m_check(struct vcpu *v, uint16_t idx);
>  void p2m_flush_altp2m(struct domain *d);
>  
>  /* Alternate p2m paging */
> -bool_t p2m_altp2m_lazy_copy(struct vcpu *v, paddr_t gpa,
> -    unsigned long gla, struct npfec npfec, struct p2m_domain **ap2m);
> +bool_t p2m_altp2m_lazy_copy(struct p2m_domain *ap2m, unsigned long gfn_l,
> +                            mfn_t hmfn, p2m_type_t hp2mt, p2m_access_t hp2ma,
> +                            unsigned int page_order);

Can we switch to bool while changing everything else as well?
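
i.e., presumably:

bool p2m_altp2m_lazy_copy(struct p2m_domain *ap2m, unsigned long gfn_l,
                          mfn_t hmfn, p2m_type_t hp2mt, p2m_access_t hp2ma,
                          unsigned int page_order);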

Thanks,

~Andrew
Tamas K Lengyel April 12, 2019, 7:49 p.m. UTC | #4
On Fri, Apr 12, 2019 at 1:44 PM Andrew Cooper <andrew.cooper3@citrix.com> wrote:
>
> On 12/04/2019 17:39, Tamas K Lengyel wrote:
> > The p2m_altp2m_lazy_copy function is responsible for lazily populating an
> > altp2m view when the guest traps out due to no EPT entry being present in
> > the active view. Currently the function takes several inputs that it
> > doesn't use and also locks/unlocks gfns when it doesn't need to.
> >
> > Signed-off-by: Tamas K Lengyel <tamas@tklengyel.com>
>
> Re-reviewing in light of the discussion.
>
> > diff --git a/xen/arch/x86/mm/p2m.c b/xen/arch/x86/mm/p2m.c
> > index b9bbb8f485..140c707348 100644
> > --- a/xen/arch/x86/mm/p2m.c
> > +++ b/xen/arch/x86/mm/p2m.c
> > @@ -2375,54 +2375,46 @@ bool_t p2m_switch_vcpu_altp2m_by_id(struct vcpu *v, unsigned int idx)
> >   *     indicate that outer handler should handle fault
> >   */
> >
> > -bool_t p2m_altp2m_lazy_copy(struct vcpu *v, paddr_t gpa,
> > -                            unsigned long gla, struct npfec npfec,
> > -                            struct p2m_domain **ap2m)
> > +bool_t p2m_altp2m_lazy_copy(struct p2m_domain *ap2m, unsigned long gfn_l,
> > +                            mfn_t hmfn, p2m_type_t hp2mt, p2m_access_t hp2ma,
> > +                            unsigned int page_order)
> >  {
> > -    struct p2m_domain *hp2m = p2m_get_hostp2m(v->domain);
> > -    p2m_type_t p2mt;
> > -    p2m_access_t p2ma;
> > -    unsigned int page_order;
> > -    gfn_t gfn = _gfn(paddr_to_pfn(gpa));
> > +    p2m_type_t ap2mt;
> > +    p2m_access_t ap2ma;
> >      unsigned long mask;
> > -    mfn_t mfn;
> > +    gfn_t gfn;
> > +    mfn_t amfn;
> >      int rv;
> >
> > -    *ap2m = p2m_get_altp2m(v);
> > -
> > -    mfn = get_gfn_type_access(*ap2m, gfn_x(gfn), &p2mt, &p2ma,
> > -                              0, &page_order);
> > -    __put_gfn(*ap2m, gfn_x(gfn));
> > -
> > -    if ( !mfn_eq(mfn, INVALID_MFN) )
> > -        return 0;
>
> With the rearranged logic, and feeding in some of my cleanup series, you
> want something like this:
>
> /* Entry not present in hp2m?  Caller should handle the fault. */
> if ( mfn_eq(hmfn, INVALID_MFN) )
>     return 0;
>
> There is no point doing the ap2m lookup if you're going to bail because
> of the hp2m miss anyway.

True. Good catch.

>
> > +    p2m_lock(ap2m);
> >
> > -    mfn = get_gfn_type_access(hp2m, gfn_x(gfn), &p2mt, &p2ma,
> > -                              P2M_ALLOC, &page_order);
> > -    __put_gfn(hp2m, gfn_x(gfn));
> > +    amfn = __get_gfn_type_access(ap2m, gfn_l, &ap2mt, &ap2ma,
> > +                                 0, NULL, false);
> >
> > -    if ( mfn_eq(mfn, INVALID_MFN) )
> > +    /* Bail if entry is already in altp2m or there is no entry in hostp2m */
>
> This comment then reduces to:
>
> /* Entry already present in ap2m?  Caller should handle the fault. */
>
> > +    if ( !mfn_eq(amfn, INVALID_MFN) || mfn_eq(hmfn, INVALID_MFN) )
> > +    {
> > +        p2m_unlock(ap2m);
> >          return 0;
> > -
> > -    p2m_lock(*ap2m);
> > +    }
> >
> >      /*
> >       * If this is a superpage mapping, round down both frame numbers
> >       * to the start of the superpage.
> >       */
> >      mask = ~((1UL << page_order) - 1);
> > -    mfn = _mfn(mfn_x(mfn) & mask);
> > -    gfn = _gfn(gfn_x(gfn) & mask);
> > +    hmfn = _mfn(mfn_x(hmfn) & mask);
> > +    gfn = _gfn(gfn_l & mask);
> >
> > -    rv = p2m_set_entry(*ap2m, gfn, mfn, page_order, p2mt, p2ma);
> > -    p2m_unlock(*ap2m);
> > +    rv = p2m_set_entry(ap2m, gfn, hmfn, page_order, hp2mt, hp2ma);
> > +    p2m_unlock(ap2m);
> >
> >      if ( rv )
> >      {
> >          gdprintk(XENLOG_ERR,
> > -         "failed to set entry for %#"PRIx64" -> %#"PRIx64" p2m %#"PRIx64"\n",
> > -         gfn_x(gfn), mfn_x(mfn), (unsigned long)*ap2m);
> > -        domain_crash(hp2m->domain);
> > +        "failed to set entry for %#"PRIx64" -> %#"PRIx64" p2m %#"PRIx64"\n",
> > +        gfn_l, mfn_x(hmfn), (unsigned long)ap2m);
>
> This should be a gprintk(), not a gdprintk(), because there is nothing
> more infuriating in release builds than a domain_crash() with no
> qualification.
>
> Printing the virtual address of the ap2m is also useless as far as
> diagnostics go.  How about something like:
>
> "Failed to lazy copy gfn %"PRI_gfn" -> %"PRI_mfn" (type %u, access %u,
> order %u) for view %u\n"
>
> if there is an easy way to access the view id.  This at least stands a
> chance of being useful when debugging.

Yeap, makes sense.

>
> > +        domain_crash(ap2m->domain);
> >      }
> >
> >      return 1;
> > diff --git a/xen/include/asm-x86/p2m.h b/xen/include/asm-x86/p2m.h
> > index 2801a8ccca..c25e2a3cd8 100644
> > --- a/xen/include/asm-x86/p2m.h
> > +++ b/xen/include/asm-x86/p2m.h
> > @@ -867,8 +867,9 @@ void p2m_altp2m_check(struct vcpu *v, uint16_t idx);
> >  void p2m_flush_altp2m(struct domain *d);
> >
> >  /* Alternate p2m paging */
> > -bool_t p2m_altp2m_lazy_copy(struct vcpu *v, paddr_t gpa,
> > -    unsigned long gla, struct npfec npfec, struct p2m_domain **ap2m);
> > +bool_t p2m_altp2m_lazy_copy(struct p2m_domain *ap2m, unsigned long gfn_l,
> > +                            mfn_t hmfn, p2m_type_t hp2mt, p2m_access_t hp2ma,
> > +                            unsigned int page_order);
>
> Can we switch to bool while changing everything else as well?

Sure.

Thanks,
Tamas
Andrew Cooper April 12, 2019, 7:58 p.m. UTC | #5
On 12/04/2019 20:49, Tamas K Lengyel wrote:
> On Fri, Apr 12, 2019 at 1:44 PM Andrew Cooper <andrew.cooper3@citrix.com> wrote:
>>> +    p2m_lock(ap2m);
>>>
>>> -    mfn = get_gfn_type_access(hp2m, gfn_x(gfn), &p2mt, &p2ma,
>>> -                              P2M_ALLOC, &page_order);
>>> -    __put_gfn(hp2m, gfn_x(gfn));
>>> +    amfn = __get_gfn_type_access(ap2m, gfn_l, &ap2mt, &ap2ma,
>>> +                                 0, NULL, false);
>>>
>>> -    if ( mfn_eq(mfn, INVALID_MFN) )
>>> +    /* Bail if entry is already in altp2m or there is no entry in hostp2m */
>> This comment then reduces to:
>>
>> /* Entry already present in ap2m?  Caller should handle the fault. */
>>
>>> +    if ( !mfn_eq(amfn, INVALID_MFN) || mfn_eq(hmfn, INVALID_MFN) )
>>> +    {
>>> +        p2m_unlock(ap2m);
>>>          return 0;
>>> -
>>> -    p2m_lock(*ap2m);
>>> +    }
>>>
>>>      /*
>>>       * If this is a superpage mapping, round down both frame numbers
>>>       * to the start of the superpage.
>>>       */
>>>      mask = ~((1UL << page_order) - 1);
>>> -    mfn = _mfn(mfn_x(mfn) & mask);
>>> -    gfn = _gfn(gfn_x(gfn) & mask);
>>> +    hmfn = _mfn(mfn_x(hmfn) & mask);
>>> +    gfn = _gfn(gfn_l & mask);
>>>
>>> -    rv = p2m_set_entry(*ap2m, gfn, mfn, page_order, p2mt, p2ma);
>>> -    p2m_unlock(*ap2m);
>>> +    rv = p2m_set_entry(ap2m, gfn, hmfn, page_order, hp2mt, hp2ma);
>>> +    p2m_unlock(ap2m);
>>>
>>>      if ( rv )
>>>      {
>>>          gdprintk(XENLOG_ERR,
>>> -         "failed to set entry for %#"PRIx64" -> %#"PRIx64" p2m %#"PRIx64"\n",
>>> -         gfn_x(gfn), mfn_x(mfn), (unsigned long)*ap2m);
>>> -        domain_crash(hp2m->domain);
>>> +        "failed to set entry for %#"PRIx64" -> %#"PRIx64" p2m %#"PRIx64"\n",
>>> +        gfn_l, mfn_x(hmfn), (unsigned long)ap2m);
>> This should be a gprintk(), not a gdprintk(), because there is nothing
>> more infuriating in release builds than a domain_crash() with no
>> qualification.
>>
>> Printing the virtual address of the ap2m is also useless as far as
>> diagnostics go.  How about something like:
>>
>> "Failed to lazy copy gfn %"PRI_gfn" -> %"PRI_mfn" (type %u, access %u,
>> order %u) for view %u\n"
>>
>> if there is an easy way to access the view id.  This at least stands a
>> chance of being useful when debugging.
> Yeap, makes sense.

Oh, and include rv in the message as well, so we can get some hint as to
why the set_entry() failed.

As you're touching every site, how about renaming it to ret or rc for
consistency?
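
Folding both of those in, the tail of the function might end up looking
something like this (a sketch, not the final code):

int rc = p2m_set_entry(ap2m, gfn, hmfn, page_order, hp2mt, hp2ma);

p2m_unlock(ap2m);

if ( rc )
{
    gprintk(XENLOG_ERR,
            "Failed to lazy copy gfn %"PRI_gfn" -> %"PRI_mfn" (order %u): %d\n",
            gfn_l, mfn_x(hmfn), page_order, rc);
    domain_crash(ap2m->domain);
}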

~Andrew

Patch

diff --git a/xen/arch/x86/hvm/hvm.c b/xen/arch/x86/hvm/hvm.c
index 8adbb61b57..813e69a4c9 100644
--- a/xen/arch/x86/hvm/hvm.c
+++ b/xen/arch/x86/hvm/hvm.c
@@ -1688,6 +1688,7 @@  int hvm_hap_nested_page_fault(paddr_t gpa, unsigned long gla,
     int sharing_enomem = 0;
     vm_event_request_t *req_ptr = NULL;
     bool_t ap2m_active, sync = 0;
+    unsigned int page_order;
 
     /* On Nested Virtualization, walk the guest page table.
      * If this succeeds, all is fine.
@@ -1754,11 +1755,13 @@  int hvm_hap_nested_page_fault(paddr_t gpa, unsigned long gla,
     hostp2m = p2m_get_hostp2m(currd);
     mfn = get_gfn_type_access(hostp2m, gfn, &p2mt, &p2ma,
                               P2M_ALLOC | (npfec.write_access ? P2M_UNSHARE : 0),
-                              NULL);
+                              &page_order);
 
     if ( ap2m_active )
     {
-        if ( p2m_altp2m_lazy_copy(curr, gpa, gla, npfec, &p2m) )
+        p2m = p2m_get_altp2m(curr);
+
+        if ( p2m_altp2m_lazy_copy(p2m, gfn, mfn, p2mt, p2ma, page_order) )
         {
             /* entry was lazily copied from host -- retry */
             __put_gfn(hostp2m, gfn);
diff --git a/xen/arch/x86/mm/p2m.c b/xen/arch/x86/mm/p2m.c
index b9bbb8f485..140c707348 100644
--- a/xen/arch/x86/mm/p2m.c
+++ b/xen/arch/x86/mm/p2m.c
@@ -2375,54 +2375,46 @@  bool_t p2m_switch_vcpu_altp2m_by_id(struct vcpu *v, unsigned int idx)
  *     indicate that outer handler should handle fault
  */
 
-bool_t p2m_altp2m_lazy_copy(struct vcpu *v, paddr_t gpa,
-                            unsigned long gla, struct npfec npfec,
-                            struct p2m_domain **ap2m)
+bool_t p2m_altp2m_lazy_copy(struct p2m_domain *ap2m, unsigned long gfn_l,
+                            mfn_t hmfn, p2m_type_t hp2mt, p2m_access_t hp2ma,
+                            unsigned int page_order)
 {
-    struct p2m_domain *hp2m = p2m_get_hostp2m(v->domain);
-    p2m_type_t p2mt;
-    p2m_access_t p2ma;
-    unsigned int page_order;
-    gfn_t gfn = _gfn(paddr_to_pfn(gpa));
+    p2m_type_t ap2mt;
+    p2m_access_t ap2ma;
     unsigned long mask;
-    mfn_t mfn;
+    gfn_t gfn;
+    mfn_t amfn;
     int rv;
 
-    *ap2m = p2m_get_altp2m(v);
-
-    mfn = get_gfn_type_access(*ap2m, gfn_x(gfn), &p2mt, &p2ma,
-                              0, &page_order);
-    __put_gfn(*ap2m, gfn_x(gfn));
-
-    if ( !mfn_eq(mfn, INVALID_MFN) )
-        return 0;
+    p2m_lock(ap2m);
 
-    mfn = get_gfn_type_access(hp2m, gfn_x(gfn), &p2mt, &p2ma,
-                              P2M_ALLOC, &page_order);
-    __put_gfn(hp2m, gfn_x(gfn));
+    amfn = __get_gfn_type_access(ap2m, gfn_l, &ap2mt, &ap2ma,
+                                 0, NULL, false);
 
-    if ( mfn_eq(mfn, INVALID_MFN) )
+    /* Bail if entry is already in altp2m or there is no entry in hostp2m */
+    if ( !mfn_eq(amfn, INVALID_MFN) || mfn_eq(hmfn, INVALID_MFN) )
+    {
+        p2m_unlock(ap2m);
         return 0;
-
-    p2m_lock(*ap2m);
+    }
 
     /*
      * If this is a superpage mapping, round down both frame numbers
      * to the start of the superpage.
      */
     mask = ~((1UL << page_order) - 1);
-    mfn = _mfn(mfn_x(mfn) & mask);
-    gfn = _gfn(gfn_x(gfn) & mask);
+    hmfn = _mfn(mfn_x(hmfn) & mask);
+    gfn = _gfn(gfn_l & mask);
 
-    rv = p2m_set_entry(*ap2m, gfn, mfn, page_order, p2mt, p2ma);
-    p2m_unlock(*ap2m);
+    rv = p2m_set_entry(ap2m, gfn, hmfn, page_order, hp2mt, hp2ma);
+    p2m_unlock(ap2m);
 
     if ( rv )
     {
         gdprintk(XENLOG_ERR,
-	    "failed to set entry for %#"PRIx64" -> %#"PRIx64" p2m %#"PRIx64"\n",
-	    gfn_x(gfn), mfn_x(mfn), (unsigned long)*ap2m);
-        domain_crash(hp2m->domain);
+        "failed to set entry for %#"PRIx64" -> %#"PRIx64" p2m %#"PRIx64"\n",
+        gfn_l, mfn_x(hmfn), (unsigned long)ap2m);
+        domain_crash(ap2m->domain);
     }
 
     return 1;
diff --git a/xen/include/asm-x86/p2m.h b/xen/include/asm-x86/p2m.h
index 2801a8ccca..c25e2a3cd8 100644
--- a/xen/include/asm-x86/p2m.h
+++ b/xen/include/asm-x86/p2m.h
@@ -867,8 +867,9 @@  void p2m_altp2m_check(struct vcpu *v, uint16_t idx);
 void p2m_flush_altp2m(struct domain *d);
 
 /* Alternate p2m paging */
-bool_t p2m_altp2m_lazy_copy(struct vcpu *v, paddr_t gpa,
-    unsigned long gla, struct npfec npfec, struct p2m_domain **ap2m);
+bool_t p2m_altp2m_lazy_copy(struct p2m_domain *ap2m, unsigned long gfn_l,
+                            mfn_t hmfn, p2m_type_t hp2mt, p2m_access_t hp2ma,
+                            unsigned int page_order);
 
 /* Make a specific alternate p2m valid */
 int p2m_init_altp2m_by_id(struct domain *d, unsigned int idx);