
[4.15] x86/shadow: suppress "fast fault path" optimization without reserved bits

Message ID aefe5617-9f10-23a4-ee27-6ea66b62cdbe@suse.com (mailing list archive)
State New

Commit Message

Jan Beulich Feb. 25, 2021, 1:03 p.m. UTC
When none of the physical address bits in PTEs are reserved, we can't
create any 4k (leaf) PTEs which would trigger reserved bit faults. Hence
the present SHOPT_FAST_FAULT_PATH machinery needs to be suppressed in
this case, which is most easily achieved by never creating any magic
entries.

To compensate a little, eliminate sh_write_p2m_entry_post()'s impact on
such hardware.

While at it, also avoid using an MMIO magic entry when that would
truncate the incoming GFN.

Requested-by: Andrew Cooper <andrew.cooper3@citrix.com>
Signed-off-by: Jan Beulich <jbeulich@suse.com>
---
I wonder if subsequently we couldn't arrange for SMEP/SMAP faults to be
utilized instead, on capable hardware (which might well be all having
such a large physical address width).

I further wonder whether SH_L1E_MMIO_GFN_MASK couldn't / shouldn't be
widened. I don't see a reason why it would need confining to the low
32 bits of the PTE - using the full space up to bit 50 ought to be fine
(i.e. just one address bit left set in the magic mask), and we wouldn't
even need that many to encode a 40-bit GFN (i.e. the extra guarding
added here wouldn't then be needed in the first place).

Comments

Jan Beulich Feb. 25, 2021, 1:11 p.m. UTC | #1
On 25.02.2021 14:03, Jan Beulich wrote:
> When none of the physical address bits in PTEs are reserved, we can't
> create any 4k (leaf) PTEs which would trigger reserved bit faults. Hence
> the present SHOPT_FAST_FAULT_PATH machinery needs to be suppressed in
> this case, which is most easily achieved by never creating any magic
> entries.
> 
> To compensate a little, eliminate sh_write_p2m_entry_post()'s impact on
> such hardware.

As to 4.15: Without this, shadow mode simply won't work on such (new)
hardware. Hence something needs to be done anyway. An alternative
would be to limit the change to just the guest-not-present entries (to
at least allow PV guests to be migrated), and refuse to enable shadow
mode for HVM guests on such hardware. (In this case we'd probably
better take care of ...

> While at it, also avoid using an MMIO magic entry when that would
> truncate the incoming GFN.

... this long-standing issue, perhaps as outlined in a
post-commit-message remark.)

The main risk here is (in particular for the MMIO part of the change
I suppose) execution suddenly going a different path, which has been
unused / untested (for this specific case) for years.

Jan
Ian Jackson Feb. 25, 2021, 1:17 p.m. UTC | #2
Jan Beulich writes ("[PATCH][4.15] x86/shadow: suppress "fast fault path" optimization without reserved bits"):
> When none of the physical address bits in PTEs are reserved, we can't
> create any 4k (leaf) PTEs which would trigger reserved bit faults. Hence
> the present SHOPT_FAST_FAULT_PATH machinery needs to be suppressed in
> this case, which is most easily achieved by never creating any magic
> entries.
> 
> To compensate a little, eliminate sh_write_p2m_entry_post()'s impact on
> such hardware.
> 
> While at it, also avoid using an MMIO magic entry when that would
> truncate the incoming GFN.

Judging by the description I'm not sure whether this is a bugfix, or
a change to make it possible to run Xen on hardware where currently it
doesn't work at all.

I assume "none of the physical address bits in PTEs are reserved" is
a property of certain hardware, but it wasn't clear to me (i) whether
such platforms currently exist (ii) what the existing Xen code would
do in this case.

Can you enlighten me ?

Thanks,
Ian.
Ian Jackson Feb. 25, 2021, 1:20 p.m. UTC | #3
Jan Beulich writes ("Re: [PATCH][4.15] x86/shadow: suppress "fast fault path" optimization without reserved bits"):
> As to 4.15: Without this, shadow mode simply won't work on such (new)
> hardware. Hence something needs to be done anyway. An alternative
> would be to limit the change to just the guest-not-present entries (to
> at least allow PV guests to be migrated), and refuse to enable shadow
> mode for HVM guests on such hardware. (In this case we'd probably
> better take care of ...

Thanks for this explanation.

It sounds like the way you have it in this proposed patch is simpler
than the alternative.  And that right now it's not a regression, but
it is needed for running Xen on such newer hardware.

> The main risk here is (in particular for the MMIO part of the change
> I suppose) execution suddenly going a different path, which has been
> unused / untested (for this specific case) for years.

That's somewhat concerning.  But I think this only applies to the new
hardware ?  So it would be risking an XSA but not really risking the
release very much.

I think therefore:

Release-Acked-by: Ian Jackson <iwj@xenproject.org>

Ian.
Jan Beulich Feb. 25, 2021, 1:25 p.m. UTC | #4
On 25.02.2021 14:20, Ian Jackson wrote:
> Jan Beulich writes ("Re: [PATCH][4.15] x86/shadow: suppress "fast fault path" optimization without reserved bits"):
>> As to 4.15: Without this, shadow mode simply won't work on such (new)
>> hardware. Hence something needs to be done anyway. An alternative
>> would be to limit the change to just the guest-no-present entries (to
>> at least allow PV guests to be migrated), and refuse to enable shadow
>> mode for HVM guests on such hardware. (In this case we'd probably
>> better take care of ...
> 
> Thanks for this explanation.
> 
> It sounds like the way you have it in this proposed patch is simpler
> than the alternative.  And that right now it's not a regression, but
> it is needed for running Xen on such newer hardware.

I'm not sure about the "simpler" part.

>> The main risk here is (in particular for the MMIO part of the change
>> I suppose) execution suddenly going a different path, which has been
>> unused / untested (for this specific case) for years.
> 
> That's somewhat concerning.  But I think this only applies to the new
> hardware ?  So it would be risking an XSA but not really risking the
> release very much.

Right - afaict an XSA would also be lurking without us doing anything,
as we'd permit a guest access to pages we didn't mean to hand to it.

> I think therefore:
> 
> Release-Acked-by: Ian Jackson <iwj@xenproject.org>

Thanks.

Jan
Jan Beulich Feb. 25, 2021, 1:30 p.m. UTC | #5
On 25.02.2021 14:17, Ian Jackson wrote:
> Jan Beulich writes ("[PATCH][4.15] x86/shadow: suppress "fast fault path" optimization without reserved bits"):
>> When none of the physical address bits in PTEs are reserved, we can't
>> create any 4k (leaf) PTEs which would trigger reserved bit faults. Hence
>> the present SHOPT_FAST_FAULT_PATH machinery needs to be suppressed in
>> this case, which is most easily achieved by never creating any magic
>> entries.
>>
>> To compensate a little, eliminate sh_write_p2m_entry_post()'s impact on
>> such hardware.
>>
>> While at it, also avoid using an MMIO magic entry when that would
>> truncate the incoming GFN.
> 
> Judging by the description I'm not sure whether this is a bugfix, or
> a change to make it possible to run Xen on hardware where currently it
> doesn't work at all.

It's still a bug fix imo, even if the flawed assumption was harmless
so far.

> I assume "none of the physical address bits in PTEs are reserved" is
> a property of certain hardware, but it wasn't clear to me (i) whether
> such platforms currently exist

If they don't exist yet, they're soon to become available afaict.

> (ii) what the existing Xen code would do in this case.

If memory is populated at the top 4GiB of physical address space,
guests would gain access to that memory through these page table
entries, as those don't cause the expected faults to be raised
anymore (but instead get used for valid - from the hardware's
perspective - translations).

Jan
Tim Deegan Feb. 26, 2021, 5:07 p.m. UTC | #6
Hi,

At 14:03 +0100 on 25 Feb (1614261809), Jan Beulich wrote:
> When none of the physical address bits in PTEs are reserved, we can't
> create any 4k (leaf) PTEs which would trigger reserved bit faults. Hence
> the present SHOPT_FAST_FAULT_PATH machinery needs to be suppressed in
> this case, which is most easily achieved by never creating any magic
> entries.
> 
> To compensate a little, eliminate sh_write_p2m_entry_post()'s impact on
> such hardware.
> 
> While at it, also avoid using an MMIO magic entry when that would
> truncate the incoming GFN.
> 
> Requested-by: Andrew Cooper <andrew.cooper3@citrix.com>
> Signed-off-by: Jan Beulich <jbeulich@suse.com>

Acked-by: Tim Deegan <tim@xen.org>

> I wonder if subsequently we couldn't arrange for SMEP/SMAP faults to be
> utilized instead, on capable hardware (which might well be all having
> such a large physical address width).

I don't immediately see how, since we don't control the access type
that the guest will use.

> I further wonder whether SH_L1E_MMIO_GFN_MASK couldn't / shouldn't be
> widened. I don't see a reason why it would need confining to the low
> 32 bits of the PTE - using the full space up to bit 50 ought to be fine
> (i.e. just one address bit left set in the magic mask), and we wouldn't
> even need that many to encode a 40-bit GFN (i.e. the extra guarding
> added here wouldn't then be needed in the first place).

Yes, I think it could be reduced to use just one reserved address bit.
IIRC we just used such a large mask so the magic entries would be
really obvious in debugging, and there was no need to support arbitrary
address widths for emulated devices.

Cheers,

Tim.
Jan Beulich March 1, 2021, 8:10 a.m. UTC | #7
On 26.02.2021 18:07, Tim Deegan wrote:
> At 14:03 +0100 on 25 Feb (1614261809), Jan Beulich wrote:
>> When none of the physical address bits in PTEs are reserved, we can't
>> create any 4k (leaf) PTEs which would trigger reserved bit faults. Hence
>> the present SHOPT_FAST_FAULT_PATH machinery needs to be suppressed in
>> this case, which is most easily achieved by never creating any magic
>> entries.
>>
>> To compensate a little, eliminate sh_write_p2m_entry_post()'s impact on
>> such hardware.
>>
>> While at it, also avoid using an MMIO magic entry when that would
>> truncate the incoming GFN.
>>
>> Requested-by: Andrew Cooper <andrew.cooper3@citrix.com>
>> Signed-off-by: Jan Beulich <jbeulich@suse.com>
> 
> Acked-by: Tim Deegan <tim@xen.org>

Thanks.

>> I wonder if subsequently we couldn't arrange for SMEP/SMAP faults to be
>> utilized instead, on capable hardware (which might well be all having
>> such a large physical address width).
> 
> I don't immediately see how, since we don't control the access type
> that the guest will use.


>> I further wonder whether SH_L1E_MMIO_GFN_MASK couldn't / shouldn't be
>> widened. I don't see a reason why it would need confining to the low
>> 32 bits of the PTE - using the full space up to bit 50 ought to be fine
>> (i.e. just one address bit left set in the magic mask), and we wouldn't
>> even need that many to encode a 40-bit GFN (i.e. the extra guarding
>> added here wouldn't then be needed in the first place).
> 
> Yes, I think it could be reduced to use just one reserved address bit.
> IIRC we just used such a large mask so the magic entries would be
> really obvious in debugging, and there was no need to support arbitrary
> address widths for emulated devices.

Will cook a patch, albeit I guess I'll keep as many of the bits set
as possible, while still being able to encode a full-40-bit GFN.

Ian - I don't suppose you'd consider this a reasonable thing to do
for 4.15? It would allow limiting the negative (performance) effect
the change here has.

Jan
Ian Jackson March 1, 2021, 5:30 p.m. UTC | #8
Jan Beulich writes ("Re: [PATCH][4.15] x86/shadow: suppress "fast fault path" optimization without reserved bits"):
> On 26.02.2021 18:07, Tim Deegan wrote:
> > Yes, I think it could be reduced to use just one reserved address bit.
> > IIRC we just used such a large mask so the magic entries would be
> > really obvious in debugging, and there was no need to support arbitrary
> > address widths for emulated devices.
> 
> Will cook a patch, albeit I guess I'll keep as many of the bits set
> as possible, while still being able to encode a full-40-bit GFN.
> 
> Ian - I don't suppose you'd consider this a reasonable thing to do
> for 4.15? It would allow limiting the negative (performance) effect
> the change here has.

I'm afraid I don't follow enough of the background here to have an
opinion right now.  Can someone explain to me the risks (and,
correspondingly, upsides) of the options ?  Sorry to be dim, I don't
seem to be firing on all cylinders today.

Ian.
Andrew Cooper March 1, 2021, 5:34 p.m. UTC | #9
On 01/03/2021 17:30, Ian Jackson wrote:
> Jan Beulich writes ("Re: [PATCH][4.15] x86/shadow: suppress "fast fault path" optimization without reserved bits"):
>> On 26.02.2021 18:07, Tim Deegan wrote:
>>> Yes, I think it could be reduced to use just one reserved address bit.
>>> IIRC we just used such a large mask so the magic entries would be
>>> really obvious in debugging, and there was no need to support arbitrary
>>> address widths for emulated devices.
>> Will cook a patch, albeit I guess I'll keep as many of the bits set
>> as possible, while still being able to encode a full-40-bit GFN.
>>
>> Ian - I don't suppose you'd consider this a reasonable thing to do
>> for 4.15? It would allow limiting the negative (performance) effect
>> the change here has.
> I'm afraid I don't follow enough of the background here to have an
> opinion right now.  Can someone explain to me the risks (and,
> correspondingly, upsides) of the options ?  Sorry to be dim, I don't
> seem to be firing on all cylinders today.

Intel IceLake CPUs (imminently coming out) have no reserved bits in
pagetable entries, so these "optimisations" malfunction.  It is
definitely an issue for HVM Shadow guests, and possibly migration of PV
guests (I can never remember whether we use the GNP fastpath on PV or not).

It is arguably wrong that we ever depended on reserved behaviour.

I've got a (not-yet-upstreamed) XTF test which can comprehensively test
this.  I'll find some time to give them a spin and give a T-by.

Without this fix, some combinations of "normal" VM settings will
malfunction.

~Andrew
Ian Jackson March 1, 2021, 5:43 p.m. UTC | #10
Andrew Cooper writes ("Re: [PATCH][4.15] x86/shadow: suppress "fast fault path" optimization without reserved bits"):
> On 01/03/2021 17:30, Ian Jackson wrote:
> > Jan Beulich writes ("Re: [PATCH][4.15] x86/shadow: suppress "fast fault path" optimization without reserved bits"):
> >> On 26.02.2021 18:07, Tim Deegan wrote:
> >>> Yes, I think it could be reduced to use just one reserved address bit.
> >>> IIRC we just used such a large mask so the magic entries would be
> >>> really obvious in debugging, and there was no need to support arbitrary
> >>> address widths for emulated devices.
> >> Will cook a patch, albeit I guess I'll keep as many of the bits set
> >> as possible, while still being able to encode a full-40-bit GFN.
> >>
> >> Ian - I don't suppose you'd consider this a reasonable thing to do
> >> for 4.15? It would allow limiting the negative (performance) effect
> >> the change here has.
> > I'm afraid I don't follow enough of the background here to have an
> > opinion right now.  Can someone explain to me the risks (and,
> > correspondingly, upsides) of the options ?  Sorry to be dim, I don't
> > seem to be firing on all cylinders today.
> 
> Intel IceLake CPUs (imminently coming out) have no reserved bits in
> pagetable entries, so these "optimisations" malfunction.  It is
> definitely an issue for HVM Shadow guests, and possibly migration of PV
> guests (I can never remember whether we use the GNP fastpath on PV or not).
> 
> It is arguably wrong that we ever depended on reserved behaviour.
> 
> I've got a (not-yet-upstreamed) XTF test which can comprehensively test
> this.  I'll find some time to give them a spin and give a T-by.
> 
> Without this fix, some combinations of "normal" VM settings will
> malfunction.

Thanks for that explanation.

I don't quite follow how that relates to Jan's comment

 >> Will cook a patch, albeit I guess I'll keep as many of the bits set
 >> as possible, while still being able to encode a full-40-bit GFN.
 >>
 >> Ian - I don't suppose you'd consider this a reasonable thing to do
 >> for 4.15? It would allow limiting the negative (performance) effect
 >> the change here has.

I already gave a release-ack for the original patch.  I think Jan is
asking for a release-ack for a different way of fixing the problem.

Ian.
Jan Beulich March 2, 2021, 10:12 a.m. UTC | #11
On 01.03.2021 18:34, Andrew Cooper wrote:
> On 01/03/2021 17:30, Ian Jackson wrote:
>> Jan Beulich writes ("Re: [PATCH][4.15] x86/shadow: suppress "fast fault path" optimization without reserved bits"):
>>> On 26.02.2021 18:07, Tim Deegan wrote:
>>>> Yes, I think it could be reduced to use just one reserved address bit.
>>>> IIRC we just used such a large mask so the magic entries would be
>>>> really obvious in debugging, and there was no need to support arbitrary
>>>> address widths for emulated devices.
>>> Will cook a patch, albeit I guess I'll keep as many of the bits set
>>> as possible, while still being able to encode a full-40-bit GFN.
>>>
>>> Ian - I don't suppose you'd consider this a reasonable thing to do
>>> for 4.15? It would allow limiting the negative (performance) effect
>>> the change here has.
>> I'm afraid I don't follow enough of the background here to have an
>> opinion right now.  Can someone explain to me the risks (and,
>> correspondingly, upsides) of the options ?  Sorry to be dim, I don't
>> seem to be firing on all cylinders today.
> 
> Intel IceLake CPUs (imminently coming out) have no reserved bits in
> pagetable entries, so these "optimisations" malfunction.  It is
> definitely an issue for HVM Shadow guests, and possibly migration of PV
> guests (I can never remember whether we use the GNP fastpath on PV or not).

We do for not-present entries afaict, while (I guess obviously) we
don't for MMIO ones.

Jan
Jan Beulich March 2, 2021, 10:22 a.m. UTC | #12
On 01.03.2021 18:43, Ian Jackson wrote:
> Andrew Cooper writes ("Re: [PATCH][4.15] x86/shadow: suppress "fast fault path" optimization without reserved bits"):
>> On 01/03/2021 17:30, Ian Jackson wrote:
>>> Jan Beulich writes ("Re: [PATCH][4.15] x86/shadow: suppress "fast fault path" optimization without reserved bits"):
>>>> On 26.02.2021 18:07, Tim Deegan wrote:
>>>>> Yes, I think it could be reduced to use just one reserved address bit.
>>>>> IIRC we just used such a large mask so the magic entries would be
>>>>> really obvious in debugging, and there was no need to support arbitrary
>>>>> address widths for emulated devices.
>>>> Will cook a patch, albeit I guess I'll keep as many of the bits set
>>>> as possible, while still being able to encode a full-40-bit GFN.
>>>>
>>>> Ian - I don't suppose you'd consider this a reasonable thing to do
>>>> for 4.15? It would allow limiting the negative (performance) effect
>>>> the change here has.
>>> I'm afraid I don't follow enough of the background here to have an
>>> opinion right now.  Can someone explain to me the risks (and,
>>> correspondingly, upsides) of the options ?  Sorry to be dim, I don't
>>> seem to be firing on all cylinders today.

I guess the risk from that patch is no different than that from the
patch here. It would merely improve performance for guests using
very large GFNs for memory areas needing emulation by qemu, which I
suppose originally wasn't expected to be happening in the first place.
In fact, had I been certain there were no side effects of the too
narrow GFN representation used so far, I would probably have
submitted the patches in reverse order, or even folded them.

>> Intel IceLake CPUs (imminently coming out) have no reserved bits in
>> pagetable entries, so these "optimisations" malfunction.  It is
>> definitely an issue for HVM Shadow guests, and possibly migration of PV
>> guests (I can never remember whether we use the GNP fastpath on PV or not).
>>
>> It is arguably wrong that we ever depended on reserved behaviour.
>>
>> I've got a (not-yet-upstreamed) XTF test which can comprehensively test
>> this.  I'll find some time to give them a spin and give a T-by.
>>
>> Without this fix, some combinations of "normal" VM settings will
>> malfunction.
> 
> Thanks for that explanation.
> 
> I don't quite follow how that relates to Jan's comment
> 
>  >> Will cook a patch, albeit I guess I'll keep as many of the bits set
>  >> as possible, while still being able to encode a full-40-bit GFN.
>  >>
>  >> Ian - I don't suppose you'd consider this a reasonable thing to do
>  >> for 4.15? It would allow limiting the negative (performance) effect
>  >> the change here has.
> 
> I already gave a release-ack for the original patch.  I think Jan is
> asking for a release-ack for a different way of fixing the problem.

Well, I was trying to negotiate whether I should submit that patch
for 4.15, or only later for 4.16.

Jan
Ian Jackson March 2, 2021, 12:32 p.m. UTC | #13
Jan Beulich writes ("Re: [PATCH][4.15] x86/shadow: suppress "fast fault path" optimization without reserved bits"):
> On 01.03.2021 18:43, Ian Jackson wrote:
> > Andrew Cooper writes ("Re: [PATCH][4.15] x86/shadow: suppress "fast fault path" optimization without reserved bits"):
> >> On 01/03/2021 17:30, Ian Jackson wrote:
> >>> I'm afraid I don't follow enough of the background here to have an
> >>> opinion right now.  Can someone explain to me the risks (and,
> >>> correspondingly, upsides) of the options ?  Sorry to be dim, I don't
> >>> seem to be firing on all cylinders today.
> 
> I guess the risk from that patch is no different than that from the
> patch here. It would merely improve performance for guests using
> very large GFNs for memory areas needing emulation by qemu, which I
> suppose originally wasn't expected to be happening in the first place.
> In fact, had I been certain there were no side effects of the too
> narrow GFN representation used so far, I would probably have
> submitted the patches in reverse order, or even folded them.

I am still confused.  You are saying that the existing patch, and your
proposal that you are wanting me to have an opinion on, have the same
risk.  So, what aspect of the proposed other way of fixing it might
make me say no ?

> >> Without this fix, some combinations of "normal" VM settings will
> >> malfunction.
> > 
> > Thanks for that explanation.
> > 
> > I don't quite follow how that relates to Jan's comment
> > 
> >  >> Will cook a patch, albeit I guess I'll keep as many of the bits set
> >  >> as possible, while still being able to encode a full-40-bit GFN.
> >  >>
> >  >> Ian - I don't suppose you'd consider this a reasonable thing to do
> >  >> for 4.15? It would allow limiting the negative (performance) effect
> >  >> the change here has.
> > 
> > I already gave a release-ack for the original patch.  I think Jan is
> > asking for a release-ack for a different way of fixing the problem.
> 
> Well, I was trying to negotiate whether I should submit that patch
> for 4.15, or only later for 4.16.

Precisely.  I was hoping someone (eg Andy) would be able to help
explain to me why this is a good idea, or a bad idea.

Ian.
Jan Beulich March 2, 2021, 1:17 p.m. UTC | #14
On 02.03.2021 13:32, Ian Jackson wrote:
> Jan Beulich writes ("Re: [PATCH][4.15] x86/shadow: suppress "fast fault path" optimization without reserved bits"):
>> On 01.03.2021 18:43, Ian Jackson wrote:
>>> Andrew Cooper writes ("Re: [PATCH][4.15] x86/shadow: suppress "fast fault path" optimization without reserved bits"):
>>>> On 01/03/2021 17:30, Ian Jackson wrote:
>>>>> I'm afraid I don't follow enough of the background here to have an
>>>>> opinion right now.  Can someone explain to me the risks (and,
>>>>> correspondingly, upsides) of the options ?  Sorry to be dim, I don't
>>>>> seem to be firing on all cylinders today.
>>
>> I guess the risk from that patch is no different than that from the
>> patch here. It would merely improve performance for guests using
>> very large GFNs for memory areas needing emulation by qemu, which I
>> suppose originally wasn't expected to be happening in the first place.
>> In fact, had I been certain there were no side effects of the too
>> narrow GFN representation used so far, I would probably have
>> submitted the patches in reverse order, or even folded them.
> 
> I am still confused.  You are saying that the existing patch, and your
> proposal that you are wanting me to have an opinion on, have the same
> risk.  So, what aspect of the proposed other way of fixing it might
> make me say no ?

Yet another change / yet more code churn for merely improving what
got fixed already. But, oh, there looks to be a misunderstanding
nevertheless - I'm not proposing another way of addressing the
same issue, but instead a performance improvement (for perhaps
just an unlikely case) on top of the fix that you did give your
ack for already. (As said - had I been certain of no aspect that
I might be overlooking, I might have folded that. But I wasn't
certain, and hence I did post the basic fix first / individually.)

Jan

Patch

--- a/xen/arch/x86/mm/shadow/multi.c
+++ b/xen/arch/x86/mm/shadow/multi.c
@@ -499,7 +499,8 @@  _sh_propagate(struct vcpu *v,
     {
         /* Guest l1e maps emulated MMIO space */
         *sp = sh_l1e_mmio(target_gfn, gflags);
-        d->arch.paging.shadow.has_fast_mmio_entries = true;
+        if ( sh_l1e_is_magic(*sp) )
+            d->arch.paging.shadow.has_fast_mmio_entries = true;
         goto done;
     }
 
--- a/xen/arch/x86/mm/shadow/types.h
+++ b/xen/arch/x86/mm/shadow/types.h
@@ -281,7 +281,8 @@  shadow_put_page_from_l1e(shadow_l1e_t sl
  * pagetables.
  *
  * This is only feasible for PAE and 64bit Xen: 32-bit non-PAE PTEs don't
- * have reserved bits that we can use for this.
+ * have reserved bits that we can use for this.  And even there it can only
+ * be used if the processor doesn't use all 52 address bits.
  */
 
 #define SH_L1E_MAGIC 0xffffffff00000001ULL
@@ -291,14 +292,24 @@  static inline bool sh_l1e_is_magic(shado
 }
 
 /* Guest not present: a single magic value */
-static inline shadow_l1e_t sh_l1e_gnp(void)
+static inline shadow_l1e_t sh_l1e_gnp_raw(void)
 {
     return (shadow_l1e_t){ -1ULL };
 }
 
+static inline shadow_l1e_t sh_l1e_gnp(void)
+{
+    /*
+     * On systems with no reserved physical address bits we can't engage the
+     * fast fault path.
+     */
+    return paddr_bits < PADDR_BITS ? sh_l1e_gnp_raw()
+                                   : shadow_l1e_empty();
+}
+
 static inline bool sh_l1e_is_gnp(shadow_l1e_t sl1e)
 {
-    return sl1e.l1 == sh_l1e_gnp().l1;
+    return sl1e.l1 == sh_l1e_gnp_raw().l1;
 }
 
 /*
@@ -313,9 +324,14 @@  static inline bool sh_l1e_is_gnp(shadow_
 
 static inline shadow_l1e_t sh_l1e_mmio(gfn_t gfn, u32 gflags)
 {
-    return (shadow_l1e_t) { (SH_L1E_MMIO_MAGIC
-                             | MASK_INSR(gfn_x(gfn), SH_L1E_MMIO_GFN_MASK)
-                             | (gflags & (_PAGE_USER|_PAGE_RW))) };
+    unsigned long gfn_val = MASK_INSR(gfn_x(gfn), SH_L1E_MMIO_GFN_MASK);
+
+    if ( paddr_bits >= PADDR_BITS ||
+         gfn_x(gfn) != MASK_EXTR(gfn_val, SH_L1E_MMIO_GFN_MASK) )
+        return shadow_l1e_empty();
+
+    return (shadow_l1e_t) { (SH_L1E_MMIO_MAGIC | gfn_val |
+                             (gflags & (_PAGE_USER | _PAGE_RW))) };
 }
 
 static inline bool sh_l1e_is_mmio(shadow_l1e_t sl1e)