[2/2] x86/AMD: Fix handling of x87 exception pointers on Fam17h hardware

Message ID 20190819182612.16706-3-andrew.cooper3@citrix.com

Commit Message

Andrew Cooper Aug. 19, 2019, 6:26 p.m. UTC
AMD pre-Fam17h CPUs "optimise" {F,}X{SAVE,RSTOR} by not saving/restoring
FOP/FIP/FDP if an x87 exception isn't pending.  This causes an information
leak, CVE-2006-1056, which is worked around by several OSes, including Xen.
AMD Fam17h CPUs no longer have this leak, and advertise the fact in a CPUID
bit.

Introduce the RSTR_FP_ERR_PTRS feature, as specified by AMD, and expose to all
guests by default.  While adjusting libxl's cpuid table, add CLZERO which
looks to have been omitted previously.

Also introduce an X86_BUG bit to trigger the (F)XRSTOR workaround, and set it
on AMD hardware where RSTR_FP_ERR_PTRS is not advertised.  Optimise the
workaround path by dropping the data-dependent, unpredictable conditions which
will evaluate to true for all 64-bit OSes and most 32-bit ones.
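For reference, the guard being dropped can be sketched as follows (an
illustrative standalone fragment, not Xen's code; the function name is
invented):

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* Sketch of the dropped condition.  FSW bits 0-5 are the x87 exception
 * status flags; FCW bits 0-5 are the corresponding mask bits.  An unmasked
 * exception is pending exactly when a status bit is set whose mask bit is
 * clear.  When this returns true, affected hardware saves/restores the
 * FOP/FIP/FDP pointers itself and the scrub is unnecessary; the patch
 * drops the check on the grounds that this is almost never true. */
static bool x87_unmasked_exception_pending(uint16_t fsw, uint16_t fcw)
{
    return fsw & ~fcw & 0x003f;
}
```

With the default FCW value of 0x037f (all exceptions masked), the function
returns false regardless of FSW, which is the common case the commit message
relies on.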

Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
---
CC: Jan Beulich <JBeulich@suse.com>
CC: Wei Liu <wei.liu2@citrix.com>
CC: Roger Pau Monné <roger.pau@citrix.com>

v2:
 * Use the AMD naming, not that I am convinced this is a sensible name to use.
 * Adjust the i387 codepaths as well as the xstate ones.
 * Add xen-cpuid/libxl data for the CPUID bit.
---
 tools/libxl/libxl_cpuid.c                   |  3 +++
 tools/misc/xen-cpuid.c                      |  1 +
 xen/arch/x86/cpu/amd.c                      |  7 +++++++
 xen/arch/x86/i387.c                         | 14 +++++---------
 xen/arch/x86/xstate.c                       |  6 ++----
 xen/include/asm-x86/cpufeature.h            |  3 +++
 xen/include/asm-x86/cpufeatures.h           |  2 ++
 xen/include/public/arch-x86/cpufeatureset.h |  1 +
 8 files changed, 24 insertions(+), 13 deletions(-)

Comments

Jan Beulich Aug. 29, 2019, 12:56 p.m. UTC | #1
On 19.08.2019 20:26, Andrew Cooper wrote:
> AMD Pre-Fam17h CPUs "optimise" {F,}X{SAVE,RSTOR} by not saving/restoring
> FOP/FIP/FDP if an x87 exception isn't pending.  This causes an information
> leak, CVE-2006-1056, and worked around by several OSes, including Xen.  AMD
> Fam17h CPUs no longer have this leak, and advertise so in a CPUID bit.
> 
> Introduce the RSTR_FP_ERR_PTRS feature, as specified by AMD, and expose to all
> guests by default.  While adjusting libxl's cpuid table, add CLZERO which
> looks to have been omitted previously.
> 
> Also introduce an X86_BUG bit to trigger the (F)XRSTOR workaround, and set it
> on AMD hardware where RSTR_FP_ERR_PTRS is not advertised.  Optimise the
> workaround path by dropping the data-dependent unpredictable conditions which
> will evalute to true for all 64bit OSes and most 32bit ones.

I definitely don't buy the "all 64bit OSes" part here: Anyone doing
full 80-bit FP operations will have to use the FPU, and hence may
want to have some unmasked exceptions. I'm also not sure why you
call them "unpredictable": If all (or most) cases match, the branch
there could be pretty well predicted (subject of course to capacity).

All in all I'd prefer if the conditions remained in place; my minimal
request would be for there to be a comment why there's no evaluation
of FSW/FCW.

> --- a/xen/arch/x86/i387.c
> +++ b/xen/arch/x86/i387.c
> @@ -43,20 +43,17 @@ static inline void fpu_fxrstor(struct vcpu *v)
>      const typeof(v->arch.xsave_area->fpu_sse) *fpu_ctxt = v->arch.fpu_ctxt;
>  
>      /*
> -     * AMD CPUs don't save/restore FDP/FIP/FOP unless an exception
> +     * Some CPUs don't save/restore FDP/FIP/FOP unless an exception

Are there any non-AMD CPUs known to have this issue? If not, is
there a particular reason you don't say "Some AMD CPUs ..."?

>       * is pending. Clear the x87 state here by setting it to fixed
>       * values. The hypervisor data segment can be sometimes 0 and
>       * sometimes new user value. Both should be ok. Use the FPU saved
>       * data block as a safe address because it should be in L1.
>       */
> -    if ( !(fpu_ctxt->fsw & ~fpu_ctxt->fcw & 0x003f) &&
> -         boot_cpu_data.x86_vendor == X86_VENDOR_AMD )
> -    {
> +    if ( cpu_bug_fpu_ptr_leak )
>          asm volatile ( "fnclex\n\t"
>                         "ffree %%st(7)\n\t" /* clear stack tag */
>                         "fildl %0"          /* load to clear state */
>                         : : "m" (*fpu_ctxt) );

If here and in the respective xsave instance you'd use alternatives
patching, I wouldn't mind the use of a X86_BUG_* for this (as made
possible by patch 1). But as said before, just like for synthetic
features I strongly think we should use simple boolean variables
when using them only in if()-s. Use of the feature(/bug) machinery
is needed only to not further complicate alternatives patching.

> @@ -169,11 +166,10 @@ static inline void fpu_fxsave(struct vcpu *v)
>                         : "=m" (*fpu_ctxt) : "R" (fpu_ctxt) );
>  
>          /*
> -         * AMD CPUs don't save/restore FDP/FIP/FOP unless an exception
> -         * is pending.
> +         * Some CPUs don't save/restore FDP/FIP/FOP unless an exception is
> +         * pending.  The restore code fills in suitable defaults.
>           */
> -        if ( !(fpu_ctxt->fsw & 0x0080) &&
> -             boot_cpu_data.x86_vendor == X86_VENDOR_AMD )
> +        if ( cpu_bug_fpu_ptr_leak && !(fpu_ctxt->fsw & 0x0080) )
>              return;

The comment addition seems a little unmotivated: The code here isn't
about leaking data, but about having valid data to consume (down
from here). With this, keying the return to cpu_bug_* also doesn't
look very nice, but I admit I can't suggest a better alternative
(other than leaving the vendor check in place and checking the
X86_FEATURE_RSTR_FP_ERR_PTRS bit).

An option might be to give the construct a different name, without
"leak" in it (NO_FP_ERR_PTRS?).

Jan
Andrew Cooper Sept. 2, 2019, 2:15 p.m. UTC | #2
On 29/08/2019 13:56, Jan Beulich wrote:
> On 19.08.2019 20:26, Andrew Cooper wrote:
>> AMD Pre-Fam17h CPUs "optimise" {F,}X{SAVE,RSTOR} by not saving/restoring
>> FOP/FIP/FDP if an x87 exception isn't pending.  This causes an information
>> leak, CVE-2006-1056, and worked around by several OSes, including Xen.  AMD
>> Fam17h CPUs no longer have this leak, and advertise so in a CPUID bit.
>>
>> Introduce the RSTR_FP_ERR_PTRS feature, as specified by AMD, and expose to all
>> guests by default.  While adjusting libxl's cpuid table, add CLZERO which
>> looks to have been omitted previously.
>>
>> Also introduce an X86_BUG bit to trigger the (F)XRSTOR workaround, and set it
>> on AMD hardware where RSTR_FP_ERR_PTRS is not advertised.  Optimise the
>> workaround path by dropping the data-dependent unpredictable conditions which
>> will evalute to true for all 64bit OSes and most 32bit ones.
> I definitely don't buy the "all 64bit OSes" part here: Anyone doing
> full 80-bit FP operations will have to use the FPU, and hence may
> want to have some unmasked exceptions.

And all 0 people doing that is still 0.

Yes I'm being a little facetious, but there is exceedingly little
software which uses 80-bit FPU operations these days, as it has been
superseded by SSE.

>  I'm also not sure why you
> call them "unpredictable": If all (or most) cases match, the branch
> there could be pretty well predicted (subject of course to capacity).

Data-dependent branches which have no correlation to pattern history, of
which this is an example, are frequently mispredicted because they are
inherently unstable.

In this case, you're trading off the fact that an unmasked exception is
basically never pending, against the cost of mispredicts in the context
switch path.
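A toy model makes the distinction concrete (a sketch only: the 2-bit
saturating counter and the hash constant are illustrative assumptions, not
any real CPU's predictor):

```c
#include <assert.h>

/* Toy 2-bit saturating-counter predictor.  run() returns how many of
 * 'iters' branch outcomes it mispredicts. */
static unsigned run(unsigned iters, int (*outcome)(unsigned))
{
    unsigned ctr = 0, miss = 0;              /* counter in 0..3 */

    for ( unsigned i = 0; i < iters; i++ )
    {
        int taken = outcome(i);

        if ( (ctr >= 2) != taken )           /* predict taken iff ctr >= 2 */
            miss++;
        if ( taken && ctr < 3 )
            ctr++;
        else if ( !taken && ctr > 0 )
            ctr--;
    }

    return miss;
}

/* A cpu_bug_*-style check: the same outcome every time after boot. */
static int constant(unsigned i) { (void)i; return 1; }

/* Data with no pattern correlation, modelled by a deterministic integer
 * hash (the multiplier is arbitrary). */
static int noisy(unsigned i) { return (i * 2654435761u >> 16) & 1; }
```

The boot-constant branch trains in a couple of iterations and then never
misses; the data-keyed branch keeps missing, which is the trade-off being
argued here.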

> All in all I'd prefer if the conditions remained in place; my minimal
> request would be for there to be a comment why there's no evaluation
> of FSW/FCW.
>
>> --- a/xen/arch/x86/i387.c
>> +++ b/xen/arch/x86/i387.c
>> @@ -43,20 +43,17 @@ static inline void fpu_fxrstor(struct vcpu *v)
>>      const typeof(v->arch.xsave_area->fpu_sse) *fpu_ctxt = v->arch.fpu_ctxt;
>>  
>>      /*
>> -     * AMD CPUs don't save/restore FDP/FIP/FOP unless an exception
>> +     * Some CPUs don't save/restore FDP/FIP/FOP unless an exception
> Are there any non-AMD CPUs known to have this issue? If not, is
> there a particular reason you don't say "Some AMD CPUs ..."?

I'm not aware of any, but leaving it as "Some AMD" might become stale if
others do surface.

Information about which CPUs are affected should exclusively be
determined by the logic which sets cpu_bug_fpu_ptr_leak, which won't be
stale.

>>       * is pending. Clear the x87 state here by setting it to fixed
>>       * values. The hypervisor data segment can be sometimes 0 and
>>       * sometimes new user value. Both should be ok. Use the FPU saved
>>       * data block as a safe address because it should be in L1.
>>       */
>> -    if ( !(fpu_ctxt->fsw & ~fpu_ctxt->fcw & 0x003f) &&
>> -         boot_cpu_data.x86_vendor == X86_VENDOR_AMD )
>> -    {
>> +    if ( cpu_bug_fpu_ptr_leak )
>>          asm volatile ( "fnclex\n\t"
>>                         "ffree %%st(7)\n\t" /* clear stack tag */
>>                         "fildl %0"          /* load to clear state */
>>                         : : "m" (*fpu_ctxt) );
> If here and in the respective xsave instance you'd use alternatives
> patching, I wouldn't mind the use of a X86_BUG_* for this (as made
> possible by patch 1).

a) this should probably be a static branch if/when we gain that
infrastructure, but ...

> But as said before, just like for synthetic
> features I strongly think we should use simple boolean variables
> when using them only in if()-s. Use of the feature(/bug) machinery
> is needed only to not further complicate alternatives patching.

... b)

I see I'm going to have to repeat myself, which is time I can't really
afford to waste.

x86_capabilities is not, and has never been, "just for alternatives".
Nor is that how it is currently used in Xen.

I also don't agree with the general suggestion because amongst other
things, there is a factor of 8 storage difference between one extra bit
in x86_capabilities[] and using scattered booleans.
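Concretely, assuming the usual x86-64 sizes (1-byte bool, 8-byte unsigned
long), the factor of 8 falls out directly:

```c
#include <assert.h>
#include <stdbool.h>

/* Illustrative storage comparison, assuming sizeof(bool) == 1 and a
 * 64-bit unsigned long as on x86-64: 64 flags packed one-per-bit versus
 * 64 scattered boolean variables at one byte each. */
static unsigned long packed;      /* 64 flags in  8 bytes */
static bool scattered[64];        /* 64 flags in 64 bytes */
```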

This series, and a number of related series, have been overdue for more
than a year now, partly because of speculative preemption, but also
partly because of attempted scope creep such as this.  Scope creep is
having a catastrophic effect on the productivity of submissions to Xen,
and must not continue like this if the Xen community is to survive.

>
>> @@ -169,11 +166,10 @@ static inline void fpu_fxsave(struct vcpu *v)
>>                         : "=m" (*fpu_ctxt) : "R" (fpu_ctxt) );
>>  
>>          /*
>> -         * AMD CPUs don't save/restore FDP/FIP/FOP unless an exception
>> -         * is pending.
>> +         * Some CPUs don't save/restore FDP/FIP/FOP unless an exception is
>> +         * pending.  The restore code fills in suitable defaults.
>>           */
>> -        if ( !(fpu_ctxt->fsw & 0x0080) &&
>> -             boot_cpu_data.x86_vendor == X86_VENDOR_AMD )
>> +        if ( cpu_bug_fpu_ptr_leak && !(fpu_ctxt->fsw & 0x0080) )
>>              return;
> The comment addition seems a little unmotivated:

Well.  Judging by your reply, it is "too complicated for even Andrew to
follow", so absolutely needs to be clearer.

>  The code here isn't
> about leaking data, but about having valid data to consume (down
> from here).

Ok - I see that now.

>  With this, keying the return to cpu_bug_* also doesn't
> look very nice, but I admit I can't suggest a better alternative
> (other than leaving the vendor check in place and checking the
> X86_FEATURE_RSTR_FP_ERR_PTRS bit).
>
> An option might be to give the construct a different name, without
> "leak" in it (NO_FP_ERR_PTRS?).

This name also isn't ideal, because it's not always the case that there
are no error pointers.

X86_BUG_FPU_PTRS might be best, as it is neutral as to precisely what is
buggy with them.

~Andrew
Jan Beulich Sept. 2, 2019, 2:50 p.m. UTC | #3
On 02.09.2019 16:15, Andrew Cooper wrote:
> On 29/08/2019 13:56, Jan Beulich wrote:
>> On 19.08.2019 20:26, Andrew Cooper wrote:
>>> AMD Pre-Fam17h CPUs "optimise" {F,}X{SAVE,RSTOR} by not saving/restoring
>>> FOP/FIP/FDP if an x87 exception isn't pending.  This causes an information
>>> leak, CVE-2006-1056, and worked around by several OSes, including Xen.  AMD
>>> Fam17h CPUs no longer have this leak, and advertise so in a CPUID bit.
>>>
>>> Introduce the RSTR_FP_ERR_PTRS feature, as specified by AMD, and expose to all
>>> guests by default.  While adjusting libxl's cpuid table, add CLZERO which
>>> looks to have been omitted previously.
>>>
>>> Also introduce an X86_BUG bit to trigger the (F)XRSTOR workaround, and set it
>>> on AMD hardware where RSTR_FP_ERR_PTRS is not advertised.  Optimise the
>>> workaround path by dropping the data-dependent unpredictable conditions which
>>> will evalute to true for all 64bit OSes and most 32bit ones.
>> I definitely don't buy the "all 64bit OSes" part here: Anyone doing
>> full 80-bit FP operations will have to use the FPU, and hence may
>> want to have some unmasked exceptions.
> 
> And all 0 people doing that is still 0.
> 
> Yes I'm being a little facetious, but there is exceedingly little
> software which uses 80-bit FPU operations these days, as it has been
> superseded by SSE.

Just for your amusement, I run such software myself. When computing
fractals the extra bits of precision may matter quite a lot. Granted
I don't fancy running something like this on top of Xen.
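As a concrete sketch of that precision gap, assuming long double maps onto
the 80-bit x87 extended format as on most x86 ABIs (the mapping is
ABI-dependent):

```c
#include <assert.h>
#include <float.h>

/* On most x86 ABIs, long double is the 80-bit x87 extended format: a
 * 64-bit significand versus double's 53 bits.  LDBL_EPSILON is therefore
 * representable in extended precision but rounds away in double. */
static int extended_keeps_it(void)
{
    return 1.0L + LDBL_EPSILON != 1.0L;   /* true by definition of epsilon */
}

static int double_drops_it(void)
{
    volatile double d = 1.0 + (double)LDBL_EPSILON;  /* volatile forces the
                                                        store, i.e. rounding
                                                        to double precision */
    return d == 1.0;
}
```

Those extra significand bits are exactly what deep-zoom fractal iteration
benefits from, hence the FPU (rather than SSE) usage.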

>>  I'm also not sure why you
>> call them "unpredictable": If all (or most) cases match, the branch
>> there could be pretty well predicted (subject of course to capacity).
> 
> Data-dependent branches which have no correlation to pattern history, of
> which this is an example, are frequently mispredicted because they are
> inherently unstable.
> 
> In this case, you're trading off the fact that an unmasked exception is
> basically never pending, against the cost of mispredicts in the context
> switch path.

For

    if ( !(fpu_ctxt->fsw & ~fpu_ctxt->fcw & 0x003f) &&

you're claiming it to be true most of the time. How could the
predictor be mislead if whenever this is encountered the result
of the double & is zero?

>> All in all I'd prefer if the conditions remained in place; my minimal
>> request would be for there to be a comment why there's no evaluation
>> of FSW/FCW.
>>
>>> --- a/xen/arch/x86/i387.c
>>> +++ b/xen/arch/x86/i387.c
>>> @@ -43,20 +43,17 @@ static inline void fpu_fxrstor(struct vcpu *v)
>>>      const typeof(v->arch.xsave_area->fpu_sse) *fpu_ctxt = v->arch.fpu_ctxt;
>>>  
>>>      /*
>>> -     * AMD CPUs don't save/restore FDP/FIP/FOP unless an exception
>>> +     * Some CPUs don't save/restore FDP/FIP/FOP unless an exception
>> Are there any non-AMD CPUs known to have this issue? If not, is
>> there a particular reason you don't say "Some AMD CPUs ..."?
> 
> I'm not aware of any, but leaving it as "Some AMD" might become stale if
> others do surface.
> 
> Information about which CPUs are affected should exclusively be
> determined by the logic which sets cpu_bug_fpu_ptr_leak, which won't be
> stale.

Well, okay then.

>>>       * is pending. Clear the x87 state here by setting it to fixed
>>>       * values. The hypervisor data segment can be sometimes 0 and
>>>       * sometimes new user value. Both should be ok. Use the FPU saved
>>>       * data block as a safe address because it should be in L1.
>>>       */
>>> -    if ( !(fpu_ctxt->fsw & ~fpu_ctxt->fcw & 0x003f) &&
>>> -         boot_cpu_data.x86_vendor == X86_VENDOR_AMD )
>>> -    {
>>> +    if ( cpu_bug_fpu_ptr_leak )
>>>          asm volatile ( "fnclex\n\t"
>>>                         "ffree %%st(7)\n\t" /* clear stack tag */
>>>                         "fildl %0"          /* load to clear state */
>>>                         : : "m" (*fpu_ctxt) );
>> If here and in the respective xsave instance you'd use alternatives
>> patching, I wouldn't mind the use of a X86_BUG_* for this (as made
>> possible by patch 1).
> 
> a) this should probably be a static branch if/when we gain that
> infrastructure, but ...
> 
>> But as said before, just like for synthetic
>> features I strongly think we should use simple boolean variables
>> when using them only in if()-s. Use of the feature(/bug) machinery
>> is needed only to not further complicate alternatives patching.
> 
> ... b)
> 
> I see I'm going to have to repeat myself, which is time I can't really
> afford to waste.
> 
> x86_capabilities is not, and has never been, "just for alternatives". 
> It is also not how it is currently used in Xen.

And I've not been claiming this. Nevertheless my opinion is that it
shouldn't be needlessly abused beyond its main purpose. I.e. deriving
cpu_has_* flags from it because features flags get collected this way
is certainly fine. But introducing artificial extensions is (imo) not.
I thought I had successfully convinced you of not adding synthetic
feature (non-bug) flags either anymore, unless needed for alternatives
patching.

Anyway - in the interest of forward progress, yet without being
convinced at all, I'll (as in so many earlier cases) give in here and
see about acking patch 1 then.

> I also don't agree with the general suggestion because amongst other
> things, there is a factor of 8 storage difference between one extra bit
> in x86_capabilities[] and using scattered booleans.

While a valid argument at the first glance, there's nothing keeping
us from having a feature-flag-independent bitmap. Correct me if I'm
wrong, but I've gained the impression that you want this mainly
because Linux does it this way.

> This series, and a number of related series, have been overdue for more
> than a year now, partly because of speculative preemption, but also
> partly because of attempted scope creep such as this.  Scope creep is
> having a catastrophic effect on the productivity of submissions to Xen,
> and most not continue like this the Xen community is to survive.

Judging from what I guess "scope creep" means, I'd say there would
have been less (rather than more) work for you if you hadn't made
patch 1 a prereq for this one.

As to the more general statement here - I'm afraid we're both guilty
of this, to a varying degree. Yet I think that it's mutually the
case because in such situations we sincerely think that things would
be done better a different way, perhaps in a number of cases e.g. to
avoid having to touch the same code later again.

>>  With this, keying the return to cpu_bug_* also doesn't
>> look very nice, but I admit I can't suggest a better alternative
>> (other than leaving the vendor check in place and checking the
>> X86_FEATURE_RSTR_FP_ERR_PTRS bit).
>>
>> An option might be to give the construct a different name, without
>> "leak" in it (NO_FP_ERR_PTRS?).
> 
> This name also isn't ideal, because its not always that there are no
> error pointers.
> 
> X86_BUG_FPU_PTRS might be best, as it is neutral as to precisely what is
> buggy with them.

Well, okay, let's use that one then and hope we won't learn of a 2nd
FPU_PTRS bug later on.

Jan
Andrew Cooper Sept. 3, 2019, 7:04 p.m. UTC | #4
On 02/09/2019 15:50, Jan Beulich wrote:
>>>  I'm also not sure why you
>>> call them "unpredictable": If all (or most) cases match, the branch
>>> there could be pretty well predicted (subject of course to capacity).
>> Data-dependent branches which have no correlation to pattern history, of
>> which this is an example, are frequently mispredicted because they are
>> inherently unstable.
>>
>> In this case, you're trading off the fact that an unmasked exception is
>> basically never pending, against the cost of mispredicts in the context
>> switch path.
> For
>
>     if ( !(fpu_ctxt->fsw & ~fpu_ctxt->fcw & 0x003f) &&
>
> you're claiming it to be true most of the time. How could the
> predictor be mislead if whenever this is encountered the result
> of the double & is zero?

Because whether it is 0 or not is unrelated to previous history.

As this argument isn't getting anywhere, I'll leave it in for now and do
the perf work to demonstrate the problem at some point when I don't have
15 other things needing doing yesterday.

>>> But as said before, just like for synthetic
>>> features I strongly think we should use simple boolean variables
>>> when using them only in if()-s. Use of the feature(/bug) machinery
>>> is needed only to not further complicate alternatives patching.
>> ... b)
>>
>> I see I'm going to have to repeat myself, which is time I can't really
>> afford to waste.
>>
>> x86_capabilities is not, and has never been, "just for alternatives". 
>> It is also not how it is currently used in Xen.
> And I've not been claiming this.

You literally have, and it is quoted above.

>  Nevertheless my opinion is that it
> shouldn't be needlessly abused beyond its main purpose.

The purpose is to be a collection of bits, stored in a reasonably
efficient manner.  Synthetic features, as well as bugs, are related
information, and very definitely capabilities of the CPU.

Alternatives use the x86_capabilities[] bitmap, which existed for 2
decades previously, because it happens to be in a convenient form.  The
fact that alternatives do use x86_capabilities[] has no bearing on what
is reasonable or appropriate data to store in the bitmap, and it
certainly doesn't mean that data-not-used-for-patching should be purged.

> I thought I had successfully convinced you of not adding synthetic
> feature (non-bug) flags either anymore, unless needed for alternatives
> patching.

No.

I don't think you realise quite how infuriating it was trying to
meet the embargos for speculative issues.  We had series which were tens
of patches long being invasively rewritten leading up to the embargo.
Some requests were legitimate - I'm not going to pretend otherwise - but
some really were minutiae like this, which didn't help the situation.

There are two big series outstanding, MSR_VIRT_SPEC_CTRL and CPUID
Policy, both of which are getting to be reprehensibly late, and both of
which had proper embargos I was trying to meet.

There was no way VIRT_SPEC_CTRL was going to meet the SSBD embargo
because of the delay getting the spec together, but running Xen on AMD
hardware is currently embarrassing and slow due to guests falling back
to native means and hitting:

(XEN) emul-priv-op.c:1113:d0v2 Domain attempted WRMSR c0011020 from
0x0006404000000000 to 0x0006404000000400

on their context switch path, and doing a good job of filling /var/log/
in minutes.

CPUID policy is even worse.  It's not currently safe to migrate VMs on
Intel hardware, because we can't level MSR_ARCH_CAPS.RSBA across the
migration pool, and this is something which really should have met the
L1TF embargo a year ago, but which was stopped dead in its tracks
because I couldn't even argue in public as to why it needed to be done
certain ways.  It also means that Xen is crippled on current-generation
Intel hardware.

The sad fact is that it is rather too easy to put off finishing that
work when there is other just-as-important work to do, and the thought
of arguing over further minutia on vN+1 is occasionally too exhausting
to contemplate.

> Anyway - in the interest of forward progress, yet without being
> convinced at all, I'll (as in so many earlier cases) give in here and
> see about acking patch 1 then.

Thank you.

>
>> I also don't agree with the general suggestion because amongst other
>> things, there is a factor of 8 storage difference between one extra bit
>> in x86_capabilities[] and using scattered booleans.
> While a valid argument at the first glance, there's nothing keeping
> us from having a feature flag independent bitmap. Correct my if I'm
> wrong, but I've gained the impression that you want this mainly
> because Linux does it this way.

To a first approximation, yes - this is a construct we inherited from
Linux, and I'm continuing to use it in the way Linux uses it.

>
>>>  With this, keying the return to cpu_bug_* also doesn't
>>> look very nice, but I admit I can't suggest a better alternative
>>> (other than leaving the vendor check in place and checking the
>>> X86_FEATURE_RSTR_FP_ERR_PTRS bit).
>>>
>>> An option might be to give the construct a different name, without
>>> "leak" in it (NO_FP_ERR_PTRS?).
>> This name also isn't ideal, because its not always that there are no
>> error pointers.
>>
>> X86_BUG_FPU_PTRS might be best, as it is neutral as to precisely what is
>> buggy with them.
> Well, okay, let's use that one then and hope we won't learn of a 2nd
> FPU_PTRS bug later on.

Ok.

~Andrew

Patch

diff --git a/tools/libxl/libxl_cpuid.c b/tools/libxl/libxl_cpuid.c
index a8d07fac50..acc92fd46c 100644
--- a/tools/libxl/libxl_cpuid.c
+++ b/tools/libxl/libxl_cpuid.c
@@ -256,7 +256,10 @@  int libxl_cpuid_parse_config(libxl_cpuid_policy_list *cpuid, const char* str)
 
         {"invtsc",       0x80000007, NA, CPUID_REG_EDX,  8,  1},
 
+        {"clzero",       0x80000008, NA, CPUID_REG_EBX,  0,  1},
+        {"rstr-fp-err-ptrs", 0x80000008, NA, CPUID_REG_EBX, 2, 1},
         {"ibpb",         0x80000008, NA, CPUID_REG_EBX, 12,  1},
+
         {"nc",           0x80000008, NA, CPUID_REG_ECX,  0,  8},
         {"apicidsize",   0x80000008, NA, CPUID_REG_ECX, 12,  4},
 
diff --git a/tools/misc/xen-cpuid.c b/tools/misc/xen-cpuid.c
index b0db0525a9..04cdd9aa95 100644
--- a/tools/misc/xen-cpuid.c
+++ b/tools/misc/xen-cpuid.c
@@ -145,6 +145,7 @@  static const char *const str_e7d[32] =
 static const char *const str_e8b[32] =
 {
     [ 0] = "clzero",
+    [ 2] = "rstr-fp-err-ptrs",
 
     [12] = "ibpb",
 };
diff --git a/xen/arch/x86/cpu/amd.c b/xen/arch/x86/cpu/amd.c
index a2f83c79a5..463f9776c7 100644
--- a/xen/arch/x86/cpu/amd.c
+++ b/xen/arch/x86/cpu/amd.c
@@ -580,6 +580,13 @@  static void init_amd(struct cpuinfo_x86 *c)
 	}
 
 	/*
+	 * Older AMD CPUs don't save/load FOP/FIP/FDP unless an FPU exception
+	 * is pending.  Xen works around this at (F)XRSTOR time.
+	 */
+	if ( !cpu_has(c, X86_FEATURE_RSTR_FP_ERR_PTRS) )
+		setup_force_cpu_cap(X86_BUG_FPU_PTR_LEAK);
+
+	/*
 	 * Attempt to set lfence to be Dispatch Serialising.  This MSR almost
 	 * certainly isn't virtualised (and Xen at least will leak the real
 	 * value in but silently discard writes), as well as being per-core
diff --git a/xen/arch/x86/i387.c b/xen/arch/x86/i387.c
index 88178485cb..82dbc461c3 100644
--- a/xen/arch/x86/i387.c
+++ b/xen/arch/x86/i387.c
@@ -43,20 +43,17 @@  static inline void fpu_fxrstor(struct vcpu *v)
     const typeof(v->arch.xsave_area->fpu_sse) *fpu_ctxt = v->arch.fpu_ctxt;
 
     /*
-     * AMD CPUs don't save/restore FDP/FIP/FOP unless an exception
+     * Some CPUs don't save/restore FDP/FIP/FOP unless an exception
      * is pending. Clear the x87 state here by setting it to fixed
      * values. The hypervisor data segment can be sometimes 0 and
      * sometimes new user value. Both should be ok. Use the FPU saved
      * data block as a safe address because it should be in L1.
      */
-    if ( !(fpu_ctxt->fsw & ~fpu_ctxt->fcw & 0x003f) &&
-         boot_cpu_data.x86_vendor == X86_VENDOR_AMD )
-    {
+    if ( cpu_bug_fpu_ptr_leak )
         asm volatile ( "fnclex\n\t"
                        "ffree %%st(7)\n\t" /* clear stack tag */
                        "fildl %0"          /* load to clear state */
                        : : "m" (*fpu_ctxt) );
-    }
 
     /*
      * FXRSTOR can fault if passed a corrupted data block. We handle this
@@ -169,11 +166,10 @@  static inline void fpu_fxsave(struct vcpu *v)
                        : "=m" (*fpu_ctxt) : "R" (fpu_ctxt) );
 
         /*
-         * AMD CPUs don't save/restore FDP/FIP/FOP unless an exception
-         * is pending.
+         * Some CPUs don't save/restore FDP/FIP/FOP unless an exception is
+         * pending.  The restore code fills in suitable defaults.
          */
-        if ( !(fpu_ctxt->fsw & 0x0080) &&
-             boot_cpu_data.x86_vendor == X86_VENDOR_AMD )
+        if ( cpu_bug_fpu_ptr_leak && !(fpu_ctxt->fsw & 0x0080) )
             return;
 
         /*
diff --git a/xen/arch/x86/xstate.c b/xen/arch/x86/xstate.c
index 3293ef834f..fd3c0c5a36 100644
--- a/xen/arch/x86/xstate.c
+++ b/xen/arch/x86/xstate.c
@@ -369,15 +369,13 @@  void xrstor(struct vcpu *v, uint64_t mask)
     unsigned int faults, prev_faults;
 
     /*
-     * AMD CPUs don't save/restore FDP/FIP/FOP unless an exception
+     * Some CPUs don't save/restore FDP/FIP/FOP unless an exception
      * is pending. Clear the x87 state here by setting it to fixed
      * values. The hypervisor data segment can be sometimes 0 and
      * sometimes new user value. Both should be ok. Use the FPU saved
      * data block as a safe address because it should be in L1.
      */
-    if ( (mask & ptr->xsave_hdr.xstate_bv & X86_XCR0_FP) &&
-         !(ptr->fpu_sse.fsw & ~ptr->fpu_sse.fcw & 0x003f) &&
-         boot_cpu_data.x86_vendor == X86_VENDOR_AMD )
+    if ( cpu_bug_fpu_ptr_leak )
         asm volatile ( "fnclex\n\t"        /* clear exceptions */
                        "ffree %%st(7)\n\t" /* clear stack tag */
                        "fildl %0"          /* load to clear state */
diff --git a/xen/include/asm-x86/cpufeature.h b/xen/include/asm-x86/cpufeature.h
index 906dd59c4b..5d7b819314 100644
--- a/xen/include/asm-x86/cpufeature.h
+++ b/xen/include/asm-x86/cpufeature.h
@@ -136,6 +136,9 @@ 
 
 #define cpu_has_msr_tsc_aux     (cpu_has_rdtscp || cpu_has_rdpid)
 
+/* Bugs. */
+#define cpu_bug_fpu_ptr_leak    boot_cpu_has(X86_BUG_FPU_PTR_LEAK)
+
 enum _cache_type {
     CACHE_TYPE_NULL = 0,
     CACHE_TYPE_DATA = 1,
diff --git a/xen/include/asm-x86/cpufeatures.h b/xen/include/asm-x86/cpufeatures.h
index ab3650f73b..afb861f588 100644
--- a/xen/include/asm-x86/cpufeatures.h
+++ b/xen/include/asm-x86/cpufeatures.h
@@ -43,5 +43,7 @@  XEN_CPUFEATURE(SC_VERW_IDLE,      X86_SYNTH(25)) /* VERW used by Xen for idle */
 #define X86_NR_BUG 1
 #define X86_BUG(x) ((FSCAPINTS + X86_NR_SYNTH) * 32 + (x))
 
+#define X86_BUG_FPU_PTR_LEAK      X86_BUG( 0) /* (F)XRSTOR doesn't load FOP/FIP/FDP. */
+
 /* Total number of capability words, inc synth and bug words. */
 #define NCAPINTS (FSCAPINTS + X86_NR_SYNTH + X86_NR_BUG) /* N 32-bit words worth of info */
diff --git a/xen/include/public/arch-x86/cpufeatureset.h b/xen/include/public/arch-x86/cpufeatureset.h
index e2c82a4554..babaf4b375 100644
--- a/xen/include/public/arch-x86/cpufeatureset.h
+++ b/xen/include/public/arch-x86/cpufeatureset.h
@@ -243,6 +243,7 @@  XEN_CPUFEATURE(EFRO,          7*32+10) /*   APERF/MPERF Read Only interface */
 
 /* AMD-defined CPU features, CPUID level 0x80000008.ebx, word 8 */
 XEN_CPUFEATURE(CLZERO,        8*32+ 0) /*A  CLZERO instruction */
+XEN_CPUFEATURE(RSTR_FP_ERR_PTRS, 8*32+ 2) /*A  (F)X{SAVE,RSTOR} always saves/restores FPU Error pointers. */
 XEN_CPUFEATURE(IBPB,          8*32+12) /*A  IBPB support only (no IBRS, used by AMD) */
 
 /* Intel-defined CPU features, CPUID level 0x00000007:0.edx, word 9 */