[4/8] x86/mm: Drop bogus cacheability logic in update_xen_mappings()

Message ID 20211130100445.31156-5-andrew.cooper3@citrix.com (mailing list archive)
State New, archived
Series x86: Support for __ro_after_init

Commit Message

Andrew Cooper Nov. 30, 2021, 10:04 a.m. UTC
There is no circumstance under which any part of the Xen image in memory wants
to have any cacheability other than Write Back.

Furthermore, unmapping random pieces of Xen like that comes with a non-trivial
risk of a Triple Fault, or other fatal condition.  Even if it appears to work,
an NMI or interrupt has a far wider reach into Xen mappings than calling
map_pages_to_xen() thrice.

Simplify the safety checking to a BUG_ON().  It is substantially more correct
and robust than following either of the unlikely(alias) paths.

Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
---
CC: Jan Beulich <JBeulich@suse.com>
CC: Roger Pau Monné <roger.pau@citrix.com>
CC: Wei Liu <wl@xen.org>

I'm half tempted to drop the check entirely, but in that case it would be
better to inline the result into the two callers.
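
For illustration only (not part of this patch), dropping the check and
inlining at a caller might look something like the following sketch, reusing
only identifiers already present in this function:

    /* Sketch: inlined at a caller, with the alias check dropped entirely. */
    if ( !boot_cpu_has(X86_FEATURE_XEN_SELFSNOOP) )
        err = map_pages_to_xen((unsigned long)mfn_to_virt(mfn), _mfn(mfn), 1,
                               PAGE_HYPERVISOR |
                               cacheattr_to_pte_flags(cacheattr));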
---
 xen/arch/x86/mm.c | 21 +++++++++------------
 1 file changed, 9 insertions(+), 12 deletions(-)

Comments

Andrew Cooper Nov. 30, 2021, 1:11 p.m. UTC | #1
On 30/11/2021 10:04, Andrew Cooper wrote:
> There is no circumstance under which any part of the Xen image in memory wants
> to have any cacheability other than Write Back.
>
> Furthermore, unmapping random pieces of Xen like that comes with a non-trivial
> risk of a Triple Fault, or other fatal condition.  Even if it appears to work,
> an NMI or interrupt has a far wider reach into Xen mappings than calling
> map_pages_to_xen() thrice.
>
> Simplify the safety checking to a BUG_ON().  It is substantially more correct
> and robust than following either of the unlikely(alias) paths.
>
> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
> ---
> CC: Jan Beulich <JBeulich@suse.com>
> CC: Roger Pau Monné <roger.pau@citrix.com>
> CC: Wei Liu <wl@xen.org>
>
> I'm half tempted to drop the check entirely, but in that case it would be
> better to inline the result into the two callers.
> ---
>  xen/arch/x86/mm.c | 21 +++++++++------------
>  1 file changed, 9 insertions(+), 12 deletions(-)
>
> diff --git a/xen/arch/x86/mm.c b/xen/arch/x86/mm.c
> index 4d799032dc82..9bd4e5cc1d2f 100644
> --- a/xen/arch/x86/mm.c
> +++ b/xen/arch/x86/mm.c
> @@ -785,24 +785,21 @@ bool is_iomem_page(mfn_t mfn)
>  
>  static int update_xen_mappings(unsigned long mfn, unsigned int cacheattr)
>  {
> -    int err = 0;
>      bool alias = mfn >= PFN_DOWN(xen_phys_start) &&
>           mfn < PFN_UP(xen_phys_start + xen_virt_end - XEN_VIRT_START);
> -    unsigned long xen_va =
> -        XEN_VIRT_START + ((mfn - PFN_DOWN(xen_phys_start)) << PAGE_SHIFT);
> +
> +    /*
> +     * Something is catastrophically broken if someone is trying to change the
> +     * cacheability of Xen in memory...
> +     */
> +    BUG_ON(alias);
>  
>      if ( boot_cpu_has(X86_FEATURE_XEN_SELFSNOOP) )
>          return 0;
>  
> -    if ( unlikely(alias) && cacheattr )
> -        err = map_pages_to_xen(xen_va, _mfn(mfn), 1, 0);
> -    if ( !err )
> -        err = map_pages_to_xen((unsigned long)mfn_to_virt(mfn), _mfn(mfn), 1,
> -                     PAGE_HYPERVISOR | cacheattr_to_pte_flags(cacheattr));
> -    if ( unlikely(alias) && !cacheattr && !err )
> -        err = map_pages_to_xen(xen_va, _mfn(mfn), 1, PAGE_HYPERVISOR);
> -
> -    return err;
> +    return map_pages_to_xen(
> +        (unsigned long)mfn_to_virt(mfn), _mfn(mfn), 1,
> +        PAGE_HYPERVISOR | cacheattr_to_pte_flags(cacheattr));

In light of the discussion on patch 1, this is no longer safe.  The
alias calculation includes 2M of arbitrary guest memory, and in
principle there are legitimate reasons for a guest to want to map RAM as
WC (e.g. GPU pagetables) or, in the future, WP (in-place RAM
encryption/decryption).

The gross hack fix would be "mfn >= PFN_DOWN(xen_phys_start + MB(2))",
but this is screaming for a helper.  xen_in_range() is part-way
there, but is an O(n) loop over the regions.
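
As a sketch of such an O(1) helper (the name is invented for illustration,
not an existing API; bounds follow the alias calculation above plus the 2M
adjustment):

    static bool mfn_overlaps_xen_image(unsigned long mfn)
    {
        /* The first 2M of the alias is arbitrary guest memory, not Xen. */
        return mfn >= PFN_DOWN(xen_phys_start + MB(2)) &&
               mfn <  PFN_UP(xen_phys_start + xen_virt_end - XEN_VIRT_START);
    }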

~Andrew
Andrew Cooper Nov. 30, 2021, 2:56 p.m. UTC | #2
On 30/11/2021 13:11, Andrew Cooper wrote:
> On 30/11/2021 10:04, Andrew Cooper wrote:
>> There is no circumstance under which any part of the Xen image in memory wants
>> to have any cacheability other than Write Back.
>>
>> Furthermore, unmapping random pieces of Xen like that comes with a non-trivial
>> risk of a Triple Fault, or other fatal condition.  Even if it appears to work,
>> an NMI or interrupt has a far wider reach into Xen mappings than calling
>> map_pages_to_xen() thrice.
>>
>> Simplify the safety checking to a BUG_ON().  It is substantially more correct
>> and robust than following either of the unlikely(alias) paths.
>>
>> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
>> ---
>> CC: Jan Beulich <JBeulich@suse.com>
>> CC: Roger Pau Monné <roger.pau@citrix.com>
>> CC: Wei Liu <wl@xen.org>
>>
>> I'm half tempted to drop the check entirely, but in that case it would be
>> better to inline the result into the two callers.
>> ---
>>  xen/arch/x86/mm.c | 21 +++++++++------------
>>  1 file changed, 9 insertions(+), 12 deletions(-)
>>
>> diff --git a/xen/arch/x86/mm.c b/xen/arch/x86/mm.c
>> index 4d799032dc82..9bd4e5cc1d2f 100644
>> --- a/xen/arch/x86/mm.c
>> +++ b/xen/arch/x86/mm.c
>> @@ -785,24 +785,21 @@ bool is_iomem_page(mfn_t mfn)
>>  
>>  static int update_xen_mappings(unsigned long mfn, unsigned int cacheattr)
>>  {
>> -    int err = 0;
>>      bool alias = mfn >= PFN_DOWN(xen_phys_start) &&
>>           mfn < PFN_UP(xen_phys_start + xen_virt_end - XEN_VIRT_START);
>> -    unsigned long xen_va =
>> -        XEN_VIRT_START + ((mfn - PFN_DOWN(xen_phys_start)) << PAGE_SHIFT);
>> +
>> +    /*
>> +     * Something is catastrophically broken if someone is trying to change the
>> +     * cacheability of Xen in memory...
>> +     */
>> +    BUG_ON(alias);
>>  
>>      if ( boot_cpu_has(X86_FEATURE_XEN_SELFSNOOP) )
>>          return 0;
>>  
>> -    if ( unlikely(alias) && cacheattr )
>> -        err = map_pages_to_xen(xen_va, _mfn(mfn), 1, 0);
>> -    if ( !err )
>> -        err = map_pages_to_xen((unsigned long)mfn_to_virt(mfn), _mfn(mfn), 1,
>> -                     PAGE_HYPERVISOR | cacheattr_to_pte_flags(cacheattr));
>> -    if ( unlikely(alias) && !cacheattr && !err )
>> -        err = map_pages_to_xen(xen_va, _mfn(mfn), 1, PAGE_HYPERVISOR);
>> -
>> -    return err;
>> +    return map_pages_to_xen(
>> +        (unsigned long)mfn_to_virt(mfn), _mfn(mfn), 1,
>> +        PAGE_HYPERVISOR | cacheattr_to_pte_flags(cacheattr));
> In light of the discussion on patch 1, this is no longer safe.  The
> alias calculation includes 2M of arbitrary guest memory, and in
> principle there are legitimate reasons for a guest to want to map RAM as
> WC (e.g. GPU pagetables) or, in the future, WP (in-place RAM
> encryption/decryption).
>
> The gross hack fix would be "mfn >= PFN_DOWN(xen_phys_start + MB(2))",
> but this is screaming for a helper.  xen_in_range() is part-way
> there, but is an O(n) loop over the regions.

Further problems: there is also the init section in the middle of Xen, which
is freed after boot and therefore also contains arbitrary guest memory.  The
old algorithm was worse in that respect, because it would reinstate the nuked
init mappings.
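
Extending the earlier sketch (still illustration only; __init_begin and
__init_end are assumed to be the usual linker symbols bounding the freed
init section), the helper could exclude that range as well:

    static bool mfn_overlaps_xen_image(unsigned long mfn)
    {
        unsigned long va;

        /* Outside the alias entirely, or within its first 2M of
         * arbitrary guest memory: not part of the Xen image. */
        if ( mfn <  PFN_DOWN(xen_phys_start + MB(2)) ||
             mfn >= PFN_UP(xen_phys_start + xen_virt_end - XEN_VIRT_START) )
            return false;

        va = XEN_VIRT_START + ((mfn - PFN_DOWN(xen_phys_start)) << PAGE_SHIFT);

        /* MFNs backing the freed init section are guest memory too. */
        return va < (unsigned long)__init_begin ||
               va >= (unsigned long)__init_end;
    }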

~Andrew
Patch

diff --git a/xen/arch/x86/mm.c b/xen/arch/x86/mm.c
index 4d799032dc82..9bd4e5cc1d2f 100644
--- a/xen/arch/x86/mm.c
+++ b/xen/arch/x86/mm.c
@@ -785,24 +785,21 @@ bool is_iomem_page(mfn_t mfn)
 
 static int update_xen_mappings(unsigned long mfn, unsigned int cacheattr)
 {
-    int err = 0;
     bool alias = mfn >= PFN_DOWN(xen_phys_start) &&
          mfn < PFN_UP(xen_phys_start + xen_virt_end - XEN_VIRT_START);
-    unsigned long xen_va =
-        XEN_VIRT_START + ((mfn - PFN_DOWN(xen_phys_start)) << PAGE_SHIFT);
+
+    /*
+     * Something is catastrophically broken if someone is trying to change the
+     * cacheability of Xen in memory...
+     */
+    BUG_ON(alias);
 
     if ( boot_cpu_has(X86_FEATURE_XEN_SELFSNOOP) )
         return 0;
 
-    if ( unlikely(alias) && cacheattr )
-        err = map_pages_to_xen(xen_va, _mfn(mfn), 1, 0);
-    if ( !err )
-        err = map_pages_to_xen((unsigned long)mfn_to_virt(mfn), _mfn(mfn), 1,
-                     PAGE_HYPERVISOR | cacheattr_to_pte_flags(cacheattr));
-    if ( unlikely(alias) && !cacheattr && !err )
-        err = map_pages_to_xen(xen_va, _mfn(mfn), 1, PAGE_HYPERVISOR);
-
-    return err;
+    return map_pages_to_xen(
+        (unsigned long)mfn_to_virt(mfn), _mfn(mfn), 1,
+        PAGE_HYPERVISOR | cacheattr_to_pte_flags(cacheattr));
 }
 
 #ifndef NDEBUG