Partially revert "x86/mm: Clean IOMMU flags from p2m-pt code"

Message ID: 20190828133229.86085-1-roger.pau@citrix.com (mailing list archive)
State: Superseded
Series: Partially revert "x86/mm: Clean IOMMU flags from p2m-pt code"

Commit Message

Roger Pau Monné Aug. 28, 2019, 1:32 p.m. UTC
This partially reverts commit
854a49a7486a02edae5b3e53617bace526e9c1b1 by re-adding the logic that
propagates changes made to the domain physmap by p2m_pt_set_entry into
the iommu page tables. Without this logic, changes to the guest physmap
are not propagated to the iommu, which can leave stale iommu entries
behind (a potential data leak) or fail to add new entries.

Note that this commit doesn't re-introduce iommu flags into the cpu page
table entries, since the logic to add/remove entries in the iommu page
tables is based solely on the p2m type and the mfn.

Fixes: 854a49a7486a02 ('x86/mm: Clean IOMMU flags from p2m-pt code')
Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
---
Cc: Alexandru Stefan ISAILA <aisaila@bitdefender.com>
---
 xen/arch/x86/mm/p2m-pt.c | 50 ++++++++++++++++++++++++++++++++++++++--
 1 file changed, 48 insertions(+), 2 deletions(-)
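
For context, the iommu flags the message above refers to are derived
from the p2m type (and, for direct MMIO, from the mfn) along the lines
of the following simplified sketch of p2m_get_iommu_flags(); the exact
set of p2m types handled varies between Xen versions:

static unsigned int p2m_get_iommu_flags(p2m_type_t p2mt, mfn_t mfn)
{
    switch ( p2mt )
    {
    case p2m_ram_rw:
    case p2m_grant_map_rw:
    case p2m_ram_logdirty:
    case p2m_map_foreign:
        /* Normal RAM and foreign maps: full read/write access. */
        return IOMMUF_readable | IOMMUF_writable;

    case p2m_ram_ro:
    case p2m_grant_map_ro:
        /* Read-only RAM and grants: read access only. */
        return IOMMUF_readable;

    case p2m_mmio_direct:
        /* Direct MMIO is writable unless the mfn is in a read-only range. */
        return IOMMUF_readable |
               (rangeset_contains_singleton(mmio_ro_ranges, mfn_x(mfn))
                ? 0 : IOMMUF_writable);

    default:
        /* All other types get no IOMMU mapping at all. */
        return 0;
    }
}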

Comments

Jan Beulich Aug. 28, 2019, 1:48 p.m. UTC | #1
On 28.08.2019 15:32, Roger Pau Monne wrote:
> This partially reverts commit
> 854a49a7486a02edae5b3e53617bace526e9c1b1 by re-adding the logic that
> propagates changes to the domain physmap done by p2m_pt_set_entry into
> the iommu page tables. Without this logic changes to the guest physmap
> are not propagated to the iommu, leaving stale iommu entries that can
> leak data, or failing to add new entries.

Oh, indeed - how did I miss this while reviewing?

> --- a/xen/arch/x86/mm/p2m-pt.c
> +++ b/xen/arch/x86/mm/p2m-pt.c
> @@ -35,6 +35,7 @@
>  #include <asm/p2m.h>
>  #include <asm/mem_sharing.h>
>  #include <asm/hvm/nestedhvm.h>
> +#include <asm/hvm/svm/amd-iommu-proto.h>

I guess you don't really need to re-add this, as ...

> @@ -640,9 +671,24 @@ p2m_pt_set_entry(struct p2m_domain *p2m, gfn_t gfn_, mfn_t mfn,
>           && (gfn + (1UL << page_order) - 1 > p2m->max_mapped_pfn) )
>          p2m->max_mapped_pfn = gfn + (1UL << page_order) - 1;
>  
> +    if ( iommu_enabled && (iommu_old_flags != iommu_pte_flags ||
> +                           old_mfn != mfn_x(mfn)) )
> +    {
> +        ASSERT(rc == 0);
> +
> +        if ( need_iommu_pt_sync(p2m->domain) )
> +            rc = iommu_pte_flags ?
> +                iommu_legacy_map(d, _dfn(gfn), mfn, page_order,
> +                                 iommu_pte_flags) :
> +                iommu_legacy_unmap(d, _dfn(gfn), page_order);
> +        else if ( iommu_use_hap_pt(d) && iommu_old_flags )
> +            amd_iommu_flush_pages(p2m->domain, gfn, page_order);

... I don't think the "else if()" needs restoring (with there not
being any sharing of page tables for AMD/SVM).

Jan
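
For reference, a sketch of the hunk above with that suggestion applied:
on Jan's premise that AMD/SVM no longer shares page tables with the
IOMMU, iommu_use_hap_pt(d) can never be true on this path, so the flush
branch is dead and the amd-iommu-proto.h include becomes unnecessary:

    if ( iommu_enabled && (iommu_old_flags != iommu_pte_flags ||
                           old_mfn != mfn_x(mfn)) )
    {
        ASSERT(rc == 0);

        if ( need_iommu_pt_sync(p2m->domain) )
            rc = iommu_pte_flags ?
                iommu_legacy_map(d, _dfn(gfn), mfn, page_order,
                                 iommu_pte_flags) :
                iommu_legacy_unmap(d, _dfn(gfn), page_order);
    }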
Alexandru Stefan ISAILA Aug. 28, 2019, 2:55 p.m. UTC | #2
On 28.08.2019 16:32, Roger Pau Monne wrote:
> This partially reverts commit
> 854a49a7486a02edae5b3e53617bace526e9c1b1 by re-adding the logic that
> propagates changes to the domain physmap done by p2m_pt_set_entry into
> the iommu page tables. Without this logic changes to the guest physmap
> are not propagated to the iommu, leaving stale iommu entries that can
> leak data, or failing to add new entries.
> 
> Note that this commit doesn't re-introduce iommu flags to the cpu page
> table entries, since the logic to add/remove entries to the iommu page
> tables is based on the p2m type and the mfn.

This sounds good, but is it still safe to use the IOMMU flag space for
another purpose?

Alex
Roger Pau Monné Aug. 28, 2019, 3:01 p.m. UTC | #3
On Wed, Aug 28, 2019 at 02:55:58PM +0000, Alexandru Stefan ISAILA wrote:
> On 28.08.2019 16:32, Roger Pau Monne wrote:
> > This partially reverts commit
> > 854a49a7486a02edae5b3e53617bace526e9c1b1 by re-adding the logic that
> > propagates changes to the domain physmap done by p2m_pt_set_entry into
> > the iommu page tables. Without this logic changes to the guest physmap
> > are not propagated to the iommu, leaving stale iommu entries that can
> > leak data, or failing to add new entries.
> > 
> > Note that this commit doesn't re-introduce iommu flags to the cpu page
> > table entries, since the logic to add/remove entries to the iommu page
> > tables is based on the p2m type and the mfn.
> 
> This sounds good but is it still safe to use the IOMMU flag space with 
> another purpose?

Sure, please note that this partial revert doesn't add any extra flags
to the CPU page table entries, as p2m_add_iommu_flags is not
re-introduced.

Roger.
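
For reference, the helper mentioned above, removed by 854a49a7486a02
and deliberately not brought back, mirrored IOMMU permissions into the
software-available bits of CPU p2m entries roughly as follows (a
reconstruction of the pre-revert code; helper names are from that era
and may differ in other versions):

static void p2m_add_iommu_flags(l1_pgentry_t *p2m_entry,
                                unsigned int nlevel, unsigned int flags)
{
    /* Only relevant when CPU and IOMMU page tables were shared:
     * encode IOMMU permissions into the available PTE bits. */
    if ( iommu_hap_pt_share )
        l1e_add_flags(*p2m_entry, iommu_nlevel_to_flags(nlevel, flags));
}

Since nothing sets those bits any more, the flag space in the CPU page
table entries stays free for other uses, which is the point of the
answer above.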
Patch

diff --git a/xen/arch/x86/mm/p2m-pt.c b/xen/arch/x86/mm/p2m-pt.c
index 3a0a500d66..4526998b86 100644
--- a/xen/arch/x86/mm/p2m-pt.c
+++ b/xen/arch/x86/mm/p2m-pt.c
@@ -35,6 +35,7 @@ 
 #include <asm/p2m.h>
 #include <asm/mem_sharing.h>
 #include <asm/hvm/nestedhvm.h>
+#include <asm/hvm/svm/amd-iommu-proto.h>
 
 #include "mm-locks.h"
 
@@ -508,7 +509,18 @@ p2m_pt_set_entry(struct p2m_domain *p2m, gfn_t gfn_, mfn_t mfn,
     l2_pgentry_t l2e_content;
     l3_pgentry_t l3e_content;
     int rc;
-    unsigned int flags;
+    unsigned int iommu_pte_flags = p2m_get_iommu_flags(p2mt, mfn);
+    /*
+     * old_mfn and iommu_old_flags control possible flush/update needs on the
+     * IOMMU: We need to flush when MFN or flags (i.e. permissions) change.
+     * iommu_old_flags being initialized to zero covers the case of the entry
+     * getting replaced being a non-present (leaf or intermediate) one. For
+     * present leaf entries the real value will get calculated below, while
+     * for present intermediate entries ~0 (guaranteed != iommu_pte_flags)
+     * will be used (to cover all cases of what the leaf entries underneath
+     * the intermediate one might be).
+     */
+    unsigned int flags, iommu_old_flags = 0;
     unsigned long old_mfn = mfn_x(INVALID_MFN);
 
     if ( !sve )
@@ -556,9 +568,17 @@ p2m_pt_set_entry(struct p2m_domain *p2m, gfn_t gfn_, mfn_t mfn,
         if ( flags & _PAGE_PRESENT )
         {
             if ( flags & _PAGE_PSE )
+            {
                 old_mfn = l1e_get_pfn(*p2m_entry);
+                iommu_old_flags =
+                    p2m_get_iommu_flags(p2m_flags_to_type(flags),
+                                        _mfn(old_mfn));
+            }
             else
+            {
+                iommu_old_flags = ~0;
                 intermediate_entry = *p2m_entry;
+            }
         }
 
         check_entry(mfn, p2mt, p2m_flags_to_type(flags), page_order);
@@ -594,6 +614,9 @@ p2m_pt_set_entry(struct p2m_domain *p2m, gfn_t gfn_, mfn_t mfn,
                                    0, L1_PAGETABLE_ENTRIES);
         ASSERT(p2m_entry);
         old_mfn = l1e_get_pfn(*p2m_entry);
+        iommu_old_flags =
+            p2m_get_iommu_flags(p2m_flags_to_type(l1e_get_flags(*p2m_entry)),
+                                _mfn(old_mfn));
 
         if ( mfn_valid(mfn) || p2m_allows_invalid_mfn(p2mt) )
             entry_content = p2m_l1e_from_pfn(mfn_x(mfn),
@@ -617,9 +640,17 @@ p2m_pt_set_entry(struct p2m_domain *p2m, gfn_t gfn_, mfn_t mfn,
         if ( flags & _PAGE_PRESENT )
         {
             if ( flags & _PAGE_PSE )
+            {
                 old_mfn = l1e_get_pfn(*p2m_entry);
+                iommu_old_flags =
+                    p2m_get_iommu_flags(p2m_flags_to_type(flags),
+                                        _mfn(old_mfn));
+            }
             else
+            {
+                iommu_old_flags = ~0;
                 intermediate_entry = *p2m_entry;
+            }
         }
 
         check_entry(mfn, p2mt, p2m_flags_to_type(flags), page_order);
@@ -640,9 +671,24 @@ p2m_pt_set_entry(struct p2m_domain *p2m, gfn_t gfn_, mfn_t mfn,
          && (gfn + (1UL << page_order) - 1 > p2m->max_mapped_pfn) )
         p2m->max_mapped_pfn = gfn + (1UL << page_order) - 1;
 
+    if ( iommu_enabled && (iommu_old_flags != iommu_pte_flags ||
+                           old_mfn != mfn_x(mfn)) )
+    {
+        ASSERT(rc == 0);
+
+        if ( need_iommu_pt_sync(p2m->domain) )
+            rc = iommu_pte_flags ?
+                iommu_legacy_map(d, _dfn(gfn), mfn, page_order,
+                                 iommu_pte_flags) :
+                iommu_legacy_unmap(d, _dfn(gfn), page_order);
+        else if ( iommu_use_hap_pt(d) && iommu_old_flags )
+            amd_iommu_flush_pages(p2m->domain, gfn, page_order);
+    }
+
     /*
      * Free old intermediate tables if necessary.  This has to be the
-     * last thing we do so as to avoid a potential use-after-free.
+     * last thing we do, after removal from the IOMMU tables, so as to
+     * avoid a potential use-after-free.
      */
     if ( l1e_get_flags(intermediate_entry) & _PAGE_PRESENT )
         p2m_free_entry(p2m, &intermediate_entry, page_order);
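
To summarize the ordering the last two hunks establish, condensed from
the code above (a sketch, not verbatim):

    /* 1. The CPU p2m entry has already been updated earlier in the
     *    function. */

    /* 2. Sync the IOMMU, so no IOMMU entry can still point through
     *    the old intermediate table. */
    if ( iommu_enabled && (iommu_old_flags != iommu_pte_flags ||
                           old_mfn != mfn_x(mfn)) )
        /* iommu_legacy_map()/iommu_legacy_unmap() or flush, as above */;

    /* 3. Only now free the old intermediate table; freeing before
     *    step 2 would let the IOMMU keep walking freed (and possibly
     *    reused) pages: the use-after-free the comment guards
     *    against. */
    if ( l1e_get_flags(intermediate_entry) & _PAGE_PRESENT )
        p2m_free_entry(p2m, &intermediate_entry, page_order);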