
[v2] Partially revert "x86/mm: Clean IOMMU flags from p2m-pt code"

Message ID 20190829084918.89608-1-roger.pau@citrix.com (mailing list archive)

Commit Message

Roger Pau Monné Aug. 29, 2019, 8:49 a.m. UTC
This partially reverts commit
854a49a7486a02edae5b3e53617bace526e9c1b1 by re-adding the logic that
propagates changes made to the domain physmap by p2m_pt_set_entry into
the IOMMU page tables. Without this logic, changes to the guest physmap
are not propagated to the IOMMU, leaving stale IOMMU entries that can
leak data, or failing to add new ones.

Note that this commit doesn't re-introduce IOMMU flags into the CPU
page table entries, since the logic to add/remove IOMMU page table
entries is based solely on the p2m type and the MFN.

Fixes: 854a49a7486a02 ('x86/mm: Clean IOMMU flags from p2m-pt code')
Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
---
Cc: Alexandru Stefan ISAILA <aisaila@bitdefender.com>
---
Changes since v1:
 - Remove the share-pt branch; there's no page table sharing on AMD hardware.
---
 xen/arch/x86/mm/p2m-pt.c | 42 ++++++++++++++++++++++++++++++++++++++--
 1 file changed, 40 insertions(+), 2 deletions(-)
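
As background, the sketch below models the decision this patch
restores: derive the IOMMU permissions from the p2m type, compare them
(and the MFN) against the old entry, and map or unmap accordingly. The
type enum, flag values and helper bodies are simplified stand-ins
invented for illustration; only p2m_get_iommu_flags, iommu_legacy_map
and iommu_legacy_unmap correspond to real Xen symbols, and their real
signatures differ.

    #include <stdio.h>

    /* Simplified stand-ins for Xen's p2m types and IOMMU flags. */
    typedef enum { p2m_invalid, p2m_ram_rw, p2m_ram_ro } p2m_type_t;
    #define IOMMUF_readable (1u << 0)
    #define IOMMUF_writable (1u << 1)

    /* Rough model of p2m_get_iommu_flags(): the IOMMU permissions
     * follow from the p2m type (the real helper also looks at the
     * MFN, e.g. to refuse mappings of invalid frames). */
    static unsigned int get_iommu_flags(p2m_type_t t)
    {
        switch ( t )
        {
        case p2m_ram_rw: return IOMMUF_readable | IOMMUF_writable;
        case p2m_ram_ro: return IOMMUF_readable;
        default:         return 0; /* no IOMMU mapping wanted */
        }
    }

    /* The decision p2m_pt_set_entry() regains with this patch: only
     * touch the IOMMU when the permissions or the backing MFN
     * actually changed. */
    static void sync_iommu(unsigned int old_flags, unsigned long old_mfn,
                           unsigned int new_flags, unsigned long new_mfn)
    {
        if ( old_flags == new_flags && old_mfn == new_mfn )
            return; /* the IOMMU view is unchanged */
        if ( new_flags )
            printf("map mfn %#lx, flags %#x\n", new_mfn, new_flags);
        else
            printf("unmap mfn %#lx\n", old_mfn);
    }

    int main(void)
    {
        /* Same type, new MFN: must remap, or the IOMMU keeps a stale
         * translation pointing at the old (possibly freed) page. */
        sync_iommu(get_iommu_flags(p2m_ram_rw), 0x1000,
                   get_iommu_flags(p2m_ram_rw), 0x2000);
        /* Entry invalidated: must unmap. */
        sync_iommu(get_iommu_flags(p2m_ram_rw), 0x2000,
                   get_iommu_flags(p2m_invalid), 0x2000);
        return 0;
    }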

Comments

Jan Beulich Aug. 29, 2019, 9:33 a.m. UTC | #1
On 29.08.2019 10:49, Roger Pau Monne wrote:
> @@ -640,9 +670,17 @@ p2m_pt_set_entry(struct p2m_domain *p2m, gfn_t gfn_, mfn_t mfn,
>           && (gfn + (1UL << page_order) - 1 > p2m->max_mapped_pfn) )
>          p2m->max_mapped_pfn = gfn + (1UL << page_order) - 1;
>  
> +    if ( need_iommu_pt_sync(p2m->domain) &&
> +         (iommu_old_flags != iommu_pte_flags || old_mfn != mfn_x(mfn)) )
> +            rc = iommu_pte_flags ?
> +                iommu_legacy_map(d, _dfn(gfn), mfn, page_order,
> +                                 iommu_pte_flags) :
> +                iommu_legacy_unmap(d, _dfn(gfn), page_order);

Indentation of the if() body is one level too deep. With this
corrected (easy enough to do while committing)
Reviewed-by: Jan Beulich <jbeulich@suse.com>

Jan
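
For reference, with the body pulled back one level as requested, the
hunk would read as below (a sketch of the corrected indentation only,
not necessarily the exact form that was committed):

    if ( need_iommu_pt_sync(p2m->domain) &&
         (iommu_old_flags != iommu_pte_flags || old_mfn != mfn_x(mfn)) )
        rc = iommu_pte_flags ?
             iommu_legacy_map(d, _dfn(gfn), mfn, page_order,
                              iommu_pte_flags) :
             iommu_legacy_unmap(d, _dfn(gfn), page_order);
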
Alexandru Stefan ISAILA Aug. 29, 2019, 9:37 a.m. UTC | #2
On 29.08.2019 11:49, Roger Pau Monne wrote:
> This partially reverts commit
> 854a49a7486a02edae5b3e53617bace526e9c1b1 by re-adding the logic that
> propagates changes made to the domain physmap by p2m_pt_set_entry into
> the IOMMU page tables. Without this logic, changes to the guest physmap
> are not propagated to the IOMMU, leaving stale IOMMU entries that can
> leak data, or failing to add new ones.
> 
> Note that this commit doesn't re-introduce IOMMU flags into the CPU
> page table entries, since the logic to add/remove IOMMU page table
> entries is based solely on the p2m type and the MFN.
> 
> Fixes: 854a49a7486a02 ('x86/mm: Clean IOMMU flags from p2m-pt code')
> Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
> ---
> Cc: Alexandru Stefan ISAILA <aisaila@bitdefender.com>
> ---
> Changes since v1:
>   - Remove the share-pt branch; there's no page table sharing on AMD hardware.
> ---

Reviewed-by: Alexandru Isaila <aisaila@bitdefender.com>
Roger Pau Monné Aug. 29, 2019, 9:39 a.m. UTC | #3
On Thu, Aug 29, 2019 at 11:33:11AM +0200, Jan Beulich wrote:
> On 29.08.2019 10:49, Roger Pau Monne wrote:
> > @@ -640,9 +670,17 @@ p2m_pt_set_entry(struct p2m_domain *p2m, gfn_t gfn_, mfn_t mfn,
> >           && (gfn + (1UL << page_order) - 1 > p2m->max_mapped_pfn) )
> >          p2m->max_mapped_pfn = gfn + (1UL << page_order) - 1;
> >  
> > +    if ( need_iommu_pt_sync(p2m->domain) &&
> > +         (iommu_old_flags != iommu_pte_flags || old_mfn != mfn_x(mfn)) )
> > +            rc = iommu_pte_flags ?
> > +                iommu_legacy_map(d, _dfn(gfn), mfn, page_order,
> > +                                 iommu_pte_flags) :
> > +                iommu_legacy_unmap(d, _dfn(gfn), page_order);
> 
> Indentation of the if() body is one level too deep. With this
> corrected (easy enough to do while committing)
> Reviewed-by: Jan Beulich <jbeulich@suse.com>

Indeed, my editor seems to have done the wrong thing. Thanks.
George Dunlap Aug. 29, 2019, 10:54 a.m. UTC | #4
On 8/29/19 9:49 AM, Roger Pau Monne wrote:
> This partially reverts commit
> 854a49a7486a02edae5b3e53617bace526e9c1b1 by re-adding the logic that
> propagates changes made to the domain physmap by p2m_pt_set_entry into
> the IOMMU page tables. Without this logic, changes to the guest physmap
> are not propagated to the IOMMU, leaving stale IOMMU entries that can
> leak data, or failing to add new ones.
> 
> Note that this commit doesn't re-introduce IOMMU flags into the CPU
> page table entries, since the logic to add/remove IOMMU page table
> entries is based solely on the p2m type and the MFN.
> 
> Fixes: 854a49a7486a02 ('x86/mm: Clean IOMMU flags from p2m-pt code')
> Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>

Sorry, the removal of the IOMMU update looked a bit suspicious; I should
have looked a bit closer.

Acked-by: George Dunlap <george.dunlap@citrix.com>

Patch

diff --git a/xen/arch/x86/mm/p2m-pt.c b/xen/arch/x86/mm/p2m-pt.c
index 3a0a500d66..8483dc8142 100644
--- a/xen/arch/x86/mm/p2m-pt.c
+++ b/xen/arch/x86/mm/p2m-pt.c
@@ -508,7 +508,18 @@ p2m_pt_set_entry(struct p2m_domain *p2m, gfn_t gfn_, mfn_t mfn,
     l2_pgentry_t l2e_content;
     l3_pgentry_t l3e_content;
     int rc;
-    unsigned int flags;
+    unsigned int iommu_pte_flags = p2m_get_iommu_flags(p2mt, mfn);
+    /*
+     * old_mfn and iommu_old_flags control possible flush/update needs on the
+     * IOMMU: We need to flush when MFN or flags (i.e. permissions) change.
+     * iommu_old_flags being initialized to zero covers the case of the entry
+     * getting replaced being a non-present (leaf or intermediate) one. For
+     * present leaf entries the real value will get calculated below, while
+     * for present intermediate entries ~0 (guaranteed != iommu_pte_flags)
+     * will be used (to cover all cases of what the leaf entries underneath
+     * the intermediate one might be).
+     */
+    unsigned int flags, iommu_old_flags = 0;
     unsigned long old_mfn = mfn_x(INVALID_MFN);
 
     if ( !sve )
@@ -556,9 +567,17 @@ p2m_pt_set_entry(struct p2m_domain *p2m, gfn_t gfn_, mfn_t mfn,
         if ( flags & _PAGE_PRESENT )
         {
             if ( flags & _PAGE_PSE )
+            {
                 old_mfn = l1e_get_pfn(*p2m_entry);
+                iommu_old_flags =
+                    p2m_get_iommu_flags(p2m_flags_to_type(flags),
+                                        _mfn(old_mfn));
+            }
             else
+            {
+                iommu_old_flags = ~0;
                 intermediate_entry = *p2m_entry;
+            }
         }
 
         check_entry(mfn, p2mt, p2m_flags_to_type(flags), page_order);
@@ -594,6 +613,9 @@ p2m_pt_set_entry(struct p2m_domain *p2m, gfn_t gfn_, mfn_t mfn,
                                    0, L1_PAGETABLE_ENTRIES);
         ASSERT(p2m_entry);
         old_mfn = l1e_get_pfn(*p2m_entry);
+        iommu_old_flags =
+            p2m_get_iommu_flags(p2m_flags_to_type(l1e_get_flags(*p2m_entry)),
+                                _mfn(old_mfn));
 
         if ( mfn_valid(mfn) || p2m_allows_invalid_mfn(p2mt) )
             entry_content = p2m_l1e_from_pfn(mfn_x(mfn),
@@ -617,9 +639,17 @@ p2m_pt_set_entry(struct p2m_domain *p2m, gfn_t gfn_, mfn_t mfn,
         if ( flags & _PAGE_PRESENT )
         {
             if ( flags & _PAGE_PSE )
+            {
                 old_mfn = l1e_get_pfn(*p2m_entry);
+                iommu_old_flags =
+                    p2m_get_iommu_flags(p2m_flags_to_type(flags),
+                                        _mfn(old_mfn));
+            }
             else
+            {
+                iommu_old_flags = ~0;
                 intermediate_entry = *p2m_entry;
+            }
         }
 
         check_entry(mfn, p2mt, p2m_flags_to_type(flags), page_order);
@@ -640,9 +670,17 @@ p2m_pt_set_entry(struct p2m_domain *p2m, gfn_t gfn_, mfn_t mfn,
          && (gfn + (1UL << page_order) - 1 > p2m->max_mapped_pfn) )
         p2m->max_mapped_pfn = gfn + (1UL << page_order) - 1;
 
+    if ( need_iommu_pt_sync(p2m->domain) &&
+         (iommu_old_flags != iommu_pte_flags || old_mfn != mfn_x(mfn)) )
+            rc = iommu_pte_flags ?
+                iommu_legacy_map(d, _dfn(gfn), mfn, page_order,
+                                 iommu_pte_flags) :
+                iommu_legacy_unmap(d, _dfn(gfn), page_order);
+
     /*
      * Free old intermediate tables if necessary.  This has to be the
-     * last thing we do so as to avoid a potential use-after-free.
+     * last thing we do, after removal from the IOMMU tables, so as to
+     * avoid a potential use-after-free.
      */
     if ( l1e_get_flags(intermediate_entry) & _PAGE_PRESENT )
         p2m_free_entry(p2m, &intermediate_entry, page_order);
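
A closing note on the iommu_old_flags comment in the first hunk: ~0
works as an "always update" sentinel because no genuine set of IOMMU
flags equals it, so the iommu_old_flags != iommu_pte_flags check can
never suppress the IOMMU update when a present intermediate entry is
being replaced. A stand-alone toy demonstration (not Xen code; the
flag values are made up):

    #include <stdio.h>

    int main(void)
    {
        /* Hypothetical new-entry flags: none, read-only, read-write. */
        const unsigned int new_flags[] = { 0x0, 0x1, 0x3 };
        /* Sentinel used when replacing a present intermediate entry. */
        const unsigned int iommu_old_flags = ~0u;
        unsigned int i;

        for ( i = 0; i < sizeof(new_flags) / sizeof(new_flags[0]); i++ )
            printf("new flags %#x -> update needed: %d\n", new_flags[i],
                   iommu_old_flags != new_flags[i]); /* always prints 1 */
        return 0;
    }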