[v7,02/11] IOMMU/MMU: enhance the call trees of IOMMU unmapping and mapping

Message ID 1465376344-28290-3-git-send-email-quan.xu@intel.com (mailing list archive)
State New, archived

Commit Message

Quan Xu June 8, 2016, 8:58 a.m. UTC
From: Quan Xu <quan.xu@intel.com>

When an IOMMU mapping fails, we issue a best-effort rollback: stop
mapping, unmap the previously established IOMMU mappings, and report
the error up the call trees. When rollback is not feasible for the
hardware domain (in the early initialization phase, or as a trade-off
against complexity), we proceed on a best-effort basis and only log an
error message.

IOMMU unmapping, on the other hand, continues despite errors, in an
attempt at best-effort cleanup.
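
In outline, every mapping path touched by this patch follows the
pattern below (a minimal sketch of the hunks that follow, using the
real iommu_map_page()/iommu_unmap_page() interface and Xen's
unlikely() macro):

    rc = 0;
    for ( i = 0; i < (1UL << order); i++ )
    {
        rc = iommu_map_page(d, gfn + i, mfn + i, flags);
        if ( unlikely(rc) )
        {
            /* Best-effort rollback: consume, but ignore, unmap errors. */
            while ( i-- )
                if ( iommu_unmap_page(d, gfn + i) )
                    continue;

            break;
        }
    }
    /* rc now holds the first mapping error, if any. */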

Signed-off-by: Quan Xu <quan.xu@intel.com>

CC: Keir Fraser <keir@xen.org>
CC: Jan Beulich <jbeulich@suse.com>
CC: Andrew Cooper <andrew.cooper3@citrix.com>
CC: Jun Nakajima <jun.nakajima@intel.com>
CC: Kevin Tian <kevin.tian@intel.com>
CC: George Dunlap <george.dunlap@eu.citrix.com>

v7:
  1. add __must_check annotation to iommu_map_page() / iommu_unmap_page().
  2. use the same code structure for p2m_pt_set_entry() as I do on the
     EPT side.
  3. fix amd_iommu_hwdom_init() / iommu_hwdom_init() here.
  4. enhance printed information. There is no need to mention "hardware
     domain" in the printk() of iommu_hwdom_init().
  5. fold the assignment into the variable's declaration, making its
     right side the variable's initializer.
---
 xen/arch/x86/mm.c                           | 13 ++++++-----
 xen/arch/x86/mm/p2m-ept.c                   | 34 +++++++++++++++++++++++------
 xen/arch/x86/mm/p2m-pt.c                    | 23 ++++++++++++++++---
 xen/arch/x86/mm/p2m.c                       | 18 ++++++++++++---
 xen/arch/x86/x86_64/mm.c                    |  4 +++-
 xen/drivers/passthrough/amd/pci_amd_iommu.c | 15 +++++++++++--
 xen/drivers/passthrough/iommu.c             | 13 ++++++++++-
 xen/drivers/passthrough/vtd/x86/vtd.c       | 15 +++++++++++--
 xen/include/xen/iommu.h                     |  6 ++---
 9 files changed, 114 insertions(+), 27 deletions(-)

Comments

Jan Beulich June 8, 2016, 2:40 p.m. UTC | #1
>>> On 08.06.16 at 10:58, <quan.xu@intel.com> wrote:
> @@ -831,10 +837,24 @@ out:
>          {
>              if ( iommu_flags )
>                  for ( i = 0; i < (1 << order); i++ )
> -                    iommu_map_page(d, gfn + i, mfn_x(mfn) + i, iommu_flags);
> +                {
> +                    rc = iommu_map_page(d, gfn + i, mfn_x(mfn) + i, iommu_flags);
> +                    if ( unlikely(rc) )
> +                    {
> +                        while ( i-- )
> +                            if ( iommu_unmap_page(p2m->domain, gfn + i) )
> +                                continue;

Nice idea. I would have preferred a brief comment explaining this,
but anyway.
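
(For context: the if/continue construct consumes the return value of
the now-__must_check iommu_unmap_page() while deliberately ignoring
unmap errors during rollback. A brief comment of the kind asked for
might read, purely illustratively:)

    while ( i-- )
        /* If statement to satisfy __must_check; unmap errors are
         * deliberately ignored while rolling back a failed mapping. */
        if ( iommu_unmap_page(p2m->domain, gfn + i) )
            continue;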

> @@ -140,8 +142,17 @@ void __hwdom_init vtd_set_hwdom_mapping(struct domain *d)
>  
>          tmp = 1 << (PAGE_SHIFT - PAGE_SHIFT_4K);
>          for ( j = 0; j < tmp; j++ )
> -            iommu_map_page(d, pfn * tmp + j, pfn * tmp + j,
> -                           IOMMUF_readable|IOMMUF_writable);
> +        {
> +            int ret = iommu_map_page(d, pfn * tmp + j, pfn * tmp + j,
> +                                     IOMMUF_readable|IOMMUF_writable);
> +
> +            if( !rc )

Missing blank (could be fixed upon commit if no other reason for
another iteration arises).

Reviewed-by: Jan Beulich <jbeulich@suse.com>
Jan Beulich June 10, 2016, 6:56 a.m. UTC | #2
>>> On 09.06.16 at 20:37, <suravee.suthikulpanit@amd.com> wrote:
> On 6/8/2016 9:53 AM, Jan Beulich wrote:
>>> diff --git a/xen/drivers/passthrough/amd/pci_amd_iommu.c b/xen/drivers/passthrough/amd/pci_amd_iommu.c
>>> index fce9827..4a860af 100644
>>> --- a/xen/drivers/passthrough/amd/pci_amd_iommu.c
>>> +++ b/xen/drivers/passthrough/amd/pci_amd_iommu.c
>>> @@ -282,6 +282,8 @@ static void __hwdom_init amd_iommu_hwdom_init(struct domain *d)
>>>
>>>      if ( !iommu_passthrough && !need_iommu(d) )
>>>      {
>>> +        int rc = 0;
>>> +
>>>          /* Set up 1:1 page table for dom0 */
>>>          for ( i = 0; i < max_pdx; i++ )
>>>          {
>>> @@ -292,12 +294,21 @@ static void __hwdom_init amd_iommu_hwdom_init(struct domain *d)
>>>               * a pfn_valid() check would seem desirable here.
>>>               */
>>>              if ( mfn_valid(pfn) )
>>> -                amd_iommu_map_page(d, pfn, pfn,
>>> -                                   IOMMUF_readable|IOMMUF_writable);
>>> +            {
>>> +                int ret = amd_iommu_map_page(d, pfn, pfn,
>>> +                                             IOMMUF_readable|IOMMUF_writable);
>>> +
>>> +                if ( !rc )
>>> +                    rc = ret;
>>> +            }
>>>
>>>              if ( !(i & 0xfffff) )
>>>                  process_pending_softirqs();
>>>          }
>>> +
>>> +        if ( rc )
>>> +            AMD_IOMMU_DEBUG("d%d: IOMMU mapping failed: %d\n",
>>> +                            d->domain_id, rc);
>>>      }
>>>
>>>      for_each_amd_iommu ( iommu )
> 
> Reviewed-by: Suravee Suthikulpanit <suravee.suthikulpanit@amd.com>

For the record to the list, since it had been dropped for an unknown
reason.

Jan
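
(For context: the AMD hunk above, like the VT-d and generic
iommu_hwdom_init() paths in this patch, latches only the first error
and carries on. In isolation the idiom is, as a sketch with a
hypothetical per-page helper do_map():)

    int rc = 0;

    for ( i = 0; i < n; i++ )
    {
        int ret = do_map(i);   /* hypothetical per-page map operation */

        if ( !rc )
            rc = ret;          /* record the first error, keep going */
    }

    if ( rc )
        printk(XENLOG_WARNING "d%d: IOMMU mapping failed: %d\n",
               d->domain_id, rc);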
Tian, Kevin June 12, 2016, 6:41 a.m. UTC | #3
> From: Xu, Quan
> Sent: Wednesday, June 08, 2016 4:59 PM
> 
> From: Quan Xu <quan.xu@intel.com>
> 
> When an IOMMU mapping fails, we issue a best-effort rollback: stop
> mapping, unmap the previously established IOMMU mappings, and report
> the error up the call trees. When rollback is not feasible for the
> hardware domain (in the early initialization phase, or as a trade-off
> against complexity), we proceed on a best-effort basis and only log an
> error message.
> 
> IOMMU unmapping, on the other hand, continues despite errors, in an
> attempt at best-effort cleanup.
> 
> Signed-off-by: Quan Xu <quan.xu@intel.com>
> 

Acked-by: Kevin Tian <kevin.tian@intel.com>
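
(For context: the last hunk of the patch makes both functions
__must_check, so a caller that silently drops the return value now
draws a compiler warning. Illustrative fragments, not taken from the
patch: callers either propagate the error or consume it deliberately.)

    /* Propagate the error ... */
    rc = iommu_unmap_page(d, gfn);

    /* ... or consume it on purpose, as the rollback loops above do: */
    while ( i-- )
        if ( iommu_unmap_page(d, gfn + i) )
            continue;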
Patch

diff --git a/xen/arch/x86/mm.c b/xen/arch/x86/mm.c
index 8d10a3e..ae7c8ab 100644
--- a/xen/arch/x86/mm.c
+++ b/xen/arch/x86/mm.c
@@ -2467,7 +2467,7 @@  static int __get_page_type(struct page_info *page, unsigned long type,
                            int preemptible)
 {
     unsigned long nx, x, y = page->u.inuse.type_info;
-    int rc = 0;
+    int rc = 0, iommu_ret = 0;
 
     ASSERT(!(type & ~(PGT_type_mask | PGT_pae_xen_l2)));
 
@@ -2578,11 +2578,11 @@  static int __get_page_type(struct page_info *page, unsigned long type,
         if ( d && is_pv_domain(d) && unlikely(need_iommu(d)) )
         {
             if ( (x & PGT_type_mask) == PGT_writable_page )
-                iommu_unmap_page(d, mfn_to_gmfn(d, page_to_mfn(page)));
+                iommu_ret = iommu_unmap_page(d, mfn_to_gmfn(d, page_to_mfn(page)));
             else if ( type == PGT_writable_page )
-                iommu_map_page(d, mfn_to_gmfn(d, page_to_mfn(page)),
-                               page_to_mfn(page),
-                               IOMMUF_readable|IOMMUF_writable);
+                iommu_ret = iommu_map_page(d, mfn_to_gmfn(d, page_to_mfn(page)),
+                                           page_to_mfn(page),
+                                           IOMMUF_readable|IOMMUF_writable);
         }
     }
 
@@ -2599,6 +2599,9 @@  static int __get_page_type(struct page_info *page, unsigned long type,
     if ( (x & PGT_partial) && !(nx & PGT_partial) )
         put_page(page);
 
+    if ( !rc )
+        rc = iommu_ret;
+
     return rc;
 }
 
diff --git a/xen/arch/x86/mm/p2m-ept.c b/xen/arch/x86/mm/p2m-ept.c
index 1ed5b47..5aebc24 100644
--- a/xen/arch/x86/mm/p2m-ept.c
+++ b/xen/arch/x86/mm/p2m-ept.c
@@ -667,6 +667,7 @@  ept_set_entry(struct p2m_domain *p2m, unsigned long gfn, mfn_t mfn,
     unsigned long gfn_remainder = gfn;
     unsigned int i, target = order / EPT_TABLE_ORDER;
     int ret, rc = 0;
+    bool_t entry_written = 0;
     bool_t direct_mmio = (p2mt == p2m_mmio_direct);
     uint8_t ipat = 0;
     bool_t need_modify_vtd_table = 1;
@@ -812,10 +813,15 @@  ept_set_entry(struct p2m_domain *p2m, unsigned long gfn, mfn_t mfn,
     rc = atomic_write_ept_entry(ept_entry, new_entry, target);
     if ( unlikely(rc) )
         old_entry.epte = 0;
-    else if ( p2mt != p2m_invalid &&
-              (gfn + (1UL << order) - 1 > p2m->max_mapped_pfn) )
-        /* Track the highest gfn for which we have ever had a valid mapping */
-        p2m->max_mapped_pfn = gfn + (1UL << order) - 1;
+    else
+    {
+        entry_written = 1;
+
+        if ( p2mt != p2m_invalid &&
+             (gfn + (1UL << order) - 1 > p2m->max_mapped_pfn) )
+            /* Track the highest gfn for which we have ever had a valid mapping */
+            p2m->max_mapped_pfn = gfn + (1UL << order) - 1;
+    }
 
 out:
     if ( needs_sync )
@@ -831,10 +837,24 @@  out:
         {
             if ( iommu_flags )
                 for ( i = 0; i < (1 << order); i++ )
-                    iommu_map_page(d, gfn + i, mfn_x(mfn) + i, iommu_flags);
+                {
+                    rc = iommu_map_page(d, gfn + i, mfn_x(mfn) + i, iommu_flags);
+                    if ( unlikely(rc) )
+                    {
+                        while ( i-- )
+                            if ( iommu_unmap_page(p2m->domain, gfn + i) )
+                                continue;
+
+                        break;
+                    }
+                }
             else
                 for ( i = 0; i < (1 << order); i++ )
-                    iommu_unmap_page(d, gfn + i);
+                {
+                    ret = iommu_unmap_page(d, gfn + i);
+                    if ( !rc )
+                        rc = ret;
+                }
         }
     }
 
@@ -847,7 +867,7 @@  out:
     if ( is_epte_present(&old_entry) )
         ept_free_entry(p2m, &old_entry, target);
 
-    if ( rc == 0 && p2m_is_hostp2m(p2m) )
+    if ( entry_written && p2m_is_hostp2m(p2m) )
         p2m_altp2m_propagate_change(d, _gfn(gfn), mfn, order, p2mt, p2ma);
 
     return rc;
diff --git a/xen/arch/x86/mm/p2m-pt.c b/xen/arch/x86/mm/p2m-pt.c
index 3d80612..a3870da 100644
--- a/xen/arch/x86/mm/p2m-pt.c
+++ b/xen/arch/x86/mm/p2m-pt.c
@@ -673,6 +673,8 @@  p2m_pt_set_entry(struct p2m_domain *p2m, unsigned long gfn, mfn_t mfn,
     if ( iommu_enabled && need_iommu(p2m->domain) &&
          (iommu_old_flags != iommu_pte_flags || old_mfn != mfn_x(mfn)) )
     {
+        ASSERT(rc == 0);
+
         if ( iommu_use_hap_pt(p2m->domain) )
         {
             if ( iommu_old_flags )
@@ -680,11 +682,26 @@  p2m_pt_set_entry(struct p2m_domain *p2m, unsigned long gfn, mfn_t mfn,
         }
         else if ( iommu_pte_flags )
             for ( i = 0; i < (1UL << page_order); i++ )
-                iommu_map_page(p2m->domain, gfn + i, mfn_x(mfn) + i,
-                               iommu_pte_flags);
+            {
+                rc = iommu_map_page(p2m->domain, gfn + i, mfn_x(mfn) + i,
+                                    iommu_pte_flags);
+                if ( unlikely(rc) )
+                {
+                    while ( i-- )
+                        if ( iommu_unmap_page(p2m->domain, gfn + i) )
+                            continue;
+
+                    break;
+                }
+            }
         else
             for ( i = 0; i < (1UL << page_order); i++ )
-                iommu_unmap_page(p2m->domain, gfn + i);
+            {
+                int ret = iommu_unmap_page(p2m->domain, gfn + i);
+
+                if ( !rc )
+                    rc = ret;
+            }
     }
 
     /*
diff --git a/xen/arch/x86/mm/p2m.c b/xen/arch/x86/mm/p2m.c
index 9b19769..7ce25d2 100644
--- a/xen/arch/x86/mm/p2m.c
+++ b/xen/arch/x86/mm/p2m.c
@@ -641,10 +641,20 @@  p2m_remove_page(struct p2m_domain *p2m, unsigned long gfn, unsigned long mfn,
 
     if ( !paging_mode_translate(p2m->domain) )
     {
+        int rc = 0;
+
         if ( need_iommu(p2m->domain) )
+        {
             for ( i = 0; i < (1 << page_order); i++ )
-                iommu_unmap_page(p2m->domain, mfn + i);
-        return 0;
+            {
+                int ret = iommu_unmap_page(p2m->domain, mfn + i);
+
+                if ( !rc )
+                    rc = ret;
+            }
+        }
+
+        return rc;
     }
 
     ASSERT(gfn_locked_by_me(p2m, gfn));
@@ -700,7 +710,9 @@  guest_physmap_add_entry(struct domain *d, unsigned long gfn,
                 if ( rc != 0 )
                 {
                     while ( i-- > 0 )
-                        iommu_unmap_page(d, mfn + i);
+                        if ( iommu_unmap_page(d, mfn + i) )
+                            continue;
+
                     return rc;
                 }
             }
diff --git a/xen/arch/x86/x86_64/mm.c b/xen/arch/x86/x86_64/mm.c
index e07e69e..6dab7ff 100644
--- a/xen/arch/x86/x86_64/mm.c
+++ b/xen/arch/x86/x86_64/mm.c
@@ -1436,7 +1436,9 @@  int memory_add(unsigned long spfn, unsigned long epfn, unsigned int pxm)
         if ( i != epfn )
         {
             while (i-- > old_max)
-                iommu_unmap_page(hardware_domain, i);
+                if ( iommu_unmap_page(hardware_domain, i) )
+                    continue;
+
             goto destroy_m2p;
         }
     }
diff --git a/xen/drivers/passthrough/amd/pci_amd_iommu.c b/xen/drivers/passthrough/amd/pci_amd_iommu.c
index fce9827..4a860af 100644
--- a/xen/drivers/passthrough/amd/pci_amd_iommu.c
+++ b/xen/drivers/passthrough/amd/pci_amd_iommu.c
@@ -282,6 +282,8 @@  static void __hwdom_init amd_iommu_hwdom_init(struct domain *d)
 
     if ( !iommu_passthrough && !need_iommu(d) )
     {
+        int rc = 0;
+
         /* Set up 1:1 page table for dom0 */
         for ( i = 0; i < max_pdx; i++ )
         {
@@ -292,12 +294,21 @@  static void __hwdom_init amd_iommu_hwdom_init(struct domain *d)
              * a pfn_valid() check would seem desirable here.
              */
             if ( mfn_valid(pfn) )
-                amd_iommu_map_page(d, pfn, pfn, 
-                                   IOMMUF_readable|IOMMUF_writable);
+            {
+                int ret = amd_iommu_map_page(d, pfn, pfn,
+                                             IOMMUF_readable|IOMMUF_writable);
+
+                if ( !rc )
+                    rc = ret;
+            }
 
             if ( !(i & 0xfffff) )
                 process_pending_softirqs();
         }
+
+        if ( rc )
+            AMD_IOMMU_DEBUG("d%d: IOMMU mapping failed: %d\n",
+                            d->domain_id, rc);
     }
 
     for_each_amd_iommu ( iommu )
diff --git a/xen/drivers/passthrough/iommu.c b/xen/drivers/passthrough/iommu.c
index 673e126..ec85352 100644
--- a/xen/drivers/passthrough/iommu.c
+++ b/xen/drivers/passthrough/iommu.c
@@ -171,20 +171,31 @@  void __hwdom_init iommu_hwdom_init(struct domain *d)
     {
         struct page_info *page;
         unsigned int i = 0;
+        int rc = 0;
+
         page_list_for_each ( page, &d->page_list )
         {
             unsigned long mfn = page_to_mfn(page);
             unsigned long gfn = mfn_to_gmfn(d, mfn);
             unsigned int mapping = IOMMUF_readable;
+            int ret;
 
             if ( ((page->u.inuse.type_info & PGT_count_mask) == 0) ||
                  ((page->u.inuse.type_info & PGT_type_mask)
                   == PGT_writable_page) )
                 mapping |= IOMMUF_writable;
-            hd->platform_ops->map_page(d, gfn, mfn, mapping);
+
+            ret = hd->platform_ops->map_page(d, gfn, mfn, mapping);
+            if ( !rc )
+                rc = ret;
+
             if ( !(i++ & 0xfffff) )
                 process_pending_softirqs();
         }
+
+        if ( rc )
+            printk(XENLOG_WARNING "d%d: IOMMU mapping failed: %d\n",
+                   d->domain_id, rc);
     }
 
     return hd->platform_ops->hwdom_init(d);
diff --git a/xen/drivers/passthrough/vtd/x86/vtd.c b/xen/drivers/passthrough/vtd/x86/vtd.c
index c0d6aab..3d67909 100644
--- a/xen/drivers/passthrough/vtd/x86/vtd.c
+++ b/xen/drivers/passthrough/vtd/x86/vtd.c
@@ -118,6 +118,8 @@  void __hwdom_init vtd_set_hwdom_mapping(struct domain *d)
 
     for ( i = 0; i < top; i++ )
     {
+        int rc = 0;
+
         /*
          * Set up 1:1 mapping for dom0. Default to use only conventional RAM
          * areas and let RMRRs include needed reserved regions. When set, the
@@ -140,8 +142,17 @@  void __hwdom_init vtd_set_hwdom_mapping(struct domain *d)
 
         tmp = 1 << (PAGE_SHIFT - PAGE_SHIFT_4K);
         for ( j = 0; j < tmp; j++ )
-            iommu_map_page(d, pfn * tmp + j, pfn * tmp + j,
-                           IOMMUF_readable|IOMMUF_writable);
+        {
+            int ret = iommu_map_page(d, pfn * tmp + j, pfn * tmp + j,
+                                     IOMMUF_readable|IOMMUF_writable);
+
+            if( !rc )
+               rc = ret;
+        }
+
+        if ( rc )
+           printk(XENLOG_WARNING VTDPREFIX " d%d: IOMMU mapping failed: %d\n",
+                  d->domain_id, rc);
 
         if (!(i & (0xfffff >> (PAGE_SHIFT - PAGE_SHIFT_4K))))
             process_pending_softirqs();
diff --git a/xen/include/xen/iommu.h b/xen/include/xen/iommu.h
index 19ba976..eaa2c77 100644
--- a/xen/include/xen/iommu.h
+++ b/xen/include/xen/iommu.h
@@ -74,9 +74,9 @@  void iommu_teardown(struct domain *d);
 #define IOMMUF_readable  (1u<<_IOMMUF_readable)
 #define _IOMMUF_writable 1
 #define IOMMUF_writable  (1u<<_IOMMUF_writable)
-int iommu_map_page(struct domain *d, unsigned long gfn, unsigned long mfn,
-                   unsigned int flags);
-int iommu_unmap_page(struct domain *d, unsigned long gfn);
+int __must_check iommu_map_page(struct domain *d, unsigned long gfn,
+                                unsigned long mfn, unsigned int flags);
+int __must_check iommu_unmap_page(struct domain *d, unsigned long gfn);
 
 enum iommu_feature
 {