[v2,05/11] vt-d: propagate IOMMU Device-TLB flush error up to IOMMU unmapping.

Message ID 1460988011-17758-6-git-send-email-quan.xu@intel.com (mailing list archive)
State New, archived

Commit Message

Quan Xu April 18, 2016, 2 p.m. UTC
When an IOMMU Device-TLB flush times out, Xen currently calls panic().
However, the existing panic() is going to be eliminated, so we must
propagate the IOMMU Device-TLB flush error up to IOMMU unmapping.

Signed-off-by: Quan Xu <quan.xu@intel.com>

CC: Kevin Tian <kevin.tian@intel.com>
CC: Feng Wu <feng.wu@intel.com>
CC: Jan Beulich <jbeulich@suse.com>
---
 xen/drivers/passthrough/vtd/iommu.c | 15 ++++++++-------
 1 file changed, 8 insertions(+), 7 deletions(-)

Comments

Tian, Kevin April 19, 2016, 6:51 a.m. UTC | #1
> From: Xu, Quan
> Sent: Monday, April 18, 2016 10:00 PM
> 
> When an IOMMU Device-TLB flush times out, Xen currently calls panic().
> However, the existing panic() is going to be eliminated, so we must
> propagate the IOMMU Device-TLB flush error up to IOMMU unmapping.

I think you don't need to replicate the same reason in every patch; it is
clear enough to just keep the last line.

> 
> Signed-off-by: Quan Xu <quan.xu@intel.com>
> 
> CC: Kevin Tian <kevin.tian@intel.com>
> CC: Feng Wu <feng.wu@intel.com>
> CC: Jan Beulich <jbeulich@suse.com>

Acked-by: Kevin Tian <kevin.tian@intel.com>
Quan Xu April 19, 2016, 7:01 a.m. UTC | #2
On April 19, 2016 2:51pm, Tian, Kevin <kevin.tian@intel.com> wrote:
> > From: Xu, Quan
> > Sent: Monday, April 18, 2016 10:00 PM
> >
> > When an IOMMU Device-TLB flush times out, Xen currently calls panic().
> > However, the existing panic() is going to be eliminated, so we must
> > propagate the IOMMU Device-TLB flush error up to IOMMU unmapping.
> 
> I think you don't need to replicate the same reason in every patch; it is
> clear enough to just keep the last line.
> 

That's good. I also hesitated to add this same reason. I'll drop it in v3.
Quan

> >
> > Signed-off-by: Quan Xu <quan.xu@intel.com>
> >
> > CC: Kevin Tian <kevin.tian@intel.com>
> > CC: Feng Wu <feng.wu@intel.com>
> > CC: Jan Beulich <jbeulich@suse.com>
> 
> Acked-by: Kevin Tian <kevin.tian@intel.com>
Jan Beulich April 25, 2016, 10:06 a.m. UTC | #3
>>> On 18.04.16 at 16:00, <quan.xu@intel.com> wrote:
> --- a/xen/drivers/passthrough/vtd/iommu.c
> +++ b/xen/drivers/passthrough/vtd/iommu.c
> @@ -617,11 +617,12 @@ static void intel_iommu_iotlb_flush_all(struct domain 
> *d)
>  }
>  
>  /* clear one page's page table */
> -static void dma_pte_clear_one(struct domain *domain, u64 addr)
> +static int dma_pte_clear_one(struct domain *domain, u64 addr)

This, at the latest in this series, is where __must_check should
come into play.

Jan
Quan Xu April 26, 2016, 6:49 a.m. UTC | #4
On April 25, 2016 6:07 PM, Jan Beulich <JBeulich@suse.com> wrote:
> >>> On 18.04.16 at 16:00, <quan.xu@intel.com> wrote:
> > --- a/xen/drivers/passthrough/vtd/iommu.c
> > +++ b/xen/drivers/passthrough/vtd/iommu.c
> > @@ -617,11 +617,12 @@ static void intel_iommu_iotlb_flush_all(struct
> > domain
> > *d)
> >  }
> >
> >  /* clear one page's page table */
> > -static void dma_pte_clear_one(struct domain *domain, u64 addr)
> > +static int dma_pte_clear_one(struct domain *domain, u64 addr)
> 
> This, at the latest in this series, is where __must_check should come into play.
> 

Agreed.
Quan
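
As a reference for the v3 respin, a minimal sketch of the annotation Jan
is asking for; this is illustrative only and not part of the posted patch.
In Xen, __must_check (defined in xen/include/xen/compiler.h) expands to
__attribute__((__warn_unused_result__)), so the compiler warns whenever a
caller discards the return value:

    /* Illustrative declaration only, assuming the v3 form of the
     * helper: with __must_check, a caller that ignores the returned
     * error triggers a -Wunused-result warning at build time. */
    static int __must_check dma_pte_clear_one(struct domain *domain,
                                              u64 addr);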

Patch

diff --git a/xen/drivers/passthrough/vtd/iommu.c b/xen/drivers/passthrough/vtd/iommu.c
index 50d98ac..580a9cf 100644
--- a/xen/drivers/passthrough/vtd/iommu.c
+++ b/xen/drivers/passthrough/vtd/iommu.c
@@ -617,11 +617,12 @@  static void intel_iommu_iotlb_flush_all(struct domain *d)
 }
 
 /* clear one page's page table */
-static void dma_pte_clear_one(struct domain *domain, u64 addr)
+static int dma_pte_clear_one(struct domain *domain, u64 addr)
 {
     struct hvm_iommu *hd = domain_hvm_iommu(domain);
     struct dma_pte *page = NULL, *pte = NULL;
     u64 pg_maddr;
+    int rc = 0;
 
     spin_lock(&hd->arch.mapping_lock);
     /* get last level pte */
@@ -629,7 +630,7 @@  static void dma_pte_clear_one(struct domain *domain, u64 addr)
     if ( pg_maddr == 0 )
     {
         spin_unlock(&hd->arch.mapping_lock);
-        return;
+        return 0;
     }
 
     page = (struct dma_pte *)map_vtd_domain_page(pg_maddr);
@@ -639,7 +640,7 @@  static void dma_pte_clear_one(struct domain *domain, u64 addr)
     {
         spin_unlock(&hd->arch.mapping_lock);
         unmap_vtd_domain_page(page);
-        return;
+        return 0;
     }
 
     dma_clear_pte(*pte);
@@ -647,9 +648,11 @@  static void dma_pte_clear_one(struct domain *domain, u64 addr)
     iommu_flush_cache_entry(pte, sizeof(struct dma_pte));
 
     if ( !this_cpu(iommu_dont_flush_iotlb) )
-        iommu_flush_iotlb(domain, addr >> PAGE_SHIFT_4K, 1, 1);
+        rc = iommu_flush_iotlb(domain, addr >> PAGE_SHIFT_4K, 1, 1);
 
     unmap_vtd_domain_page(page);
+
+    return rc;
 }
 
 static void iommu_free_pagetable(u64 pt_maddr, int level)
@@ -1770,9 +1773,7 @@  static int intel_iommu_unmap_page(struct domain *d, unsigned long gfn)
     if ( iommu_passthrough && is_hardware_domain(d) )
         return 0;
 
-    dma_pte_clear_one(d, (paddr_t)gfn << PAGE_SHIFT_4K);
-
-    return 0;
+    return dma_pte_clear_one(d, (paddr_t)gfn << PAGE_SHIFT_4K);
 }
 
 int iommu_pte_flush(struct domain *d, u64 gfn, u64 *pte,
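
For context, a hedged sketch of the generic wrapper that sits above the
vendor unmap_page hook and ends up consuming the error now propagated by
intel_iommu_unmap_page(). The shape follows xen/drivers/passthrough/iommu.c
of this era, but the exact body shown here is an assumption and not part of
this patch:

    /* Sketch, not part of this series: the generic unmap path simply
     * forwards whatever error the vendor unmap_page hook reports; for
     * VT-d that is the intel_iommu_unmap_page() changed above. */
    int iommu_unmap_page(struct domain *d, unsigned long gfn)
    {
        struct hvm_iommu *hd = domain_hvm_iommu(d);

        if ( !iommu_enabled || !hd->platform_ops )
            return 0;

        return hd->platform_ops->unmap_page(d, gfn);
    }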