
[v5,06/10] IOMMU/MMU: propagate IOMMU Device-TLB flush error up to iommu_iotlb_flush{,_all} (top level ones).

Message ID 1463558911-98187-7-git-send-email-quan.xu@intel.com (mailing list archive)
State New, archived

Commit Message

Quan Xu May 18, 2016, 8:08 a.m. UTC
Propagate the IOMMU Device-TLB flush error up to the iommu_iotlb_flush{,_all}.

This patch fixes the top-level callers (the leaf ones are not fixed here but
by the next patch).

Signed-off-by: Quan Xu <quan.xu@intel.com>
Reviewed-by: Jan Beulich <jbeulich@suse.com>

CC: Stefano Stabellini <sstabellini@kernel.org>
CC: Julien Grall <julien.grall@arm.com>
CC: Jan Beulich <jbeulich@suse.com>
CC: Kevin Tian <kevin.tian@intel.com>
---
 xen/arch/arm/p2m.c                  |  5 ++++-
 xen/common/memory.c                 | 20 +++++++++++++++-----
 xen/drivers/passthrough/iommu.c     | 13 +++++++++----
 xen/drivers/passthrough/x86/iommu.c |  5 +++--
 xen/include/xen/iommu.h             |  7 ++++---
 5 files changed, 35 insertions(+), 15 deletions(-)

Comments

Jan Beulich May 23, 2016, 4:05 p.m. UTC | #1
>>> On 18.05.16 at 10:08, <quan.xu@intel.com> wrote:
> --- a/xen/common/memory.c
> +++ b/xen/common/memory.c
> @@ -633,9 +633,9 @@ static long memory_exchange(XEN_GUEST_HANDLE_PARAM(xen_memory_exchange_t) arg)
>      return rc;
>  }
>  
> -static int xenmem_add_to_physmap(struct domain *d,
> -                                 struct xen_add_to_physmap *xatp,
> -                                 unsigned int start)
> +static int __must_check xenmem_add_to_physmap(struct domain *d,
> +                                              struct xen_add_to_physmap *xatp,
> +                                              unsigned int start)
>  {

As before - either you do this adding of annotations completely, or
you stop at the IOMMU / MM boundary. And note that this
addition was not covered by the R-b tag I had offered during the
v4 review.

Jan
Quan Xu May 24, 2016, 1:16 a.m. UTC | #2
On May 24, 2016 12:06 AM, Jan Beulich <JBeulich@suse.com> wrote:
> >>> On 18.05.16 at 10:08, <quan.xu@intel.com> wrote:
> > --- a/xen/common/memory.c
> > +++ b/xen/common/memory.c
> > @@ -633,9 +633,9 @@ static long
> memory_exchange(XEN_GUEST_HANDLE_PARAM(xen_memory_exchange_t)
> arg)
> >      return rc;
> >  }
> >
> > -static int xenmem_add_to_physmap(struct domain *d,
> > -                                 struct xen_add_to_physmap *xatp,
> > -                                 unsigned int start)
> > +static int __must_check xenmem_add_to_physmap(struct domain *d,
> > +                                              struct xen_add_to_physmap *xatp,
> > +                                              unsigned int start)
> >  {
> 
> As before - either you do this adding of annotations completely, or you stop at
> the IOMMU / MM boundary. 

I prefer to stop at the IOMMU / MM boundary. The IOMMU boundary is obvious, but what's the definition of MM boundary? I thought this is at MM boundary.

Quan

> And note that this addition was not covered by
> the R-b tag I had offered during the
> v4 review.
Jan Beulich May 24, 2016, 7:01 a.m. UTC | #3
>>> On 24.05.16 at 03:16, <quan.xu@intel.com> wrote:
> On May 24, 2016 12:06 AM, Jan Beulich <JBeulich@suse.com> wrote:
>> >>> On 18.05.16 at 10:08, <quan.xu@intel.com> wrote:
>> > --- a/xen/common/memory.c
>> > +++ b/xen/common/memory.c
>> > @@ -633,9 +633,9 @@ static long
>> memory_exchange(XEN_GUEST_HANDLE_PARAM(xen_memory_exchange_t)
>> arg)
>> >      return rc;
>> >  }
>> >
>> > -static int xenmem_add_to_physmap(struct domain *d,
>> > -                                 struct xen_add_to_physmap *xatp,
>> > -                                 unsigned int start)
>> > +static int __must_check xenmem_add_to_physmap(struct domain *d,
>> > +                                              struct xen_add_to_physmap 
> *xatp,
>> > +                                              unsigned int start)
>> >  {
>> 
>> As before - either you do this adding of annotations completely, or you stop 
> at
>> the IOMMU / MM boundary. 
> 
> I prefer to stop at the IOMMU / MM boundary. The IOMMU boundary is obvious, 
> but what's the definition of MM boundary? I thought this is at MM boundary.

Not sure what you mean to understand. The IOMMU / MM boundary
is the boundary between those two components, there's no talk of
two boundaries here, and hence the question is unclear to me.

Jan
Tian, Kevin May 24, 2016, 7:08 a.m. UTC | #4
> From: Jan Beulich [mailto:JBeulich@suse.com]
> Sent: Tuesday, May 24, 2016 3:02 PM
> 
> >>> On 24.05.16 at 03:16, <quan.xu@intel.com> wrote:
> > On May 24, 2016 12:06 AM, Jan Beulich <JBeulich@suse.com> wrote:
> >> >>> On 18.05.16 at 10:08, <quan.xu@intel.com> wrote:
> >> > --- a/xen/common/memory.c
> >> > +++ b/xen/common/memory.c
> >> > @@ -633,9 +633,9 @@ static long
> >> memory_exchange(XEN_GUEST_HANDLE_PARAM(xen_memory_exchange_t)
> >> arg)
> >> >      return rc;
> >> >  }
> >> >
> >> > -static int xenmem_add_to_physmap(struct domain *d,
> >> > -                                 struct xen_add_to_physmap *xatp,
> >> > -                                 unsigned int start)
> >> > +static int __must_check xenmem_add_to_physmap(struct domain *d,
> >> > +                                              struct xen_add_to_physmap
> > *xatp,
> >> > +                                              unsigned int start)
> >> >  {
> >>
> >> As before - either you do this adding of annotations completely, or you stop
> > at
> >> the IOMMU / MM boundary.
> >
> > I prefer to stop at the IOMMU / MM boundary. The IOMMU boundary is obvious,
> > but what's the definition of MM boundary? I thought this is at MM boundary.
> 
> Not sure what you mean to understand. The IOMMU / MM boundary
> is the boundary between those two components, there's no talk of
> two boundaries here, and hence the question is unclear to me.
> 
> Jan

Hi, Quan,

A file-based map of the IOMMU/MM boundary is under arch/x86/mm. You
need to focus on the low-level interaction between the IOMMU and MM
components, i.e. when some state change in MM code (mostly a p2m change)
needs to conduct IOMMU operations.

The xenmem call above is at a much higher level, and will be routed to
various MM operations internally, so you don't need to bother with it here.

Thanks
Kevin
Quan Xu May 24, 2016, 8:11 a.m. UTC | #5
On May 24, 2016 3:09 PM, Tian, Kevin <kevin.tian@intel.com> wrote:
> > From: Jan Beulich [mailto:JBeulich@suse.com]
> > Sent: Tuesday, May 24, 2016 3:02 PM
> >
> > >>> On 24.05.16 at 03:16, <quan.xu@intel.com> wrote:
> > > On May 24, 2016 12:06 AM, Jan Beulich <JBeulich@suse.com> wrote:
> > >> >>> On 18.05.16 at 10:08, <quan.xu@intel.com> wrote:
> > >> > --- a/xen/common/memory.c
> > >> > +++ b/xen/common/memory.c
> > >> > @@ -633,9 +633,9 @@ static long
> > >>
> memory_exchange(XEN_GUEST_HANDLE_PARAM(xen_memory_exchange_t)
> > >> arg)
> > >> >      return rc;
> > >> >  }
> > >> >
> > >> > -static int xenmem_add_to_physmap(struct domain *d,
> > >> > -                                 struct xen_add_to_physmap *xatp,
> > >> > -                                 unsigned int start)
> > >> > +static int __must_check xenmem_add_to_physmap(struct domain *d,
> > >> > +                                              struct
> > >> > +xen_add_to_physmap
> > > *xatp,
> > >> > +                                              unsigned int
> > >> > + start)
> > >> >  {
> > >>
> > >> As before - either you do this adding of annotations completely, or
> > >> you stop
> > > at
> > >> the IOMMU / MM boundary.
> > >
> > > I prefer to stop at the IOMMU / MM boundary. The IOMMU boundary is
> > > obvious, but what's the definition of MM boundary? I thought this is at
> MM boundary.
> >
> > Not sure what you mean to understand. The IOMMU / MM boundary is the
> > boundary between those two components, there's no talk of two
> > boundaries here, and hence the question is unclear to me.
> >
> > Jan
> 
> Hi, Quan,
> 
> A file-based map about IOMMU/MM boundary is under arch/x86/mm. You
> need focus on low-level interaction between IOMMU and MM components,
> i.e. when some state change in MM code (mostly p2m change) needs to
> conduct IOMMU operations.
> 
> Above xenmem is much higher level, which will be routed to various MM
> operations internally so you don't need bother here.

Jan / Kevin,

I thought the IOMMU / MM boundary is the MM functions (high-level callers) which call iommu_* interfaces (such as iommu_map_page / iommu_unmap_page / iommu_iotlb_flush ...).
In this case, xenmem_add_to_physmap() indeed calls iommu_iotlb_flush(), but xenmem_add_to_physmap() may be a hypervisor interface, instead of an MM interface.

If I drop this __must_check and fix patch 3 / patch 5, then I think __must_check would not be a block issue.

Quan
Jan Beulich May 24, 2016, 8:34 a.m. UTC | #6
>>> On 24.05.16 at 10:11, <quan.xu@intel.com> wrote:
> I thought the IOMMU / MM boundary is the MM functions (high level callers) 
> which call iommu_* interfaces (such as,  iommu_map_page / iommu_unmap_page / 
> iommu_iotlb_flush ...).

Exactly - the boundary is _in_ those MM functions, at the points where
they call IOMMU ones.

> For this case, the xenmem_add_to_physmap() indeed calls iommu_iotlb_flush(), 
>  but xenmem_add_to_physmap() may be hypervisor interface, instead of MM 
> interface.
> 
> If I drop this __must_check and fix patch 3 / patch 5, then I think 
> __must_check would not be a block issue.

Not sure what "a block issue" is supposed to mean here, but indeed
if you drop the annotations from non-IOMMU functions (unless, as
said, you mean to also add them further up the call trees), then I
think things should be coming close.

Jan
Quan Xu May 24, 2016, 8:44 a.m. UTC | #7
On May 24, 2016 4:34 PM, Jan Beulich <JBeulich@suse.com> wrote:
> but indeed if you
> drop the annotations from non-IOMMU functions (unless, as said, you mean
> to also add them further up the call trees), then I think things should be
> coming close.
> 

I'll drop the annotations from non-IOMMU functions.
Quan

Patch

diff --git a/xen/arch/arm/p2m.c b/xen/arch/arm/p2m.c
index db21433..fe201d0 100644
--- a/xen/arch/arm/p2m.c
+++ b/xen/arch/arm/p2m.c
@@ -1178,7 +1178,10 @@  out:
     if ( flush )
     {
         flush_tlb_domain(d);
-        iommu_iotlb_flush(d, sgfn, egfn - sgfn);
+        ret = iommu_iotlb_flush(d, sgfn, egfn - sgfn);
+
+        if ( !rc )
+            rc = ret;
     }
 
     while ( (pg = page_list_remove_head(&free_pages)) )
diff --git a/xen/common/memory.c b/xen/common/memory.c
index 644f81a..754c33c 100644
--- a/xen/common/memory.c
+++ b/xen/common/memory.c
@@ -633,9 +633,9 @@  static long memory_exchange(XEN_GUEST_HANDLE_PARAM(xen_memory_exchange_t) arg)
     return rc;
 }
 
-static int xenmem_add_to_physmap(struct domain *d,
-                                 struct xen_add_to_physmap *xatp,
-                                 unsigned int start)
+static int __must_check xenmem_add_to_physmap(struct domain *d,
+                                              struct xen_add_to_physmap *xatp,
+                                              unsigned int start)
 {
     unsigned int done = 0;
     long rc = 0;
@@ -677,9 +677,19 @@  static int xenmem_add_to_physmap(struct domain *d,
 #ifdef CONFIG_HAS_PASSTHROUGH
     if ( need_iommu(d) )
     {
+        int ret;
+
         this_cpu(iommu_dont_flush_iotlb) = 0;
-        iommu_iotlb_flush(d, xatp->idx - done, done);
-        iommu_iotlb_flush(d, xatp->gpfn - done, done);
+
+        ret = iommu_iotlb_flush(d, xatp->idx - done, done);
+
+        if ( unlikely(ret) && rc >= 0 )
+            rc = ret;
+
+        ret = iommu_iotlb_flush(d, xatp->gpfn - done, done);
+
+        if ( unlikely(ret) && rc >= 0 )
+            rc = ret;
     }
 #endif
 
diff --git a/xen/drivers/passthrough/iommu.c b/xen/drivers/passthrough/iommu.c
index e04db16..54d307b 100644
--- a/xen/drivers/passthrough/iommu.c
+++ b/xen/drivers/passthrough/iommu.c
@@ -315,24 +315,29 @@  static void iommu_free_pagetables(unsigned long unused)
                             cpumask_cycle(smp_processor_id(), &cpu_online_map));
 }
 
-void iommu_iotlb_flush(struct domain *d, unsigned long gfn, unsigned int page_count)
+int iommu_iotlb_flush(struct domain *d, unsigned long gfn,
+                      unsigned int page_count)
 {
     const struct domain_iommu *hd = dom_iommu(d);
 
     if ( !iommu_enabled || !hd->platform_ops || !hd->platform_ops->iotlb_flush )
-        return;
+        return 0;
 
     hd->platform_ops->iotlb_flush(d, gfn, page_count);
+
+    return 0;
 }
 
-void iommu_iotlb_flush_all(struct domain *d)
+int iommu_iotlb_flush_all(struct domain *d)
 {
     const struct domain_iommu *hd = dom_iommu(d);
 
     if ( !iommu_enabled || !hd->platform_ops || !hd->platform_ops->iotlb_flush_all )
-        return;
+        return 0;
 
     hd->platform_ops->iotlb_flush_all(d);
+
+    return 0;
 }
 
 int __init iommu_setup(void)
diff --git a/xen/drivers/passthrough/x86/iommu.c b/xen/drivers/passthrough/x86/iommu.c
index b64b98f..a18a608 100644
--- a/xen/drivers/passthrough/x86/iommu.c
+++ b/xen/drivers/passthrough/x86/iommu.c
@@ -104,8 +104,9 @@  int arch_iommu_populate_page_table(struct domain *d)
     this_cpu(iommu_dont_flush_iotlb) = 0;
 
     if ( !rc )
-        iommu_iotlb_flush_all(d);
-    else if ( rc != -ERESTART )
+        rc = iommu_iotlb_flush_all(d);
+
+    if ( rc && rc != -ERESTART )
         iommu_teardown(d);
 
     return rc;
diff --git a/xen/include/xen/iommu.h b/xen/include/xen/iommu.h
index 14041a1..27e2d3e 100644
--- a/xen/include/xen/iommu.h
+++ b/xen/include/xen/iommu.h
@@ -61,7 +61,7 @@  int deassign_device(struct domain *d, u16 seg, u8 bus, u8 devfn);
 
 void arch_iommu_domain_destroy(struct domain *d);
 int arch_iommu_domain_init(struct domain *d);
-int arch_iommu_populate_page_table(struct domain *d);
+int __must_check arch_iommu_populate_page_table(struct domain *d);
 void arch_iommu_check_autotranslated_hwdom(struct domain *d);
 
 int iommu_construct(struct domain *d);
@@ -200,8 +200,9 @@  int iommu_do_pci_domctl(struct xen_domctl *, struct domain *d,
 int iommu_do_domctl(struct xen_domctl *, struct domain *d,
                     XEN_GUEST_HANDLE_PARAM(xen_domctl_t));
 
-void iommu_iotlb_flush(struct domain *d, unsigned long gfn, unsigned int page_count);
-void iommu_iotlb_flush_all(struct domain *d);
+int __must_check iommu_iotlb_flush(struct domain *d, unsigned long gfn,
+                                   unsigned int page_count);
+int __must_check iommu_iotlb_flush_all(struct domain *d);
 
 /*
  * The purpose of the iommu_dont_flush_iotlb optional cpu flag is to