Message ID: 1463558911-98187-2-git-send-email-quan.xu@intel.com (mailing list archive)
>>> On 18.05.16 at 10:08, <quan.xu@intel.com> wrote:
> --- a/xen/drivers/passthrough/vtd/iommu.c
> +++ b/xen/drivers/passthrough/vtd/iommu.c
> @@ -557,14 +557,16 @@ static void iommu_flush_all(void)
>      }
>  }
>
> -static void __intel_iommu_iotlb_flush(struct domain *d, unsigned long gfn,
> -    int dma_old_pte_present, unsigned int page_count)
> +static int __intel_iommu_iotlb_flush(struct domain *d, unsigned long gfn,
> +                                     bool_t dma_old_pte_present,
> +                                     unsigned int page_count)

I realize you say so in the overview mail, but the continuing lack of
__must_check here causes review trouble again. And I have a hard time
seeing how adding these annotations right away would "disrupt the
order", as long as the series is properly ordered / broken up.

> @@ -579,23 +581,28 @@ static void __intel_iommu_iotlb_flush(struct domain *d, unsigned long gfn,
>
>      flush_dev_iotlb = find_ats_dev_drhd(iommu) ? 1 : 0;
>      iommu_domid= domain_iommu_domid(d, iommu);
> +
>      if ( iommu_domid == -1 )

I appreciate you adding blank lines where needed, but this one looks
spurious.

> @@ -1391,13 +1399,26 @@ int domain_context_mapping_one(
>      spin_unlock(&iommu->lock);
>
>      /* Context entry was previously non-present (with domid 0). */
> -    if ( iommu_flush_context_device(iommu, 0, (((u16)bus) << 8) | devfn,
> -                                    DMA_CCMD_MASK_NOBIT, 1) )
> -        iommu_flush_write_buffer(iommu);
> -    else
> +    rc = iommu_flush_context_device(iommu, 0, (((u16)bus) << 8) | devfn,

If you already touch such code, I'd appreciate if you did away with the
open coding of pre-canned macros or inline functions (PCI_BDF2() in
this case).

> +                                    DMA_CCMD_MASK_NOBIT, 1);
> +
> +    /*
> +     * The current logic for rc returns:
> +     *   - positive  invoke iommu_flush_write_buffer to flush cache.
> +     *   - zero      success.
> +     *   - negative  failure. Continue to flush IOMMU IOTLB on a best
> +     *     effort basis.
> +     */
> +    if ( rc <= 0 )
>      {
>          int flush_dev_iotlb = find_ats_dev_drhd(iommu) ? 1 : 0;
> -        iommu_flush_iotlb_dsi(iommu, 0, 1, flush_dev_iotlb);
> +
> +        rc = iommu_flush_iotlb_dsi(iommu, 0, 1, flush_dev_iotlb);

If rc was negative before this call, you may end up returning success
without having been successful. Furthermore I think it was you who last
time round reminded me that iommu_flush_iotlb_dsi() can also return 1,
which you don't take care of.

> @@ -1522,6 +1544,7 @@ int domain_context_unmap_one(
>      iommu_flush_cache_entry(context, sizeof(struct context_entry));
>
>      iommu_domid= domain_iommu_domid(domain, iommu);
> +
>      if ( iommu_domid == -1 )

Seemingly stray addition of blank line again (more such below). And the
code below has the same issue as that above.

Jan
On May 23, 2016 9:31 PM, Jan Beulich <JBeulich@suse.com> wrote:
> >>> On 18.05.16 at 10:08, <quan.xu@intel.com> wrote:
> > --- a/xen/drivers/passthrough/vtd/iommu.c
> > +++ b/xen/drivers/passthrough/vtd/iommu.c
> > @@ -557,14 +557,16 @@ static void iommu_flush_all(void)
> >      }
> >  }
> >
> > -static void __intel_iommu_iotlb_flush(struct domain *d, unsigned long gfn,
> > -    int dma_old_pte_present, unsigned int page_count)
> > +static int __intel_iommu_iotlb_flush(struct domain *d, unsigned long gfn,
> > +                                     bool_t dma_old_pte_present,
> > +                                     unsigned int page_count)
>
> I realize you say so in the overview mail, but the continuing lack of
> __must_check here causes review trouble again. And I have a hard time
> seeing how adding these annotations right away would "disrupt the
> order", as long as the series is properly ordered / broken up.
>

If I add __must_check annotations here right now, e.g.

-static void intel_iommu_iotlb_flush()
+static int __must_check iommu_flush_iotlb_pages()

...

@@ -179,8 +179,9 @@ struct iommu_ops {
-    void (*iotlb_flush)(struct domain *d, unsigned long gfn,
                         unsigned int page_count);
+    int __must_check (*iotlb_flush)(struct domain *d, unsigned long gfn,
                                     unsigned int page_count);
...
}

should belong here too.

> > @@ -579,23 +581,28 @@ static void __intel_iommu_iotlb_flush(struct
> > domain *d, unsigned long gfn,
> >
> >      flush_dev_iotlb = find_ats_dev_drhd(iommu) ? 1 : 0;
> >      iommu_domid= domain_iommu_domid(d, iommu);
> > +
> >      if ( iommu_domid == -1 )
>
> I appreciate you adding blank lines where needed, but this one looks
> spurious.
>
> > @@ -1391,13 +1399,26 @@ int domain_context_mapping_one(
> >      spin_unlock(&iommu->lock);
> >
> >      /* Context entry was previously non-present (with domid 0). */
> > -    if ( iommu_flush_context_device(iommu, 0, (((u16)bus) << 8) | devfn,
> > -                                    DMA_CCMD_MASK_NOBIT, 1) )
> > -        iommu_flush_write_buffer(iommu);
> > -    else
> > +    rc = iommu_flush_context_device(iommu, 0, (((u16)bus) << 8) | devfn,
>
> If you already touch such code, I'd appreciate if you did away with
> the open coding of pre-canned macros or inline functions (PCI_BDF2()
> in this case).

I will enhance it in v6.

> > +                                    DMA_CCMD_MASK_NOBIT, 1);
> > +
> > +    /*
> > +     * The current logic for rc returns:
> > +     *   - positive  invoke iommu_flush_write_buffer to flush cache.
> > +     *   - zero      success.
> > +     *   - negative  failure. Continue to flush IOMMU IOTLB on a best
> > +     *     effort basis.
> > +     */
> > +    if ( rc <= 0 )
> >      {
> >          int flush_dev_iotlb = find_ats_dev_drhd(iommu) ? 1 : 0;
> > -        iommu_flush_iotlb_dsi(iommu, 0, 1, flush_dev_iotlb);
> > +
> > +        rc = iommu_flush_iotlb_dsi(iommu, 0, 1, flush_dev_iotlb);
>
> If rc was negative before this call, you may end up returning success
> without having been successful. Furthermore I think it was you who
> last time round reminded me that iommu_flush_iotlb_dsi() can also
> return 1, which you don't take care of.
>

Yes, iommu_flush_iotlb_dsi() can also return 1.
Look at the call tree, at the beginning of
flush_context_qi()/flush_iotlb_qi(), or
flush_context_reg()/flush_iotlb_reg()..

If rc was negative when we call iommu_flush_context_device(), it is
impossible for iommu_flush_iotlb_dsi() to return 1.

IMO, furthermore, this should not belong in a comment.

> > @@ -1522,6 +1544,7 @@ int domain_context_unmap_one(
> >      iommu_flush_cache_entry(context, sizeof(struct context_entry));
> >
> >      iommu_domid= domain_iommu_domid(domain, iommu);
> > +
> >      if ( iommu_domid == -1 )
>
> Seemingly stray addition of blank line again (more such below). And
> the code below has the same issue as that above.
>

I will enhance it in v6.

Quan
>>> On 23.05.16 at 17:22, <quan.xu@intel.com> wrote:
> On May 23, 2016 9:31 PM, Jan Beulich <JBeulich@suse.com> wrote:
>> >>> On 18.05.16 at 10:08, <quan.xu@intel.com> wrote:
>> > --- a/xen/drivers/passthrough/vtd/iommu.c
>> > +++ b/xen/drivers/passthrough/vtd/iommu.c
>> > @@ -557,14 +557,16 @@ static void iommu_flush_all(void)
>> >      }
>> >  }
>> >
>> > -static void __intel_iommu_iotlb_flush(struct domain *d, unsigned long gfn,
>> > -    int dma_old_pte_present, unsigned int page_count)
>> > +static int __intel_iommu_iotlb_flush(struct domain *d, unsigned long gfn,
>> > +                                     bool_t dma_old_pte_present,
>> > +                                     unsigned int page_count)
>>
>> I realize you say so in the overview mail, but the continuing lack of
>> __must_check here causes review trouble again. And I have a hard time
>> seeing how adding these annotations right away would "disrupt the
>> order", as long as the series is properly ordered / broken up.
>>
>
> If I add __must_check annotations here right now, e.g.
>
> -static void intel_iommu_iotlb_flush()
> +static int __must_check iommu_flush_iotlb_pages()
>
> ...
>
> @@ -179,8 +179,9 @@ struct iommu_ops {
> -    void (*iotlb_flush)(struct domain *d, unsigned long gfn,
>                          unsigned int page_count);
> +    int __must_check (*iotlb_flush)(struct domain *d, unsigned long gfn,
>                                      unsigned int page_count);
> ...
> }
>
> Should belong to here too.

Correct. And where's the problem?

>> > +                                    DMA_CCMD_MASK_NOBIT, 1);
>> > +
>> > +    /*
>> > +     * The current logic for rc returns:
>> > +     *   - positive  invoke iommu_flush_write_buffer to flush cache.
>> > +     *   - zero      success.
>> > +     *   - negative  failure. Continue to flush IOMMU IOTLB on a best
>> > +     *     effort basis.
>> > +     */
>> > +    if ( rc <= 0 )
>> >      {
>> >          int flush_dev_iotlb = find_ats_dev_drhd(iommu) ? 1 : 0;
>> > -        iommu_flush_iotlb_dsi(iommu, 0, 1, flush_dev_iotlb);
>> > +
>> > +        rc = iommu_flush_iotlb_dsi(iommu, 0, 1, flush_dev_iotlb);
>>
>> If rc was negative before this call, you may end up returning success
>> without having been successful. Furthermore I think it was you who
>> last time round reminded me that iommu_flush_iotlb_dsi() can also
>> return 1, which you don't take care of.
>>
>
> Yes, the iommu_flush_iotlb_dsi() can also return 1.
> Look at the call tree, at the beginning of
> flush_context_qi()/flush_iotlb_qi(), or
> flush_context_reg()/flush_iotlb_reg()..
>
> If rc was negative when we call iommu_flush_context_device(), it is
> impossible to return 1 for iommu_flush_iotlb_dsi().

This is far from obvious, so please add a respective ASSERT() if you
want to rely on that.

> IMO, furthermore, this should not belong to comment.

???

Jan
On May 23, 2016 11:43 PM, Jan Beulich <JBeulich@suse.com> wrote:
> >>> On 23.05.16 at 17:22, <quan.xu@intel.com> wrote:
> > On May 23, 2016 9:31 PM, Jan Beulich <JBeulich@suse.com> wrote:
> >> >>> On 18.05.16 at 10:08, <quan.xu@intel.com> wrote:
> >> > --- a/xen/drivers/passthrough/vtd/iommu.c
> >> > +++ b/xen/drivers/passthrough/vtd/iommu.c
> >> > @@ -557,14 +557,16 @@ static void iommu_flush_all(void)
> >> >      }
> >> >  }
> >> >
> >> > -static void __intel_iommu_iotlb_flush(struct domain *d, unsigned long gfn,
> >> > -    int dma_old_pte_present, unsigned int page_count)
> >> > +static int __intel_iommu_iotlb_flush(struct domain *d, unsigned long gfn,
> >> > +                                     bool_t dma_old_pte_present,
> >> > +                                     unsigned int page_count)
> >>
> >> I realize you say so in the overview mail, but the continuing lack
> >> of __must_check here causes review trouble again. And I have a hard
> >> time seeing how adding these annotations right away would "disrupt
> >> the order", as long as the series is properly ordered / broken up.
> >>
> >
> > If I add __must_check annotations here right now, e.g.
> >
> > -static void intel_iommu_iotlb_flush()
> > +static int __must_check iommu_flush_iotlb_pages()
> >
> > ...
> >
> > @@ -179,8 +179,9 @@ struct iommu_ops {
> > -    void (*iotlb_flush)(struct domain *d, unsigned long gfn,
> >                          unsigned int page_count);
> > +    int __must_check (*iotlb_flush)(struct domain *d, unsigned long gfn,
> >                                      unsigned int page_count);
> > ...
> > }
> >
> > Should belong to here too.
>
> Correct. And where's the problem?
>

IMO it is not a big deal..

I think this makes patch 1 fat.. why not focus on the positive
propagation value from IOMMU flush interfaces in this patch?
If we are clear, I will add the annotations and rename things in other
patches; that is acceptable to me.

Furthermore, we would also need to add (from patch 4):

-static void dma_pte_clear_one(struct domain *domain, u64 addr)
+static int __must_check dma_pte_clear_one(struct domain *domain, u64 addr)
{
...
-    __intel_iommu_iotlb_flush(domain, addr >> PAGE_SHIFT_4K, 1, 1);
+    rc = __intel_iommu_iotlb_flush(domain, addr >> PAGE_SHIFT_4K, 1, 1);
...
}

in this patch.

> >> > +                                    DMA_CCMD_MASK_NOBIT, 1);
> >> > +
> >> > +    /*
> >> > +     * The current logic for rc returns:
> >> > +     *   - positive  invoke iommu_flush_write_buffer to flush cache.
> >> > +     *   - zero      success.
> >> > +     *   - negative  failure. Continue to flush IOMMU IOTLB on a best
> >> > +     *     effort basis.
> >> > +     */
> >> > +    if ( rc <= 0 )
> >> >      {
> >> >          int flush_dev_iotlb = find_ats_dev_drhd(iommu) ? 1 : 0;
> >> > -        iommu_flush_iotlb_dsi(iommu, 0, 1, flush_dev_iotlb);
> >> > +
> >> > +        rc = iommu_flush_iotlb_dsi(iommu, 0, 1, flush_dev_iotlb);
> >>
> >> If rc was negative before this call, you may end up returning
> >> success without having been successful. Furthermore I think it was
> >> you who last time round reminded me that iommu_flush_iotlb_dsi()
> >> can also return 1, which you don't take care of.
> >>
> >
> > Yes, the iommu_flush_iotlb_dsi() can also return 1.
> > Look at the call tree, at the beginning of
> > flush_context_qi()/flush_iotlb_qi(), or
> > flush_context_reg()/flush_iotlb_reg()..
> >
> > If rc was negative when we call iommu_flush_context_device(), it is
> > impossible to return 1 for iommu_flush_iotlb_dsi().
>
> This is far from obvious, so please add a respective ASSERT() if you
> want to rely on that.
>
> > IMO, furthermore, this should not belong to comment.
>
> ???

On second thought, I will add comments and a respective ASSERT()..

Quan
>>> On 25.05.16 at 10:04, <quan.xu@intel.com> wrote:
> On May 23, 2016 11:43 PM, Jan Beulich <JBeulich@suse.com> wrote:
>> >>> On 23.05.16 at 17:22, <quan.xu@intel.com> wrote:
>> > On May 23, 2016 9:31 PM, Jan Beulich <JBeulich@suse.com> wrote:
>> >> >>> On 18.05.16 at 10:08, <quan.xu@intel.com> wrote:
>> >> > --- a/xen/drivers/passthrough/vtd/iommu.c
>> >> > +++ b/xen/drivers/passthrough/vtd/iommu.c
>> >> > @@ -557,14 +557,16 @@ static void iommu_flush_all(void)
>> >> >      }
>> >> >  }
>> >> >
>> >> > -static void __intel_iommu_iotlb_flush(struct domain *d, unsigned long gfn,
>> >> > -    int dma_old_pte_present, unsigned int page_count)
>> >> > +static int __intel_iommu_iotlb_flush(struct domain *d, unsigned long gfn,
>> >> > +                                     bool_t dma_old_pte_present,
>> >> > +                                     unsigned int page_count)
>> >>
>> >> I realize you say so in the overview mail, but the continuing lack
>> >> of __must_check here causes review trouble again. And I have a
>> >> hard time seeing how adding these annotations right away would
>> >> "disrupt the order", as long as the series is properly ordered /
>> >> broken up.
>> >>
>> >
>> > If I add __must_check annotations here right now, e.g.
>> >
>> > -static void intel_iommu_iotlb_flush()
>> > +static int __must_check iommu_flush_iotlb_pages()
>> >
>> > ...
>> >
>> > @@ -179,8 +179,9 @@ struct iommu_ops {
>> > -    void (*iotlb_flush)(struct domain *d, unsigned long gfn,
>> >                          unsigned int page_count);
>> > +    int __must_check (*iotlb_flush)(struct domain *d, unsigned long gfn,
>> >                                      unsigned int page_count);
>> > ...
>> > }
>> >
>> > Should belong to here too.
>>
>> Correct. And where's the problem?
>>
>
> IMO it is not a big deal..
>
> I think this makes this patch 1 fat.. why not focus on the positive
> propagation value from IOMMU flush interfaces in this patch.
> If we are clear I will add annotation and rename them in another
> patches, it is acceptable to me.

The patch getting too large is easy to deal with: Split it at a
reasonable boundary.

One thing I want to be clear about: Any conversion of a void return
type of some function to non-void should be accompanied by it at the
same time becoming __must_check. I dislike having to repeat yet again
what I have been saying a number of times: Without doing so, it is
harder for you as the person writing the patch to verify all callers
deal with errors, and it's perhaps even harder for reviewers to verify
you did.

Jan
On May 25, 2016 4:30 PM, Jan Beulich <JBeulich@suse.com> wrote:
> >>> On 25.05.16 at 10:04, <quan.xu@intel.com> wrote:
> > On May 23, 2016 11:43 PM, Jan Beulich <JBeulich@suse.com> wrote:
> >> >>> On 23.05.16 at 17:22, <quan.xu@intel.com> wrote:
> >> > On May 23, 2016 9:31 PM, Jan Beulich <JBeulich@suse.com> wrote:
> >> >> >>> On 18.05.16 at 10:08, <quan.xu@intel.com> wrote:
> >> >> > --- a/xen/drivers/passthrough/vtd/iommu.c
> >> >> > +++ b/xen/drivers/passthrough/vtd/iommu.c
> >> >> > @@ -557,14 +557,16 @@ static void iommu_flush_all(void)
> >> >> >      }
> >> >> >  }
> >> >> >
> >> >> > -static void __intel_iommu_iotlb_flush(struct domain *d, unsigned long gfn,
> >> >> > -    int dma_old_pte_present, unsigned int page_count)
> >> >> > +static int __intel_iommu_iotlb_flush(struct domain *d, unsigned long gfn,
> >> >> > +                                     bool_t dma_old_pte_present,
> >> >> > +                                     unsigned int page_count)
> >> >>
> >> >> I realize you say so in the overview mail, but the continuing
> >> >> lack of __must_check here causes review trouble again. And I
> >> >> have a hard time seeing how adding these annotations right away
> >> >> would "disrupt the order", as long as the series is properly
> >> >> ordered / broken up.
> >> >>
> >> >
> >> > If I add __must_check annotations here right now, e.g.
> >> >
> >> > -static void intel_iommu_iotlb_flush()
> >> > +static int __must_check iommu_flush_iotlb_pages()
> >> >
> >> > ...
> >> >
> >> > @@ -179,8 +179,9 @@ struct iommu_ops {
> >> > -    void (*iotlb_flush)(struct domain *d, unsigned long gfn,
> >> >                          unsigned int page_count);
> >> > +    int __must_check (*iotlb_flush)(struct domain *d, unsigned long gfn,
> >> >                                      unsigned int page_count);
> >> > ...
> >> > }
> >> >
> >> > Should belong to here too.
> >>
> >> Correct. And where's the problem?
> >>
> >
> > IMO it is not a big deal..
> >
> > I think this makes this patch 1 fat.. why not focus on the positive
> > propagation value from IOMMU flush interfaces in this patch.
> > If we are clear I will add annotation and rename them in another
> > patches, it is acceptable to me.
>
> The patch getting too large is easy to deal with: Split it at a
> reasonable boundary. One thing I want to be clear about: Any
> conversion of a void return type of some function to non-void should
> be accompanied by it at the same time becoming __must_check. I dislike
> having to repeat yet again what I have been saying a number of times:
> Without doing so, it is harder for you as the person writing the patch
> to verify all callers deal with errors, and it's perhaps even harder
> for reviewers to verify you did.
>

It is clear to me, but I was lacking attention to the reviewers' side.
I'll follow your suggestion from now on.. thanks.

Quan
On May 23, 2016 9:31 PM, Jan Beulich <JBeulich@suse.com> wrote:
> >>> On 18.05.16 at 10:08, <quan.xu@intel.com> wrote:
> > --- a/xen/drivers/passthrough/vtd/iommu.c
> > +++ b/xen/drivers/passthrough/vtd/iommu.c
> > @@ -1391,13 +1399,26 @@ int domain_context_mapping_one(
> >      spin_unlock(&iommu->lock);
> >
> >      /* Context entry was previously non-present (with domid 0). */
> > -    if ( iommu_flush_context_device(iommu, 0, (((u16)bus) << 8) | devfn,
> > -                                    DMA_CCMD_MASK_NOBIT, 1) )
> > -        iommu_flush_write_buffer(iommu);
> > -    else
> > +    rc = iommu_flush_context_device(iommu, 0, (((u16)bus) << 8) | devfn,
> > +                                    DMA_CCMD_MASK_NOBIT, 1);
> > +
> > +    /*
> > +     * The current logic for rc returns:
> > +     *   - positive  invoke iommu_flush_write_buffer to flush cache.
> > +     *   - zero      success.
> > +     *   - negative  failure. Continue to flush IOMMU IOTLB on a best
> > +     *     effort basis.
> > +     */
> > +    if ( rc <= 0 )
> >      {
> >          int flush_dev_iotlb = find_ats_dev_drhd(iommu) ? 1 : 0;
> > -        iommu_flush_iotlb_dsi(iommu, 0, 1, flush_dev_iotlb);
> > +
> > +        rc = iommu_flush_iotlb_dsi(iommu, 0, 1, flush_dev_iotlb);
>
> If rc was negative before this call, you may end up returning success
> without having been successful.

You are right. IMO I need to add 'ret' here. Then ..

+    if ( rc <= 0 )
     {
         int flush_dev_iotlb = find_ats_dev_drhd(iommu) ? 1 : 0;
-        iommu_flush_iotlb_dsi(iommu, iommu_domid, 0, flush_dev_iotlb);
+        int ret;
+
+        ret = iommu_flush_iotlb_dsi(iommu, iommu_domid, 0, flush_dev_iotlb);
+        ASSERT(ret <= 0);
+
+        if ( !rc )
+            rc = ret;
+    }

Quan
On May 25, 2016 4:30 PM, Jan Beulich <JBeulich@suse.com> wrote:
> The patch getting too large is easy to deal with: Split it at a
> reasonable boundary.

Jan,
If I follow the below rule, I need to merge most of the patches into
this one. I can't find a reasonable boundary.

I recall your suggestion: top one first, then low level one..
I had better not make this patch the first one, as this is really a low
level one.
Then, I need to change the condition from 'if ( !rc )' to
'if ( rc < 0 )' in my series. (but if this series would be merged
together, I don't need to think about it.)
Does it make sense?

Quan

> It's one thing that I want to be clear: Any conversion of a void
> return type of some function to non-void should be accompanied by it
> at the same time becoming __must_check. I dislike having to repeat yet
> again what I have been saying a number of times: Without doing so, it
> is harder for you as the person writing the patch to verify all
> callers deal with errors, and it's perhaps even harder for reviewers
> to verify you did.
On May 26, 2016 6:38 PM, Xu, Quan <quan.xu@intel.com> wrote:
> On May 25, 2016 4:30 PM, Jan Beulich <JBeulich@suse.com> wrote:
> > The patch getting too large is easy to deal with: Split it at a
> > reasonable boundary.
>
> Jan,
> If I follow the below rule, I need to merge most of patches into this
> one. I can't find a reasonable boundary.
> I recall your suggestion: top one first, then low level one..
> I am better not to make this patch as a first one, as this is really
> a low level one.
> Then, I need to change condition from 'if ( !rc )'

Sorry, a typo: 'if ( rc )'

btw, the __must_check annotation is helpful, and we have had multiple
rounds of review.. I think a big patch is not a big deal.

Quan

> to 'if ( rc < 0 )' in my series.
> (but if this series would be merged together, I don't need to think
> about it.)
> Does it make sense?
>
> Quan
>>> "Xu, Quan" <quan.xu@intel.com> 05/26/16 12:38 PM >>>
>On May 25, 2016 4:30 PM, Jan Beulich <JBeulich@suse.com> wrote:
>> The patch getting too large is easy to deal with: Split it at a
>> reasonable boundary.
>
>If I follow the below rule, I need to merge most of patches into this
>one. I can't find a reasonable boundary.

As before, boundaries are pretty easy to set: Just change one function
at a time, or when two or a few are very closely related, do the
changes together. But try to avoid changing ones in the same patch that
call each other (unless of course there's some sort of recursion). But
yes, as you say in the other reply, a big patch may not be a problem as
long as it remains reasonably understandable (e.g. many small hunks are
usually fine, but a single or a few hunks changing dozens or even
hundreds of lines in one go are usually hard to review).

>I recall your suggestion: top one first, then low level one..
>I am better not to make this patch as a first one, as this is really a
>low level one.
>Then, I need to change condition from 'if ( !rc )' to 'if ( rc < 0 )'
>in my series. (but if this series would be merged together, I don't
>need to think about it.)
>Does it make sense?

I'm afraid I'm lacking context.

Jan
On Thursday, May 26, 2016 11:57 PM, Jan Beulich <jbeulich@suse.com> wrote:
> >>> "Xu, Quan" <quan.xu@intel.com> 05/26/16 12:38 PM >>>
> >On May 25, 2016 4:30 PM, Jan Beulich <JBeulich@suse.com> wrote:
> >> The patch getting too large is easy to deal with: Split it at a
> >> reasonable boundary.
> >I recall your suggestion: top one first, then low level one..
> >I am better not to make this patch as a first one, as this is really
> >a low level one.
> >Then, I need to change condition from 'if ( !rc )' to 'if ( rc < 0 )'
> >in my series. (but if this series would be merged together, I don't
> >need to think about it.) Does it make sense?
>
> I'm afraid I'm lacking context.

When the propagation value is positive, this patch fixes this flush
issue as follows:
  - call iommu_flush_write_buffer() to flush cache.
  - return zero.

So the condition can be 'if ( rc )' for error; but if this patch is not
the first one in the series, I need to change the condition from
'if ( rc )' to 'if ( rc < 0 )'.. as the propagation value may be
positive..

Quan
diff --git a/xen/drivers/passthrough/vtd/iommu.c b/xen/drivers/passthrough/vtd/iommu.c
index db83949..3ece815 100644
--- a/xen/drivers/passthrough/vtd/iommu.c
+++ b/xen/drivers/passthrough/vtd/iommu.c
@@ -557,14 +557,16 @@ static void iommu_flush_all(void)
     }
 }
 
-static void __intel_iommu_iotlb_flush(struct domain *d, unsigned long gfn,
-    int dma_old_pte_present, unsigned int page_count)
+static int __intel_iommu_iotlb_flush(struct domain *d, unsigned long gfn,
+                                     bool_t dma_old_pte_present,
+                                     unsigned int page_count)
 {
     struct domain_iommu *hd = dom_iommu(d);
     struct acpi_drhd_unit *drhd;
     struct iommu *iommu;
     int flush_dev_iotlb;
     int iommu_domid;
+    int rc = 0;
 
     /*
      * No need pcideves_lock here because we have flush
@@ -579,23 +581,28 @@ static void __intel_iommu_iotlb_flush(struct domain *d, unsigned long gfn,
 
         flush_dev_iotlb = find_ats_dev_drhd(iommu) ? 1 : 0;
         iommu_domid= domain_iommu_domid(d, iommu);
+
         if ( iommu_domid == -1 )
             continue;
 
         if ( page_count != 1 || gfn == INVALID_GFN )
-        {
-            if ( iommu_flush_iotlb_dsi(iommu, iommu_domid,
-                                       0, flush_dev_iotlb) )
-                iommu_flush_write_buffer(iommu);
-        }
+            rc = iommu_flush_iotlb_dsi(iommu, iommu_domid,
+                                       0, flush_dev_iotlb);
         else
+            rc = iommu_flush_iotlb_psi(iommu, iommu_domid,
+                                       (paddr_t)gfn << PAGE_SHIFT_4K,
+                                       PAGE_ORDER_4K,
+                                       !dma_old_pte_present,
+                                       flush_dev_iotlb);
+
+        if ( rc > 0 )
         {
-            if ( iommu_flush_iotlb_psi(iommu, iommu_domid,
-                                       (paddr_t)gfn << PAGE_SHIFT_4K, PAGE_ORDER_4K,
-                                       !dma_old_pte_present, flush_dev_iotlb) )
-                iommu_flush_write_buffer(iommu);
+            iommu_flush_write_buffer(iommu);
+            rc = 0;
         }
     }
+
+    return rc;
 }
 
 static void intel_iommu_iotlb_flush(struct domain *d, unsigned long gfn, unsigned int page_count)
@@ -1278,6 +1285,7 @@ int domain_context_mapping_one(
     u64 maddr, pgd_maddr;
     u16 seg = iommu->intel->drhd->segment;
     int agaw;
+    int rc;
 
     ASSERT(pcidevs_locked());
     spin_lock(&iommu->lock);
@@ -1391,13 +1399,26 @@ int domain_context_mapping_one(
     spin_unlock(&iommu->lock);
 
     /* Context entry was previously non-present (with domid 0). */
-    if ( iommu_flush_context_device(iommu, 0, (((u16)bus) << 8) | devfn,
-                                    DMA_CCMD_MASK_NOBIT, 1) )
-        iommu_flush_write_buffer(iommu);
-    else
+    rc = iommu_flush_context_device(iommu, 0, (((u16)bus) << 8) | devfn,
+                                    DMA_CCMD_MASK_NOBIT, 1);
+
+    /*
+     * The current logic for rc returns:
+     *   - positive  invoke iommu_flush_write_buffer to flush cache.
+     *   - zero      success.
+     *   - negative  failure. Continue to flush IOMMU IOTLB on a best
+     *     effort basis.
+     */
+    if ( rc <= 0 )
     {
         int flush_dev_iotlb = find_ats_dev_drhd(iommu) ? 1 : 0;
-        iommu_flush_iotlb_dsi(iommu, 0, 1, flush_dev_iotlb);
+
+        rc = iommu_flush_iotlb_dsi(iommu, 0, 1, flush_dev_iotlb);
+    }
+    else
+    {
+        iommu_flush_write_buffer(iommu);
+        rc = 0;
     }
 
     set_bit(iommu->index, &hd->arch.iommu_bitmap);
@@ -1407,7 +1428,7 @@ int domain_context_mapping_one(
     if ( !seg )
         me_wifi_quirk(domain, bus, devfn, MAP_ME_PHANTOM_FUNC);
 
-    return 0;
+    return rc;
 }
 
 static int domain_context_mapping(
@@ -1502,6 +1523,7 @@ int domain_context_unmap_one(
     struct context_entry *context, *context_entries;
     u64 maddr;
     int iommu_domid;
+    int rc;
 
     ASSERT(pcidevs_locked());
     spin_lock(&iommu->lock);
@@ -1522,6 +1544,7 @@ int domain_context_unmap_one(
     iommu_flush_cache_entry(context, sizeof(struct context_entry));
 
     iommu_domid= domain_iommu_domid(domain, iommu);
+
     if ( iommu_domid == -1 )
     {
         spin_unlock(&iommu->lock);
@@ -1529,14 +1552,27 @@ int domain_context_unmap_one(
         return -EINVAL;
     }
 
-    if ( iommu_flush_context_device(iommu, iommu_domid,
+    rc = iommu_flush_context_device(iommu, iommu_domid,
                                     (((u16)bus) << 8) | devfn,
-                                    DMA_CCMD_MASK_NOBIT, 0) )
-        iommu_flush_write_buffer(iommu);
-    else
+                                    DMA_CCMD_MASK_NOBIT, 0);
+
+    /*
+     * The current logic for rc returns:
+     *   - positive  invoke iommu_flush_write_buffer to flush cache.
+     *   - zero      success.
+     *   - negative  failure. Continue to flush IOMMU IOTLB on a best
+     *     effort basis.
+     */
+    if ( rc <= 0 )
     {
         int flush_dev_iotlb = find_ats_dev_drhd(iommu) ? 1 : 0;
-        iommu_flush_iotlb_dsi(iommu, iommu_domid, 0, flush_dev_iotlb);
+
+        rc = iommu_flush_iotlb_dsi(iommu, iommu_domid, 0, flush_dev_iotlb);
+    }
+    else
+    {
+        iommu_flush_write_buffer(iommu);
+        rc = 0;
     }
 
     spin_unlock(&iommu->lock);
@@ -1545,7 +1581,7 @@ int domain_context_unmap_one(
     if ( !iommu->intel->drhd->segment )
         me_wifi_quirk(domain, bus, devfn, UNMAP_ME_PHANTOM_FUNC);
 
-    return 0;
+    return rc;
 }
 
 static int domain_context_unmap(
@@ -1750,32 +1786,43 @@ static int intel_iommu_unmap_page(struct domain *d, unsigned long gfn)
     return 0;
 }
 
-void iommu_pte_flush(struct domain *d, u64 gfn, u64 *pte,
-                     int order, int present)
+int iommu_pte_flush(struct domain *d, u64 gfn, u64 *pte,
+                    int order, bool_t present)
 {
     struct acpi_drhd_unit *drhd;
     struct iommu *iommu = NULL;
     struct domain_iommu *hd = dom_iommu(d);
     int flush_dev_iotlb;
     int iommu_domid;
+    int rc = 0;
 
     iommu_flush_cache_entry(pte, sizeof(struct dma_pte));
 
     for_each_drhd_unit ( drhd )
     {
         iommu = drhd->iommu;
+
         if ( !test_bit(iommu->index, &hd->arch.iommu_bitmap) )
             continue;
 
         flush_dev_iotlb = find_ats_dev_drhd(iommu) ? 1 : 0;
         iommu_domid= domain_iommu_domid(d, iommu);
+
         if ( iommu_domid == -1 )
             continue;
-        if ( iommu_flush_iotlb_psi(iommu, iommu_domid,
+
+        rc = iommu_flush_iotlb_psi(iommu, iommu_domid,
                                    (paddr_t)gfn << PAGE_SHIFT_4K,
-                                   order, !present, flush_dev_iotlb) )
+                                   order, !present, flush_dev_iotlb);
+
+        if ( rc > 0 )
+        {
             iommu_flush_write_buffer(iommu);
+            rc = 0;
+        }
     }
+
+    return rc;
 }
 
 static int __init vtd_ept_page_compatible(struct iommu *iommu)
diff --git a/xen/include/asm-x86/iommu.h b/xen/include/asm-x86/iommu.h
index e82a2f0..43f1620 100644
--- a/xen/include/asm-x86/iommu.h
+++ b/xen/include/asm-x86/iommu.h
@@ -27,7 +27,7 @@ int iommu_setup_hpet_msi(struct msi_desc *);
 /* While VT-d specific, this must get declared in a generic header. */
 int adjust_vtd_irq_affinities(void);
-void iommu_pte_flush(struct domain *d, u64 gfn, u64 *pte, int order, int present);
+int iommu_pte_flush(struct domain *d, u64 gfn, u64 *pte, int order, bool_t present);
 bool_t iommu_supports_eim(void);
 int iommu_enable_x2apic_IR(void);
 void iommu_disable_x2apic_IR(void);
The propagation value from IOMMU flush interfaces may be positive, which
indicates callers need to flush cache, not one of failures.

When the propagation value is positive, this patch fixes this flush
issue as follows:
  - call iommu_flush_write_buffer() to flush cache.
  - return zero.

Signed-off-by: Quan Xu <quan.xu@intel.com>

CC: Kevin Tian <kevin.tian@intel.com>
CC: Feng Wu <feng.wu@intel.com>
CC: Keir Fraser <keir@xen.org>
CC: Jan Beulich <jbeulich@suse.com>
CC: Andrew Cooper <andrew.cooper3@citrix.com>
---
 xen/drivers/passthrough/vtd/iommu.c | 101 ++++++++++++++++++++++++++----------
 xen/include/asm-x86/iommu.h         |   2 +-
 2 files changed, 75 insertions(+), 28 deletions(-)