[v2,1/1] i915/gem: drop wbinvd_on_all_cpus usage

Message ID 20220414181923.25631-2-michael.cheng@intel.com (mailing list archive)
State New, archived
Series Drop wbinvd_on_all_cpus usage

Commit Message

Michael Cheng April 14, 2022, 6:19 p.m. UTC
The previous concern with using drm_clflush_sg was that we don't know what
the sg_table is pointing to, so wbinvd_on_all_cpus was used to flush
everything at once, out of paranoia.

To make i915 more architecture-neutral and less paranoid, let's attempt to
use drm_clflush_sg to flush the pages when the GPU wants to read from main
memory.

Signed-off-by: Michael Cheng <michael.cheng@intel.com>
---
 drivers/gpu/drm/i915/gem/i915_gem_dmabuf.c | 9 ++-------
 1 file changed, 2 insertions(+), 7 deletions(-)

Comments

Lucas De Marchi June 17, 2022, 11:30 p.m. UTC | #1
On Thu, Apr 14, 2022 at 11:19:23AM -0700, Michael Cheng wrote:
>The previous concern with using drm_clflush_sg was that we don't know what
>the sg_table is pointing to, so wbinvd_on_all_cpus was used to flush
>everything at once, out of paranoia.

hmm... and now we know it is backed by struct pages? I'm not sure I
follow what we didn't know before and now we do.

Thomas / Matthew, could you take another look here to see if it seems
correct to you.


thanks
Lucas De Marchi


>To make i915 more architecture-neutral and less paranoid, let's attempt to
>use drm_clflush_sg to flush the pages when the GPU wants to read from main
>memory.
>
>Signed-off-by: Michael Cheng <michael.cheng@intel.com>
>---
> drivers/gpu/drm/i915/gem/i915_gem_dmabuf.c | 9 ++-------
> 1 file changed, 2 insertions(+), 7 deletions(-)
>
>diff --git a/drivers/gpu/drm/i915/gem/i915_gem_dmabuf.c b/drivers/gpu/drm/i915/gem/i915_gem_dmabuf.c
>index f5062d0c6333..b0a5baaebc43 100644
>--- a/drivers/gpu/drm/i915/gem/i915_gem_dmabuf.c
>+++ b/drivers/gpu/drm/i915/gem/i915_gem_dmabuf.c
>@@ -8,6 +8,7 @@
> #include <linux/highmem.h>
> #include <linux/dma-resv.h>
> #include <linux/module.h>
>+#include <drm/drm_cache.h>
>
> #include <asm/smp.h>
>
>@@ -250,16 +251,10 @@ static int i915_gem_object_get_pages_dmabuf(struct drm_i915_gem_object *obj)
> 	 * DG1 is special here since it still snoops transactions even with
> 	 * CACHE_NONE. This is not the case with other HAS_SNOOP platforms. We
> 	 * might need to revisit this as we add new discrete platforms.
>-	 *
>-	 * XXX: Consider doing a vmap flush or something, where possible.
>-	 * Currently we just do a heavy handed wbinvd_on_all_cpus() here since
>-	 * the underlying sg_table might not even point to struct pages, so we
>-	 * can't just call drm_clflush_sg or similar, like we do elsewhere in
>-	 * the driver.
> 	 */
> 	if (i915_gem_object_can_bypass_llc(obj) ||
> 	    (!HAS_LLC(i915) && !IS_DG1(i915)))
>-		wbinvd_on_all_cpus();
>+		drm_clflush_sg(pages);
>
> 	sg_page_sizes = i915_sg_dma_sizes(pages->sgl);
> 	__i915_gem_object_set_pages(obj, pages, sg_page_sizes);
>-- 
>2.25.1
>
Thomas Hellstrom June 20, 2022, 7:22 a.m. UTC | #2
Hi,

On Fri, 2022-06-17 at 16:30 -0700, Lucas De Marchi wrote:
> On Thu, Apr 14, 2022 at 11:19:23AM -0700, Michael Cheng wrote:
> > The previous concern with using drm_clflush_sg was that we don't know
> > what the sg_table is pointing to, so wbinvd_on_all_cpus was used to
> > flush everything at once, out of paranoia.
> 
> hmm... and now we know it is backed by struct pages? I'm not sure I
> follow what we didn't know before and now we do.
> 
> Thomas / Matthew, could you take another look here to see if it seems
> correct to you.
> 
> 
No, this isn't correct. A dma-buf sg-table may not be backed by struct
pages, and AFAIK there is no way for the importer to tell whether that's
the case or not.
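
For reference, drm_clflush_sg() walks the table page by page, roughly
like this (condensed from the x86 path of drm_cache.c):

	struct sg_page_iter sg_iter;

	mb(); /* serialize with prior writes */
	for_each_sgtable_page(st, &sg_iter, 0)
		drm_clflush_page(sg_page_iter_page(&sg_iter));
	mb(); /* make the flushes globally visible */

sg_page_iter_page() hands back a struct page, which is exactly what an
imported dma-buf may not have behind its sg_table.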

If we need to get rid of the wbinvd_on_all_cpus here, we need to use
the dma_buf_vmap() function to obtain a virtual address and then use
drm_clflush_virt_range() on that address. After that clflush, we can
call dma_buf_vunmap(). This should work since x86 uses PIPT caches and
a flush to a virtual range should flush all other virtual ranges
pointing to the same pages as well.
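
Something like this, as a rough, untested sketch (assuming the
iosys_map-based dma_buf_vmap() interface and a system-memory mapping;
error paths and locking are left out):

	struct dma_buf *dmabuf = obj->base.import_attach->dmabuf;
	struct iosys_map map;
	int err;

	/* Map the whole dma-buf into the kernel address space. */
	err = dma_buf_vmap(dmabuf, &map);
	if (err)
		return err;

	/* Valid for system memory only; an I/O mapping (map.is_iomem)
	 * would need different handling. */
	drm_clflush_virt_range(map.vaddr, dmabuf->size);

	dma_buf_vunmap(dmabuf, &map);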

/Thomas


> thanks
> Lucas De Marchi
> 
> 
> > To make i915 more architecture-neutral and less paranoid, let's attempt
> > to use drm_clflush_sg to flush the pages when the GPU wants to read
> > from main memory.
> > 
> > Signed-off-by: Michael Cheng <michael.cheng@intel.com>
> > ---
> > drivers/gpu/drm/i915/gem/i915_gem_dmabuf.c | 9 ++-------
> > 1 file changed, 2 insertions(+), 7 deletions(-)
> > 
> > diff --git a/drivers/gpu/drm/i915/gem/i915_gem_dmabuf.c b/drivers/gpu/drm/i915/gem/i915_gem_dmabuf.c
> > index f5062d0c6333..b0a5baaebc43 100644
> > --- a/drivers/gpu/drm/i915/gem/i915_gem_dmabuf.c
> > +++ b/drivers/gpu/drm/i915/gem/i915_gem_dmabuf.c
> > @@ -8,6 +8,7 @@
> >  #include <linux/highmem.h>
> >  #include <linux/dma-resv.h>
> >  #include <linux/module.h>
> > +#include <drm/drm_cache.h>
> > 
> >  #include <asm/smp.h>
> > 
> > @@ -250,16 +251,10 @@ static int i915_gem_object_get_pages_dmabuf(struct drm_i915_gem_object *obj)
> >  	 * DG1 is special here since it still snoops transactions even with
> >  	 * CACHE_NONE. This is not the case with other HAS_SNOOP platforms. We
> >  	 * might need to revisit this as we add new discrete platforms.
> > -	 *
> > -	 * XXX: Consider doing a vmap flush or something, where possible.
> > -	 * Currently we just do a heavy handed wbinvd_on_all_cpus() here since
> > -	 * the underlying sg_table might not even point to struct pages, so we
> > -	 * can't just call drm_clflush_sg or similar, like we do elsewhere in
> > -	 * the driver.
> >  	 */
> >  	if (i915_gem_object_can_bypass_llc(obj) ||
> >  	    (!HAS_LLC(i915) && !IS_DG1(i915)))
> > -		wbinvd_on_all_cpus();
> > +		drm_clflush_sg(pages);
> > 
> >  	sg_page_sizes = i915_sg_dma_sizes(pages->sgl);
> >  	__i915_gem_object_set_pages(obj, pages, sg_page_sizes);
> > -- 
> > 2.25.1
> >

Patch

diff --git a/drivers/gpu/drm/i915/gem/i915_gem_dmabuf.c b/drivers/gpu/drm/i915/gem/i915_gem_dmabuf.c
index f5062d0c6333..b0a5baaebc43 100644
--- a/drivers/gpu/drm/i915/gem/i915_gem_dmabuf.c
+++ b/drivers/gpu/drm/i915/gem/i915_gem_dmabuf.c
@@ -8,6 +8,7 @@ 
 #include <linux/highmem.h>
 #include <linux/dma-resv.h>
 #include <linux/module.h>
+#include <drm/drm_cache.h>
 
 #include <asm/smp.h>
 
@@ -250,16 +251,10 @@  static int i915_gem_object_get_pages_dmabuf(struct drm_i915_gem_object *obj)
 	 * DG1 is special here since it still snoops transactions even with
 	 * CACHE_NONE. This is not the case with other HAS_SNOOP platforms. We
 	 * might need to revisit this as we add new discrete platforms.
-	 *
-	 * XXX: Consider doing a vmap flush or something, where possible.
-	 * Currently we just do a heavy handed wbinvd_on_all_cpus() here since
-	 * the underlying sg_table might not even point to struct pages, so we
-	 * can't just call drm_clflush_sg or similar, like we do elsewhere in
-	 * the driver.
 	 */
 	if (i915_gem_object_can_bypass_llc(obj) ||
 	    (!HAS_LLC(i915) && !IS_DG1(i915)))
-		wbinvd_on_all_cpus();
+		drm_clflush_sg(pages);
 
 	sg_page_sizes = i915_sg_dma_sizes(pages->sgl);
 	__i915_gem_object_set_pages(obj, pages, sg_page_sizes);