drm/gpu: Add comment for memory barrier

Message ID 1571806802-17987-1-git-send-email-bhanusreemahesh@gmail.com (mailing list archive)
State New, archived
Series drm/gpu: Add comment for memory barrier

Commit Message

Bhanusree Mahesh Oct. 23, 2019, 5 a.m. UTC
-Add comment for memory barrier
-Issue found using checkpatch.pl

Signed-off-by: Bhanusree <bhanusreemahesh@gmail.com>
---
 drivers/gpu/drm/drm_cache.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

Comments

Daniel Vetter Oct. 23, 2019, 8:16 a.m. UTC | #1
On Wed, Oct 23, 2019 at 7:00 AM Bhanusree <bhanusreemahesh@gmail.com> wrote:
>
> -Add comment for memory barrier
> -Issue found using checkpatch.pl
>
> Signed-off-by: Bhanusree <bhanusreemahesh@gmail.com>
> ---
>  drivers/gpu/drm/drm_cache.c | 4 ++--
>  1 file changed, 2 insertions(+), 2 deletions(-)
>
> diff --git a/drivers/gpu/drm/drm_cache.c b/drivers/gpu/drm/drm_cache.c
> index 3bd76e9..39910f2 100644
> --- a/drivers/gpu/drm/drm_cache.c
> +++ b/drivers/gpu/drm/drm_cache.c
> @@ -62,10 +62,10 @@ static void drm_cache_flush_clflush(struct page *pages[],
>  {
>         unsigned long i;
>
> -       mb();
> +       mb(); /*make sure page address is read*/

That's not what's going on here. We need the barriers because:
- clflush is an unordered instruction (very rare on x86, but they exist)
- hence we need to have a barrier before & after to make sure the
clflush actually happens when we want it to happen.
- clflush flushes the cpu cache, _that_ is what we want to synchronize
against, not the memory address itself. Hence we need to make sure
that beforehand all reads/writes have hit the cpu cache, and afterwards
that all the cache lines we wanted to flush are flushed.
- https://c9x.me/x86/html/file_module_x86_id_30.html for more info on clflush

So a useful comment (see also the link) would be:

/* CLFLUSH is only ordered with a full memory barrier (MFENCE
instruction) before ... */

>         for (i = 0; i < num_pages; i++)
>                 drm_clflush_page(*pages++);
> -       mb();
> +       mb(); /*make sure all addresses of pages are read sequentially*/

/* ... and after the CLFLUSH instruction itself */

If you dig through all the macros for mb(), eventually you should find
the inline assembler for MFENCE. That's also why we need to have this
barrier even on single-processor builds (so no smp_mb()).

Can you pls respin?

Thanks, Daniel
>  }
>  #endif
>
> --
> 2.7.4
>
Patch

diff --git a/drivers/gpu/drm/drm_cache.c b/drivers/gpu/drm/drm_cache.c
index 3bd76e9..39910f2 100644
--- a/drivers/gpu/drm/drm_cache.c
+++ b/drivers/gpu/drm/drm_cache.c
@@ -62,10 +62,10 @@  static void drm_cache_flush_clflush(struct page *pages[],
 {
 	unsigned long i;
 
-	mb();
+	mb(); /*make sure page address is read*/
 	for (i = 0; i < num_pages; i++)
 		drm_clflush_page(*pages++);
-	mb();
+	mb(); /*make sure all addresses of pages are read sequentially*/
 }
 #endif