Message ID | 1415731396-19364-7-git-send-email-ross.zwisler@linux.intel.com (mailing list archive) |
---|---|
State | New, archived |
On 11/11/2014 10:43 AM, Ross Zwisler wrote:
> If clwb is available on the system, use it in drm_clflush_virt_range.
> If clwb is not available, fall back to clflushopt if you can.
> If clflushopt is not supported, fall all the way back to clflush.

I don't know exactly what drm_clflush_virt_range (and the other
functions you're modifying similarly) are for, but it seems plausible to
me that they're used before reads to make sure that non-coherent memory
sees updated data. If that's true, then this will break it.

But maybe all the users are writes to coherent memory that just need to
ensure that whatever's backing the memory knows about the write.

FWIW, it may make sense to rename this function to drm_clwb_virt_range
if you make this change.

--Andy

>
> Signed-off-by: Ross Zwisler <ross.zwisler@linux.intel.com>
> Cc: H Peter Anvin <h.peter.anvin@intel.com>
> Cc: Ingo Molnar <mingo@kernel.org>
> Cc: Thomas Gleixner <tglx@linutronix.de>
> Cc: David Airlie <airlied@linux.ie>
> Cc: dri-devel@lists.freedesktop.org
> Cc: x86@kernel.org
> ---
>  drivers/gpu/drm/drm_cache.c | 4 ++--
>  1 file changed, 2 insertions(+), 2 deletions(-)
>
> diff --git a/drivers/gpu/drm/drm_cache.c b/drivers/gpu/drm/drm_cache.c
> index aad9d82..84e9a04 100644
> --- a/drivers/gpu/drm/drm_cache.c
> +++ b/drivers/gpu/drm/drm_cache.c
> @@ -138,8 +138,8 @@ drm_clflush_virt_range(void *addr, unsigned long length)
>  		void *end = addr + length;
>  		mb();
>  		for (; addr < end; addr += boot_cpu_data.x86_clflush_size)
> -			clflushopt(addr);
> -		clflushopt(end - 1);
> +			clwb(addr);
> +		clwb(end - 1);
>  		mb();
>  		return;
>  	}
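A note on the fallback chain the commit message describes: drm_clflush_virt_range() only ever calls clwb(), so the clwb/clflushopt/clflush selection has to happen inside that primitive (in the kernel this is typically done with the "alternatives" instruction-patching mechanism rather than branches). The sketch below shows the intended selection logic as plain runtime feature checks; the helper name flush_cache_line() is made up for illustration, and this is not the patch's actual implementation:

```c
#include <asm/cpufeature.h>     /* static_cpu_has(), X86_FEATURE_* */
#include <asm/special_insns.h>  /* clwb(), clflushopt(), clflush() */

/*
 * Hypothetical helper, for illustration only: pick the best available
 * cache-line flush instruction at runtime. CLWB writes the line back
 * but may leave it cached; CLFLUSHOPT and CLFLUSH write it back AND
 * invalidate it -- the distinction this thread is about.
 */
static inline void flush_cache_line(void *p)
{
	if (static_cpu_has(X86_FEATURE_CLWB))
		clwb(p);		/* write back, line may stay cached */
	else if (static_cpu_has(X86_FEATURE_CLFLUSHOPT))
		clflushopt(p);		/* write back + invalidate, weakly ordered */
	else
		clflush(p);		/* write back + invalidate, strongly ordered */
}
```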
On Wed, Nov 12, 2014 at 07:14:21PM -0800, Andy Lutomirski wrote:
> On 11/11/2014 10:43 AM, Ross Zwisler wrote:
> > If clwb is available on the system, use it in drm_clflush_virt_range.
> > If clwb is not available, fall back to clflushopt if you can.
> > If clflushopt is not supported, fall all the way back to clflush.
>
> I don't know exactly what drm_clflush_virt_range (and the other
> functions you're modifying similarly) are for, but it seems plausible to
> me that they're used before reads to make sure that non-coherent memory
> sees updated data. If that's true, then this will break it.

Why would it break it? The updated cachelines will be in memory, and
subsequent reads will be serviced from the cache instead of going to
memory, since the cache is not invalidated as it would be by CLFLUSH.

/me is puzzled.
On Nov 13, 2014 3:20 AM, "Borislav Petkov" <bp@alien8.de> wrote:
>
> On Wed, Nov 12, 2014 at 07:14:21PM -0800, Andy Lutomirski wrote:
> > On 11/11/2014 10:43 AM, Ross Zwisler wrote:
> > > If clwb is available on the system, use it in drm_clflush_virt_range.
> > > If clwb is not available, fall back to clflushopt if you can.
> > > If clflushopt is not supported, fall all the way back to clflush.
> >
> > I don't know exactly what drm_clflush_virt_range (and the other
> > functions you're modifying similarly) are for, but it seems plausible to
> > me that they're used before reads to make sure that non-coherent memory
> > sees updated data. If that's true, then this will break it.
>
> Why would it break it? The updated cachelines will be in memory, and
> subsequent reads will be serviced from the cache instead of going to
> memory, since the cache is not invalidated as it would be by CLFLUSH.
>
> /me is puzzled.

Suppose you map some device memory WB, and then the device
non-coherently updates it. If you want the CPU to see the update, you
need clflush or clflushopt. Some architectures might do this for
dma_sync_single_for_cpu with DMA_FROM_DEVICE.

I'm not sure that such a thing exists on x86.

--Andy
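To make the hazard concrete: a pure write-back (clwb) leaves the line valid in the CPU cache, so a later load is satisfied from the stale cached copy rather than from the memory the device just wrote. A minimal sketch, assuming a hypothetical buffer buf of len bytes that a device has just updated by non-coherent DMA:

```c
/*
 * Illustrative only; buf and len are made-up names. The device has
 * just DMA'd fresh data into buf behind the CPU's back.
 */
void *p;
u32 first_word;

mb();
for (p = buf; p < buf + len; p += boot_cpu_data.x86_clflush_size)
	clflush(p);	/* write back AND invalidate the line */
mb();

first_word = *(u32 *)buf;	/* load misses the cache, sees device data */

/*
 * Had the loop used clwb(p) instead, the lines would stay valid in
 * the cache and the load above would return stale pre-DMA data.
 */
```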
On Thu, Nov 13, 2014 at 08:38:23AM -0800, Andy Lutomirski wrote:
> On Nov 13, 2014 3:20 AM, "Borislav Petkov" <bp@alien8.de> wrote:
> > [snip]
> >
> > Why would it break it? The updated cachelines will be in memory, and
> > subsequent reads will be serviced from the cache instead of going to
> > memory, since the cache is not invalidated as it would be by CLFLUSH.
> >
> > /me is puzzled.
>
> Suppose you map some device memory WB, and then the device
> non-coherently updates it. If you want the CPU to see the update, you
> need clflush or clflushopt. Some architectures might do this for
> dma_sync_single_for_cpu with DMA_FROM_DEVICE.

Ah, you're talking about the other way around - the device does the
writes. Well, the usage sites are all in i915*; maybe we should ask
them - it looks to me like this is only the CPU making stuff visible in
the shared buffer, but I don't know that code... intel-gfx CCed,
although dri-devel is already on CC.
On Thu, Nov 13, 2014 at 06:11:33PM +0100, Borislav Petkov wrote:
> On Thu, Nov 13, 2014 at 08:38:23AM -0800, Andy Lutomirski wrote:
> > [snip]
> >
> > Suppose you map some device memory WB, and then the device
> > non-coherently updates it. If you want the CPU to see the update, you
> > need clflush or clflushopt. Some architectures might do this for
> > dma_sync_single_for_cpu with DMA_FROM_DEVICE.
>
> Ah, you're talking about the other way around - the device does the
> writes. Well, the usage sites are all in i915*; maybe we should ask
> them - it looks to me like this is only the CPU making stuff visible in
> the shared buffer, but I don't know that code... intel-gfx CCed,
> although dri-devel is already on CC.

We use it both ways in i915. So please don't break it.
On Thu, Nov 13, 2014 at 07:33:54PM +0200, Ville Syrjälä wrote:
> We use it both ways in i915. So please don't break it.
Haha, we started from Intel with Ross' patch and made a full circle
back. Maybe you guys should talk about it.
LOL. :-)
On Thu, Nov 13, 2014 at 06:47:34PM +0100, Borislav Petkov wrote:
> On Thu, Nov 13, 2014 at 07:33:54PM +0200, Ville Syrjälä wrote:
> > We use it both ways in i915. So please don't break it.
>
> Haha, we started from Intel with Ross' patch and made a full circle
> back. Maybe you guys should talk about it.
>
> LOL. :-)

Indeed. The problem I see with these patches is that they don't
actually say what the new instruction does, so a casual glance doesn't
really raise any red flags.

Another excuse I can use is that I just got used to the fact that x86
hasn't historically bothered separating invalidate and writeback and
just does both. In my previous life on the dark^Warm side I did
actually know the difference :)

But there's plenty of blame to go around on the other side of the fence
too. We should have documented what we expect from these functions.
Currently you just have to know/guess, and that's just not good enough.
On Thu, Nov 13, 2014 at 07:33:54PM +0200, Ville Syrjälä wrote:
> [snip]
>
> We use it both ways in i915. So please don't break it.

Actually, to clarify a bit: I think we shouldn't need the invalidate
part on any modern platform with shared LLC. Apart from the display,
those tend to be fully coherent, so we only have to care about the
display controller not seeing stale data in memory.

I have no idea which platforms support this instruction. If
Baytrail/Braswell/Cherrytrail are on the list, then we have a problem.
Otherwise we should probably be fine with it. But I'm not 100% sure
about any future platforms that are still under wraps.

Also, ttm seems to use some of this stuff. Not sure what they expect
from it.
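A sketch of the "both ways" Ville is referring to, with made-up buffer and helper names around the real drm_clflush_virt_range() call:

```c
/*
 * Direction 1: CPU -> device. The CPU fills a buffer that a
 * non-snooping engine (e.g. the display controller) will read from
 * memory. Only a write-back is needed, so clwb would suffice here.
 */
memcpy(scanout_vaddr, pixels, size);         /* hypothetical names */
drm_clflush_virt_range(scanout_vaddr, size); /* push dirty lines out */

/*
 * Direction 2: device -> CPU. The GPU wrote results non-coherently;
 * the CPU must invalidate before reading. This is the case clwb
 * breaks: the stale lines would remain valid in the CPU cache.
 */
drm_clflush_virt_range(results_vaddr, size); /* today: also invalidates */
read_results(results_vaddr, size);           /* hypothetical consumer */
```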
diff --git a/drivers/gpu/drm/drm_cache.c b/drivers/gpu/drm/drm_cache.c
index aad9d82..84e9a04 100644
--- a/drivers/gpu/drm/drm_cache.c
+++ b/drivers/gpu/drm/drm_cache.c
@@ -138,8 +138,8 @@ drm_clflush_virt_range(void *addr, unsigned long length)
 		void *end = addr + length;
 		mb();
 		for (; addr < end; addr += boot_cpu_data.x86_clflush_size)
-			clflushopt(addr);
-		clflushopt(end - 1);
+			clwb(addr);
+		clwb(end - 1);
 		mb();
 		return;
 	}
If clwb is available on the system, use it in drm_clflush_virt_range.
If clwb is not available, fall back to clflushopt if you can.
If clflushopt is not supported, fall all the way back to clflush.

Signed-off-by: Ross Zwisler <ross.zwisler@linux.intel.com>
Cc: H Peter Anvin <h.peter.anvin@intel.com>
Cc: Ingo Molnar <mingo@kernel.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: David Airlie <airlied@linux.ie>
Cc: dri-devel@lists.freedesktop.org
Cc: x86@kernel.org
---
 drivers/gpu/drm/drm_cache.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)