Message ID: 20220319194227.297639-1-michael.cheng@intel.com
Series: Drop wbinvd_on_all_cpus usage
On 19/03/2022 19:42, Michael Cheng wrote: > To align with the discussion in [1][2], this patch series drops all usage of > wbinvd_on_all_cpus within i915 by either replacing the call with certain > drm clflush helpers, or reverting to a previous logic. AFAIU, the complaint from [1] was that it is wrong to provide non-x86 implementations under the wbinvd_on_all_cpus name. Instead an arch-agnostic helper which achieves the same effect could be created. Does Arm have such a concept? Given that the series seems to be taking a different route, avoiding the need to call wbinvd_on_all_cpus rather than doing what [1] suggests (note drm_clflush_sg can still call it!?), the concern is that the series has a bunch of reverts and each one needs to be analyzed. For instance, looking at just the last one, 64b95df91f44, who has looked at the locking consequences that commit describes: """ Inside gtt_restore_mappings() we currently take the obj->resv->lock, but in the future we need to avoid taking this fs-reclaim tainted lock as we need to extend the coverage of the vm->mutex. Take advantage of the single-threaded nature of the early resume phase, and do a single wbinvd() to flush all the GTT objects en masse. """ ? Then there are the suspend and freeze reverts which presumably can regress the suspend times. Any data on those? Adding Matt since he was the reviewer for that work so might remember something. Regards, Tvrtko > [1]. https://lists.freedesktop.org/archives/dri-devel/2021-November/330928.html > [2]. https://patchwork.freedesktop.org/patch/475752/?series=99991&rev=5 > > Michael Cheng (4): > i915/gem: drop wbinvd_on_all_cpus usage > Revert "drm/i915/gem: Almagamate clflushes on suspend" > i915/gem: Revert i915_gem_freeze to previous logic > drm/i915/gt: Revert ggtt_resume to previous logic > > drivers/gpu/drm/i915/gem/i915_gem_dmabuf.c | 9 +--- > drivers/gpu/drm/i915/gem/i915_gem_pm.c | 56 ++++++++++++++-------- > drivers/gpu/drm/i915/gt/intel_ggtt.c | 17 +++---- > drivers/gpu/drm/i915/gt/intel_gtt.h | 2 +- > 4 files changed, 46 insertions(+), 38 deletions(-) >
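[For readers without the thread history: the trade-off the quoted commit message describes is, in simplified form, the choice between the two resume-time patterns below. This is a sketch only; the names and the list iteration are made up, it is not the actual i915 code.]

    /* Pre-64b95df91f44: flush each bound object, taking its lock. */
    static void resume_flush_each_object(struct list_head *bound_objects)
    {
            struct drm_i915_gem_object *obj;

            list_for_each_entry(obj, bound_objects, mm.link) {
                    i915_gem_object_lock(obj, NULL); /* fs-reclaim tainted */
                    drm_clflush_sg(obj->mm.pages);
                    i915_gem_object_unlock(obj);
            }
    }

    /* 64b95df91f44: resume is single threaded, so skip the per-object
     * locks and pay for one global write-back-and-invalidate instead. */
    static void resume_flush_en_masse(void)
    {
            wbinvd_on_all_cpus();
    }

[Reverting to the per-object variant therefore reintroduces both the lock acquisitions and a clflush per page, which is why each revert warrants individual analysis.]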
Hi, Tvrtko. On 3/21/22 11:27, Tvrtko Ursulin wrote: > > On 19/03/2022 19:42, Michael Cheng wrote: >> To align with the discussion in [1][2], this patch series drops all >> usage of >> wbvind_on_all_cpus within i915 by either replacing the call with certain >> drm clflush helpers, or reverting to a previous logic. > > AFAIU, complaint from [1] was that it is wrong to provide non x86 > implementations under the wbinvd_on_all_cpus name. Instead an arch > agnostic helper which achieves the same effect could be created. Does > Arm have such concept? I also understand Linus' email like we shouldn't leak incoherent IO to other architectures, meaning any remaining wbinvd()s should be X86 only. Also, wbinvd_on_all_cpus() can become very costly, hence prefer the range apis when possible if they can be verified not to degrade performance. > > Given that the series seems to be taking a different route, avoiding > the need to call wbinvd_on_all_cpus rather than what [1] suggests > (note drm_clflush_sg can still call it!?), concern is that the series > has a bunch of reverts and each one needs to be analyzed. Agreed. /Thomas > > For instance looking at just the last one, 64b95df91f44, who has > looked at the locking consequences that commit describes: > > """ > Inside gtt_restore_mappings() we currently take the > obj->resv->lock, but > in the future we need to avoid taking this fs-reclaim tainted lock > as we > need to extend the coverage of the vm->mutex. Take advantage of the > single-threaded nature of the early resume phase, and do a single > wbinvd() to flush all the GTT objects en masse. > > """ > > ? > > Then there are suspend and freeze reverts which presumably can regress > the suspend times. Any data on those? > > Adding Matt since he was the reviewer for that work so might remember > something. > > Regards, > > Tvrtko > > >> [1]. >> https://lists.freedesktop.org/archives/dri-devel/2021-November/330928.html >> [2]. https://patchwork.freedesktop.org/patch/475752/?series=99991&rev=5 >> >> Michael Cheng (4): >> i915/gem: drop wbinvd_on_all_cpus usage >> Revert "drm/i915/gem: Almagamate clflushes on suspend" >> i915/gem: Revert i915_gem_freeze to previous logic >> drm/i915/gt: Revert ggtt_resume to previous logic >> >> drivers/gpu/drm/i915/gem/i915_gem_dmabuf.c | 9 +--- >> drivers/gpu/drm/i915/gem/i915_gem_pm.c | 56 ++++++++++++++-------- >> drivers/gpu/drm/i915/gt/intel_ggtt.c | 17 +++---- >> drivers/gpu/drm/i915/gt/intel_gtt.h | 2 +- >> 4 files changed, 46 insertions(+), 38 deletions(-) >>
On 21/03/2022 11:03, Thomas Hellström wrote: > Hi, Tvrtko. > > On 3/21/22 11:27, Tvrtko Ursulin wrote: >> >> On 19/03/2022 19:42, Michael Cheng wrote: >>> To align with the discussion in [1][2], this patch series drops all >>> usage of >>> wbvind_on_all_cpus within i915 by either replacing the call with certain >>> drm clflush helpers, or reverting to a previous logic. >> >> AFAIU, complaint from [1] was that it is wrong to provide non x86 >> implementations under the wbinvd_on_all_cpus name. Instead an arch >> agnostic helper which achieves the same effect could be created. Does >> Arm have such concept? > > I also understand Linus' email like we shouldn't leak incoherent IO to > other architectures, meaning any remaining wbinvd()s should be X86 only. The last part is completely obvious since it is a x86 instruction name. But I think we can't pick a solution until we know how the concept maps to Arm and that will also include seeing how the drm_clflush_sg for Arm would look. Is there a range based solution, or just a big hammer there. If the latter, then it is no good to churn all these reverts but instead an arch agnostic wrapper, with a generic name, would be the way to go. Regards, Tvrtko > Also, wbinvd_on_all_cpus() can become very costly, hence prefer the > range apis when possible if they can be verified not to degrade > performance. > > >> >> Given that the series seems to be taking a different route, avoiding >> the need to call wbinvd_on_all_cpus rather than what [1] suggests >> (note drm_clflush_sg can still call it!?), concern is that the series >> has a bunch of reverts and each one needs to be analyzed. > > > Agreed. > > /Thomas > > > >> >> For instance looking at just the last one, 64b95df91f44, who has >> looked at the locking consequences that commit describes: >> >> """ >> Inside gtt_restore_mappings() we currently take the >> obj->resv->lock, but >> in the future we need to avoid taking this fs-reclaim tainted lock >> as we >> need to extend the coverage of the vm->mutex. Take advantage of the >> single-threaded nature of the early resume phase, and do a single >> wbinvd() to flush all the GTT objects en masse. >> >> """ >> >> ? >> >> Then there are suspend and freeze reverts which presumably can regress >> the suspend times. Any data on those? >> >> Adding Matt since he was the reviewer for that work so might remember >> something. >> >> Regards, >> >> Tvrtko >> >> >>> [1]. >>> https://lists.freedesktop.org/archives/dri-devel/2021-November/330928.html >>> >>> [2]. https://patchwork.freedesktop.org/patch/475752/?series=99991&rev=5 >>> >>> Michael Cheng (4): >>> i915/gem: drop wbinvd_on_all_cpus usage >>> Revert "drm/i915/gem: Almagamate clflushes on suspend" >>> i915/gem: Revert i915_gem_freeze to previous logic >>> drm/i915/gt: Revert ggtt_resume to previous logic >>> >>> drivers/gpu/drm/i915/gem/i915_gem_dmabuf.c | 9 +--- >>> drivers/gpu/drm/i915/gem/i915_gem_pm.c | 56 ++++++++++++++-------- >>> drivers/gpu/drm/i915/gt/intel_ggtt.c | 17 +++---- >>> drivers/gpu/drm/i915/gt/intel_gtt.h | 2 +- >>> 4 files changed, 46 insertions(+), 38 deletions(-) >>>
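[For context, the big hammer being referenced: drm_clflush_sg() in this era reduces to roughly the following (condensed from drivers/gpu/drm/drm_cache.c, not verbatim and version-dependent). Note that on non-x86 there is currently neither a range-based path nor a big hammer, only a warning.]

    void drm_clflush_sg(struct sg_table *st)
    {
    #if defined(CONFIG_X86)
            if (static_cpu_has(X86_FEATURE_CLFLUSH)) {
                    struct sg_page_iter sg_iter;

                    mb(); /* order the clflushes against prior writes */
                    for_each_sgtable_page(st, &sg_iter, 0)
                            drm_clflush_page(sg_page_iter_page(&sg_iter));
                    mb(); /* make the flushes globally visible */
                    return;
            }

            /* No CLFLUSH instruction: fall back to the big hammer. */
            wbinvd_on_all_cpus();
    #else
            WARN_ONCE(1, "Architecture has no drm_cache.c support\n");
    #endif
    }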
On Mon, 2022-03-21 at 12:22 +0000, Tvrtko Ursulin wrote: > > On 21/03/2022 11:03, Thomas Hellström wrote: > > Hi, Tvrtko. > > > > On 3/21/22 11:27, Tvrtko Ursulin wrote: > > > > > > On 19/03/2022 19:42, Michael Cheng wrote: > > > > To align with the discussion in [1][2], this patch series drops > > > > all > > > > usage of > > > > wbvind_on_all_cpus within i915 by either replacing the call > > > > with certain > > > > drm clflush helpers, or reverting to a previous logic. > > > > > > AFAIU, complaint from [1] was that it is wrong to provide non x86 > > > implementations under the wbinvd_on_all_cpus name. Instead an > > > arch > > > agnostic helper which achieves the same effect could be created. > > > Does > > > Arm have such concept? > > > > I also understand Linus' email like we shouldn't leak incoherent IO > > to > > other architectures, meaning any remaining wbinvd()s should be X86 > > only. > > The last part is completely obvious since it is a x86 instruction > name. Yeah, I meant the function implementing wbinvd() semantics. > > But I think we can't pick a solution until we know how the concept > maps > to Arm and that will also include seeing how the drm_clflush_sg for > Arm > would look. Is there a range based solution, or just a big hammer > there. > If the latter, then it is no good to churn all these reverts but > instead > an arch agnostic wrapper, with a generic name, would be the way to > go. But my impression was that ARM would not need the range-based interface either, because ARM is only for discrete and with discrete we're always coherent. So in essence it all would become: 1) Any cache flushing intended for incoherent IO is x86 only. 2) Prefer range-based flushing if possible and any implications sorted out. /Thomas > > Regards, > > Tvrtko > > > Also, wbinvd_on_all_cpus() can become very costly, hence prefer the > > range apis when possible if they can be verified not to degrade > > performance. > > > > > > > > > > Given that the series seems to be taking a different route, > > > avoiding > > > the need to call wbinvd_on_all_cpus rather than what [1] suggests > > > (note drm_clflush_sg can still call it!?), concern is that the > > > series > > > has a bunch of reverts and each one needs to be analyzed. > > > > > > Agreed. > > > > /Thomas > > > > > > > > > > > > For instance looking at just the last one, 64b95df91f44, who has > > > looked at the locking consequences that commit describes: > > > > > > """ > > > Inside gtt_restore_mappings() we currently take the > > > obj->resv->lock, but > > > in the future we need to avoid taking this fs-reclaim tainted > > > lock > > > as we > > > need to extend the coverage of the vm->mutex. Take advantage > > > of the > > > single-threaded nature of the early resume phase, and do a > > > single > > > wbinvd() to flush all the GTT objects en masse. > > > > > > """ > > > > > > ? > > > > > > Then there are suspend and freeze reverts which presumably can > > > regress > > > the suspend times. Any data on those? > > > > > > Adding Matt since he was the reviewer for that work so might > > > remember > > > something. > > > > > > Regards, > > > > > > Tvrtko > > > > > > > > > > [1]. > > > > https://lists.freedesktop.org/archives/dri-devel/2021-November/330928.html > > > > > > > > > > > > [2]. 
> > > > https://patchwork.freedesktop.org/patch/475752/?series=99991&rev=5 > > > > > > > > Michael Cheng (4): > > > > i915/gem: drop wbinvd_on_all_cpus usage > > > > Revert "drm/i915/gem: Almagamate clflushes on suspend" > > > > i915/gem: Revert i915_gem_freeze to previous logic > > > > drm/i915/gt: Revert ggtt_resume to previous logic > > > > > > > > drivers/gpu/drm/i915/gem/i915_gem_dmabuf.c | 9 +--- > > > > drivers/gpu/drm/i915/gem/i915_gem_pm.c | 56 > > > > ++++++++++++++-------- > > > > drivers/gpu/drm/i915/gt/intel_ggtt.c | 17 +++---- > > > > drivers/gpu/drm/i915/gt/intel_gtt.h | 2 +- > > > > 4 files changed, 46 insertions(+), 38 deletions(-) > > > >
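[A minimal sketch of what the two rules above could look like at a call site; i915_flush_range() is a made-up name for illustration only.]

    static void i915_flush_range(void *vaddr, unsigned long length)
    {
            /* 1) Flushing for incoherent IO is an x86-only concept... */
            if (!IS_ENABLED(CONFIG_X86))
                    return;

            /* 2) ...and when flushing, prefer a range over a global
             * wbinvd(); drm_clflush_virt_range() does that whenever the
             * CPU supports CLFLUSH. */
            drm_clflush_virt_range(vaddr, length);
    }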
On 21/03/2022 12:33, Thomas Hellström wrote: > On Mon, 2022-03-21 at 12:22 +0000, Tvrtko Ursulin wrote: >> >> On 21/03/2022 11:03, Thomas Hellström wrote: >>> Hi, Tvrtko. >>> >>> On 3/21/22 11:27, Tvrtko Ursulin wrote: >>>> >>>> On 19/03/2022 19:42, Michael Cheng wrote: >>>>> To align with the discussion in [1][2], this patch series drops >>>>> all >>>>> usage of >>>>> wbvind_on_all_cpus within i915 by either replacing the call >>>>> with certain >>>>> drm clflush helpers, or reverting to a previous logic. >>>> >>>> AFAIU, complaint from [1] was that it is wrong to provide non x86 >>>> implementations under the wbinvd_on_all_cpus name. Instead an >>>> arch >>>> agnostic helper which achieves the same effect could be created. >>>> Does >>>> Arm have such concept? >>> >>> I also understand Linus' email like we shouldn't leak incoherent IO >>> to >>> other architectures, meaning any remaining wbinvd()s should be X86 >>> only. >> >> The last part is completely obvious since it is a x86 instruction >> name. > > Yeah, I meant the function implementing wbinvd() semantics. > >> >> But I think we can't pick a solution until we know how the concept >> maps >> to Arm and that will also include seeing how the drm_clflush_sg for >> Arm >> would look. Is there a range based solution, or just a big hammer >> there. >> If the latter, then it is no good to churn all these reverts but >> instead >> an arch agnostic wrapper, with a generic name, would be the way to >> go. > > But my impression was that ARM would not need the range-based interface > either, because ARM is only for discrete and with discrete we're always > coherent. Not sure what you mean here - what about flushing system memory objects on discrete? Those still need flushing on paths like suspend which this series touches. Am I missing something? If I am not, then that means we either keep the current, presumably optimised (wasn't personally involved so I don't know), flush once code paths and add a wrapper i915_flush_caches/whatever, or convert all those back into piece-meal flushes so range flushing can be done. Assuming Arm does range flushing. That's why I asked what does Arm have here. > So in essence it all would become: > > 1) Any cache flushing intended for incoherent IO is x86 only. > 2) Prefer range-based flushing if possible and any implications sorted > out. Yes, the question is how to do it. Regards, Tvrtko
Hi, On Mon, 2022-03-21 at 13:12 +0000, Tvrtko Ursulin wrote: > > On 21/03/2022 12:33, Thomas Hellström wrote: > > On Mon, 2022-03-21 at 12:22 +0000, Tvrtko Ursulin wrote: > > > > > > On 21/03/2022 11:03, Thomas Hellström wrote: > > > > Hi, Tvrtko. > > > > > > > > On 3/21/22 11:27, Tvrtko Ursulin wrote: > > > > > > > > > > On 19/03/2022 19:42, Michael Cheng wrote: > > > > > > To align with the discussion in [1][2], this patch series > > > > > > drops > > > > > > all > > > > > > usage of > > > > > > wbvind_on_all_cpus within i915 by either replacing the call > > > > > > with certain > > > > > > drm clflush helpers, or reverting to a previous logic. > > > > > > > > > > AFAIU, complaint from [1] was that it is wrong to provide non > > > > > x86 > > > > > implementations under the wbinvd_on_all_cpus name. Instead an > > > > > arch > > > > > agnostic helper which achieves the same effect could be > > > > > created. > > > > > Does > > > > > Arm have such concept? > > > > > > > > I also understand Linus' email like we shouldn't leak incoherent > > > > IO > > > > to > > > > other architectures, meaning any remaining wbinvd()s should be > > > > X86 > > > > only. > > > > > > The last part is completely obvious since it is a x86 instruction > > > name. > > > > Yeah, I meant the function implementing wbinvd() semantics. > > > > > > > > But I think we can't pick a solution until we know how the concept > > > maps > > > to Arm and that will also include seeing how the drm_clflush_sg for > > > Arm > > > would look. Is there a range based solution, or just a big hammer > > > there. > > > If the latter, then it is no good to churn all these reverts but > > > instead > > > an arch agnostic wrapper, with a generic name, would be the way to > > > go. > > > > But my impression was that ARM would not need the range-based > > interface > > either, because ARM is only for discrete and with discrete we're > > always > > coherent. > > Not sure what you mean here - what about flushing system memory objects > on discrete? Those still need flushing on paths like suspend which this > series touches. Am I missing something? System bos on discrete should always have I915_BO_CACHE_COHERENT_FOR_READ | I915_BO_CACHE_COHERENT_FOR_WRITE either by the gpu being fully cache coherent (or us mapping system write-combined). Hence no need for cache clflushes or wbinvd() for incoherent IO. That's adhering to Linus' "And I sincerely hope to the gods that no cache-incoherent i915 mess ever makes it out of the x86 world. Incoherent IO was always a historical mistake and should never ever happen again, so we should not spread that horrific pattern around." /Thomas
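[Expressed in code, the invariant Thomas relies on is roughly the following. The flag names are the real ones from i915_gem_object_types.h; the helper itself is hypothetical.]

    /* If both coherency bits are set, CPU caches never need manual
     * maintenance for this object, so flush paths can skip it. */
    static bool needs_manual_flush(const struct drm_i915_gem_object *obj)
    {
            const unsigned int coherent =
                    I915_BO_CACHE_COHERENT_FOR_READ |
                    I915_BO_CACHE_COHERENT_FOR_WRITE;

            return (obj->cache_coherent & coherent) != coherent;
    }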
On 21/03/2022 13:40, Thomas Hellström wrote: > Hi, > > On Mon, 2022-03-21 at 13:12 +0000, Tvrtko Ursulin wrote: >> >> On 21/03/2022 12:33, Thomas Hellström wrote: >>> On Mon, 2022-03-21 at 12:22 +0000, Tvrtko Ursulin wrote: >>>> >>>> On 21/03/2022 11:03, Thomas Hellström wrote: >>>>> Hi, Tvrtko. >>>>> >>>>> On 3/21/22 11:27, Tvrtko Ursulin wrote: >>>>>> >>>>>> On 19/03/2022 19:42, Michael Cheng wrote: >>>>>>> To align with the discussion in [1][2], this patch series >>>>>>> drops >>>>>>> all >>>>>>> usage of >>>>>>> wbvind_on_all_cpus within i915 by either replacing the call >>>>>>> with certain >>>>>>> drm clflush helpers, or reverting to a previous logic. >>>>>> >>>>>> AFAIU, complaint from [1] was that it is wrong to provide non >>>>>> x86 >>>>>> implementations under the wbinvd_on_all_cpus name. Instead an >>>>>> arch >>>>>> agnostic helper which achieves the same effect could be >>>>>> created. >>>>>> Does >>>>>> Arm have such concept? >>>>> >>>>> I also understand Linus' email like we shouldn't leak incoherent >>>>> IO >>>>> to >>>>> other architectures, meaning any remaining wbinvd()s should be >>>>> X86 >>>>> only. >>>> >>>> The last part is completely obvious since it is a x86 instruction >>>> name. >>> >>> Yeah, I meant the function implementing wbinvd() semantics. >>> >>>> >>>> But I think we can't pick a solution until we know how the concept >>>> maps >>>> to Arm and that will also include seeing how the drm_clflush_sg for >>>> Arm >>>> would look. Is there a range based solution, or just a big hammer >>>> there. >>>> If the latter, then it is no good to churn all these reverts but >>>> instead >>>> an arch agnostic wrapper, with a generic name, would be the way to >>>> go. >>> >>> But my impression was that ARM would not need the range-based >>> interface >>> either, because ARM is only for discrete and with discrete we're >>> always >>> coherent. >> >> Not sure what you mean here - what about flushing system memory objects >> on discrete? Those still need flushing on paths like suspend which this >> series touches. Am I missing something? > > System bos on discrete should always have > > I915_BO_CACHE_COHERENT_FOR_READ | I915_BO_CACHE_COHERENT_FOR_WRITE > > either by the gpu being fully cache coherent (or us mapping system > write-combined). Hence no need for cache clflushes or wbinvd() for > incoherent IO. Hmm so you are talking about the shmem ttm backend. It ends up depending on the result of i915_ttm_cache_level, yes? It cannot end up with I915_CACHE_NONE from that function? I also found in i915_drm.h: * As caching mode when specifying `I915_MMAP_OFFSET_FIXED`, WC or WB will * be used, depending on the object placement on creation. WB will be used * when the object can only exist in system memory, WC otherwise. If what you say is true, that on discrete it is _always_ WC, then that needs updating as well. > > That's adhering to Linus' > > "And I sincerely hope to the gods that no cache-incoherent i915 mess > ever makes it out of the x86 world. Incoherent IO was always a > historical mistake and should never ever happen again, so we should > not spread that horrific pattern around." Sure, but I was not talking about IO - just the CPU side access to CPU side objects. Regards, Tvrtko
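[For reference, the function Tvrtko asks about is small; paraphrased from memory (details differ between kernel versions), it boils down to the below, so an object whose placement binds LMEM, or whose pages are not cached/snoopable, does come out as I915_CACHE_NONE.]

    static enum i915_cache_level
    i915_ttm_cache_level(struct drm_i915_private *i915,
                         struct ttm_resource *res, struct ttm_tt *ttm)
    {
            return ((HAS_LLC(i915) || HAS_SNOOP(i915)) &&
                    !i915_ttm_gtt_binds_lmem(res) &&
                    ttm->caching == ttm_cached) ? I915_CACHE_LLC :
                                                  I915_CACHE_NONE;
    }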
On Mon, 2022-03-21 at 14:43 +0000, Tvrtko Ursulin wrote: > > On 21/03/2022 13:40, Thomas Hellström wrote: > > Hi, > > > > On Mon, 2022-03-21 at 13:12 +0000, Tvrtko Ursulin wrote: > > > > > > On 21/03/2022 12:33, Thomas Hellström wrote: > > > > On Mon, 2022-03-21 at 12:22 +0000, Tvrtko Ursulin wrote: > > > > > > > > > > On 21/03/2022 11:03, Thomas Hellström wrote: > > > > > > Hi, Tvrtko. > > > > > > > > > > > > On 3/21/22 11:27, Tvrtko Ursulin wrote: > > > > > > > > > > > > > > On 19/03/2022 19:42, Michael Cheng wrote: > > > > > > > > To align with the discussion in [1][2], this patch > > > > > > > > series > > > > > > > > drops > > > > > > > > all > > > > > > > > usage of > > > > > > > > wbvind_on_all_cpus within i915 by either replacing the > > > > > > > > call > > > > > > > > with certain > > > > > > > > drm clflush helpers, or reverting to a previous logic. > > > > > > > > > > > > > > AFAIU, complaint from [1] was that it is wrong to provide > > > > > > > non > > > > > > > x86 > > > > > > > implementations under the wbinvd_on_all_cpus name. > > > > > > > Instead an > > > > > > > arch > > > > > > > agnostic helper which achieves the same effect could be > > > > > > > created. > > > > > > > Does > > > > > > > Arm have such concept? > > > > > > > > > > > > I also understand Linus' email like we shouldn't leak > > > > > > incoherent > > > > > > IO > > > > > > to > > > > > > other architectures, meaning any remaining wbinvd()s should > > > > > > be > > > > > > X86 > > > > > > only. > > > > > > > > > > The last part is completely obvious since it is a x86 > > > > > instruction > > > > > name. > > > > > > > > Yeah, I meant the function implementing wbinvd() semantics. > > > > > > > > > > > > > > But I think we can't pick a solution until we know how the > > > > > concept > > > > > maps > > > > > to Arm and that will also include seeing how the > > > > > drm_clflush_sg for > > > > > Arm > > > > > would look. Is there a range based solution, or just a big > > > > > hammer > > > > > there. > > > > > If the latter, then it is no good to churn all these reverts > > > > > but > > > > > instead > > > > > an arch agnostic wrapper, with a generic name, would be the > > > > > way to > > > > > go. > > > > > > > > But my impression was that ARM would not need the range-based > > > > interface > > > > either, because ARM is only for discrete and with discrete > > > > we're > > > > always > > > > coherent. > > > > > > Not sure what you mean here - what about flushing system memory > > > objects > > > on discrete? Those still need flushing on paths like suspend > > > which this > > > series touches. Am I missing something? > > > > System bos on discrete should always have > > > > I915_BO_CACHE_COHERENT_FOR_READ | I915_BO_CACHE_COHERENT_FOR_WRITE > > > > either by the gpu being fully cache coherent (or us mapping system > > write-combined). Hence no need for cache clflushes or wbinvd() for > > incoherent IO. > > Hmm so you are talking about the shmem ttm backend. It ends up > depending on the result of i915_ttm_cache_level, yes? It cannot end > up with I915_CACHE_NONE from that function? If the object is allocated with allowable placement in either LMEM or SYSTEM, and it ends in system, it gets allocated with I915_CACHE_NONE, but then the shmem ttm backend isn't used but TTM's wc pools, and the object should *always* be mapped wc. Even in system. 
> > I also found in i915_drm.h: > > * As caching mode when specifying `I915_MMAP_OFFSET_FIXED`, > WC or WB will > * be used, depending on the object placement on creation. WB > will be used > * when the object can only exist in system memory, WC > otherwise. > > If what you say is true, that on discrete it is _always_ WC, then > that needs updating as well. If an object is allocated as system only, then it is mapped WB, and we're relying on the gpu being cache coherent to avoid clflushes. Same is actually currently true if the object happens to be accessed by the cpu while evicted. Might need an update for that. > > > > > That's adhering to Linus' > > "And I sincerely hope to the gods that no cache-incoherent i915 > mess > ever makes it out of the x86 world. Incoherent IO was always a > historical mistake and should never ever happen again, so we should > not spread that horrific pattern around." > > Sure, but I was not talking about IO - just the CPU side access to > CPU side objects. OK, I was under the impression that clflushes() and wbinvd()s in i915 were only ever used to make data visible to non-snooping GPUs. Do you mean that there are other uses as well? Agreed, the wb cache flush on suspend (done only if the gpu is !I915_BO_CACHE_COHERENT_FOR_READ?) looks to not fit this pattern completely. Otherwise, for architectures where memory isn't always fully coherent with the cpu cache, I'd expect them to use the apis in asm/cacheflush.h, like flush_cache_range() and similar, which are nops on x86. Thanks, Thomas > > Regards, > > Tvrtko
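[The generic API Thomas points at, shown in use below; on x86 these calls compile away to nothing (asm-generic/cacheflush.h), while architectures with virtually indexed caches provide real implementations. Illustrative sketch, hypothetical helper name.]

    #include <asm/cacheflush.h>

    /* Make CPU-side stores through a user mapping visible before the
     * range is read through a different mapping. A no-op on x86. */
    static void example_flush_user_range(struct vm_area_struct *vma,
                                         unsigned long start,
                                         unsigned long end)
    {
            flush_cache_range(vma, start, end);
    }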
On 21/03/2022 15:15, Thomas Hellström wrote: > On Mon, 2022-03-21 at 14:43 +0000, Tvrtko Ursulin wrote: >> >> On 21/03/2022 13:40, Thomas Hellström wrote: >>> Hi, >>> >>> On Mon, 2022-03-21 at 13:12 +0000, Tvrtko Ursulin wrote: >>>> >>>> On 21/03/2022 12:33, Thomas Hellström wrote: >>>>> On Mon, 2022-03-21 at 12:22 +0000, Tvrtko Ursulin wrote: >>>>>> >>>>>> On 21/03/2022 11:03, Thomas Hellström wrote: >>>>>>> Hi, Tvrtko. >>>>>>> >>>>>>> On 3/21/22 11:27, Tvrtko Ursulin wrote: >>>>>>>> >>>>>>>> On 19/03/2022 19:42, Michael Cheng wrote: >>>>>>>>> To align with the discussion in [1][2], this patch >>>>>>>>> series >>>>>>>>> drops >>>>>>>>> all >>>>>>>>> usage of >>>>>>>>> wbvind_on_all_cpus within i915 by either replacing the >>>>>>>>> call >>>>>>>>> with certain >>>>>>>>> drm clflush helpers, or reverting to a previous logic. >>>>>>>> >>>>>>>> AFAIU, complaint from [1] was that it is wrong to provide >>>>>>>> non >>>>>>>> x86 >>>>>>>> implementations under the wbinvd_on_all_cpus name. >>>>>>>> Instead an >>>>>>>> arch >>>>>>>> agnostic helper which achieves the same effect could be >>>>>>>> created. >>>>>>>> Does >>>>>>>> Arm have such concept? >>>>>>> >>>>>>> I also understand Linus' email like we shouldn't leak >>>>>>> incoherent >>>>>>> IO >>>>>>> to >>>>>>> other architectures, meaning any remaining wbinvd()s should >>>>>>> be >>>>>>> X86 >>>>>>> only. >>>>>> >>>>>> The last part is completely obvious since it is a x86 >>>>>> instruction >>>>>> name. >>>>> >>>>> Yeah, I meant the function implementing wbinvd() semantics. >>>>> >>>>>> >>>>>> But I think we can't pick a solution until we know how the >>>>>> concept >>>>>> maps >>>>>> to Arm and that will also include seeing how the >>>>>> drm_clflush_sg for >>>>>> Arm >>>>>> would look. Is there a range based solution, or just a big >>>>>> hammer >>>>>> there. >>>>>> If the latter, then it is no good to churn all these reverts >>>>>> but >>>>>> instead >>>>>> an arch agnostic wrapper, with a generic name, would be the >>>>>> way to >>>>>> go. >>>>> >>>>> But my impression was that ARM would not need the range-based >>>>> interface >>>>> either, because ARM is only for discrete and with discrete >>>>> we're >>>>> always >>>>> coherent. >>>> >>>> Not sure what you mean here - what about flushing system memory >>>> objects >>>> on discrete? Those still need flushing on paths like suspend >>>> which this >>>> series touches. Am I missing something? >>> >>> System bos on discrete should always have >>> >>> I915_BO_CACHE_COHERENT_FOR_READ | I915_BO_CACHE_COHERENT_FOR_WRITE >>> >>> either by the gpu being fully cache coherent (or us mapping system >>> write-combined). Hence no need for cache clflushes or wbinvd() for >>> incoherent IO. >> >> Hmm so you are talking about the shmem ttm backend. It ends up >> depending on the result of i915_ttm_cache_level, yes? It cannot end >> up with I915_CACHE_NONE from that function? > > If the object is allocated with allowable placement in either LMEM or > SYSTEM, and it ends in system, it gets allocated with I915_CACHE_NONE, > but then the shmem ttm backend isn't used but TTM's wc pools, and the > object should *always* be mapped wc. Even in system. I am not familiar with neither TTM backend or wc pools so maybe a missed question - if obj->cache_level can be set to none, and obj->cache_coherency to zero, then during object lifetime helpers which consult those fields (like i915_gem_cpu_write_needs_clflush, __start_cpu_write, etc) are giving out incorrect answers? 
That is, it is irrelevant that they would say flushes are required, since in actuality those objects can never, from anywhere, be mapped as anything other than WC, so flushes aren't actually required? >> I also found in i915_drm.h: >> >> * As caching mode when specifying `I915_MMAP_OFFSET_FIXED`, >> WC or WB will >> * be used, depending on the object placement on creation. WB >> will be used >> * when the object can only exist in system memory, WC >> otherwise. >> >> If what you say is true, that on discrete it is _always_ WC, then >> that needs updating as well. > > If an object is allocated as system only, then it is mapped WB, and > we're relying on the gpu being cache coherent to avoid clflushes. Same > is actually currently true if the object happens to be accessed by the > cpu while evicted. Might need an update for that. Hmm okay, I think I actually misunderstood something here. I think the reason for the difference between a smem+lmem object which happens to be in smem and a smem-only object is eluding me. >>> >>> That's adhering to Linus' >>> >>> "And I sincerely hope to the gods that no cache-incoherent i915 >>> mess >>> ever makes it out of the x86 world. Incoherent IO was always a >>> historical mistake and should never ever happen again, so we >>> should >>> not spread that horrific pattern around." >> >> Sure, but I was not talking about IO - just the CPU side access >> to >> CPU side objects. > > OK, I was under the impression that clflushes() and wbinvd()s in > i915 > was only ever used to make data visible to non-snooping GPUs. > > Do you mean that there are other uses as well? Agreed the wb cache > flush on on suspend only if gpu is > !I915_BO_CACHE_COHERENT_FOR_READ? > looks to not fit this pattern completely. Don't know, I was first trying to understand the handling of obj->cache_coherent as discussed in the first quote block. Are the flags consistently set, and how will the Arm low-level code look? > Otherwise, for architectures where memory isn't always fully > coherent > with the cpu cache, I'd expect them to use the apis in > asm/cacheflush.h, like flush_cache_range() and similar, which are > nops > on x86. Hm do you know why there are no-ops? Like why wouldn't they map to clflush? Regards, Tvrtko
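[For context, the first helper Tvrtko names is roughly the following in trees of this era (paraphrased from i915_gem_object.h, not verbatim), which is why the values of obj->cache_dirty and obj->cache_coherent matter here.]

    static inline bool
    i915_gem_cpu_write_needs_clflush(struct drm_i915_gem_object *obj)
    {
            if (obj->cache_dirty)
                    return false;

            if (!(obj->cache_coherent & I915_BO_CACHE_COHERENT_FOR_WRITE))
                    return true;

            /* Currently in use by HW (display engine)? Keep flushed. */
            return i915_gem_object_is_framebuffer(obj);
    }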
On Tue, 2022-03-22 at 10:13 +0000, Tvrtko Ursulin wrote: > > On 21/03/2022 15:15, Thomas Hellström wrote: > > On Mon, 2022-03-21 at 14:43 +0000, Tvrtko Ursulin wrote: > > > > > > On 21/03/2022 13:40, Thomas Hellström wrote: > > > > Hi, > > > > > > > > On Mon, 2022-03-21 at 13:12 +0000, Tvrtko Ursulin wrote: > > > > > > > > > > On 21/03/2022 12:33, Thomas Hellström wrote: > > > > > > On Mon, 2022-03-21 at 12:22 +0000, Tvrtko Ursulin wrote: > > > > > > > > > > > > > > On 21/03/2022 11:03, Thomas Hellström wrote: > > > > > > > > Hi, Tvrtko. > > > > > > > > > > > > > > > > On 3/21/22 11:27, Tvrtko Ursulin wrote: > > > > > > > > > > > > > > > > > > On 19/03/2022 19:42, Michael Cheng wrote: > > > > > > > > > > To align with the discussion in [1][2], this patch > > > > > > > > > > series > > > > > > > > > > drops > > > > > > > > > > all > > > > > > > > > > usage of > > > > > > > > > > wbvind_on_all_cpus within i915 by either replacing > > > > > > > > > > the > > > > > > > > > > call > > > > > > > > > > with certain > > > > > > > > > > drm clflush helpers, or reverting to a previous > > > > > > > > > > logic. > > > > > > > > > > > > > > > > > > AFAIU, complaint from [1] was that it is wrong to > > > > > > > > > provide > > > > > > > > > non > > > > > > > > > x86 > > > > > > > > > implementations under the wbinvd_on_all_cpus name. > > > > > > > > > Instead an > > > > > > > > > arch > > > > > > > > > agnostic helper which achieves the same effect could > > > > > > > > > be > > > > > > > > > created. > > > > > > > > > Does > > > > > > > > > Arm have such concept? > > > > > > > > > > > > > > > > I also understand Linus' email like we shouldn't leak > > > > > > > > incoherent > > > > > > > > IO > > > > > > > > to > > > > > > > > other architectures, meaning any remaining wbinvd()s > > > > > > > > should > > > > > > > > be > > > > > > > > X86 > > > > > > > > only. > > > > > > > > > > > > > > The last part is completely obvious since it is a x86 > > > > > > > instruction > > > > > > > name. > > > > > > > > > > > > Yeah, I meant the function implementing wbinvd() semantics. > > > > > > > > > > > > > > > > > > > > But I think we can't pick a solution until we know how > > > > > > > the > > > > > > > concept > > > > > > > maps > > > > > > > to Arm and that will also include seeing how the > > > > > > > drm_clflush_sg for > > > > > > > Arm > > > > > > > would look. Is there a range based solution, or just a > > > > > > > big > > > > > > > hammer > > > > > > > there. > > > > > > > If the latter, then it is no good to churn all these > > > > > > > reverts > > > > > > > but > > > > > > > instead > > > > > > > an arch agnostic wrapper, with a generic name, would be > > > > > > > the > > > > > > > way to > > > > > > > go. > > > > > > > > > > > > But my impression was that ARM would not need the range- > > > > > > based > > > > > > interface > > > > > > either, because ARM is only for discrete and with discrete > > > > > > we're > > > > > > always > > > > > > coherent. > > > > > > > > > > Not sure what you mean here - what about flushing system > > > > > memory > > > > > objects > > > > > on discrete? Those still need flushing on paths like suspend > > > > > which this > > > > > series touches. Am I missing something? > > > > > > > > System bos on discrete should always have > > > > > > > > I915_BO_CACHE_COHERENT_FOR_READ | > > > > I915_BO_CACHE_COHERENT_FOR_WRITE > > > > > > > > either by the gpu being fully cache coherent (or us mapping > > > > system > > > > write-combined). 
Hence no need for cache clflushes or wbinvd() > > > > for > > > > incoherent IO. > > > > > > Hmm so you are talking about the shmem ttm backend. It ends up > > > depending on the result of i915_ttm_cache_level, yes? It cannot > > > end > > > up with I915_CACHE_NONE from that function? > > > > If the object is allocated with allowable placement in either LMEM > > or > > SYSTEM, and it ends in system, it gets allocated with > > I915_CACHE_NONE, > > but then the shmem ttm backend isn't used but TTM's wc pools, and > > the > > object should *always* be mapped wc. Even in system. > > I am not familiar with neither TTM backend or wc pools so maybe a > missed > question - if obj->cache_level can be set to none, and > obj->cache_coherency to zero, then during object lifetime helpers > which > consult those fields (like i915_gem_cpu_write_needs_clflush, > __start_cpu_write, etc) are giving out incorrect answers? That is, it > is > irrelevant that they would say flushes are required, since in > actuality > those objects can never ever and from anywhere be mapped other than > WC > so flushes aren't actually required? If we map other than WC somewhere in these situations, that should be a bug needing a fix. It might be that some of these helpers that you mention might still flag that a clflush is needed, and in that case that's an oversight that also needs fixing. > > > > I also found in i915_drm.h: > > > > > > * As caching mode when specifying > > > `I915_MMAP_OFFSET_FIXED`, > > > WC or WB will > > > * be used, depending on the object placement on > > > creation. WB > > > will be used > > > * when the object can only exist in system memory, WC > > > otherwise. > > > > > > If what you say is true, that on discrete it is _always_ WC, then > > > that needs updating as well. > > > > If an object is allocated as system only, then it is mapped WB, and > > we're relying on the gpu being cache coherent to avoid clflushes. > > Same > > is actually currently true if the object happens to be accessed by > > the > > cpu while evicted. Might need an update for that. > > Hmm okay, I think I actually misunderstood something here. I think > the > reason for difference bbtween smem+lmem object which happens to be in > smem and smem only object is eluding me. > > > > > > > > > That's adhering to Linus' > > > > > > > > "And I sincerely hope to the gods that no cache-incoherent i915 > > > > mess > > > > ever makes it out of the x86 world. Incoherent IO was always a > > > > historical mistake and should never ever happen again, so we > > > > should > > > > not spread that horrific pattern around." > > > > > > Sure, but I was not talking about IO - just the CPU side access > > > to > > > CPU side objects. > > > > OK, I was under the impression that clflushes() and wbinvd()s in > > i915 > > was only ever used to make data visible to non-snooping GPUs. > > > > Do you mean that there are other uses as well? Agreed the wb cache > > flush on on suspend only if gpu is > > !I915_BO_CACHE_COHERENT_FOR_READ? > > looks to not fit this pattern completely. > > Don't know, I was first trying to understand handling of the > obj->cache_coherent as discussed in the first quote block. Are the > flags > consistently set and how the Arm low level code will look. > > > Otherwise, for architectures where memory isn't always fully > > coherent > > with the cpu cache, I'd expect them to use the apis in > > asm/cacheflush.h, like flush_cache_range() and similar, which are > > nops > > on x86. > > Hm do you know why there are no-ops? 
Like why wouldn't they map to > clflush? I think it mostly boils down to the PIPT caches on x86. Everything is assumed to be coherent. Whereas some architectures keep different cache entries for different virtual addresses even if the physical page is the same... clflushes and wbinvds on x86 are for odd arch-specific situations where, for example, we change caching attributes of the linear kernel map mappings. /Thomas > > Regards, > > Tvrtko
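[An example of such an arch-specific situation, as a hedged sketch: changing the caching attribute of pages that are also visible through the kernel linear map. On x86 the set_memory_*() helpers rewrite the PTEs and do the required cache/TLB flushing internally; the wrapper name here is made up.]

    #include <asm/set_memory.h>

    /* Remap already-allocated pages write-combined in the linear map. */
    static int map_pages_wc(struct page *page, int numpages)
    {
            return set_memory_wc((unsigned long)page_address(page),
                                 numpages);
    }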
On Tue, 2022-03-22 at 11:26 +0100, Thomas Hellström wrote: > On Tue, 2022-03-22 at 10:13 +0000, Tvrtko Ursulin wrote: > > > > On 21/03/2022 15:15, Thomas Hellström wrote: > > > On Mon, 2022-03-21 at 14:43 +0000, Tvrtko Ursulin wrote: > > > > > > > > On 21/03/2022 13:40, Thomas Hellström wrote: > > > > > Hi, > > > > > > > > > > On Mon, 2022-03-21 at 13:12 +0000, Tvrtko Ursulin wrote: > > > > > > > > > > > > On 21/03/2022 12:33, Thomas Hellström wrote: > > > > > > > On Mon, 2022-03-21 at 12:22 +0000, Tvrtko Ursulin wrote: > > > > > > > > > > > > > > > > On 21/03/2022 11:03, Thomas Hellström wrote: > > > > > > > > > Hi, Tvrtko. > > > > > > > > > > > > > > > > > > On 3/21/22 11:27, Tvrtko Ursulin wrote: > > > > > > > > > > > > > > > > > > > > On 19/03/2022 19:42, Michael Cheng wrote: > > > > > > > > > > > To align with the discussion in [1][2], this > > > > > > > > > > > patch > > > > > > > > > > > series > > > > > > > > > > > drops > > > > > > > > > > > all > > > > > > > > > > > usage of > > > > > > > > > > > wbvind_on_all_cpus within i915 by either > > > > > > > > > > > replacing > > > > > > > > > > > the > > > > > > > > > > > call > > > > > > > > > > > with certain > > > > > > > > > > > drm clflush helpers, or reverting to a previous > > > > > > > > > > > logic. > > > > > > > > > > > > > > > > > > > > AFAIU, complaint from [1] was that it is wrong to > > > > > > > > > > provide > > > > > > > > > > non > > > > > > > > > > x86 > > > > > > > > > > implementations under the wbinvd_on_all_cpus name. > > > > > > > > > > Instead an > > > > > > > > > > arch > > > > > > > > > > agnostic helper which achieves the same effect > > > > > > > > > > could > > > > > > > > > > be > > > > > > > > > > created. > > > > > > > > > > Does > > > > > > > > > > Arm have such concept? > > > > > > > > > > > > > > > > > > I also understand Linus' email like we shouldn't leak > > > > > > > > > incoherent > > > > > > > > > IO > > > > > > > > > to > > > > > > > > > other architectures, meaning any remaining wbinvd()s > > > > > > > > > should > > > > > > > > > be > > > > > > > > > X86 > > > > > > > > > only. > > > > > > > > > > > > > > > > The last part is completely obvious since it is a x86 > > > > > > > > instruction > > > > > > > > name. > > > > > > > > > > > > > > Yeah, I meant the function implementing wbinvd() > > > > > > > semantics. > > > > > > > > > > > > > > > > > > > > > > > But I think we can't pick a solution until we know how > > > > > > > > the > > > > > > > > concept > > > > > > > > maps > > > > > > > > to Arm and that will also include seeing how the > > > > > > > > drm_clflush_sg for > > > > > > > > Arm > > > > > > > > would look. Is there a range based solution, or just a > > > > > > > > big > > > > > > > > hammer > > > > > > > > there. > > > > > > > > If the latter, then it is no good to churn all these > > > > > > > > reverts > > > > > > > > but > > > > > > > > instead > > > > > > > > an arch agnostic wrapper, with a generic name, would be > > > > > > > > the > > > > > > > > way to > > > > > > > > go. > > > > > > > > > > > > > > But my impression was that ARM would not need the range- > > > > > > > based > > > > > > > interface > > > > > > > either, because ARM is only for discrete and with > > > > > > > discrete > > > > > > > we're > > > > > > > always > > > > > > > coherent. > > > > > > > > > > > > Not sure what you mean here - what about flushing system > > > > > > memory > > > > > > objects > > > > > > on discrete? 
Those still need flushing on paths like > > > > > > suspend > > > > > > which this > > > > > > series touches. Am I missing something? > > > > > > > > > > System bos on discrete should always have > > > > > > > > > > I915_BO_CACHE_COHERENT_FOR_READ | > > > > > I915_BO_CACHE_COHERENT_FOR_WRITE > > > > > > > > > > either by the gpu being fully cache coherent (or us mapping > > > > > system > > > > > write-combined). Hence no need for cache clflushes or > > > > > wbinvd() > > > > > for > > > > > incoherent IO. > > > > > > > > Hmm so you are talking about the shmem ttm backend. It ends up > > > > depending on the result of i915_ttm_cache_level, yes? It cannot > > > > end > > > > up with I915_CACHE_NONE from that function? > > > > > > If the object is allocated with allowable placement in either > > > LMEM > > > or > > > SYSTEM, and it ends in system, it gets allocated with > > > I915_CACHE_NONE, > > > but then the shmem ttm backend isn't used but TTM's wc pools, and > > > the > > > object should *always* be mapped wc. Even in system. > > > > I am not familiar with neither TTM backend or wc pools so maybe a > > missed > > question - if obj->cache_level can be set to none, and > > obj->cache_coherency to zero, then during object lifetime helpers > > which > > consult those fields (like i915_gem_cpu_write_needs_clflush, > > __start_cpu_write, etc) are giving out incorrect answers? That is, > > it > > is > > irrelevant that they would say flushes are required, since in > > actuality > > those objects can never ever and from anywhere be mapped other than > > WC > > so flushes aren't actually required? > > If we map other than WC somewhere in these situations, that should be > a > bug needing a fix. It might be that some of these helpers that you > mention might still flag that a clflush is needed, and in that case > that's an oversight that also needs fixing. Actually, it seems like most of these has a IS_DGFX() in them, in particular i915_gem_clflush_object(), but it looks like some sort of cleanup might be needed here. In particular we might want to introduce an IS_COHERENT() in case we change the api at some point also for integrated. /Thomas
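[Such a cleanup would presumably look something like the following; both names are hypothetical, not existing i915 API.]

    /* One predicate instead of scattered IS_DGFX() checks. */
    #define I915_HAS_COHERENT_IO(i915)      (IS_DGFX(i915))

    static bool
    i915_gem_object_may_need_clflush(struct drm_i915_gem_object *obj)
    {
            struct drm_i915_private *i915 = to_i915(obj->base.dev);

            if (I915_HAS_COHERENT_IO(i915))
                    return false; /* discrete: no incoherent IO, ever */

            return !(obj->cache_coherent &
                     I915_BO_CACHE_COHERENT_FOR_WRITE);
    }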
On 22/03/2022 10:26, Thomas Hellström wrote: > On Tue, 2022-03-22 at 10:13 +0000, Tvrtko Ursulin wrote: >> >> On 21/03/2022 15:15, Thomas Hellström wrote: >>> On Mon, 2022-03-21 at 14:43 +0000, Tvrtko Ursulin wrote: >>>> >>>> On 21/03/2022 13:40, Thomas Hellström wrote: >>>>> Hi, >>>>> >>>>> On Mon, 2022-03-21 at 13:12 +0000, Tvrtko Ursulin wrote: >>>>>> >>>>>> On 21/03/2022 12:33, Thomas Hellström wrote: >>>>>>> On Mon, 2022-03-21 at 12:22 +0000, Tvrtko Ursulin wrote: >>>>>>>> >>>>>>>> On 21/03/2022 11:03, Thomas Hellström wrote: >>>>>>>>> Hi, Tvrtko. >>>>>>>>> >>>>>>>>> On 3/21/22 11:27, Tvrtko Ursulin wrote: >>>>>>>>>> >>>>>>>>>> On 19/03/2022 19:42, Michael Cheng wrote: >>>>>>>>>>> To align with the discussion in [1][2], this patch >>>>>>>>>>> series >>>>>>>>>>> drops >>>>>>>>>>> all >>>>>>>>>>> usage of >>>>>>>>>>> wbvind_on_all_cpus within i915 by either replacing >>>>>>>>>>> the >>>>>>>>>>> call >>>>>>>>>>> with certain >>>>>>>>>>> drm clflush helpers, or reverting to a previous >>>>>>>>>>> logic. >>>>>>>>>> >>>>>>>>>> AFAIU, complaint from [1] was that it is wrong to >>>>>>>>>> provide >>>>>>>>>> non >>>>>>>>>> x86 >>>>>>>>>> implementations under the wbinvd_on_all_cpus name. >>>>>>>>>> Instead an >>>>>>>>>> arch >>>>>>>>>> agnostic helper which achieves the same effect could >>>>>>>>>> be >>>>>>>>>> created. >>>>>>>>>> Does >>>>>>>>>> Arm have such concept? >>>>>>>>> >>>>>>>>> I also understand Linus' email like we shouldn't leak >>>>>>>>> incoherent >>>>>>>>> IO >>>>>>>>> to >>>>>>>>> other architectures, meaning any remaining wbinvd()s >>>>>>>>> should >>>>>>>>> be >>>>>>>>> X86 >>>>>>>>> only. >>>>>>>> >>>>>>>> The last part is completely obvious since it is a x86 >>>>>>>> instruction >>>>>>>> name. >>>>>>> >>>>>>> Yeah, I meant the function implementing wbinvd() semantics. >>>>>>> >>>>>>>> >>>>>>>> But I think we can't pick a solution until we know how >>>>>>>> the >>>>>>>> concept >>>>>>>> maps >>>>>>>> to Arm and that will also include seeing how the >>>>>>>> drm_clflush_sg for >>>>>>>> Arm >>>>>>>> would look. Is there a range based solution, or just a >>>>>>>> big >>>>>>>> hammer >>>>>>>> there. >>>>>>>> If the latter, then it is no good to churn all these >>>>>>>> reverts >>>>>>>> but >>>>>>>> instead >>>>>>>> an arch agnostic wrapper, with a generic name, would be >>>>>>>> the >>>>>>>> way to >>>>>>>> go. >>>>>>> >>>>>>> But my impression was that ARM would not need the range- >>>>>>> based >>>>>>> interface >>>>>>> either, because ARM is only for discrete and with discrete >>>>>>> we're >>>>>>> always >>>>>>> coherent. >>>>>> >>>>>> Not sure what you mean here - what about flushing system >>>>>> memory >>>>>> objects >>>>>> on discrete? Those still need flushing on paths like suspend >>>>>> which this >>>>>> series touches. Am I missing something? >>>>> >>>>> System bos on discrete should always have >>>>> >>>>> I915_BO_CACHE_COHERENT_FOR_READ | >>>>> I915_BO_CACHE_COHERENT_FOR_WRITE >>>>> >>>>> either by the gpu being fully cache coherent (or us mapping >>>>> system >>>>> write-combined). Hence no need for cache clflushes or wbinvd() >>>>> for >>>>> incoherent IO. >>>> >>>> Hmm so you are talking about the shmem ttm backend. It ends up >>>> depending on the result of i915_ttm_cache_level, yes? It cannot >>>> end >>>> up with I915_CACHE_NONE from that function? 
>>> >>> If the object is allocated with allowable placement in either LMEM >>> or >>> SYSTEM, and it ends in system, it gets allocated with >>> I915_CACHE_NONE, >>> but then the shmem ttm backend isn't used but TTM's wc pools, and >>> the >>> object should *always* be mapped wc. Even in system. >> >> I am not familiar with neither TTM backend or wc pools so maybe a >> missed >> question - if obj->cache_level can be set to none, and >> obj->cache_coherency to zero, then during object lifetime helpers >> which >> consult those fields (like i915_gem_cpu_write_needs_clflush, >> __start_cpu_write, etc) are giving out incorrect answers? That is, it >> is >> irrelevant that they would say flushes are required, since in >> actuality >> those objects can never ever and from anywhere be mapped other than >> WC >> so flushes aren't actually required? > > If we map other than WC somewhere in these situations, that should be a > bug needing a fix. It might be that some of these helpers that you > mention might still flag that a clflush is needed, and in that case > that's an oversight that also needs fixing. > >> >>>> I also found in i915_drm.h: >>>> >>>> * As caching mode when specifying >>>> `I915_MMAP_OFFSET_FIXED`, >>>> WC or WB will >>>> * be used, depending on the object placement on >>>> creation. WB >>>> will be used >>>> * when the object can only exist in system memory, WC >>>> otherwise. >>>> >>>> If what you say is true, that on discrete it is _always_ WC, then >>>> that needs updating as well. >>> >>> If an object is allocated as system only, then it is mapped WB, and >>> we're relying on the gpu being cache coherent to avoid clflushes. >>> Same >>> is actually currently true if the object happens to be accessed by >>> the >>> cpu while evicted. Might need an update for that. >> >> Hmm okay, I think I actually misunderstood something here. I think >> the >> reason for difference bbtween smem+lmem object which happens to be in >> smem and smem only object is eluding me. >> >>>>> >>>>> That's adhering to Linus' >>>>> >>>>> "And I sincerely hope to the gods that no cache-incoherent i915 >>>>> mess >>>>> ever makes it out of the x86 world. Incoherent IO was always a >>>>> historical mistake and should never ever happen again, so we >>>>> should >>>>> not spread that horrific pattern around." >>>> >>>> Sure, but I was not talking about IO - just the CPU side access >>>> to >>>> CPU side objects. >>> >>> OK, I was under the impression that clflushes() and wbinvd()s in >>> i915 >>> was only ever used to make data visible to non-snooping GPUs. >>> >>> Do you mean that there are other uses as well? Agreed the wb cache >>> flush on on suspend only if gpu is >>> !I915_BO_CACHE_COHERENT_FOR_READ? >>> looks to not fit this pattern completely. >> >> Don't know, I was first trying to understand handling of the >> obj->cache_coherent as discussed in the first quote block. Are the >> flags >> consistently set and how the Arm low level code will look. >> >>> Otherwise, for architectures where memory isn't always fully >>> coherent >>> with the cpu cache, I'd expect them to use the apis in >>> asm/cacheflush.h, like flush_cache_range() and similar, which are >>> nops >>> on x86. >> >> Hm do you know why there are no-ops? Like why wouldn't they map to >> clflush? > > I think it mostly boils down to the PIPT caches on x86. Everything is > assumed to be coherent. Whereas some architextures keep different cache > entries for different virtual addresses even if the physical page is > the same... 
> > clflushes and wbinvds on x86 are for odd arch-specific situations > where, for example where we change caching attributes of the linear > kernel map mappings. So in summary we have flush_cache_range, which is generic, not implemented on x86, and works with virtual addresses, so not directly usable even if an x86 implementation was added. There is also the x86-specific clflush_cache_range, which works with virtual addresses as well, so no good for drm_clflush_sg. The question you implicitly raise, correct me if I got it wrong, is whether we should even be trying to extend drm_clflush_sg for Arm, given how most (all?) call sites are not needed on discrete, is that right? Would that mean we could leave most of the code as is and just replace wbinvd_on_all_cpus with something like i915_flush_cpu_caches, which would then legitimately do nothing, at least on Arm if not also on discrete in general? If that would work it would make a small and easy-to-review series. I don't think it would collide with what Linus asked, since it is not propagating undesirable things further: if there is no actual need to flush then there is no need to make it range-based either. Exception would be the dmabuf get pages patch which needs a proper implementation of a new drm flush helper. Regards, Tvrtko
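[A minimal sketch of the wrapper proposed here, assuming the name and semantics Tvrtko describes; this is not an existing API. As Thomas notes in the follow-up, a second step could make it range-based where that helps performance.]

    /* Flush CPU caches for incoherent IO, or do nothing where the
     * concept does not exist. */
    static inline void i915_flush_cpu_caches(struct drm_i915_private *i915)
    {
    #ifdef CONFIG_X86
            if (!IS_DGFX(i915))
                    wbinvd_on_all_cpus();
    #endif
            /* Arm (and discrete in general): coherent, so legitimately
             * a no-op. */
    }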
On Tue, 2022-03-22 at 11:20 +0000, Tvrtko Ursulin wrote:
> 
> On 22/03/2022 10:26, Thomas Hellström wrote:
> > On Tue, 2022-03-22 at 10:13 +0000, Tvrtko Ursulin wrote:
> > > 
> > > On 21/03/2022 15:15, Thomas Hellström wrote:
> > > > On Mon, 2022-03-21 at 14:43 +0000, Tvrtko Ursulin wrote:
> > > > > 
> > > > > On 21/03/2022 13:40, Thomas Hellström wrote:
> > > > > > On Mon, 2022-03-21 at 13:12 +0000, Tvrtko Ursulin wrote:
> > > > > > > 
> > > > > > > On 21/03/2022 12:33, Thomas Hellström wrote:
> > > > > > > > On Mon, 2022-03-21 at 12:22 +0000, Tvrtko Ursulin wrote:
> > > > > > > > > 
> > > > > > > > > [snip]
> > > > > > > > > 
> > > > > > > > > The last part is completely obvious since it is an x86
> > > > > > > > > instruction name.
> > > > > > > > 
> > > > > > > > Yeah, I meant the function implementing wbinvd() semantics.
> > > > > > > > 
> > > > > > > > > But I think we can't pick a solution until we know how the
> > > > > > > > > concept maps to Arm, and that will also include seeing how
> > > > > > > > > drm_clflush_sg for Arm would look. Is there a range-based
> > > > > > > > > solution, or just a big hammer there? If the latter, then
> > > > > > > > > it is no good to churn all these reverts; instead an
> > > > > > > > > arch-agnostic wrapper, with a generic name, would be the
> > > > > > > > > way to go.
> > > > > > > > 
> > > > > > > > But my impression was that ARM would not need the range-based
> > > > > > > > interface either, because ARM is only for discrete, and with
> > > > > > > > discrete we're always coherent.
> > > > > > > 
> > > > > > > Not sure what you mean here - what about flushing system memory
> > > > > > > objects on discrete? Those still need flushing on paths like
> > > > > > > suspend, which this series touches. Am I missing something?
> > > > > > 
> > > > > > System bos on discrete should always have
> > > > > > 
> > > > > > I915_BO_CACHE_COHERENT_FOR_READ |
> > > > > > I915_BO_CACHE_COHERENT_FOR_WRITE
> > > > > > 
> > > > > > either by the gpu being fully cache coherent or by us mapping
> > > > > > system memory write-combined. Hence no need for cache clflushes
> > > > > > or wbinvd() for incoherent IO.
> > > > > 
> > > > > Hmm, so you are talking about the shmem ttm backend. It ends up
> > > > > depending on the result of i915_ttm_cache_level, yes? It cannot
> > > > > end up with I915_CACHE_NONE from that function?
> > > > 
> > > > If the object is allocated with allowable placement in either LMEM
> > > > or SYSTEM, and it ends up in system, it gets allocated with
> > > > I915_CACHE_NONE, but then the shmem ttm backend isn't used but TTM's
> > > > wc pools, and the object should *always* be mapped wc, even in
> > > > system.
> > > 
> > > I am not familiar with either the TTM backend or the wc pools, so
> > > maybe a missed question - if obj->cache_level can be set to none, and
> > > obj->cache_coherency to zero, are the helpers which consult those
> > > fields during the object lifetime (like
> > > i915_gem_cpu_write_needs_clflush, __start_cpu_write, etc.) giving out
> > > incorrect answers? That is, is it irrelevant that they would say
> > > flushes are required, since in actuality those objects can never,
> > > from anywhere, be mapped other than WC, so flushes aren't actually
> > > required?
> > 
> > If we map other than WC somewhere in these situations, that should be
> > a bug needing a fix. It might be that some of the helpers you mention
> > still flag that a clflush is needed, and in that case that's an
> > oversight that also needs fixing.
> > 
> > > I also found in i915_drm.h:
> > > 
> > >  * As caching mode when specifying `I915_MMAP_OFFSET_FIXED`, WC or WB
> > >  * will be used, depending on the object placement on creation. WB
> > >  * will be used when the object can only exist in system memory, WC
> > >  * otherwise.
> > > 
> > > If what you say is true, that on discrete it is _always_ WC, then
> > > that needs updating as well.
> > 
> > If an object is allocated as system only, then it is mapped WB, and
> > we're relying on the gpu being cache coherent to avoid clflushes. The
> > same is actually currently true if the object happens to be accessed
> > by the cpu while evicted. Might need an update for that.
> 
> Hmm okay, I think I actually misunderstood something here. I think the
> reason for the difference between an smem+lmem object which happens to
> be in smem and an smem-only object is eluding me.
> 
> > > > > > That's adhering to Linus'
> > > > > > 
> > > > > > "And I sincerely hope to the gods that no cache-incoherent i915
> > > > > > mess ever makes it out of the x86 world. Incoherent IO was
> > > > > > always a historical mistake and should never ever happen again,
> > > > > > so we should not spread that horrific pattern around."
> > > > > 
> > > > > Sure, but I was not talking about IO - just the CPU side access
> > > > > to CPU side objects.
> > > > 
> > > > OK, I was under the impression that clflushes() and wbinvd()s in
> > > > i915 were only ever used to make data visible to non-snooping GPUs.
> > > > 
> > > > Do you mean that there are other uses as well? Agreed, the wb cache
> > > > flush on suspend only if the gpu is
> > > > !I915_BO_CACHE_COHERENT_FOR_READ looks to not fit this pattern
> > > > completely.
> > > 
> > > Don't know, I was first trying to understand the handling of
> > > obj->cache_coherent as discussed in the first quote block - are the
> > > flags consistently set, and how will the Arm low-level code look?
> > > 
> > > > Otherwise, for architectures where memory isn't always fully
> > > > coherent with the cpu cache, I'd expect them to use the apis in
> > > > asm/cacheflush.h, like flush_cache_range() and similar, which are
> > > > nops on x86.
> > > 
> > > Hm, do you know why they are no-ops? Like, why wouldn't they map to
> > > clflush?
> > 
> > I think it mostly boils down to the PIPT caches on x86. Everything is
> > assumed to be coherent. Whereas some architectures keep different
> > cache entries for different virtual addresses even if the physical
> > page is the same...
> > 
> > clflushes and wbinvds on x86 are for odd arch-specific situations
> > where, for example, we change the caching attributes of the linear
> > kernel map mappings.
> 
> So in summary we have flush_cache_range, which is generic, not
> implemented on x86, and works with virtual addresses, so it is not
> directly usable even if an x86 implementation were added.

I think for the intended flush_cache_range() semantics, "make this
range visible to all vms on all cpus", the x86 implementation is
actually a nop, and correctly implemented.

> There is also the x86-specific clflush_cache_range, which works with
> virtual addresses as well, so no good for drm_clflush_sg.
> 
> The question you implicitly raise, correct me if I got it wrong, is
> whether we should even be trying to extend drm_clflush_sg for Arm,
> given how most (all?) call sites are not needed on discrete. Is that
> right?

Yes, exactly. No need to bother figuring this out for ARM, as we don't
do any incoherent IO.

> Would that mean we could leave most of the code as is and just replace
> wbinvd_on_all_cpus with something like i915_flush_cpu_caches, which
> would then legitimately do nothing, at least on Arm if not also on
> discrete in general?

Yes, with the caveat that we should, at least as a second step, make
i915_flush_cpu_caches() range-based if possible from a performance
point of view.

> If that would work it would make a small and easy to review series. I
> don't think it would collide with what Linus asked, since it is not
> propagating undesirable things further - given that if there is no
> actual need to flush, then there is no need to make it range-based
> either.
> 
> An exception would be the dmabuf get pages patch, which needs a proper
> implementation of a new drm flush helper.

I think the dmabuf get_pages (note that that's also only for integrated
I915_CACHE_NONE, x86-only situations) can be done with

dma_buf_vmap(dma_buf, &virtual);
drm_clflush_virt_range(virtual, length);
dma_buf_vunmap(&virtual);

/Thomas

> Regards,
> 
> Tvrtko
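For illustration, a minimal sketch of the first-step wrapper discussed
above, assuming the name i915_flush_cpu_caches() is only the placeholder
used in this thread and not an existing function:

    /*
     * Hypothetical first-step helper, per the thread above: keep the
     * existing big-hammer semantics on x86, and legitimately do nothing
     * on architectures where i915 does no incoherent IO.
     */
    #ifdef CONFIG_X86
    #include <asm/smp.h>

    static inline void i915_flush_cpu_caches(void)
    {
            wbinvd_on_all_cpus();
    }
    #else
    static inline void i915_flush_cpu_caches(void)
    {
            /* Discrete-only platforms are cache coherent: nothing to flush. */
    }
    #endif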
On 22/03/2022 11:37, Thomas Hellström wrote:
> On Tue, 2022-03-22 at 11:20 +0000, Tvrtko Ursulin wrote:
>>
>> On 22/03/2022 10:26, Thomas Hellström wrote:

[snip]

>> So in summary we have flush_cache_range, which is generic, not
>> implemented on x86, and works with virtual addresses, so it is not
>> directly usable even if an x86 implementation were added.
>
> I think for the intended flush_cache_range() semantics, "make this
> range visible to all vms on all cpus", the x86 implementation is
> actually a nop, and correctly implemented.

If that is so then I agree. (I did not spend much time looking for the
desired semantics, just noticed there was no kerneldoc next to the
function and stopped there.)

>> There is also the x86-specific clflush_cache_range, which works with
>> virtual addresses as well, so no good for drm_clflush_sg.
>>
>> The question you implicitly raise, correct me if I got it wrong, is
>> whether we should even be trying to extend drm_clflush_sg for Arm,
>> given how most (all?) call sites are not needed on discrete. Is that
>> right?
>
> Yes, exactly. No need to bother figuring this out for ARM, as we don't
> do any incoherent IO.
>
>> Would that mean we could leave most of the code as is and just replace
>> wbinvd_on_all_cpus with something like i915_flush_cpu_caches, which
>> would then legitimately do nothing, at least on Arm if not also on
>> discrete in general?
>
> Yes, with the caveat that we should, at least as a second step, make
> i915_flush_cpu_caches() range-based if possible from a performance
> point of view.

Sounds like a plan, and I am counting on the second step to really be a
second step, because that one will need to actually figure out and
sufficiently elaborate on all three proposed reverts, which was missing
in this posting. So the first step unblocks Arm builds very cheaply and
non-controversially, and the second step tries going the range route.

>> If that would work it would make a small and easy to review series. I
>> don't think it would collide with what Linus asked, since it is not
>> propagating undesirable things further - given that if there is no
>> actual need to flush, then there is no need to make it range-based
>> either.
>>
>> An exception would be the dmabuf get pages patch, which needs a proper
>> implementation of a new drm flush helper.
>
> I think the dmabuf get_pages (note that that's also only for integrated
> I915_CACHE_NONE, x86-only situations) can be done with
>
> dma_buf_vmap(dma_buf, &virtual);
> drm_clflush_virt_range(virtual, length);
> dma_buf_vunmap(&virtual);

Looks plausible to me. The downside is that it vmaps the whole object at
once, so it may regress, at least on 32-bit (!) builds. It would work in
theory to fall back to page by page, but whether it would be worth it
just for 32-bit I am not sure.

Regards,

Tvrtko
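Purely as a sketch of the second step, a range-based variant could then
look like this (the helper name is again hypothetical;
drm_clflush_virt_range() is the existing DRM cache helper, and the
big-hammer call sites would each need auditing before conversion):

    #include <drm/drm_cache.h>

    /*
     * Hypothetical second-step helper: range-based flush on x86,
     * legitimately a no-op where there is no incoherent IO to flush.
     */
    static inline void i915_flush_cpu_cache_range(void *addr,
                                                  unsigned long length)
    {
            if (IS_ENABLED(CONFIG_X86))
                    drm_clflush_virt_range(addr, length);
    }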
On 3/22/22 13:53, Tvrtko Ursulin wrote:
>
> On 22/03/2022 11:37, Thomas Hellström wrote:

[snip]

>> I think the dmabuf get_pages (note that that's also only for integrated
>> I915_CACHE_NONE, x86-only situations) can be done with
>>
>> dma_buf_vmap(dma_buf, &virtual);
>> drm_clflush_virt_range(virtual, length);
>> dma_buf_vunmap(&virtual);
>
> Looks plausible to me. The downside is that it vmaps the whole object at
> once, so it may regress, at least on 32-bit (!) builds. It would work in
> theory to fall back to page by page, but whether it would be worth it
> just for 32-bit I am not sure.

Back in the day, IIRC, there was also a kmap() api for dma-buf, but
nobody used it. And yes, vmap is not ideal, but a simple fallback to
page-based flushing (or even wbinvd on the rare occasion of a vmap
error) might be ok.

/Thomas

> Regards,
>
> Tvrtko
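To make the vmap-plus-fallback idea concrete, here is a minimal sketch
against the v5.17-era dma-buf API (dma_buf_vmap() took a struct
dma_buf_map at the time); the helper name and the pages/num_pages
parameters are illustrative only, not an existing i915 interface, and
the sketch assumes a system-memory object (map.is_iomem == false):

    #include <linux/dma-buf.h>
    #include <linux/mm.h>
    #include <drm/drm_cache.h>

    /* Hypothetical flush helper for the dmabuf get_pages path. */
    static void i915_flush_dmabuf(struct dma_buf *dma_buf,
                                  struct page **pages,
                                  unsigned long num_pages)
    {
            struct dma_buf_map map;

            if (dma_buf_vmap(dma_buf, &map)) {
                    /*
                     * vmap failed (e.g. 32-bit vmalloc space exhaustion):
                     * fall back to flushing page by page.
                     */
                    drm_clflush_pages(pages, num_pages);
                    return;
            }

            /* One flush over a linear mapping of the whole object. */
            drm_clflush_virt_range(map.vaddr, num_pages << PAGE_SHIFT);
            dma_buf_vunmap(dma_buf, &map);
    }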