Message ID | 20201111145322.15793-5-david@redhat.com (mailing list archive) |
---|---|
State | New, archived |
Headers | show |
Series | powernv/memtrace: don't abuse memory hot(un)plug infrastructure for memory allocations |
On Wed, Nov 11, 2020 at 03:53:18PM +0100, David Hildenbrand wrote:
> @@ -144,7 +147,9 @@ void __ref arch_remove_linear_mapping(u64 start, u64 size)
>  	start = (unsigned long)__va(start);
>  	flush_dcache_range_chunked(start, start + size, FLUSH_CHUNK_SIZE);
>  
> +	mutex_lock(&linear_mapping_mutex);
>  	ret = remove_section_mapping(start, start + size);
> +	mutex_unlock(&linear_mapping_mutex);
>  	WARN_ON_ONCE(ret);

My expertise in this area is low, so bear with me.

Why do we not need to protect flush_dcache_range_chunked and
vm_unmap_aliases?
On 17.11.20 16:37, Oscar Salvador wrote:
> On Wed, Nov 11, 2020 at 03:53:18PM +0100, David Hildenbrand wrote:
>> @@ -144,7 +147,9 @@ void __ref arch_remove_linear_mapping(u64 start, u64 size)
>>  	start = (unsigned long)__va(start);
>>  	flush_dcache_range_chunked(start, start + size, FLUSH_CHUNK_SIZE);
>>  
>> +	mutex_lock(&linear_mapping_mutex);
>>  	ret = remove_section_mapping(start, start + size);
>> +	mutex_unlock(&linear_mapping_mutex);
>>  	WARN_ON_ONCE(ret);
> 
> My expertise in this area is low, so bear with me.
> 
> Why do we not need to protect flush_dcache_range_chunked and
> vm_unmap_aliases?

vm_unmap_aliases() does its own locking and can handle concurrent calls.

flush_dcache_range_chunked()->flush_dcache_range() ends up as a sequence
of memory barriers paired with dcbf instructions.

dcbf: Copies modified cache blocks to main storage and invalidates the
copy in the data cache.

It's called from various places and no global variables seem to be
involved, so it looks like it doesn't need any kind of locking.
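To make the dcbf point concrete, here is a simplified sketch of what flush_dcache_range() boils down to on ppc64. It is an approximation of arch/powerpc/include/asm/cacheflush.h, with the cache block size hard-coded for illustration, not a verbatim copy:

```c
/*
 * Simplified sketch: write back and invalidate every data cache block
 * covering [start, stop).  Only barriers and per-block dcbf instructions,
 * no shared kernel state, hence no locking requirement.
 * (Approximation of the ppc64 flush_dcache_range(); the 128-byte block
 * size is an assumption for illustration.)
 */
static inline void flush_dcache_range_sketch(unsigned long start,
					     unsigned long stop)
{
	const unsigned long bytes = 128;	/* assumed L1 dcache block size */
	unsigned long addr = start & ~(bytes - 1);

	mb();				/* order prior stores before flushing */
	for (; addr < stop; addr += bytes)
		asm volatile("dcbf 0, %0" : : "r" (addr) : "memory");
	mb();				/* wait for the flushes to complete */
}
```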
diff --git a/arch/powerpc/mm/mem.c b/arch/powerpc/mm/mem.c
index 8a86d81f8df0..ca5c4b54c366 100644
--- a/arch/powerpc/mm/mem.c
+++ b/arch/powerpc/mm/mem.c
@@ -58,6 +58,7 @@
 #define CPU_FTR_NOEXECUTE	0
 #endif
 
+static DEFINE_MUTEX(linear_mapping_mutex);
 unsigned long long memory_limit;
 bool init_mem_is_free;
 
@@ -126,8 +127,10 @@ int __ref arch_create_linear_mapping(int nid, u64 start, u64 size,
 	int rc;
 
 	start = (unsigned long)__va(start);
+	mutex_lock(&linear_mapping_mutex);
 	rc = create_section_mapping(start, start + size,
 				    nid, params->pgprot);
+	mutex_unlock(&linear_mapping_mutex);
 	if (rc) {
 		pr_warn("Unable to create linear mapping for 0x%llx..0x%llx: %d\n",
 			start, start + size, rc);
@@ -144,7 +147,9 @@ void __ref arch_remove_linear_mapping(u64 start, u64 size)
 	start = (unsigned long)__va(start);
 	flush_dcache_range_chunked(start, start + size, FLUSH_CHUNK_SIZE);
 
+	mutex_lock(&linear_mapping_mutex);
 	ret = remove_section_mapping(start, start + size);
+	mutex_unlock(&linear_mapping_mutex);
 	WARN_ON_ONCE(ret);
 
 	/* Ensure all vmalloc mappings are flushed in case they also
This code currently relies on mem_hotplug_begin()/mem_hotplug_done() -
create_section_mapping()/remove_section_mapping() implementations cannot
tolerate getting called concurrently. Let's prepare for callers (memtrace)
not holding any such locks (and don't force them to mess with memory
hotplug locks).

Other parts in these functions don't seem to rely on external locking.

Cc: Michael Ellerman <mpe@ellerman.id.au>
Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Rashmica Gupta <rashmica.g@gmail.com>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Mike Rapoport <rppt@kernel.org>
Cc: Michal Hocko <mhocko@suse.com>
Cc: Oscar Salvador <osalvador@suse.de>
Cc: Wei Yang <richard.weiyang@linux.alibaba.com>
Signed-off-by: David Hildenbrand <david@redhat.com>
---
 arch/powerpc/mm/mem.c | 5 +++++
 1 file changed, 5 insertions(+)
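For illustration, a hypothetical memtrace-style caller could then look roughly like the sketch below; the function name and flow are assumptions based on the commit text, not code from this series. The point is that the caller relies on linear_mapping_mutex inside arch_create_linear_mapping()/arch_remove_linear_mapping() instead of taking the memory hotplug locks itself:

```c
/*
 * Hypothetical sketch of a memtrace-style caller (name and flow are
 * assumptions, not taken from this series).  The create/remove helpers
 * now serialize on linear_mapping_mutex internally, so the caller does
 * not need mem_hotplug_begin()/mem_hotplug_done().
 */
static int memtrace_remap_region(int nid, u64 start, u64 size,
				 struct mhp_params *params)
{
	/* Drop the region from the kernel linear mapping. */
	arch_remove_linear_mapping(start, size);

	/* ... hand the region to the hardware trace engine ... */

	/* Later: restore the linear mapping for the region. */
	return arch_create_linear_mapping(nid, start, size, params);
}
```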