Message ID | 20231024134637.3120277-1-surenb@google.com (mailing list archive)
---|---
Series | Memory allocation profiling
On Tue, Oct 24, 2023 at 06:45:57AM -0700, Suren Baghdasaryan wrote:
> Updates since the last version [1]
> - Simplified allocation tagging macros;
> - Runtime enable/disable sysctl switch (/proc/sys/vm/mem_profiling)
>   instead of kernel command-line option;
> - CONFIG_MEM_ALLOC_PROFILING_BY_DEFAULT to select default enable state;
> - Changed the user-facing API from debugfs to procfs (/proc/allocinfo);
> - Removed context capture support to make patch incremental;
> - Renamed uninstrumented allocation functions to use _noprof suffix;
> - Added __GFP_LAST_BIT to make the code cleaner;
> - Removed lazy per-cpu counters; it turned out the memory savings were
>   minimal and not worth the performance impact;

Hello Suren,

> Performance overhead:
> To evaluate performance we implemented an in-kernel test executing
> multiple get_free_page/free_page and kmalloc/kfree calls with allocation
> sizes growing from 8 to 240 bytes, with CPU frequency set to max and CPU
> affinity set to a specific CPU to minimize the noise. Below is a performance
> comparison between the baseline kernel, profiling when enabled, profiling
> when disabled, and (for comparison purposes) the baseline with
> CONFIG_MEMCG_KMEM enabled and allocations using __GFP_ACCOUNT:
>
>                        kmalloc               pgalloc
> (1 baseline)           12.041s               49.190s
> (2 default disabled)   14.970s (+24.33%)     49.684s (+1.00%)
> (3 default enabled)    16.859s (+40.01%)     56.287s (+14.43%)
> (4 runtime enabled)    16.983s (+41.04%)     55.760s (+13.36%)
> (5 memcg)              33.831s (+180.96%)    51.433s (+4.56%)

Some recent changes [1] to the kmem accounting should have made it quite a bit
faster. It would be great if you could provide new numbers for the comparison,
maybe with the next revision?

And btw, thank you (and Kent): your numbers inspired me to do this kmemcg
performance work. I expect it to still be roughly twice as expensive as your
approach, because on the memcg side we handle the charge and the statistics
separately, but hopefully the difference will be smaller.

Thank you!

[1]:
patches from the next tree, so no stable hashes:
mm: kmem: reimplement get_obj_cgroup_from_current()
percpu: scoped objcg protection
mm: kmem: scoped objcg protection
mm: kmem: make memcg keep a reference to the original objcg
mm: kmem: add direct objcg pointer to task_struct
mm: kmem: optimize get_obj_cgroup_from_current()
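The in-kernel test behind the numbers above is described only in prose. Below
is a minimal sketch of a kernel module with that shape — not the actual test
module used for the measurements; the module name, iteration count, and size
stepping are assumptions for illustration:

/*
 * Hypothetical sketch of the benchmark shape described in the cover
 * letter: timed kmalloc/kfree and get_free_page/free_page loops with
 * allocation sizes growing from 8 to 240 bytes.
 */
#include <linux/init.h>
#include <linux/module.h>
#include <linux/slab.h>
#include <linux/gfp.h>
#include <linux/ktime.h>

#define ALLOC_ITERS 10000000UL	/* assumed; the thread gives no count */

static int __init alloc_bench_init(void)
{
	ktime_t start;
	s64 kmalloc_ns, pgalloc_ns;
	unsigned long i;
	size_t size;

	/* kmalloc/kfree pairs, size stepping 8..240 bytes */
	start = ktime_get();
	for (i = 0; i < ALLOC_ITERS; i++) {
		size = 8 + (i % 30) * 8;
		kfree(kmalloc(size, GFP_KERNEL));
	}
	kmalloc_ns = ktime_to_ns(ktime_sub(ktime_get(), start));

	/* get_free_page/free_page pairs */
	start = ktime_get();
	for (i = 0; i < ALLOC_ITERS; i++)
		free_page(__get_free_page(GFP_KERNEL));
	pgalloc_ns = ktime_to_ns(ktime_sub(ktime_get(), start));

	pr_info("kmalloc/kfree: %lld ns, pgalloc/free: %lld ns\n",
		kmalloc_ns, pgalloc_ns);
	return 0;
}

static void __exit alloc_bench_exit(void)
{
}

module_init(alloc_bench_init);
module_exit(alloc_bench_exit);
MODULE_LICENSE("GPL");

Loaded with insmod, a module like this would print the two timings to dmesg;
rows 1-5 of the table would then come from running it under each kernel
configuration (baseline, profiling disabled/enabled, __GFP_ACCOUNT).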
On Tue, Oct 24, 2023 at 11:29 AM Roman Gushchin <roman.gushchin@linux.dev> wrote:
>
> On Tue, Oct 24, 2023 at 06:45:57AM -0700, Suren Baghdasaryan wrote:
> > Updates since the last version [1]
> > - Simplified allocation tagging macros;
> > - Runtime enable/disable sysctl switch (/proc/sys/vm/mem_profiling)
> >   instead of kernel command-line option;
> > - CONFIG_MEM_ALLOC_PROFILING_BY_DEFAULT to select default enable state;
> > - Changed the user-facing API from debugfs to procfs (/proc/allocinfo);
> > - Removed context capture support to make patch incremental;
> > - Renamed uninstrumented allocation functions to use _noprof suffix;
> > - Added __GFP_LAST_BIT to make the code cleaner;
> > - Removed lazy per-cpu counters; it turned out the memory savings were
> >   minimal and not worth the performance impact;
>
> Hello Suren,
>
> > Performance overhead:
> > To evaluate performance we implemented an in-kernel test executing
> > multiple get_free_page/free_page and kmalloc/kfree calls with allocation
> > sizes growing from 8 to 240 bytes, with CPU frequency set to max and CPU
> > affinity set to a specific CPU to minimize the noise. Below is a performance
> > comparison between the baseline kernel, profiling when enabled, profiling
> > when disabled, and (for comparison purposes) the baseline with
> > CONFIG_MEMCG_KMEM enabled and allocations using __GFP_ACCOUNT:
> >
> >                        kmalloc               pgalloc
> > (1 baseline)           12.041s               49.190s
> > (2 default disabled)   14.970s (+24.33%)     49.684s (+1.00%)
> > (3 default enabled)    16.859s (+40.01%)     56.287s (+14.43%)
> > (4 runtime enabled)    16.983s (+41.04%)     55.760s (+13.36%)
> > (5 memcg)              33.831s (+180.96%)    51.433s (+4.56%)
>
> Some recent changes [1] to the kmem accounting should have made it quite a bit
> faster. It would be great if you could provide new numbers for the comparison,
> maybe with the next revision?
>
> And btw, thank you (and Kent): your numbers inspired me to do this kmemcg
> performance work. I expect it to still be roughly twice as expensive as your
> approach, because on the memcg side we handle the charge and the statistics
> separately, but hopefully the difference will be smaller.

Yes, I saw them! Well done! I'll definitely update my numbers once the
patches land in their final form.

>
> Thank you!

Thank you for the optimizations!

>
> [1]:
> patches from the next tree, so no stable hashes:
> mm: kmem: reimplement get_obj_cgroup_from_current()
> percpu: scoped objcg protection
> mm: kmem: scoped objcg protection
> mm: kmem: make memcg keep a reference to the original objcg
> mm: kmem: add direct objcg pointer to task_struct
> mm: kmem: optimize get_obj_cgroup_from_current()
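For readers who want to exercise the two user-facing interfaces discussed in
this thread, here is a minimal userspace sketch. The paths
/proc/sys/vm/mem_profiling and /proc/allocinfo come from the cover letter;
the record format of /proc/allocinfo is not shown in the thread, so the
program simply dumps raw lines. Writing the sysctl requires root.

/* Enable the runtime switch, then print the first lines of /proc/allocinfo. */
#include <stdio.h>

int main(void)
{
	FILE *f;
	char line[512];
	int n = 0;

	/* "1" enables profiling, "0" disables it (root required) */
	f = fopen("/proc/sys/vm/mem_profiling", "w");
	if (!f) {
		perror("mem_profiling");
		return 1;
	}
	fputs("1\n", f);
	fclose(f);

	/* dump the first 20 raw lines of the per-callsite counters */
	f = fopen("/proc/allocinfo", "r");
	if (!f) {
		perror("allocinfo");
		return 1;
	}
	while (fgets(line, sizeof(line), f) && n++ < 20)
		fputs(line, stdout);
	fclose(f);
	return 0;
}

Note that this only covers rows 3 and 4 of the comparison table (profiling
enabled at boot vs. enabled at runtime); rows 1, 2, and 5 differ in kernel
configuration, not in anything togglable from userspace.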