
[v5,0/3] make vm_committed_as_batch aware of vm overcommit policy

Message ID 1592725000-73486-1-git-send-email-feng.tang@intel.com

Message

Feng Tang June 21, 2020, 7:36 a.m. UTC
When checking a performance change for the will-it-scale scalability
mmap test [1], we found very high contention on the spinlock of the
percpu counter 'vm_committed_as':

    94.14%     0.35%  [kernel.kallsyms]         [k] _raw_spin_lock_irqsave
    48.21% _raw_spin_lock_irqsave;percpu_counter_add_batch;__vm_enough_memory;mmap_region;do_mmap;
    45.91% _raw_spin_lock_irqsave;percpu_counter_add_batch;__do_munmap;

This heavy lock contention is not always necessary: 'vm_committed_as'
only needs to be very precise when the strict OVERCOMMIT_NEVER policy
is set, which requires a rather small batch number for the percpu
counter.

So keep the 'batch' number unchanged for the strict OVERCOMMIT_NEVER
policy, and enlarge it for the not-so-strict OVERCOMMIT_ALWAYS and
OVERCOMMIT_GUESS policies.

A benchmark with the same testcase as [1] shows a 53% improvement on an
8C/16T desktop, and a 2097% (20X) improvement on a 4S/72C/144T server.
For this case, whether an improvement shows up depends on whether the
test mmap size is bigger than the computed batch number.

We tested 10+ platforms in 0day (server, desktop and laptop). If we
lift the batch to 64X, 80%+ of the platforms show improvements; with a
16X lift, 1/3 of the platforms show improvements.

And generally it should help mmap/munmap usage, as Michal Hocko
mentioned:

: I believe that there are non-synthetic workloads which would benefit
: from a larger batch. E.g. large in-memory databases which do large
: mmaps during startup from multiple threads.

Note: there are some style complaints from checkpatch for patch 3, as
the sysctl handler declaration follows the same format as its sibling
functions.

[1] https://lore.kernel.org/lkml/20200305062138.GI5972@shao2-debian/

patch1: a cleanup for /proc/meminfo
patch2: a preparation patch which also improves the accuracy of
        vm_memory_committed()
patch3: main change

This is against today's linux-mm git tree on GitHub.

Please help to review, thanks!

- Feng

----------------------------------------------------------------
Changelog:

  v5:
    * rebase after 5.8-rc1
    * remove the 3/4 patch in v4  which is merged in v5.7
    * add code comments for vm_memory_committed() 

  v4:
    * Remove the VM_WARN_ONCE check for vm_committed_as underflow,
      thanks to Qian Cai for finding and testing the warning

  v3:
    * refine commit log and cleanup code, according to comments
      from Michal Hocko and Matthew Wilcox
    * change the lift from 16X and 64X after test 
  
  v2:
    * add the sysctl handler to cover runtime overcommit policy
      change, as suggested by Andrew Morton
    * address the accuracy concern of vm_memory_committed()
      from Andi Kleen 

Feng Tang (3):
  proc/meminfo: avoid open coded reading of vm_committed_as
  mm/util.c: make vm_memory_committed() more accurate
  mm: adjust vm_committed_as_batch according to vm overcommit policy

 fs/proc/meminfo.c    |  2 +-
 include/linux/mm.h   |  2 ++
 include/linux/mman.h |  4 ++++
 kernel/sysctl.c      |  2 +-
 mm/mm_init.c         | 18 ++++++++++++++----
 mm/util.c            | 19 ++++++++++++++++++-
 6 files changed, 40 insertions(+), 7 deletions(-)

Comments

Michal Hocko June 24, 2020, 9:45 a.m. UTC | #1
Andrew, I do not see these patches in mmotm tree. Is there anything
blocking them? There used to be v3 in the tree
(http://lkml.kernel.org/r/1589611660-89854-4-git-send-email-feng.tang@intel.com)
but that one got dropped due some failures. I haven't seen any failures
for this one.
