[RFC,0/3] mm/memcg: Address PREEMPT_RT problems instead of disabling it.

Message ID 20211222114111.2206248-1-bigeasy@linutronix.de (mailing list archive)

Sebastian Andrzej Siewior Dec. 22, 2021, 11:41 a.m. UTC
Hi,

this is a follow up to
   https://lkml.kernel.org/r/20211207155208.eyre5svucpg7krxe@linutronix.de

where it has been suggested that I should try again with memcg instead
of simply disabling it.

Patch #1 deals with the counters. It has been suggested to simply
disable preemption on RT (as is done in vmstat) and I followed that
advice as closely as possible. The local_irq_save() could be removed
from mod_memcg_state() and the other wrappers on RT, but I left it in
since it does not hurt and it might look nicer ;)
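
The direction of patch #1 can be sketched roughly like this
(hypothetical kernel-style pseudocode, not the actual patch; the real
series works within the existing memcg wrappers):

```
/* Hypothetical sketch of a per-CPU memcg counter update.  On non-RT
 * kernels interrupts are disabled as before; on PREEMPT_RT, where
 * local_irq_save() is undesirable, disabling preemption keeps the
 * read-modify-write update on this CPU safe from other tasks, and the
 * threaded interrupt handlers cannot preempt it either.
 */
static void mod_demo_counter(long val)
{
	if (IS_ENABLED(CONFIG_PREEMPT_RT))
		preempt_disable();
	else
		local_irq_disable();

	__this_cpu_add(demo_counter, val);

	if (IS_ENABLED(CONFIG_PREEMPT_RT))
		preempt_enable();
	else
		local_irq_enable();
}
```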

Patch #2 is a follow up to
   https://lkml.kernel.org/r/20211214144412.447035-1-longman@redhat.com

Patch #3 restricts the task_obj usage to !PREEMPTION kernels. Based on
my understanding, the required preempt_disable() minimizes (or even
negates?) the benefit of the optimisation.

I tested them on CONFIG_PREEMPT_NONE + CONFIG_PREEMPT_RT with the
tools/testing/selftests/cgroup/* tests. It looked good except for the
following issues (which were also present before the patches):
- test_kmem sometimes complained about:
 not ok 2 test_kmem_memcg_deletion
 
- test_memcontrol always complained about
 not ok 3 test_memcg_min
 not ok 4 test_memcg_low
 and did not finish.

- lockdep complaints were triggered by test_core and test_freezer (both
  had to run):
 ======================================================
 WARNING: possible circular locking dependency detected
 5.16.0-rc5 #259 Not tainted
 ------------------------------------------------------
 test_core/5996 is trying to acquire lock:
 ffffffff829a1258 (css_set_lock){..-.}-{2:2}, at: obj_cgroup_release+0x2d/0xb0
 
 but task is already holding lock:
 ffff888103034618 (&sighand->siglock){....}-{2:2}, at: get_signal+0x8d/0xdb0
 
 which lock already depends on the new lock.

 
 the existing dependency chain (in reverse order) is:
 
 -> #1 (&sighand->siglock){....}-{2:2}:
        _raw_spin_lock+0x27/0x40
        cgroup_post_fork+0x1f5/0x290
        copy_process+0x191b/0x1f80
        kernel_clone+0x5a/0x410
        __do_sys_clone3+0xb3/0x110
        do_syscall_64+0x43/0x90
        entry_SYSCALL_64_after_hwframe+0x44/0xae
 
 -> #0 (css_set_lock){..-.}-{2:2}:
        __lock_acquire+0x1253/0x2280
        lock_acquire+0xd4/0x2e0
        _raw_spin_lock_irqsave+0x36/0x50
        obj_cgroup_release+0x2d/0xb0
        drain_obj_stock+0x1a9/0x1b0
        refill_obj_stock+0x4f/0x220
        memcg_slab_free_hook.part.0+0x108/0x290
        kmem_cache_free+0xf5/0x3c0
        dequeue_signal+0xaf/0x1e0
        get_signal+0x232/0xdb0
        arch_do_signal_or_restart+0xf8/0x740
        exit_to_user_mode_prepare+0x17d/0x270
        syscall_exit_to_user_mode+0x19/0x70
        do_syscall_64+0x50/0x90
        entry_SYSCALL_64_after_hwframe+0x44/0xae
 
 other info that might help us debug this:

  Possible unsafe locking scenario:

        CPU0                    CPU1
        ----                    ----
   lock(&sighand->siglock);
                                lock(css_set_lock);
                                lock(&sighand->siglock);
   lock(css_set_lock);
 
  *** DEADLOCK ***

 2 locks held by test_core/5996:
  #0: ffff888103034618 (&sighand->siglock){....}-{2:2}, at: get_signal+0x8d/0xdb0
  #1: ffffffff82905e40 (rcu_read_lock){....}-{1:2}, at: drain_obj_stock+0x71/0x1b0
 
 stack backtrace:
 CPU: 2 PID: 5996 Comm: test_core Not tainted 5.16.0-rc5 #259
 Hardware name: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.14.0-2 04/01/2014
 Call Trace:
  <TASK>
  dump_stack_lvl+0x45/0x59
  check_noncircular+0xfe/0x110
  __lock_acquire+0x1253/0x2280
  lock_acquire+0xd4/0x2e0
  _raw_spin_lock_irqsave+0x36/0x50
  obj_cgroup_release+0x2d/0xb0
  drain_obj_stock+0x1a9/0x1b0
  refill_obj_stock+0x4f/0x220
  memcg_slab_free_hook.part.0+0x108/0x290
  kmem_cache_free+0xf5/0x3c0
  dequeue_signal+0xaf/0x1e0
  get_signal+0x232/0xdb0
  arch_do_signal_or_restart+0xf8/0x740
  exit_to_user_mode_prepare+0x17d/0x270
  syscall_exit_to_user_mode+0x19/0x70
  do_syscall_64+0x50/0x90
  entry_SYSCALL_64_after_hwframe+0x44/0xae
  </TASK>

Sebastian

Comments

Michal Koutný Jan. 5, 2022, 2:59 p.m. UTC | #1
On Wed, Dec 22, 2021 at 12:41:08PM +0100, Sebastian Andrzej Siewior <bigeasy@linutronix.de> wrote:
> - lockdep complaints were triggered by test_core and test_freezer (both
>   had to run):

This doesn't happen on the patched kernel, correct?

Thanks,
Michal
Sebastian Andrzej Siewior Jan. 5, 2022, 3:06 p.m. UTC | #2
On 2022-01-05 15:59:56 [+0100], Michal Koutný wrote:
> On Wed, Dec 22, 2021 at 12:41:08PM +0100, Sebastian Andrzej Siewior <bigeasy@linutronix.de> wrote:
> > - lockdep complaints were triggered by test_core and test_freezer (both
> >   had to run):
> 
> This doesn't happen on the patched kernel, correct?

I saw it first on the patched kernel (with this series) then went back
to the -rc and saw it there, too.

> Thanks,
> Michal

Sebastian