Message ID: 20210723011436.60960-1-surenb@google.com (mailing list archive)
State: New
Series: [v3,1/2] mm: introduce process_mrelease system call
On Thu, Jul 22, 2021 at 6:14 PM Suren Baghdasaryan <surenb@google.com> wrote:
> [...]
> +
> +	mmap_read_lock(mm);

How about mmap_read_trylock(mm) and return -EAGAIN on failure?

> +	if (!__oom_reap_task_mm(mm))
> +		ret = -EAGAIN;
> +	mmap_read_unlock(mm);
> +
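As a sketch, the suggested change would turn the locked section of the
patch below into something like this; the cleanup label is illustrative,
not taken from the series:

	if (!mmap_read_trylock(mm)) {
		/* mmap_lock is contended; report instead of sleeping */
		ret = -EAGAIN;
		goto out_mmput;	/* illustrative cleanup label */
	}
	if (!__oom_reap_task_mm(mm))
		ret = -EAGAIN;
	mmap_read_unlock(mm);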
On Thu, Jul 22, 2021, 7:04 PM Shakeel Butt <shakeelb@google.com> wrote:
> On Thu, Jul 22, 2021 at 6:14 PM Suren Baghdasaryan <surenb@google.com> wrote:
> > [...]
> > +
> > +	mmap_read_lock(mm);
>
> How about mmap_read_trylock(mm) and return -EAGAIN on failure?

That sounds like a good idea. Thanks! I'll add that in the next respin.

> > +	if (!__oom_reap_task_mm(mm))
> > +		ret = -EAGAIN;
> > +	mmap_read_unlock(mm);
> > +
On Thu 22-07-21 21:47:56, Suren Baghdasaryan wrote:
> On Thu, Jul 22, 2021, 7:04 PM Shakeel Butt <shakeelb@google.com> wrote:
> > [...]
> > How about mmap_read_trylock(mm) and return -EAGAIN on failure?
>
> That sounds like a good idea. Thanks! I'll add that in the next respin.

Why is that a good idea? Can you do anything meaningful about the
failure other than immediately retry the syscall and hope for the best?
On Thu, Jul 22, 2021, 11:20 PM Michal Hocko <mhocko@suse.com> wrote:
> On Thu 22-07-21 21:47:56, Suren Baghdasaryan wrote:
> > [...]
> > That sounds like a good idea. Thanks! I'll add that in the next respin.
>
> Why is that a good idea? Can you do anything meaningful about the
> failure other than immediately retry the syscall and hope for the best?

I was thinking that if this syscall implements a "best effort without
blocking" approach, then for stricter usage the user can simply retry.
However, retrying means issuing another syscall, so additional
overhead... I guess such a "best effort" approach would be unusual for a
syscall, so maybe we can keep it as it is now, and if such a "do not
block" mode is needed we can use flags to implement it later?
On 23.07.21 10:11, Suren Baghdasaryan wrote:
> On Thu, Jul 22, 2021, 11:20 PM Michal Hocko <mhocko@suse.com> wrote:
> > [...]
> > Why is that a good idea? Can you do anything meaningful about the
> > failure other than immediately retry the syscall and hope for the best?
>
> I was thinking that if this syscall implements a "best effort without
> blocking" approach, then for stricter usage the user can simply retry.
> However, retrying means issuing another syscall, so additional
> overhead... I guess such a "best effort" approach would be unusual for a
> syscall, so maybe we can keep it as it is now, and if such a "do not
> block" mode is needed we can use flags to implement it later?

The process is dying, so I am not sure what we are trying to optimize
here in respect to locking ...
On Fri, Jul 23, 2021 at 1:15 AM David Hildenbrand <david@redhat.com> wrote:
> On 23.07.21 10:11, Suren Baghdasaryan wrote:
> > [...]
> > I guess such a "best effort" approach would be unusual for a
> > syscall, so maybe we can keep it as it is now, and if such a "do not
> > block" mode is needed we can use flags to implement it later?
>
> The process is dying, so I am not sure what we are trying to optimize
> here in respect to locking ...

Trying not to block the caller, which is likely a system health
monitoring process. However, if not blocking is important, it can issue
this syscall from a separate thread... Let's scratch that "do not block"
mode and keep it simple as it is now.
On Fri 23-07-21 01:11:51, Suren Baghdasaryan wrote:
> On Thu, Jul 22, 2021, 11:20 PM Michal Hocko <mhocko@suse.com> wrote:
> > [...]
> > Why is that a good idea? Can you do anything meaningful about the
> > failure other than immediately retry the syscall and hope for the best?
>
> I was thinking that if this syscall implements a "best effort without
> blocking" approach, then for stricter usage the user can simply retry.

I do not think we really want to promise non-blocking behavior at this
stage unless that is absolutely necessary. The current implementation
goes an extra mile to not block, but I wouldn't carve it into stone via
userspace expectations.

> However, retrying means issuing another syscall, so additional
> overhead... I guess such a "best effort" approach would be unusual for
> a syscall, so maybe we can keep it as it is now, and if such a "do not
> block" mode is needed we can use flags to implement it later?

Yeah, an explicit opt-in via flags would be an option if that turns out
to be really necessary.
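If such an opt-in were added later, it could look roughly like the
fragment below inside the syscall; the flag name, value, and cleanup
label are hypothetical, not part of this series:

	#define MRELEASE_NOWAIT	0x1	/* hypothetical flag */

	/* flags validation at the top of the syscall would become:
	 *	if (flags & ~MRELEASE_NOWAIT)
	 *		return -EINVAL;
	 */
	if (flags & MRELEASE_NOWAIT) {
		/* best effort: report contention instead of sleeping */
		if (!mmap_read_trylock(mm)) {
			ret = -EAGAIN;
			goto out_mmput;	/* illustrative cleanup label */
		}
	} else {
		mmap_read_lock(mm);
	}
	if (!__oom_reap_task_mm(mm))
		ret = -EAGAIN;
	mmap_read_unlock(mm);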
On Thu, Jul 22, 2021 at 11:20 PM Michal Hocko <mhocko@suse.com> wrote:
> On Thu 22-07-21 21:47:56, Suren Baghdasaryan wrote:
> > [...]
> > That sounds like a good idea. Thanks! I'll add that in the next respin.
>
> Why is that a good idea? Can you do anything meaningful about the
> failure other than immediately retry the syscall and hope for the best?

Yes we can. Based on the situation/impact we can select more victims.
On Fri, Jul 23, 2021 at 1:53 AM Michal Hocko <mhocko@suse.com> wrote:
> [...]
> Yeah, an explicit opt-in via flags would be an option if that turns out
> to be really necessary.

I am fine with keeping it as it is, but we do need the non-blocking
option (via flags) to enable userspace to act more aggressively.
On Fri, Jul 23, 2021 at 6:46 AM Shakeel Butt <shakeelb@google.com> wrote:
> On Fri, Jul 23, 2021 at 1:53 AM Michal Hocko <mhocko@suse.com> wrote:
> > [...]
> > Yeah, an explicit opt-in via flags would be an option if that turns
> > out to be really necessary.
>
> I am fine with keeping it as it is, but we do need the non-blocking
> option (via flags) to enable userspace to act more aggressively.

I think you want to check memory conditions shortly after issuing
kill/reap requests irrespective of mmap_sem contention. The reason is
that even when memory release is not blocked, allocations from other
processes might consume memory faster than we release it. For example,
in Android we issue a kill and start waiting on the pidfd for its death
notification. As soon as the process is dead we reassess the situation
and possibly kill again. If the process is not dead within a
configurable timeout we check conditions again and might issue more
kill requests (IOW, our wait for the process to die has a timeout). If
process_mrelease() is blocked on mmap_sem, we might time out like this.
I imagine that a non-blocking option for process_mrelease() would not
really change this logic.

Adding such an option is trivial, but I would like to make sure it's
indeed useful. Maybe after the syscall is in place you can experiment
with it and see if such an option would really change the way you use
it?
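The Android flow described above, sketched in userspace C. The syscall
number is an assumption for illustration, since none had been assigned
at this point in the discussion:

	#include <poll.h>
	#include <signal.h>
	#include <sys/syscall.h>
	#include <unistd.h>

	#ifndef __NR_process_mrelease
	#define __NR_process_mrelease 448	/* assumed; not yet allocated */
	#endif

	/* Kill the process behind pidfd, reap its memory from this
	 * context, then wait for the death notification with a timeout.
	 * Returns >0 if the process died, 0 on timeout (the caller then
	 * reassesses conditions and may pick more victims). */
	static int kill_and_reap(int pidfd, int timeout_ms)
	{
		struct pollfd pfd = { .fd = pidfd, .events = POLLIN };

		if (syscall(SYS_pidfd_send_signal, pidfd, SIGKILL, NULL, 0) < 0)
			return -1;
		/* If this blocks on the victim's mmap_sem, the timeout
		 * below effectively starts late -- the concern above. */
		syscall(__NR_process_mrelease, pidfd, 0);
		return poll(&pfd, 1, timeout_ms);
	}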
On Fri, Jul 23, 2021 at 9:09 AM Suren Baghdasaryan <surenb@google.com> wrote:
> [...]
> I think you want to check memory conditions shortly after issuing
> kill/reap requests irrespective of mmap_sem contention.
> [...]
> I imagine that a non-blocking option for process_mrelease() would not
> really change this logic.

On a containerized system, killing a job requires killing multiple
processes and then calling process_mrelease() on them. Now there is
cgroup.kill to kill all the processes in a cgroup tree, but we would
still need to process_mrelease() all the processes in that tree. There
is a chance that we get stuck reaping an early process. Making
process_mrelease() non-blocking will enable the userspace to go on to
other processes in the list.

An alternative would be to have a cgroup-specific interface for
reaping, similar to cgroup.kill.

> Adding such an option is trivial, but I would like to make sure it's
> indeed useful. Maybe after the syscall is in place you can experiment
> with it and see if such an option would really change the way you use
> it?

SGTM.
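The containerized flow described above might look like the sketch
below; the cgroup path handling, the helper itself, and the syscall
number (reused from the earlier sketch) are illustrative assumptions:

	#include <stdio.h>
	#include <sys/syscall.h>
	#include <unistd.h>

	/* After writing "1" to <cgroup>/cgroup.kill, walk cgroup.procs
	 * and reap each member. With a blocking process_mrelease(), one
	 * contended victim can stall the whole loop -- the argument for
	 * a non-blocking mode or a cgroup-level reaping interface. */
	static void reap_cgroup(const char *procs_path)
	{
		FILE *f = fopen(procs_path, "r");
		int pid;

		if (!f)
			return;
		while (fscanf(f, "%d", &pid) == 1) {
			int pidfd = syscall(SYS_pidfd_open, pid, 0);

			if (pidfd < 0)
				continue;	/* already fully exited */
			syscall(__NR_process_mrelease, pidfd, 0);
			close(pidfd);
		}
		fclose(f);
	}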
On Fri 23-07-21 10:00:26, Shakeel Butt wrote:
> On Fri, Jul 23, 2021 at 9:09 AM Suren Baghdasaryan <surenb@google.com> wrote:
> > [...]
> On a containerized system, killing a job requires killing multiple
> processes and then calling process_mrelease() on them. Now there is
> cgroup.kill to kill all the processes in a cgroup tree, but we would
> still need to process_mrelease() all the processes in that tree.

Is process_mrelease on all of them really necessary? I thought that the
primary reason for the call is to guarantee forward progress in cases
where the userspace OOM victim cannot die on SIGKILL. That should be
more an exception than a normal case, no?

> There is a chance that we get stuck reaping an early process. Making
> process_mrelease() non-blocking will enable the userspace to go on to
> other processes in the list.

I do agree that allowing (guaranteed) non-blocking behavior is nice,
but it is also a rather strong promise. There is some memory that
cannot be released by the oom reaper currently because there are locks
involved (e.g. mlocked memory or memory areas backed by blocking
notifiers). I can imagine some users of this api would rather block and
make sure to release the memory than skip over it. So if anything this
has to be an opt-in with a big fat warning that the behavior of the
kernel wrt releasable memory can vary due to all sorts of
implementation details.

> An alternative would be to have a cgroup-specific interface for
> reaping, similar to cgroup.kill.

Could you elaborate?
On Thu 22-07-21 19:03:56, Shakeel Butt wrote:
> On Thu, Jul 22, 2021 at 6:14 PM Suren Baghdasaryan <surenb@google.com> wrote:
> > [...]
> > +
> > +	mmap_read_lock(mm);
>
> How about mmap_read_trylock(mm) and return -EAGAIN on failure?

Btw. whether there is a non-blocking variant or not, we should use
killable waiting to make sure the task calling into this is killable by
userspace (e.g. to implement a timeout based approach).
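Concretely, killable waiting would replace the plain lock in the patch
below with something like this sketch; -EINTR follows the convention of
the killable lock primitives, though the exact errno in the respin may
differ, and the cleanup label is illustrative:

	if (mmap_read_lock_killable(mm)) {
		/* the caller received a fatal signal while waiting */
		ret = -EINTR;
		goto out_mmput;	/* illustrative cleanup label */
	}
	if (!__oom_reap_task_mm(mm))
		ret = -EAGAIN;
	mmap_read_unlock(mm);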
On Mon, Jul 26, 2021 at 12:27 AM Michal Hocko <mhocko@suse.com> wrote:
> [...]
> Is process_mrelease on all of them really necessary? I thought that the
> primary reason for the call is to guarantee forward progress in cases
> where the userspace OOM victim cannot die on SIGKILL. That should be
> more an exception than a normal case, no?

I am thinking of using this API in this way: on a user-defined OOM
condition, kill a job/cgroup and unconditionally reap all of its
processes. Keep monitoring the situation, and if it does not improve,
go for another kill and reap.

I can add additional logic in between kill and reap to see if the reap
is necessary, but unconditionally reaping is simpler.

> > An alternative would be to have a cgroup-specific interface for
> > reaping, similar to cgroup.kill.
>
> Could you elaborate?

I mentioned this in [1], where I was wondering whether it makes sense
to overload cgroup.kill to also add the SIGKILLed processes to
oom_reaper_list. The downside would be that there will be one thread
doing the reaping, while the syscall approach allows userspace to reap
in multiple threads. I think for now I would go with whatever Suren is
proposing, and we can always add more stuff if the need arises.

[1] https://lore.kernel.org/containers/CALvZod4jsb6bFzTOS4ZRAJGAzBru0oWanAhezToprjACfGm+ew@mail.gmail.com/
On Mon, Jul 26, 2021 at 6:44 AM Shakeel Butt <shakeelb@google.com> wrote:
> [...]
> I think for now I would go with whatever Suren is proposing, and we
> can always add more stuff if the need arises.

Hi Folks,
So far I don't think there has been any request for further changes. Is
there anything else you would want me to address, or are we in good
shape wrt this feature? If so, would the people who had a chance to
review this patchset be willing to endorse it with their Reviewed-by or
Acked-by?
Thanks,
Suren.
On Mon, Aug 2, 2021 at 12:54 PM Suren Baghdasaryan <surenb@google.com> wrote:
> [...]
> If so, would the people who had a chance to review this patchset be
> willing to endorse it with their Reviewed-by or Acked-by?

I think with Michal's suggestion to use a killable mmap lock, at least
I am good with the patch.
On Mon, Aug 2, 2021 at 1:05 PM Shakeel Butt <shakeelb@google.com> wrote:
> [...]
> I think with Michal's suggestion to use a killable mmap lock, at least
> I am good with the patch.

Ah, yes. Thanks for pointing this out! I'll replace mmap_read_lock()
with mmap_read_lock_killable(). Will post an updated version later
today.
On Mon, Aug 2, 2021 at 1:08 PM Suren Baghdasaryan <surenb@google.com> wrote:
> [...]
> Ah, yes. Thanks for pointing this out! I'll replace mmap_read_lock()
> with mmap_read_lock_killable(). Will post an updated version later
> today.

Posted the next version at https://lore.kernel.org/patchwork/patch/1471403/
diff --git a/mm/oom_kill.c b/mm/oom_kill.c
index c729a4c4a1ac..8bf7a1020ac5 100644
--- a/mm/oom_kill.c
+++ b/mm/oom_kill.c
@@ -28,6 +28,7 @@
 #include <linux/sched/task.h>
 #include <linux/sched/debug.h>
 #include <linux/swap.h>
+#include <linux/syscalls.h>
 #include <linux/timex.h>
 #include <linux/jiffies.h>
 #include <linux/cpuset.h>
@@ -1141,3 +1142,56 @@ void pagefault_out_of_memory(void)
 	out_of_memory(&oc);
 	mutex_unlock(&oom_lock);
 }
+
+SYSCALL_DEFINE2(process_mrelease, int, pidfd, unsigned int, flags)
+{
+#ifdef CONFIG_MMU
+	struct mm_struct *mm = NULL;
+	struct task_struct *task;
+	unsigned int f_flags;
+	struct pid *pid;
+	long ret = 0;
+
+	if (flags != 0)
+		return -EINVAL;
+
+	pid = pidfd_get_pid(pidfd, &f_flags);
+	if (IS_ERR(pid))
+		return PTR_ERR(pid);
+
+	task = get_pid_task(pid, PIDTYPE_PID);
+	if (!task) {
+		ret = -ESRCH;
+		goto put_pid;
+	}
+
+	/*
+	 * If the task is dying and in the process of releasing its memory
+	 * then get its mm.
+	 */
+	task_lock(task);
+	if (task_will_free_mem(task) && (task->flags & PF_KTHREAD) == 0) {
+		mm = task->mm;
+		mmget(mm);
+	}
+	task_unlock(task);
+	if (!mm) {
+		ret = -EINVAL;
+		goto put_task;
+	}
+
+	mmap_read_lock(mm);
+	if (!__oom_reap_task_mm(mm))
+		ret = -EAGAIN;
+	mmap_read_unlock(mm);
+
+	mmput(mm);
+put_task:
+	put_task_struct(task);
+put_pid:
+	put_pid(pid);
+	return ret;
+#else
+	return -ENOSYS;
+#endif /* CONFIG_MMU */
+}
In modern systems it's not unusual to have a system component monitoring
memory conditions of the system and tasked with keeping system memory
pressure under control. One way to accomplish that is to kill
non-essential processes to free up memory for more important ones.
Examples of this are Facebook's OOM killer daemon called oomd and
Android's low memory killer daemon called lmkd.

For such a system component it's important to be able to free memory
quickly and efficiently. Unfortunately the time a process takes to free
up its memory after receiving a SIGKILL might vary based on the state
of the process (uninterruptible sleep) and the size and OPP level of
the core the process is running on. A mechanism to free resources of
the target process in a more predictable way would improve the system's
ability to control its memory pressure.

Introduce the process_mrelease system call that releases memory of a
dying process from the context of the caller. This way the memory is
freed in a more controllable way, with the CPU affinity and priority of
the caller. The workload of freeing the memory will also be charged to
the caller. The operation is allowed only on a dying process.

Previously I proposed a number of alternatives to accomplish this:
- https://lore.kernel.org/patchwork/patch/1060407 extending
  pidfd_send_signal to allow memory reaping using the oom_reaper
  thread;
- https://lore.kernel.org/patchwork/patch/1338196 extending
  pidfd_send_signal to reap memory of the target process synchronously
  from the context of the caller;
- https://lore.kernel.org/patchwork/patch/1344419/ to add MADV_DONTNEED
  support for process_madvise, implementing synchronous memory reaping.

The end of the last discussion culminated with a suggestion to
introduce a dedicated system call
(https://lore.kernel.org/patchwork/patch/1344418/#1553875). The
reasoning was that the new variant of process_madvise
  a) does not work on an address range
  b) is destructive
  c) doesn't share much code at all with the rest of process_madvise
From the userspace point of view it was awkward and inconvenient to
provide a memory range for this operation, which operates on the entire
address space. Using special flags or address values to specify the
entire address space was too hacky.

The API is as follows,

	int process_mrelease(int pidfd, unsigned int flags);

	DESCRIPTION
	The process_mrelease() system call is used to free the memory
	of a process which was sent a SIGKILL signal.

	The pidfd selects the process referred to by the PID file
	descriptor. (See pidfd_open(2) for further information.)

	The flags argument is reserved for future use; currently, this
	argument must be specified as 0.

	RETURN VALUE
	On success, process_mrelease() returns 0. On error, -1 is
	returned and errno is set to indicate the error.

	ERRORS
	EBADF  pidfd is not a valid PID file descriptor.

	EAGAIN Failed to release part of the address space.

	EINVAL flags is not 0.

	EINVAL The task does not have a pending SIGKILL or its memory
	       is shared with another process with no pending SIGKILL.

	ENOSYS This system call is not supported by kernels built with
	       no MMU support (CONFIG_MMU=n).

	ESRCH  The target process does not exist (i.e., it has
	       terminated and been waited on).
Signed-off-by: Suren Baghdasaryan <surenb@google.com>
---
changes in v3:
- Added #ifdef CONFIG_MMU inside process_mrelease to keep
  task_will_free_mem in the same place, per David Hildenbrand
- Reordered variable definitions in process_mrelease, per David
  Hildenbrand

 mm/oom_kill.c | 54 +++++++++++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 54 insertions(+)
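For completeness, a minimal userspace usage sketch of the API as
described in the changelog. glibc provides no wrapper, the syscall
number is an assumption as in the earlier sketches, and the helper name
is illustrative:

	#include <errno.h>
	#include <signal.h>
	#include <stdio.h>
	#include <sys/syscall.h>
	#include <unistd.h>

	#ifndef __NR_process_mrelease
	#define __NR_process_mrelease 448	/* assumed for illustration */
	#endif

	/* Open a pidfd for the victim, deliver SIGKILL, then release
	 * its memory from our own context. */
	static int release_victim(pid_t pid)
	{
		int pidfd = syscall(SYS_pidfd_open, pid, 0);

		if (pidfd < 0)
			return -1;
		if (syscall(SYS_pidfd_send_signal, pidfd, SIGKILL, NULL, 0) < 0) {
			close(pidfd);
			return -1;
		}
		/* EINVAL means the target had no pending SIGKILL;
		 * EAGAIN means part of the address space was not
		 * released. */
		if (syscall(__NR_process_mrelease, pidfd, 0) < 0)
			perror("process_mrelease");
		close(pidfd);
		return 0;
	}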