From patchwork Sun Jul 18 21:41:32 2021
X-Patchwork-Submitter: Suren Baghdasaryan
X-Patchwork-Id: 12384479
Date: Sun, 18 Jul 2021 14:41:32 -0700
Message-Id: <20210718214134.2619099-1-surenb@google.com>
Mime-Version: 1.0
X-Mailer: git-send-email 2.32.0.402.g57bb445576-goog
Subject: [PATCH v2 1/3] mm, oom: move task_will_free_mem up in the file to be used in process_mrelease
From: Suren Baghdasaryan
To: akpm@linux-foundation.org
Cc: mhocko@kernel.org, mhocko@suse.com, rientjes@google.com, willy@infradead.org,
 hannes@cmpxchg.org, guro@fb.com, riel@surriel.com, minchan@kernel.org,
 christian@brauner.io, hch@infradead.org, oleg@redhat.com, david@redhat.com,
 jannh@google.com, shakeelb@google.com, luto@kernel.org,
 christian.brauner@ubuntu.com, fweimer@redhat.com, jengelh@inai.de,
 timmurray@google.com, linux-api@vger.kernel.org, linux-mm@kvack.org,
 linux-kernel@vger.kernel.org, kernel-team@android.com, surenb@google.com

process_mrelease needs to be added to the CONFIG_MMU-dependent block, which
comes before __task_will_free_mem and task_will_free_mem. Move these functions
above that block so that the new process_mrelease syscall can use them.

Signed-off-by: Suren Baghdasaryan
---
changes in v2:
- Fixed build error when CONFIG_MMU=n, reported by kernel test robot.
  This required moving task_will_free_mem, implemented in the first patch
- Renamed process_reap to process_mrelease, per majority of votes
- Replaced "dying process" with "process which was sent a SIGKILL signal" in
  the manual page text, per Florian Weimer
- Added ERRORS section in the manual page text
- Resolved conflicts in syscall numbers caused by the new memfd_secret syscall
- Separated boilerplate code wiring up the new syscall into a separate patch
  to facilitate the review process

 mm/oom_kill.c | 150 +++++++++++++++++++++++++-------------------------
 1 file changed, 75 insertions(+), 75 deletions(-)

diff --git a/mm/oom_kill.c b/mm/oom_kill.c
index c729a4c4a1ac..d04a13dc9fde 100644
--- a/mm/oom_kill.c
+++ b/mm/oom_kill.c
@@ -501,6 +501,81 @@ bool process_shares_mm(struct task_struct *p, struct mm_struct *mm)
 	return false;
 }
 
+static inline bool __task_will_free_mem(struct task_struct *task)
+{
+	struct signal_struct *sig = task->signal;
+
+	/*
+	 * A coredumping process may sleep for an extended period in exit_mm(),
+	 * so the oom killer cannot assume that the process will promptly exit
+	 * and release memory.
+	 */
+	if (sig->flags & SIGNAL_GROUP_COREDUMP)
+		return false;
+
+	if (sig->flags & SIGNAL_GROUP_EXIT)
+		return true;
+
+	if (thread_group_empty(task) && (task->flags & PF_EXITING))
+		return true;
+
+	return false;
+}
+
+/*
+ * Checks whether the given task is dying or exiting and likely to
+ * release its address space. This means that all threads and processes
+ * sharing the same mm have to be killed or exiting.
+ * Caller has to make sure that task->mm is stable (hold task_lock or
+ * it operates on the current).
+ */
+static bool task_will_free_mem(struct task_struct *task)
+{
+	struct mm_struct *mm = task->mm;
+	struct task_struct *p;
+	bool ret = true;
+
+	/*
+	 * Skip tasks without mm because it might have passed its exit_mm and
+	 * exit_oom_victim. oom_reaper could have rescued that but do not rely
+	 * on that for now. We can consider find_lock_task_mm in future.
+	 */
+	if (!mm)
+		return false;
+
+	if (!__task_will_free_mem(task))
+		return false;
+
+	/*
+	 * This task has already been drained by the oom reaper so there are
+	 * only small chances it will free some more
+	 */
+	if (test_bit(MMF_OOM_SKIP, &mm->flags))
+		return false;
+
+	if (atomic_read(&mm->mm_users) <= 1)
+		return true;
+
+	/*
+	 * Make sure that all tasks which share the mm with the given tasks
+	 * are dying as well to make sure that a) nobody pins its mm and
+	 * b) the task is also reapable by the oom reaper.
+	 */
+	rcu_read_lock();
+	for_each_process(p) {
+		if (!process_shares_mm(p, mm))
+			continue;
+		if (same_thread_group(task, p))
+			continue;
+		ret = __task_will_free_mem(p);
+		if (!ret)
+			break;
+	}
+	rcu_read_unlock();
+
+	return ret;
+}
+
 #ifdef CONFIG_MMU
 /*
  * OOM Reaper kernel thread which tries to reap the memory used by the OOM
@@ -781,81 +856,6 @@ bool oom_killer_disable(signed long timeout)
 	return true;
 }
 
-static inline bool __task_will_free_mem(struct task_struct *task)
-{
-	struct signal_struct *sig = task->signal;
-
-	/*
-	 * A coredumping process may sleep for an extended period in exit_mm(),
-	 * so the oom killer cannot assume that the process will promptly exit
-	 * and release memory.
-	 */
-	if (sig->flags & SIGNAL_GROUP_COREDUMP)
-		return false;
-
-	if (sig->flags & SIGNAL_GROUP_EXIT)
-		return true;
-
-	if (thread_group_empty(task) && (task->flags & PF_EXITING))
-		return true;
-
-	return false;
-}
-
-/*
- * Checks whether the given task is dying or exiting and likely to
- * release its address space. This means that all threads and processes
- * sharing the same mm have to be killed or exiting.
- * Caller has to make sure that task->mm is stable (hold task_lock or
- * it operates on the current).
- */
-static bool task_will_free_mem(struct task_struct *task)
-{
-	struct mm_struct *mm = task->mm;
-	struct task_struct *p;
-	bool ret = true;
-
-	/*
-	 * Skip tasks without mm because it might have passed its exit_mm and
-	 * exit_oom_victim. oom_reaper could have rescued that but do not rely
-	 * on that for now. We can consider find_lock_task_mm in future.
-	 */
-	if (!mm)
-		return false;
-
-	if (!__task_will_free_mem(task))
-		return false;
-
-	/*
-	 * This task has already been drained by the oom reaper so there are
-	 * only small chances it will free some more
-	 */
-	if (test_bit(MMF_OOM_SKIP, &mm->flags))
-		return false;
-
-	if (atomic_read(&mm->mm_users) <= 1)
-		return true;
-
-	/*
-	 * Make sure that all tasks which share the mm with the given tasks
-	 * are dying as well to make sure that a) nobody pins its mm and
-	 * b) the task is also reapable by the oom reaper.
-	 */
-	rcu_read_lock();
-	for_each_process(p) {
-		if (!process_shares_mm(p, mm))
-			continue;
-		if (same_thread_group(task, p))
-			continue;
-		ret = __task_will_free_mem(p);
-		if (!ret)
-			break;
-	}
-	rcu_read_unlock();
-
-	return ret;
-}
-
 static void __oom_kill_process(struct task_struct *victim, const char *message)
 {
 	struct task_struct *p;