From patchwork Tue Jul  3 14:25:07 2018
X-Patchwork-Submitter: Tetsuo Handa
X-Patchwork-Id: 10504175
From: Tetsuo Handa <penguin-kernel@I-love.SAKURA.ne.jp>
To: linux-mm@kvack.org, akpm@linux-foundation.org
Cc: torvalds@linux-foundation.org, Tetsuo Handa, David Rientjes,
    Johannes Weiner, Michal Hocko, Roman Gushchin, Tejun Heo,
    Vladimir Davydov
Subject: [PATCH 6/8] mm,oom: Make oom_lock static variable.
Date: Tue, 3 Jul 2018 23:25:07 +0900
Message-Id: <1530627910-3415-7-git-send-email-penguin-kernel@I-love.SAKURA.ne.jp>
X-Mailer: git-send-email 1.8.3.1
In-Reply-To: <1530627910-3415-1-git-send-email-penguin-kernel@I-love.SAKURA.ne.jp>
References: <1530627910-3415-1-git-send-email-penguin-kernel@I-love.SAKURA.ne.jp>

As a preparation for not sleeping with oom_lock held, this patch makes
oom_lock local to the OOM killer (mm/oom_kill.c). Callers no longer take
the lock themselves; out_of_memory() now serializes internally, and
pagefault_out_of_memory() no longer bails out when the lock is contended.
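For readers who want to see the resulting locking shape in isolation, here is
a minimal userspace sketch (illustration only, not part of this patch): a
pthread mutex stands in for the kernel mutex, and caller() stands in for call
sites such as moom_callback() or pagefault_out_of_memory(). Everything except
the out_of_memory()/oom_lock naming is made up for the example. The point it
shows is that oom_lock becomes private to the OOM killer and every caller
simply invokes out_of_memory(), which serializes internally.

    /*
     * Userspace illustration of the locking shape after this patch.
     * Build with: gcc -pthread sketch.c
     */
    #include <pthread.h>
    #include <stdbool.h>
    #include <stdio.h>

    /* Previously a globally visible DEFINE_MUTEX(oom_lock); now file-local. */
    static pthread_mutex_t oom_lock = PTHREAD_MUTEX_INITIALIZER;

    /* Stand-in for the real victim selection done with oom_lock held. */
    static bool __out_of_memory(void)
    {
            return true;
    }

    bool out_of_memory(void)
    {
            bool ret;

            pthread_mutex_lock(&oom_lock);   /* lock taken by the callee ...   */
            ret = __out_of_memory();
            pthread_mutex_unlock(&oom_lock); /* ... and dropped before return. */
            return ret;
    }

    /* Callers no longer need to know that oom_lock exists at all. */
    static void *caller(void *unused)
    {
            if (!out_of_memory())
                    puts("OOM request ignored. No task eligible");
            return NULL;
    }

    int main(void)
    {
            pthread_t a, b;

            pthread_create(&a, NULL, caller, NULL);
            pthread_create(&b, NULL, caller, NULL);
            pthread_join(a, NULL);
            pthread_join(b, NULL);
            return 0;
    }

The actual patch additionally checks oom_has_pending_victims() inside the
locked section before selecting a new victim; the sketch omits that detail.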
Signed-off-by: Tetsuo Handa <penguin-kernel@I-love.SAKURA.ne.jp>
Cc: Roman Gushchin
Cc: Michal Hocko
Cc: Johannes Weiner
Cc: Vladimir Davydov
Cc: David Rientjes
Cc: Tejun Heo
---
 drivers/tty/sysrq.c |  2 --
 include/linux/oom.h |  2 --
 mm/memcontrol.c     |  6 +-----
 mm/oom_kill.c       | 47 ++++++++++++++++++++++++++++-------------------
 mm/page_alloc.c     | 24 ++++--------------------
 5 files changed, 33 insertions(+), 48 deletions(-)

diff --git a/drivers/tty/sysrq.c b/drivers/tty/sysrq.c
index 6364890..c8b66b9 100644
--- a/drivers/tty/sysrq.c
+++ b/drivers/tty/sysrq.c
@@ -376,10 +376,8 @@ static void moom_callback(struct work_struct *ignored)
                 .order = -1,
         };
 
-        mutex_lock(&oom_lock);
         if (!out_of_memory(&oc))
                 pr_info("OOM request ignored. No task eligible\n");
-        mutex_unlock(&oom_lock);
 }
 
 static DECLARE_WORK(moom_work, moom_callback);
diff --git a/include/linux/oom.h b/include/linux/oom.h
index d8da2cb..5ad2927 100644
--- a/include/linux/oom.h
+++ b/include/linux/oom.h
@@ -44,8 +44,6 @@ struct oom_control {
         unsigned long chosen_points;
 };
 
-extern struct mutex oom_lock;
-
 static inline void set_current_oom_origin(void)
 {
         current->signal->oom_flag_origin = true;
diff --git a/mm/memcontrol.c b/mm/memcontrol.c
index c8a75c8..35c33bf 100644
--- a/mm/memcontrol.c
+++ b/mm/memcontrol.c
@@ -1198,12 +1198,8 @@ static bool mem_cgroup_out_of_memory(struct mem_cgroup *memcg, gfp_t gfp_mask,
                 .gfp_mask = gfp_mask,
                 .order = order,
         };
-        bool ret;
 
-        mutex_lock(&oom_lock);
-        ret = out_of_memory(&oc);
-        mutex_unlock(&oom_lock);
-        return ret;
+        return out_of_memory(&oc);
 }
 
 #if MAX_NUMNODES > 1
diff --git a/mm/oom_kill.c b/mm/oom_kill.c
index d18fe1e..a1d3616 100644
--- a/mm/oom_kill.c
+++ b/mm/oom_kill.c
@@ -59,7 +59,7 @@ static inline unsigned long oom_victim_mm_score(struct mm_struct *mm)
 int sysctl_oom_kill_allocating_task;
 int sysctl_oom_dump_tasks = 1;
 
-DEFINE_MUTEX(oom_lock);
+static DEFINE_MUTEX(oom_lock);
 
 #ifdef CONFIG_NUMA
 /**
@@ -965,10 +965,9 @@ static bool oom_has_pending_victims(struct oom_control *oc)
  * OR try to be smart about which process to kill. Note that we
  * don't have to be perfect here, we just have to be good.
  */
-bool out_of_memory(struct oom_control *oc)
+static bool __out_of_memory(struct oom_control *oc,
+                            enum oom_constraint constraint)
 {
-        enum oom_constraint constraint = CONSTRAINT_NONE;
-
         if (oom_killer_disabled)
                 return false;
 
@@ -991,18 +990,8 @@ bool out_of_memory(struct oom_control *oc)
         if (oc->gfp_mask && !(oc->gfp_mask & __GFP_FS))
                 return true;
 
-        /*
-         * Check if there were limitations on the allocation (only relevant for
-         * NUMA and memcg) that may require different handling.
-         */
-        constraint = constrained_alloc(oc);
-        if (constraint != CONSTRAINT_MEMORY_POLICY)
-                oc->nodemask = NULL;
         check_panic_on_oom(oc, constraint);
 
-        if (oom_has_pending_victims(oc))
-                return true;
-
         if (!is_memcg_oom(oc) && sysctl_oom_kill_allocating_task &&
             oom_badness(current, NULL, oc->nodemask, oc->totalpages) > 0) {
                 get_task_struct(current);
@@ -1024,10 +1013,33 @@
         return true;
 }
 
+bool out_of_memory(struct oom_control *oc)
+{
+        enum oom_constraint constraint;
+        bool ret;
+
+        /*
+         * Check if there were limitations on the allocation (only relevant for
+         * NUMA and memcg) that may require different handling.
+         */
+        constraint = constrained_alloc(oc);
+        if (constraint != CONSTRAINT_MEMORY_POLICY)
+                oc->nodemask = NULL;
+        /*
+         * If there are OOM victims which current thread can select,
+         * wait for them to reach __mmput().
+         */
+        mutex_lock(&oom_lock);
+        if (oom_has_pending_victims(oc))
+                ret = true;
+        else
+                ret = __out_of_memory(oc, constraint);
+        mutex_unlock(&oom_lock);
+        return ret;
+}
+
 /*
  * The pagefault handler calls here because it is out of memory, so kill a
- * memory-hogging task. If oom_lock is held by somebody else, a parallel oom
- * killing is already in progress so do nothing.
+ * memory-hogging task.
  */
 void pagefault_out_of_memory(void)
 {
@@ -1042,9 +1054,6 @@ void pagefault_out_of_memory(void)
         if (mem_cgroup_oom_synchronize(true))
                 return;
 
-        if (!mutex_trylock(&oom_lock))
-                return;
         out_of_memory(&oc);
-        mutex_unlock(&oom_lock);
         schedule_timeout_killable(1);
 }
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 4cb3602..4c648f7 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -3500,32 +3500,17 @@ static inline bool can_oomkill(gfp_t gfp_mask, unsigned int order,
         };
         struct page *page;
 
-        *did_some_progress = 0;
-        /* Try to reclaim via OOM notifier callback. */
-        if (oomkill)
-                *did_some_progress = try_oom_notifier();
-
-        /*
-         * Acquire the oom lock. If that fails, somebody else is
-         * making progress for us.
-         */
-        if (!mutex_trylock(&oom_lock)) {
-                *did_some_progress = 1;
-                return NULL;
-        }
+        *did_some_progress = oomkill ? try_oom_notifier() : 0;
 
         /*
          * Go through the zonelist yet one more time, keep very high watermark
          * here, this is only to catch a parallel oom killing, we must fail if
-         * we're still under heavy pressure. But make sure that this reclaim
-         * attempt shall not depend on __GFP_DIRECT_RECLAIM && !__GFP_NORETRY
-         * allocation which will never fail due to oom_lock already held.
+         * we're still under heavy pressure.
          */
-        page = get_page_from_freelist((gfp_mask | __GFP_HARDWALL) &
-                                      ~__GFP_DIRECT_RECLAIM, order,
+        page = get_page_from_freelist(gfp_mask | __GFP_HARDWALL, order,
                                       ALLOC_WMARK_HIGH|ALLOC_CPUSET, ac);
 
-        if (page)
+        if (page || *did_some_progress)
                 goto out;
 
         if (!oomkill)
@@ -3544,7 +3529,6 @@ static inline bool can_oomkill(gfp_t gfp_mask, unsigned int order,
                                 ALLOC_NO_WATERMARKS, ac);
         }
 out:
-        mutex_unlock(&oom_lock);
         return page;
 }