From patchwork Wed Dec 8 18:17:14 2021
X-Patchwork-Submitter: Joel Savitz <jsavitz@redhat.com>
X-Patchwork-Id: 12665009
From: Joel Savitz <jsavitz@redhat.com>
To: linux-kernel@vger.kernel.org
Cc: Joel Savitz, Waiman Long, linux-mm@kvack.org, Nico Pache,
    Peter Zijlstra, Thomas Gleixner, Ingo Molnar, Darren Hart,
    Davidlohr Bueso, André Almeida, Andrew Morton, Michal Hocko
Subject: [PATCH v2] mm/oom_kill: wake futex waiters before annihilating victim shared mutex
Date: Wed, 8 Dec 2021 13:17:14 -0500
Message-Id: <20211208181714.880312-1-jsavitz@redhat.com>
When two or more processes share a futex located within a shared mmapped
region, such as a process that shares a lock with a number of its child
processes, we have observed that when the process holding the lock is
oom-killed, at least one waiter is never woken and simply continues to
wait indefinitely.

With pthreads, this is visible by inspecting the __owner field of the
pthread_mutex_t structure in a waiting process, for example with gdb.

We confirm reproduction of the issue by attaching to a waiting process of
a test program, inspecting its pthread_mutex_t and noting the value of the
__owner field, and then checking dmesg to see whether that owner has
already been killed.

The issue can be tricky to reproduce, but with the modifications in this
small patch I have been unable to reproduce it at all. There may be
additional considerations that I have not taken into account in this
patch, and I welcome any comments and criticism.

Changes from v1:
- add comments before calls to futex_exit_release()

Co-developed-by: Nico Pache
Signed-off-by: Nico Pache
Signed-off-by: Joel Savitz
---
 mm/oom_kill.c | 12 ++++++++++++
 1 file changed, 12 insertions(+)

diff --git a/mm/oom_kill.c b/mm/oom_kill.c
index 1ddabefcfb5a..884a5f15fd06 100644
--- a/mm/oom_kill.c
+++ b/mm/oom_kill.c
@@ -44,6 +44,7 @@
 #include <linux/kthread.h>
 #include <linux/init.h>
 #include <linux/mmu_notifier.h>
+#include <linux/futex.h>
 
 #include <asm/tlb.h>
 #include "internal.h"
@@ -885,6 +886,11 @@ static void __oom_kill_process(struct task_struct *victim, const char *message)
 	count_vm_event(OOM_KILL);
 	memcg_memory_event_mm(mm, MEMCG_OOM_KILL);
 
+	/*
+	 * We call futex_exit_release() on the victim task to ensure any waiters on any
+	 * process-shared futexes held by the victim task are woken up.
+	 */
+	futex_exit_release(victim);
 	/*
 	 * We should send SIGKILL before granting access to memory reserves
 	 * in order to prevent the OOM victim from depleting the memory
@@ -930,6 +936,12 @@ static void __oom_kill_process(struct task_struct *victim, const char *message)
 		 */
 		if (unlikely(p->flags & PF_KTHREAD))
 			continue;
+		/*
+		 * We call futex_exit_release() on any task p sharing the
+		 * victim->mm to ensure any waiters on any
+		 * process-shared futexes held by task p are woken up.
+		 */
+		futex_exit_release(p);
 		do_send_sig_info(SIGKILL, SEND_SIG_PRIV, p, PIDTYPE_TGID);
 	}
 	rcu_read_unlock();
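
For reference, below is a minimal userspace reproducer sketch of the
scenario described above. It is not part of the patch, and its details
are illustrative assumptions rather than the exact test program referred
to in the changelog: it places a robust, process-shared mutex in a
MAP_SHARED mapping (futex_exit_release() wakes waiters by walking the
exiting task's robust list) and generates memory pressure with
malloc()/memset() until the OOM killer selects the lock holder.

/*
 * Illustrative reproducer sketch (not part of the patch): a parent and a
 * child share a robust, process-shared mutex placed in a MAP_SHARED
 * mapping. The parent takes the lock and then dirties memory until the
 * OOM killer selects it. In the buggy case the child can remain blocked
 * in pthread_mutex_lock() forever, with the mutex __owner field still
 * holding the dead parent's TID (visible with gdb).
 *
 * Build with: gcc -pthread reproducer.c
 */
#include <errno.h>
#include <pthread.h>
#include <stdlib.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

int main(void)
{
	pthread_mutexattr_t attr;
	pthread_mutex_t *lock;

	/* Place the mutex where both processes see the same futex word. */
	lock = mmap(NULL, sizeof(*lock), PROT_READ | PROT_WRITE,
		    MAP_SHARED | MAP_ANONYMOUS, -1, 0);
	if (lock == MAP_FAILED)
		return 1;

	pthread_mutexattr_init(&attr);
	pthread_mutexattr_setpshared(&attr, PTHREAD_PROCESS_SHARED);
	pthread_mutexattr_setrobust(&attr, PTHREAD_MUTEX_ROBUST);
	pthread_mutex_init(lock, &attr);

	/* Parent takes the lock before forking so the child must wait. */
	pthread_mutex_lock(lock);

	if (fork() == 0) {
		/*
		 * Child: should eventually return EOWNERDEAD once the owner
		 * dies; in the buggy case it waits here forever.
		 */
		if (pthread_mutex_lock(lock) == EOWNERDEAD)
			pthread_mutex_consistent(lock);
		pthread_mutex_unlock(lock);
		_exit(0);
	}

	/* Parent: dirty memory until the OOM killer kills this task. */
	for (;;) {
		char *p = malloc(1 << 20);
		if (p)
			memset(p, 1, 1 << 20);
	}
}

With a robust mutex, the waiter would normally be woken with EOWNERDEAD
when the owner's futex state is released at exit; the problem described
above is that an oom-killed owner may never get that far, which is what
the early futex_exit_release() calls in this patch are intended to
address.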