From patchwork Thu Mar 25 04:37:23 2021
X-Patchwork-Submitter: Andrew Morton
X-Patchwork-Id: 12162897
Date: Wed, 24 Mar 2021 21:37:23 -0700
From: Andrew Morton
To: aarcange@redhat.com, akpm@linux-foundation.org, bgardon@google.com,
 dimitri.sivanich@hpe.com, hannes@cmpxchg.org, jgg@nvidia.com, jgg@ziepe.ca,
 jglisse@redhat.com, linux-mm@kvack.org, mhocko@suse.com,
 mm-commits@vger.kernel.org, rientjes@google.com, seanjc@google.com,
 torvalds@linux-foundation.org
Subject: [patch 03/14] mm/mmu_notifiers: ensure range_end() is paired with range_start()
Message-ID: <20210325043723.T8SpyzyPH%akpm@linux-foundation.org>
In-Reply-To: <20210324213644.bf03a529aec4ef9580e17dbc@linux-foundation.org>
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 8bit
From: Sean Christopherson
Subject: mm/mmu_notifiers: ensure range_end() is paired with range_start()

If one or more notifiers fail .invalidate_range_start(), invoke
.invalidate_range_end() for "all" notifiers.  If there are multiple
notifiers, those that did not fail are expecting _start() and _end() to
be paired, e.g. KVM's mmu_notifier_count would become imbalanced.
Disallow notifiers that can fail _start() from implementing _end() so
that it's unnecessary to either track which notifiers rejected _start(),
or which had already succeeded prior to a failed _start().

Note, the existing behavior of calling _start() on all notifiers even
after a previous notifier failed _start() was an unintended "feature".
Make it canon now that the behavior is depended on for correctness.

As of today, the bug is likely benign:

  1. The only caller of the non-blocking notifier is OOM kill.
  2. The only notifiers that can fail _start() are the i915 and Nouveau
     drivers.
  3. The only notifiers that utilize _end() are the SGI UV GRU driver
     and KVM.
  4. The GRU driver will never coincide with the i915/Nouveau drivers.
  5. An imbalanced kvm->mmu_notifier_count only causes a soft lockup in
     the _guest_, and the guest is already doomed due to being an OOM
     victim.

Fix the bug now to play nice with future usage, e.g. KVM has a
potential use case for blocking memslot updates in KVM while an
invalidation is in progress, and failure to unblock would result in
said updates being blocked indefinitely and hanging.

Found by inspection.  Verified by adding a second notifier in KVM that
periodically returns -EAGAIN on non-blockable ranges, triggering OOM,
and observing that KVM exits with an elevated notifier count.

Link: https://lkml.kernel.org/r/20210311180057.1582638-1-seanjc@google.com
Fixes: 93065ac753e4 ("mm, oom: distinguish blockable mode for mmu notifiers")
Signed-off-by: Sean Christopherson
Suggested-by: Jason Gunthorpe
Reviewed-by: Jason Gunthorpe
Cc: David Rientjes
Cc: Ben Gardon
Cc: Michal Hocko
Cc: "Jérôme Glisse"
Cc: Andrea Arcangeli
Cc: Johannes Weiner
Cc: Dimitri Sivanich
Signed-off-by: Andrew Morton
---

 include/linux/mmu_notifier.h |   10 +++++-----
 mm/mmu_notifier.c            |   23 +++++++++++++++++++++++
 2 files changed, 28 insertions(+), 5 deletions(-)
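[Editor's illustration, not part of the patch; the demo_* names and the counter
are hypothetical.]  The contract being documented is: a notifier that pairs
state across the two callbacks (as KVM does with kvm->mmu_notifier_count) must
never fail _start(), while a notifier that may return -EAGAIN for a
non-blockable range must not provide an _end() callback at all.  A minimal
sketch of both cases:

#include <linux/mmu_notifier.h>
#include <linux/atomic.h>
#include <linux/kernel.h>

struct demo_subscription {
	struct mmu_notifier mn;
	atomic_t active_invalidations;
};

static int demo_invalidate_range_start(struct mmu_notifier *mn,
				       const struct mmu_notifier_range *range)
{
	struct demo_subscription *s = container_of(mn, struct demo_subscription, mn);

	atomic_inc(&s->active_invalidations);	/* balanced by _end() */
	return 0;				/* never fails, so implementing _end() is safe */
}

static void demo_invalidate_range_end(struct mmu_notifier *mn,
				      const struct mmu_notifier_range *range)
{
	struct demo_subscription *s = container_of(mn, struct demo_subscription, mn);

	atomic_dec(&s->active_invalidations);	/* pairs with _start() */
}

static const struct mmu_notifier_ops demo_notifier_ops = {
	.invalidate_range_start	= demo_invalidate_range_start,
	.invalidate_range_end	= demo_invalidate_range_end,
};

/*
 * Conversely, a notifier that may reject a non-blockable range must
 * leave .invalidate_range_end unset -- the new WARN_ON() in
 * mn_hlist_invalidate_range_start() enforces exactly that.
 */
static int demo_nonblock_invalidate_range_start(struct mmu_notifier *mn,
						const struct mmu_notifier_range *range)
{
	if (!mmu_notifier_range_blockable(range))
		return -EAGAIN;		/* cannot sleep; decline the invalidation */
	return 0;
}

static const struct mmu_notifier_ops demo_nonblock_notifier_ops = {
	.invalidate_range_start	= demo_nonblock_invalidate_range_start,
	/* no .invalidate_range_end: a failed _start() would leave it unpaired */
};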
--- a/include/linux/mmu_notifier.h~mm-mmu_notifiers-esnure-range_end-is-paired-with-range_start
+++ a/include/linux/mmu_notifier.h
@@ -169,11 +169,11 @@ struct mmu_notifier_ops {
 	 * the last refcount is dropped.
 	 *
 	 * If blockable argument is set to false then the callback cannot
-	 * sleep and has to return with -EAGAIN. 0 should be returned
-	 * otherwise. Please note that if invalidate_range_start approves
-	 * a non-blocking behavior then the same applies to
-	 * invalidate_range_end.
-	 *
+	 * sleep and has to return with -EAGAIN if sleeping would be required.
+	 * 0 should be returned otherwise. Please note that notifiers that can
+	 * fail invalidate_range_start are not allowed to implement
+	 * invalidate_range_end, as there is no mechanism for informing the
+	 * notifier that its start failed.
 	 */
 	int (*invalidate_range_start)(struct mmu_notifier *subscription,
 				      const struct mmu_notifier_range *range);
--- a/mm/mmu_notifier.c~mm-mmu_notifiers-esnure-range_end-is-paired-with-range_start
+++ a/mm/mmu_notifier.c
@@ -501,10 +501,33 @@ static int mn_hlist_invalidate_range_sta
 				"");
 			WARN_ON(mmu_notifier_range_blockable(range) ||
 				_ret != -EAGAIN);
+			/*
+			 * We call all the notifiers on any EAGAIN,
+			 * there is no way for a notifier to know if
+			 * its start method failed, thus a start that
+			 * does EAGAIN can't also do end.
+			 */
+			WARN_ON(ops->invalidate_range_end);
 			ret = _ret;
 		}
 	}
+
+	if (ret) {
+		/*
+		 * Must be non-blocking to get here.  If there are multiple
+		 * notifiers and one or more failed start, any that succeeded
+		 * start are expecting their end to be called.  Do so now.
+		 */
+		hlist_for_each_entry_rcu(subscription, &subscriptions->list,
+					 hlist, srcu_read_lock_held(&srcu)) {
+			if (!subscription->ops->invalidate_range_end)
+				continue;
+
+			subscription->ops->invalidate_range_end(subscription,
+								range);
+		}
+	}
 	srcu_read_unlock(&srcu, id);
 
 	return ret;
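[Editor's note, not part of the patch.]  For context on the caller side, the
only non-blocking user today is the OOM reaper.  A rough, hypothetical sketch
of that calling pattern, assuming the mmu_notifier_range_init() signature of
this era and using a made-up demo_reap_range() wrapper, shows why the hunk
above has to invoke _end() itself when _start() fails: the caller simply bails
out and never reaches its own _end() call.

#include <linux/mmu_notifier.h>
#include <linux/mm.h>

/*
 * Hypothetical sketch of a non-blocking caller, loosely modelled on
 * __oom_reap_task_mm().  On -EAGAIN the caller gives up without ever
 * calling mmu_notifier_invalidate_range_end(), so any notifiers whose
 * _start() already succeeded must be unwound by the core code instead.
 */
static bool demo_reap_range(struct vm_area_struct *vma,
			    unsigned long start, unsigned long end)
{
	struct mmu_notifier_range range;

	mmu_notifier_range_init(&range, MMU_NOTIFY_UNMAP, 0, vma,
				vma->vm_mm, start, end);

	if (mmu_notifier_invalidate_range_start_nonblock(&range))
		return false;	/* some notifier returned -EAGAIN */

	/* ... the actual unmapping of [start, end) would happen here ... */

	mmu_notifier_invalidate_range_end(&range);
	return true;
}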