From patchwork Tue Mar 22 21:44:27 2022
X-Patchwork-Submitter: Andrew Morton
X-Patchwork-Id: 12789201
Date: Tue, 22 Mar 2022 14:44:27 -0700
From: Andrew Morton
To: naoya.horiguchi@nec.com, linmiaohe@huawei.com, akpm@linux-foundation.org,
 patches@lists.linux.dev, linux-mm@kvack.org, mm-commits@vger.kernel.org,
 torvalds@linux-foundation.org
In-Reply-To: <20220322143803.04a5e59a07e48284f196a2f9@linux-foundation.org>
Subject: [patch 117/227] mm/memory-failure.c: rework the try_to_unmap logic in hwpoison_user_mappings()
Message-Id: <20220322214427.D8BD8C340EC@smtp.kernel.org>

From: Miaohe Lin
Subject: mm/memory-failure.c: rework the try_to_unmap logic in hwpoison_user_mappings()

try_to_unmap() needs to take the mapping's i_mmap_rwsem in write mode only
for hugetlb pages in shared mappings.  Rework the code to make this clear.
Link: https://lkml.kernel.org/r/20220218090118.1105-7-linmiaohe@huawei.com
Signed-off-by: Miaohe Lin
Acked-by: Naoya Horiguchi
Signed-off-by: Andrew Morton
---

 mm/memory-failure.c |   34 +++++++++++++++-------------------
 1 file changed, 15 insertions(+), 19 deletions(-)

--- a/mm/memory-failure.c~mm-memory-failurec-rework-the-try_to_unmap-logic-in-hwpoison_user_mappings
+++ a/mm/memory-failure.c
@@ -1404,26 +1404,22 @@ static bool hwpoison_user_mappings(struc
 	if (kill)
 		collect_procs(hpage, &tokill, flags & MF_ACTION_REQUIRED);
 
-	if (!PageHuge(hpage)) {
-		try_to_unmap(hpage, ttu);
+	if (PageHuge(hpage) && !PageAnon(hpage)) {
+		/*
+		 * For hugetlb pages in shared mappings, try_to_unmap
+		 * could potentially call huge_pmd_unshare. Because of
+		 * this, take semaphore in write mode here and set
+		 * TTU_RMAP_LOCKED to indicate we have taken the lock
+		 * at this higher level.
+		 */
+		mapping = hugetlb_page_mapping_lock_write(hpage);
+		if (mapping) {
+			try_to_unmap(hpage, ttu|TTU_RMAP_LOCKED);
+			i_mmap_unlock_write(mapping);
+		} else
+			pr_info("Memory failure: %#lx: could not lock mapping for mapped huge page\n", pfn);
 	} else {
-		if (!PageAnon(hpage)) {
-			/*
-			 * For hugetlb pages in shared mappings, try_to_unmap
-			 * could potentially call huge_pmd_unshare. Because of
-			 * this, take semaphore in write mode here and set
-			 * TTU_RMAP_LOCKED to indicate we have taken the lock
-			 * at this higher level.
-			 */
-			mapping = hugetlb_page_mapping_lock_write(hpage);
-			if (mapping) {
-				try_to_unmap(hpage, ttu|TTU_RMAP_LOCKED);
-				i_mmap_unlock_write(mapping);
-			} else
-				pr_info("Memory failure: %#lx: could not lock mapping for mapped huge page\n", pfn);
-		} else {
-			try_to_unmap(hpage, ttu);
-		}
+		try_to_unmap(hpage, ttu);
 	}
 
 	unmap_success = !page_mapped(hpage);
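
Not part of the patch, just an illustration: the rework is purely
structural, and a quick user-space sketch (hypothetical helpers
old_branch()/new_branch() standing in for the old and new code shapes,
with `huge`/`anon` standing in for PageHuge()/PageAnon()) checks that
both conditions pick the same unmap path for every combination:

/*
 * Stand-alone check (not kernel code) that the reworked condition selects
 * the same branch as the old nested one.  Branch 1 = the locked hugetlb
 * path (i_mmap_rwsem held, TTU_RMAP_LOCKED), branch 0 = plain try_to_unmap().
 */
#include <assert.h>
#include <stdbool.h>
#include <stdio.h>

/* Old logic: non-huge pages and anonymous huge pages use the plain path. */
static int old_branch(bool huge, bool anon)
{
	if (!huge)
		return 0;
	return anon ? 0 : 1;
}

/* New logic after the patch: one flat condition for the special case. */
static int new_branch(bool huge, bool anon)
{
	return (huge && !anon) ? 1 : 0;
}

int main(void)
{
	for (int huge = 0; huge <= 1; huge++)
		for (int anon = 0; anon <= 1; anon++)
			assert(old_branch(huge, anon) == new_branch(huge, anon));
	puts("old and new unmap logic agree for all cases");
	return 0;
}

The only case that takes i_mmap_rwsem in write mode is a non-anonymous
(file-backed) hugetlb page, i.e. exactly the shared-mapping case where
try_to_unmap() can end up in huge_pmd_unshare().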