From patchwork Tue Aug 27 11:47:25 2024
X-Patchwork-Submitter: Kefeng Wang
X-Patchwork-Id: 13779375
From: Kefeng Wang
To: Andrew Morton
Cc: David Hildenbrand, Oscar Salvador, Miaohe Lin, Naoya Horiguchi,
    Jonathan Cameron, Kefeng Wang
Subject: [PATCH v3 2/5] mm: memory-failure: add unmap_poisoned_folio()
Date: Tue, 27 Aug 2024 19:47:25 +0800
Message-ID: <20240827114728.3212578-3-wangkefeng.wang@huawei.com>
In-Reply-To: <20240827114728.3212578-1-wangkefeng.wang@huawei.com>
References: <20240827114728.3212578-1-wangkefeng.wang@huawei.com>

Add unmap_poisoned_folio() helper which will be reused by
do_migrate_range() from memory hotplug soon.
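As a rough illustration of that intended reuse (the actual do_migrate_range()
change arrives in a later patch of this series, so the guard conditions and
the TTU_IGNORE_MLOCK flag below are illustrative assumptions, not the real
call site), a hotplug-side caller could look roughly like:

	/*
	 * Hypothetical sketch only; assumes a .c file under mm/ that
	 * includes <linux/mm.h>, <linux/rmap.h> and "internal.h" (which
	 * declares unmap_poisoned_folio() after this patch).
	 */
	static void sketch_unmap_if_poisoned(struct folio *folio)
	{
		/* Only unmap folios that are hwpoisoned and still mapped. */
		if (folio_test_hwpoison(folio) && folio_mapped(folio))
			unmap_poisoned_folio(folio, TTU_IGNORE_MLOCK);
	}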
Acked-by: David Hildenbrand
Signed-off-by: Kefeng Wang
Acked-by: Miaohe Lin
---
 mm/internal.h       |  8 ++++++++
 mm/memory-failure.c | 43 ++++++++++++++++++++++++++-----------------
 2 files changed, 34 insertions(+), 17 deletions(-)

diff --git a/mm/internal.h b/mm/internal.h
index 5e6f2abcea28..b00ea4595d18 100644
--- a/mm/internal.h
+++ b/mm/internal.h
@@ -1048,6 +1048,8 @@ static inline int find_next_best_node(int node, nodemask_t *used_node_mask)
 /*
  * mm/memory-failure.c
  */
+#ifdef CONFIG_MEMORY_FAILURE
+void unmap_poisoned_folio(struct folio *folio, enum ttu_flags ttu);
 void shake_folio(struct folio *folio);
 extern int hwpoison_filter(struct page *p);
 
@@ -1068,6 +1070,12 @@ void add_to_kill_ksm(struct task_struct *tsk, struct page *p,
 		     unsigned long ksm_addr);
 unsigned long page_mapped_in_vma(struct page *page, struct vm_area_struct *vma);
 
+#else
+static inline void unmap_poisoned_folio(struct folio *folio, enum ttu_flags ttu)
+{
+}
+#endif
+
 extern unsigned long __must_check vm_mmap_pgoff(struct file *, unsigned long,
         unsigned long, unsigned long,
         unsigned long, unsigned long);
diff --git a/mm/memory-failure.c b/mm/memory-failure.c
index 353254537b54..67b6b259a75d 100644
--- a/mm/memory-failure.c
+++ b/mm/memory-failure.c
@@ -1554,6 +1554,31 @@ static int get_hwpoison_page(struct page *p, unsigned long flags)
 	return ret;
 }
 
+void unmap_poisoned_folio(struct folio *folio, enum ttu_flags ttu)
+{
+	if (folio_test_hugetlb(folio) && !folio_test_anon(folio)) {
+		struct address_space *mapping;
+		/*
+		 * For hugetlb folios in shared mappings, try_to_unmap
+		 * could potentially call huge_pmd_unshare. Because of
+		 * this, take semaphore in write mode here and set
+		 * TTU_RMAP_LOCKED to indicate we have taken the lock
+		 * at this higher level.
+		 */
+		mapping = hugetlb_folio_mapping_lock_write(folio);
+		if (!mapping) {
+			pr_info("%#lx: could not lock mapping for mapped hugetlb folio\n",
+				folio_pfn(folio));
+			return;
+		}
+
+		try_to_unmap(folio, ttu|TTU_RMAP_LOCKED);
+		i_mmap_unlock_write(mapping);
+	} else {
+		try_to_unmap(folio, ttu);
+	}
+}
+
 /*
  * Do all that is necessary to remove user space mappings. Unmap
  * the pages and send SIGBUS to the processes if the data was dirty.
@@ -1615,23 +1640,7 @@ static bool hwpoison_user_mappings(struct folio *folio, struct page *p,
 	 */
 	collect_procs(folio, p, &tokill, flags & MF_ACTION_REQUIRED);
 
-	if (folio_test_hugetlb(folio) && !folio_test_anon(folio)) {
-		/*
-		 * For hugetlb pages in shared mappings, try_to_unmap
-		 * could potentially call huge_pmd_unshare. Because of
-		 * this, take semaphore in write mode here and set
-		 * TTU_RMAP_LOCKED to indicate we have taken the lock
-		 * at this higher level.
-		 */
-		mapping = hugetlb_folio_mapping_lock_write(folio);
-		if (mapping) {
-			try_to_unmap(folio, ttu|TTU_RMAP_LOCKED);
-			i_mmap_unlock_write(mapping);
-		} else
-			pr_info("%#lx: could not lock mapping for mapped huge page\n", pfn);
-	} else {
-		try_to_unmap(folio, ttu);
-	}
+	unmap_poisoned_folio(folio, ttu);
 
 	unmap_success = !folio_mapped(folio);
 	if (!unmap_success)
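One design note on the mm/internal.h hunk: because the !CONFIG_MEMORY_FAILURE
branch supplies an empty static inline stub, a caller outside
mm/memory-failure.c (such as the hotplug sketch earlier) does not need to wrap
the call in #ifdef CONFIG_MEMORY_FAILURE; with the option disabled the call
compiles to nothing. Also visible in the diff: the helper's failure message
prints folio_pfn(folio), i.e. the head pfn, and says "mapped hugetlb folio",
where the removed inline code printed the passed-in pfn and said "mapped huge
page".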