From patchwork Sat Aug 17 08:49:38 2024
X-Patchwork-Submitter: Kefeng Wang
X-Patchwork-Id: 13767068
From: Kefeng Wang
To: Andrew Morton
Cc: David Hildenbrand, Oscar Salvador, Miaohe Lin, Naoya Horiguchi, Kefeng Wang
Subject: [PATCH v2 2/5] mm: memory-failure: add unmap_poisoned_folio()
Date: Sat, 17 Aug 2024 16:49:38 +0800
Message-ID: <20240817084941.2375713-3-wangkefeng.wang@huawei.com>
In-Reply-To: <20240817084941.2375713-1-wangkefeng.wang@huawei.com>
References: <20240817084941.2375713-1-wangkefeng.wang@huawei.com>
MIME-Version: 1.0
Add an unmap_poisoned_folio() helper which will be reused by
do_migrate_range() from memory hotplug soon.
Acked-by: David Hildenbrand
Signed-off-by: Kefeng Wang
---
 mm/internal.h       |  9 +++++++++
 mm/memory-failure.c | 43 ++++++++++++++++++++++++++-----------------
 2 files changed, 35 insertions(+), 17 deletions(-)

diff --git a/mm/internal.h b/mm/internal.h
index d7aac802efa5..74490b8ac63d 100644
--- a/mm/internal.h
+++ b/mm/internal.h
@@ -1045,6 +1045,8 @@ static inline int find_next_best_node(int node, nodemask_t *used_node_mask)
 /*
  * mm/memory-failure.c
  */
+#ifdef CONFIG_MEMORY_FAILURE
+int unmap_poisoned_folio(struct folio *folio, enum ttu_flags ttu);
 void shake_folio(struct folio *folio);
 extern int hwpoison_filter(struct page *p);
 
@@ -1065,6 +1067,13 @@ void add_to_kill_ksm(struct task_struct *tsk, struct page *p,
 			unsigned long ksm_addr);
 unsigned long page_mapped_in_vma(struct page *page, struct vm_area_struct *vma);
 
+#else
+static inline int unmap_poisoned_folio(struct folio *folio, enum ttu_flags ttu)
+{
+	return 0;
+}
+#endif
+
 extern unsigned long __must_check vm_mmap_pgoff(struct file *, unsigned long,
 	unsigned long, unsigned long,
 	unsigned long, unsigned long);
diff --git a/mm/memory-failure.c b/mm/memory-failure.c
index 353254537b54..93848330de1f 100644
--- a/mm/memory-failure.c
+++ b/mm/memory-failure.c
@@ -1554,6 +1554,30 @@ static int get_hwpoison_page(struct page *p, unsigned long flags)
 	return ret;
 }
 
+int unmap_poisoned_folio(struct folio *folio, enum ttu_flags ttu)
+{
+	if (folio_test_hugetlb(folio) && !folio_test_anon(folio)) {
+		struct address_space *mapping;
+		/*
+		 * For hugetlb pages in shared mappings, try_to_unmap
+		 * could potentially call huge_pmd_unshare. Because of
+		 * this, take semaphore in write mode here and set
+		 * TTU_RMAP_LOCKED to indicate we have taken the lock
+		 * at this higher level.
+		 */
+		mapping = hugetlb_folio_mapping_lock_write(folio);
+		if (!mapping)
+			return -EAGAIN;
+
+		try_to_unmap(folio, ttu|TTU_RMAP_LOCKED);
+		i_mmap_unlock_write(mapping);
+	} else {
+		try_to_unmap(folio, ttu);
+	}
+
+	return 0;
+}
+
 /*
  * Do all that is necessary to remove user space mappings. Unmap
  * the pages and send SIGBUS to the processes if the data was dirty.
@@ -1615,23 +1639,8 @@ static bool hwpoison_user_mappings(struct folio *folio, struct page *p,
 	 */
 	collect_procs(folio, p, &tokill, flags & MF_ACTION_REQUIRED);
 
-	if (folio_test_hugetlb(folio) && !folio_test_anon(folio)) {
-		/*
-		 * For hugetlb pages in shared mappings, try_to_unmap
-		 * could potentially call huge_pmd_unshare. Because of
-		 * this, take semaphore in write mode here and set
-		 * TTU_RMAP_LOCKED to indicate we have taken the lock
-		 * at this higher level.
-		 */
-		mapping = hugetlb_folio_mapping_lock_write(folio);
-		if (mapping) {
-			try_to_unmap(folio, ttu|TTU_RMAP_LOCKED);
-			i_mmap_unlock_write(mapping);
-		} else
-			pr_info("%#lx: could not lock mapping for mapped huge page\n", pfn);
-	} else {
-		try_to_unmap(folio, ttu);
-	}
+	if (unmap_poisoned_folio(folio, ttu))
+		pr_info("%#lx: could not lock mapping for mapped huge page\n", pfn);
 
 	unmap_success = !folio_mapped(folio);
 	if (!unmap_success)