From patchwork Fri Aug 16 09:04:32 2024
X-Patchwork-Submitter: Kefeng Wang
X-Patchwork-Id: 13765785
From: Kefeng Wang
To: Andrew Morton
CC: David Hildenbrand, Oscar Salvador, Miaohe Lin, Naoya Horiguchi,
    Kefeng Wang
Subject: [PATCH v2 2/5] mm: memory-failure: add unmap_posioned_folio()
Date: Fri, 16 Aug 2024 17:04:32 +0800
Message-ID: <20240816090435.888946-3-wangkefeng.wang@huawei.com>
In-Reply-To: <20240816090435.888946-1-wangkefeng.wang@huawei.com>
References: <20240816090435.888946-1-wangkefeng.wang@huawei.com>

Add unmap_posioned_folio() helper which will be reused by
do_migrate_range() from memory hotplug soon.
Acked-by: David Hildenbrand
Signed-off-by: Kefeng Wang
---
 mm/internal.h       |  9 +++++++++
 mm/memory-failure.c | 43 ++++++++++++++++++++++++++-----------------
 2 files changed, 35 insertions(+), 17 deletions(-)

diff --git a/mm/internal.h b/mm/internal.h
index adbf8c88c9df..5b80c65f82b6 100644
--- a/mm/internal.h
+++ b/mm/internal.h
@@ -1054,6 +1054,8 @@ static inline int find_next_best_node(int node, nodemask_t *used_node_mask)
 /*
  * mm/memory-failure.c
  */
+#ifdef CONFIG_MEMORY_FAILURE
+int unmap_posioned_folio(struct folio *folio, enum ttu_flags ttu);
 void shake_folio(struct folio *folio);
 extern int hwpoison_filter(struct page *p);
 
@@ -1074,6 +1076,13 @@ void add_to_kill_ksm(struct task_struct *tsk, struct page *p,
 		unsigned long ksm_addr);
 unsigned long page_mapped_in_vma(struct page *page, struct vm_area_struct *vma);
 
+#else
+static inline int unmap_posioned_folio(struct folio *folio, enum ttu_flags ttu)
+{
+	return 0;
+}
+#endif
+
 extern unsigned long __must_check vm_mmap_pgoff(struct file *, unsigned long,
 		unsigned long, unsigned long,
 		unsigned long, unsigned long);
diff --git a/mm/memory-failure.c b/mm/memory-failure.c
index 353254537b54..93848330de1f 100644
--- a/mm/memory-failure.c
+++ b/mm/memory-failure.c
@@ -1554,6 +1554,30 @@ static int get_hwpoison_page(struct page *p, unsigned long flags)
 	return ret;
 }
 
+int unmap_posioned_folio(struct folio *folio, enum ttu_flags ttu)
+{
+	if (folio_test_hugetlb(folio) && !folio_test_anon(folio)) {
+		struct address_space *mapping;
+		/*
+		 * For hugetlb pages in shared mappings, try_to_unmap
+		 * could potentially call huge_pmd_unshare. Because of
+		 * this, take semaphore in write mode here and set
+		 * TTU_RMAP_LOCKED to indicate we have taken the lock
+		 * at this higher level.
+		 */
+		mapping = hugetlb_folio_mapping_lock_write(folio);
+		if (!mapping)
+			return -EAGAIN;
+
+		try_to_unmap(folio, ttu|TTU_RMAP_LOCKED);
+		i_mmap_unlock_write(mapping);
+	} else {
+		try_to_unmap(folio, ttu);
+	}
+
+	return 0;
+}
+
 /*
  * Do all that is necessary to remove user space mappings. Unmap
  * the pages and send SIGBUS to the processes if the data was dirty.
@@ -1615,23 +1639,8 @@ static bool hwpoison_user_mappings(struct folio *folio, struct page *p,
 	 */
 	collect_procs(folio, p, &tokill, flags & MF_ACTION_REQUIRED);
 
-	if (folio_test_hugetlb(folio) && !folio_test_anon(folio)) {
-		/*
-		 * For hugetlb pages in shared mappings, try_to_unmap
-		 * could potentially call huge_pmd_unshare. Because of
-		 * this, take semaphore in write mode here and set
-		 * TTU_RMAP_LOCKED to indicate we have taken the lock
-		 * at this higher level.
-		 */
-		mapping = hugetlb_folio_mapping_lock_write(folio);
-		if (mapping) {
-			try_to_unmap(folio, ttu|TTU_RMAP_LOCKED);
-			i_mmap_unlock_write(mapping);
-		} else
-			pr_info("%#lx: could not lock mapping for mapped huge page\n", pfn);
-	} else {
-		try_to_unmap(folio, ttu);
-	}
+	if (unmap_posioned_folio(folio, ttu))
+		pr_info("%#lx: could not lock mapping for mapped huge page\n", pfn);
 
 	unmap_success = !folio_mapped(folio);
 	if (!unmap_success)
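
Note for reviewers (illustrative, not part of this patch): the commit message says the
helper will be reused by do_migrate_range(). A rough sketch of what such a caller could
look like, assuming a folio already known to be hwpoisoned and still mapped, might be as
follows. The wrapper name, the chosen TTU flag and the surrounding context are
hypothetical; the real hook-up comes later in the series.

/*
 * Illustrative sketch only -- not part of this patch. Shows a caller
 * reusing unmap_posioned_folio() instead of open-coding the hugetlb
 * i_mmap locking dance around try_to_unmap().
 */
#include <linux/mm.h>
#include <linux/rmap.h>
#include <linux/printk.h>
#include "internal.h"	/* unmap_posioned_folio() */

static void example_unmap_hwpoisoned(struct folio *folio, unsigned long pfn)
{
	/* Nothing to do if the poisoned folio is no longer mapped. */
	if (!folio_mapped(folio))
		return;

	/*
	 * A non-zero return means the hugetlb i_mmap lock could not be
	 * taken; the caller decides whether to retry or just report it.
	 */
	if (unmap_posioned_folio(folio, TTU_IGNORE_MLOCK))
		pr_warn("%#lx: failed to unmap hwpoisoned folio\n", pfn);
}

Compared with the old open-coded block in hwpoison_user_mappings(), returning -EAGAIN
from the helper instead of printing inside it keeps the reporting policy with the
caller, which is what makes the same helper reusable from the memory hotplug path.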