From patchwork Wed Jun 9 04:12:20 2021
X-Patchwork-Submitter: Hugh Dickins
X-Patchwork-Id: 12308831
Date: Tue, 8 Jun 2021 21:12:20 -0700 (PDT)
From: Hugh Dickins
To: linux-mm@kvack.org, linux-kernel@vger.kernel.org
Subject: [PATCH v2 03/10] mm/thp: try_to_unmap() use TTU_SYNC for safe splitting (fwd)
Message-ID: <6b2b6683-d9a7-b7d0-a3e5-425b96338d63@google.com>

---------- Forwarded message ----------
Date: Tue, 8 Jun 2021 21:10:19 -0700 (PDT)
From: Hugh Dickins
To: Andrew Morton
Cc: Hugh Dickins, Kirill A. Shutemov, Yang Shi, Wang Yugui, Matthew Wilcox,
    Naoya Horiguchi, Alistair Popple, Ralph Campbell, Zi Yan, Miaohe Lin,
    Minchan Kim, Jue Wang, Peter Xu, Jan Kara, Shakeel Butt, Oscar Salvador
Subject: [PATCH v2 03/10] mm/thp: try_to_unmap() use TTU_SYNC for safe splitting

Stressing huge tmpfs often crashed on unmap_page()'s VM_BUG_ON_PAGE
(!unmap_success): with dump_page() showing mapcount:1, but then its raw
struct page output showing _mapcount ffffffff i.e. mapcount 0.

And even if that particular VM_BUG_ON_PAGE(!unmap_success) is removed,
it is immediately followed by a VM_BUG_ON_PAGE(compound_mapcount(head)),
and further down an IS_ENABLED(CONFIG_DEBUG_VM) total_mapcount BUG():
all indicative of some mapcount difficulty in development here perhaps.
But the !CONFIG_DEBUG_VM path handles the failures correctly and silently.

I believe the problem is that once a racing unmap has cleared pte or pmd,
try_to_unmap_one() may skip taking the page table lock, and emerge from
try_to_unmap() before the racing task has reached decrementing mapcount.

Instead of abandoning the unsafe VM_BUG_ON_PAGE(), and the ones that
follow, use PVMW_SYNC in try_to_unmap_one() in this case: adding TTU_SYNC
to the options, and passing that from unmap_page().

When CONFIG_DEBUG_VM, or for non-debug too?  Consensus is to do the same
for both: the slight overhead added should rarely matter, except perhaps
if splitting sparsely-populated multiply-mapped shmem.  Once confident
that bugs are fixed, TTU_SYNC here can be removed, and the race tolerated.

Fixes: fec89c109f3a ("thp: rewrite freeze_page()/unfreeze_page() with generic rmap walkers")
Signed-off-by: Hugh Dickins
Cc:
Acked-by: Kirill A. Shutemov
Reviewed-by: Yang Shi
---
v2: moved TTU_SYNC definition up, to avoid conflict with other patchset
    use TTU_SYNC even when non-debug, per Peter Xu and Yang Shi
    expanded PVMW_SYNC's spin_unlock(pmd_lock()), per Kirill and Peter

 include/linux/rmap.h |  1 +
 mm/huge_memory.c     |  2 +-
 mm/page_vma_mapped.c | 11 +++++++++++
 mm/rmap.c            | 17 ++++++++++++++++-
 4 files changed, 29 insertions(+), 2 deletions(-)

diff --git a/include/linux/rmap.h b/include/linux/rmap.h
index def5c62c93b3..8d04e7deedc6 100644
--- a/include/linux/rmap.h
+++ b/include/linux/rmap.h
@@ -91,6 +91,7 @@ enum ttu_flags {
 	TTU_SPLIT_HUGE_PMD	= 0x4,	/* split huge PMD if any */
 	TTU_IGNORE_MLOCK	= 0x8,	/* ignore mlock */
+	TTU_SYNC		= 0x10,	/* avoid racy checks with PVMW_SYNC */
 	TTU_IGNORE_HWPOISON	= 0x20,	/* corrupted page is recoverable */
 	TTU_BATCH_FLUSH		= 0x40,	/* Batch TLB flushes where possible
 					 * and caller guarantees they will
diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index 5885c5f5836f..84ab735139dc 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -2350,7 +2350,7 @@ void vma_adjust_trans_huge(struct vm_area_struct *vma,
 
 static void unmap_page(struct page *page)
 {
-	enum ttu_flags ttu_flags = TTU_IGNORE_MLOCK |
+	enum ttu_flags ttu_flags = TTU_IGNORE_MLOCK | TTU_SYNC |
 		TTU_RMAP_LOCKED | TTU_SPLIT_HUGE_PMD;
 	bool unmap_success;
 
diff --git a/mm/page_vma_mapped.c b/mm/page_vma_mapped.c
index 2cf01d933f13..5b559967410e 100644
--- a/mm/page_vma_mapped.c
+++ b/mm/page_vma_mapped.c
@@ -212,6 +212,17 @@ bool page_vma_mapped_walk(struct page_vma_mapped_walk *pvmw)
 			pvmw->ptl = NULL;
 		}
 	} else if (!pmd_present(pmde)) {
+		/*
+		 * If PVMW_SYNC, take and drop THP pmd lock so that we
+		 * cannot return prematurely, while zap_huge_pmd() has
+		 * cleared *pmd but not decremented compound_mapcount().
+		 */
+		if ((pvmw->flags & PVMW_SYNC) &&
+		    PageTransCompound(pvmw->page)) {
+			spinlock_t *ptl = pmd_lock(mm, pvmw->pmd);
+
+			spin_unlock(ptl);
+		}
 		return false;
 	}
 	if (!map_pte(pvmw))
diff --git a/mm/rmap.c b/mm/rmap.c
index 693a610e181d..07811b4ae793 100644
--- a/mm/rmap.c
+++ b/mm/rmap.c
@@ -1405,6 +1405,15 @@ static bool try_to_unmap_one(struct page *page, struct vm_area_struct *vma,
 	struct mmu_notifier_range range;
 	enum ttu_flags flags = (enum ttu_flags)(long)arg;
 
+	/*
+	 * When racing against e.g. zap_pte_range() on another cpu,
+	 * in between its ptep_get_and_clear_full() and page_remove_rmap(),
+	 * try_to_unmap() may return false when it is about to become true,
+	 * if page table locking is skipped: use TTU_SYNC to wait for that.
+	 */
+	if (flags & TTU_SYNC)
+		pvmw.flags = PVMW_SYNC;
+
 	/* munlock has nothing to gain from examining un-locked vmas */
 	if ((flags & TTU_MUNLOCK) && !(vma->vm_flags & VM_LOCKED))
 		return true;
@@ -1777,7 +1786,13 @@ bool try_to_unmap(struct page *page, enum ttu_flags flags)
 	else
 		rmap_walk(page, &rwc);
 
-	return !page_mapcount(page) ? true : false;
+	/*
+	 * When racing against e.g. zap_pte_range() on another cpu,
+	 * in between its ptep_get_and_clear_full() and page_remove_rmap(),
+	 * try_to_unmap() may return false when it is about to become true,
+	 * if page table locking is skipped: use TTU_SYNC to wait for that.
+	 */
+	return !page_mapcount(page);
 }
 
 /**
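The race described in the commit message, as an illustrative timeline
(a sketch in comment form only, simplified from the functions named
there; not part of the patch):

	/*
	 * CPU0: munmap()'s zap path           CPU1: THP split's try_to_unmap()
	 * --------------------------          --------------------------------
	 * ptep_get_and_clear_full()
	 *                                     finds pte already cleared, skips
	 *                                     taking the page table lock, and
	 *                                     try_to_unmap() returns while the
	 *                                     mapcount is still raised: so
	 *                                     VM_BUG_ON_PAGE(!unmap_success)
	 *                                     fires spuriously
	 * page_remove_rmap()
	 *                                     ...only now does mapcount drop
	 *
	 * With TTU_SYNC, page_vma_mapped_walk() takes and drops the page
	 * table lock (PVMW_SYNC), so CPU1 cannot return prematurely.
	 */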
From patchwork Wed Jun 9 04:14:24 2021
X-Patchwork-Submitter: Hugh Dickins
X-Patchwork-Id: 12308833
Date: Tue, 8 Jun 2021 21:14:24 -0700 (PDT)
From: Hugh Dickins
To: Andrew Morton
Cc: Hugh Dickins, Kirill A. Shutemov, Yang Shi, Wang Yugui, Matthew Wilcox,
    Naoya Horiguchi, Alistair Popple, Ralph Campbell, Zi Yan, Miaohe Lin,
    Minchan Kim, Jue Wang, Peter Xu, Jan Kara, Shakeel Butt, Oscar Salvador,
    linux-mm@kvack.org, linux-kernel@vger.kernel.org
Subject: [PATCH v2 04/10] mm/thp: fix vma_address() if virtual address below file offset

Running certain tests with a DEBUG_VM kernel would crash within hours,
on the total_mapcount BUG() in split_huge_page_to_list(), while trying
to free up some memory by punching a hole in a shmem huge page: split's
try_to_unmap() was unable to find all the mappings of the page (which,
on a !DEBUG_VM kernel, would then keep the huge page pinned in memory).

When that BUG() was changed to a WARN(), it would later crash on the
VM_BUG_ON_VMA(end < vma->vm_start || start >= vma->vm_end, vma) in
mm/internal.h:vma_address(), used by rmap_walk_file() for try_to_unmap().

vma_address() is usually correct, but there's a wraparound case when the
vm_start address is unusually low, but vm_pgoff not so low: vma_address()
chooses max(start, vma->vm_start), but that decides on the wrong address,
because start has become almost ULONG_MAX.
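As a worked illustration of that wraparound (hypothetical numbers, and
plain userspace C rather than kernel code):

	#include <stdio.h>
	#define PAGE_SHIFT 12

	int main(void)
	{
		unsigned long vm_start = 0x1000;  /* unusually low vm_start */
		unsigned long vm_pgoff = 0x200;   /* but vm_pgoff not so low */
		unsigned long pgoff    = 0;       /* page index below vm_pgoff */

		/* the old __vma_address() calculation: */
		unsigned long start = vm_start +
				((pgoff - vm_pgoff) << PAGE_SHIFT);

		/* prints start = 0xffffffffffe01000 on 64-bit: almost
		 * ULONG_MAX, so max(start, vm_start) wrongly picks start */
		printf("start = %#lx\n", start);
		return 0;
	}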
Rewrite vma_address() to be more careful about vm_pgoff; move the
VM_BUG_ON_VMA() out of it, returning -EFAULT for errors, so that it can
be safely used from page_mapped_in_vma() and page_address_in_vma() too.

Add vma_address_end() to apply similar care to end address calculation,
in page_vma_mapped_walk() and page_mkclean_one() and try_to_unmap_one();
though it raises a question of whether callers would do better to supply
pvmw->end to page_vma_mapped_walk() - I chose not, for a smaller patch.

An irritation is that their apparent generality breaks down on KSM pages,
which cannot be located by the page->index that page_to_pgoff() uses: as
4b0ece6fa016 ("mm: migrate: fix remove_migration_pte() for ksm pages")
once discovered.  I dithered over the best thing to do about that, and
have ended up with a VM_BUG_ON_PAGE(PageKsm) in both vma_address() and
vma_address_end(); though the only place in danger of using it on them
was try_to_unmap_one().

Sidenote: vma_address() and vma_address_end() now use compound_nr() on
a head page, instead of thp_size(): to make the right calculation on a
hugetlbfs page, whether or not THPs are configured.  try_to_unmap() is
used on hugetlbfs pages, but perhaps the wrong calculation never mattered.

Fixes: a8fa41ad2f6f ("mm, rmap: check all VMAs that PTE-mapped THP can be part of")
Signed-off-by: Hugh Dickins
Acked-by: Kirill A. Shutemov
Cc:
---
v2 series: adjust vma_address() to avoid 32-bit wrap, per Matthew
v2: use compound_nr() as Matthew suggested

 mm/internal.h        | 53 ++++++++++++++++++++++++++++++++++------------
 mm/page_vma_mapped.c | 16 ++++++--------
 mm/rmap.c            | 16 +++++++-------
 3 files changed, 53 insertions(+), 32 deletions(-)

diff --git a/mm/internal.h b/mm/internal.h
index 2f1182948aa6..e8fdb531f887 100644
--- a/mm/internal.h
+++ b/mm/internal.h
@@ -384,27 +384,52 @@ static inline void mlock_migrate_page(struct page *newpage, struct page *page)
 extern pmd_t maybe_pmd_mkwrite(pmd_t pmd, struct vm_area_struct *vma);
 
 /*
- * At what user virtual address is page expected in @vma?
+ * At what user virtual address is page expected in vma?
+ * Returns -EFAULT if all of the page is outside the range of vma.
+ * If page is a compound head, the entire compound page is considered.
  */
 static inline unsigned long
-__vma_address(struct page *page, struct vm_area_struct *vma)
+vma_address(struct page *page, struct vm_area_struct *vma)
 {
-	pgoff_t pgoff = page_to_pgoff(page);
-	return vma->vm_start + ((pgoff - vma->vm_pgoff) << PAGE_SHIFT);
+	pgoff_t pgoff;
+	unsigned long address;
+
+	VM_BUG_ON_PAGE(PageKsm(page), page);	/* KSM page->index unusable */
+	pgoff = page_to_pgoff(page);
+	if (pgoff >= vma->vm_pgoff) {
+		address = vma->vm_start +
+			((pgoff - vma->vm_pgoff) << PAGE_SHIFT);
+		/* Check for address beyond vma (or wrapped through 0?) */
+		if (address < vma->vm_start || address >= vma->vm_end)
+			address = -EFAULT;
+	} else if (PageHead(page) &&
+		   pgoff + compound_nr(page) - 1 >= vma->vm_pgoff) {
+		/* Test above avoids possibility of wrap to 0 on 32-bit */
+		address = vma->vm_start;
+	} else {
+		address = -EFAULT;
+	}
+	return address;
 }
 
+/*
+ * Then at what user virtual address will none of the page be found in vma?
+ * Assumes that vma_address() already returned a good starting address.
+ * If page is a compound head, the entire compound page is considered.
+ */
 static inline unsigned long
-vma_address(struct page *page, struct vm_area_struct *vma)
+vma_address_end(struct page *page, struct vm_area_struct *vma)
 {
-	unsigned long start, end;
-
-	start = __vma_address(page, vma);
-	end = start + thp_size(page) - PAGE_SIZE;
-
-	/* page should be within @vma mapping range */
-	VM_BUG_ON_VMA(end < vma->vm_start || start >= vma->vm_end, vma);
-
-	return max(start, vma->vm_start);
+	pgoff_t pgoff;
+	unsigned long address;
+
+	VM_BUG_ON_PAGE(PageKsm(page), page);	/* KSM page->index unusable */
+	pgoff = page_to_pgoff(page) + compound_nr(page);
+	address = vma->vm_start + ((pgoff - vma->vm_pgoff) << PAGE_SHIFT);
+	/* Check for address beyond vma (or wrapped through 0?) */
+	if (address < vma->vm_start || address > vma->vm_end)
+		address = vma->vm_end;
+	return address;
 }
 
 static inline struct file *maybe_unlock_mmap_for_io(struct vm_fault *vmf,
diff --git a/mm/page_vma_mapped.c b/mm/page_vma_mapped.c
index 5b559967410e..e37bd43904af 100644
--- a/mm/page_vma_mapped.c
+++ b/mm/page_vma_mapped.c
@@ -228,18 +228,18 @@ bool page_vma_mapped_walk(struct page_vma_mapped_walk *pvmw)
 	if (!map_pte(pvmw))
 		goto next_pte;
 	while (1) {
+		unsigned long end;
+
 		if (check_pte(pvmw))
 			return true;
 next_pte:
 		/* Seek to next pte only makes sense for THP */
 		if (!PageTransHuge(pvmw->page) || PageHuge(pvmw->page))
 			return not_found(pvmw);
+		end = vma_address_end(pvmw->page, pvmw->vma);
 		do {
 			pvmw->address += PAGE_SIZE;
-			if (pvmw->address >= pvmw->vma->vm_end ||
-			    pvmw->address >=
-					__vma_address(pvmw->page, pvmw->vma) +
-					thp_size(pvmw->page))
+			if (pvmw->address >= end)
 				return not_found(pvmw);
 			/* Did we cross page table boundary? */
 			if (pvmw->address % PMD_SIZE == 0) {
@@ -277,14 +277,10 @@ int page_mapped_in_vma(struct page *page, struct vm_area_struct *vma)
 		.vma = vma,
 		.flags = PVMW_SYNC,
 	};
-	unsigned long start, end;
-
-	start = __vma_address(page, vma);
-	end = start + thp_size(page) - PAGE_SIZE;
 
-	if (unlikely(end < vma->vm_start || start >= vma->vm_end))
+	pvmw.address = vma_address(page, vma);
+	if (pvmw.address == -EFAULT)
 		return 0;
-	pvmw.address = max(start, vma->vm_start);
 	if (!page_vma_mapped_walk(&pvmw))
 		return 0;
 	page_vma_mapped_walk_done(&pvmw);
diff --git a/mm/rmap.c b/mm/rmap.c
index 07811b4ae793..144de54efc1c 100644
--- a/mm/rmap.c
+++ b/mm/rmap.c
@@ -707,7 +707,6 @@ static bool should_defer_flush(struct mm_struct *mm, enum ttu_flags flags)
  */
 unsigned long page_address_in_vma(struct page *page, struct vm_area_struct *vma)
 {
-	unsigned long address;
 	if (PageAnon(page)) {
 		struct anon_vma *page__anon_vma = page_anon_vma(page);
 		/*
@@ -722,10 +721,8 @@ unsigned long page_address_in_vma(struct page *page, struct vm_area_struct *vma)
 			return -EFAULT;
 	} else
 		return -EFAULT;
-	address = __vma_address(page, vma);
-	if (unlikely(address < vma->vm_start || address >= vma->vm_end))
-		return -EFAULT;
-	return address;
+
+	return vma_address(page, vma);
 }
 
 pmd_t *mm_find_pmd(struct mm_struct *mm, unsigned long address)
@@ -919,7 +916,7 @@ static bool page_mkclean_one(struct page *page, struct vm_area_struct *vma,
 	 */
 	mmu_notifier_range_init(&range, MMU_NOTIFY_PROTECTION_PAGE,
 				0, vma, vma->vm_mm, address,
-				min(vma->vm_end, address + page_size(page)));
+				vma_address_end(page, vma));
 	mmu_notifier_invalidate_range_start(&range);
 
 	while (page_vma_mapped_walk(&pvmw)) {
@@ -1435,9 +1432,10 @@ static bool try_to_unmap_one(struct page *page, struct vm_area_struct *vma,
 	 * Note that the page can not be free in this function as call of
 	 * try_to_unmap() must hold a reference on the page.
 	 */
+	range.end = PageKsm(page) ?
+			address + PAGE_SIZE : vma_address_end(page, vma);
 	mmu_notifier_range_init(&range, MMU_NOTIFY_CLEAR, 0, vma, vma->vm_mm,
-				address,
-				min(vma->vm_end, address + page_size(page)));
+				address, range.end);
 	if (PageHuge(page)) {
 		/*
 		 * If sharing is possible, start and end will be adjusted
@@ -1889,6 +1887,7 @@ static void rmap_walk_anon(struct page *page, struct rmap_walk_control *rwc,
 		struct vm_area_struct *vma = avc->vma;
 		unsigned long address = vma_address(page, vma);
 
+		VM_BUG_ON_VMA(address == -EFAULT, vma);
 		cond_resched();
 
 		if (rwc->invalid_vma && rwc->invalid_vma(vma, rwc->arg))
@@ -1943,6 +1942,7 @@ static void rmap_walk_file(struct page *page, struct rmap_walk_control *rwc,
 			pgoff_start, pgoff_end) {
 		unsigned long address = vma_address(page, vma);
 
+		VM_BUG_ON_VMA(address == -EFAULT, vma);
 		cond_resched();
 
 		if (rwc->invalid_vma && rwc->invalid_vma(vma, rwc->arg))
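The resulting contract, as page_mapped_in_vma() above now uses it (a
sketch of the caller pattern, not additional patch code):

	unsigned long address = vma_address(page, vma);

	if (address == -EFAULT)		/* page lies wholly outside vma */
		return 0;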
From patchwork Wed Jun 9 04:16:55 2021
X-Patchwork-Submitter: Hugh Dickins
X-Patchwork-Id: 12308835
Date: Tue, 8 Jun 2021 21:16:55 -0700 (PDT)
From: Hugh Dickins
To: Andrew Morton
Cc: Hugh Dickins, Kirill A. Shutemov, Yang Shi, Wang Yugui, Matthew Wilcox,
    Naoya Horiguchi, Alistair Popple, Ralph Campbell, Zi Yan, Miaohe Lin,
    Minchan Kim, Jue Wang, Peter Xu, Jan Kara, Shakeel Butt, Oscar Salvador,
    linux-mm@kvack.org, linux-kernel@vger.kernel.org
Subject: [PATCH v2 05/10] mm/thp: fix page_address_in_vma() on file THP tails

From: Jue Wang

Anon THP tails were already supported, but memory-failure may need to
use page_address_in_vma() on file THP tails, which its page->mapping
check did not permit: fix it.

hughd adds: no current usage is known to hit the issue, but this does
fix a subtle trap in a general helper: best fixed in stable sooner than
later.

Fixes: 800d8c63b2e9 ("shmem: add huge pages support")
Signed-off-by: Jue Wang
Signed-off-by: Hugh Dickins
Reviewed-by: Matthew Wilcox (Oracle)
Reviewed-by: Yang Shi
Acked-by: Kirill A. Shutemov
Cc:
---
 mm/rmap.c | 8 ++++----
 1 file changed, 4 insertions(+), 4 deletions(-)

diff --git a/mm/rmap.c b/mm/rmap.c
index 144de54efc1c..e05c300048e6 100644
--- a/mm/rmap.c
+++ b/mm/rmap.c
@@ -716,11 +716,11 @@ unsigned long page_address_in_vma(struct page *page, struct vm_area_struct *vma)
 		if (!vma->anon_vma || !page__anon_vma ||
 		    vma->anon_vma->root != page__anon_vma->root)
 			return -EFAULT;
-	} else if (page->mapping) {
-		if (!vma->vm_file || vma->vm_file->f_mapping != page->mapping)
-			return -EFAULT;
-	} else
+	} else if (!vma->vm_file) {
+		return -EFAULT;
+	} else if (vma->vm_file->f_mapping != compound_head(page)->mapping) {
 		return -EFAULT;
+	}
 
 	return vma_address(page, vma);
 }
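In caller terms (an illustrative sketch, not patch code: "subpage" is a
hypothetical tail page of a file THP, as memory-failure might pass in):

	/*
	 * Before the fix, a file THP tail's page->mapping does not match
	 * vma->vm_file->f_mapping, so this wrongly returned -EFAULT even
	 * for a mapped tail; comparing compound_head(subpage)->mapping
	 * instead lets vma_address() (patch 04) do the right calculation.
	 */
	unsigned long addr = page_address_in_vma(subpage, vma);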
From patchwork Wed Jun 9 04:19:24 2021
X-Patchwork-Submitter: Hugh Dickins
X-Patchwork-Id: 12308837

Date: Tue, 8 Jun 2021 21:19:24 -0700 (PDT)
From: Hugh Dickins
To: Andrew Morton
Cc: Hugh Dickins, Kirill A. Shutemov, Yang Shi, Wang Yugui, Matthew Wilcox,
    Naoya Horiguchi, Alistair Popple, Ralph Campbell, Zi Yan, Miaohe Lin,
    Minchan Kim, Jue Wang, Peter Xu, Jan Kara, Shakeel Butt, Oscar Salvador,
    linux-mm@kvack.org, linux-kernel@vger.kernel.org
Subject: [PATCH v2 06/10] mm/thp: unmap_mapping_page() to fix THP truncate_cleanup_page()

There is a race between THP unmapping and truncation, when truncate sees
pmd_none() and skips the entry, after munmap's zap_huge_pmd() cleared it,
but before its page_remove_rmap() gets to decrement compound_mapcount:
generating false "BUG: Bad page cache" reports that the page is still
mapped when deleted.  This commit fixes that, but not in the way I hoped.

The first attempt used try_to_unmap(page, TTU_SYNC|TTU_IGNORE_MLOCK)
instead of unmap_mapping_range() in truncate_cleanup_page(): it has often
been an annoyance that we usually call unmap_mapping_range() with no pages
locked, but there apply it to a single locked page.  try_to_unmap() looks
more suitable for a single locked page.

However, try_to_unmap_one() contains a VM_BUG_ON_PAGE(!pvmw.pte, page):
it is used to insert THP migration entries, but not used to unmap THPs.
Copy zap_huge_pmd() and add THP handling now?  Perhaps, but their TLB
needs are different, I'm too ignorant of the DAX cases, and couldn't
decide how far to go for anon+swap.  Set that aside.
The second attempt took a different tack: make no change in truncate.c,
but modify zap_huge_pmd() to insert an invalidated huge pmd instead of
clearing it initially, then pmd_clear() between page_remove_rmap() and
unlocking at the end.  Nice.  But powerpc blows that approach out of the
water, with its serialize_against_pte_lookup(), and interesting pgtable
usage.  It would need serious help to get working on powerpc (with a
minor optimization issue on s390 too).  Set that aside.

Just add an "if (page_mapped(page)) synchronize_rcu();" or other such
delay, after unmapping in truncate_cleanup_page()?  Perhaps, but though
that's likely to reduce or eliminate the number of incidents, it would
give less assurance of whether we had identified the problem correctly.

This successful iteration introduces "unmap_mapping_page(page)" instead
of try_to_unmap(), and goes the usual unmap_mapping_range_tree() route,
with an addition to details.  Then zap_pmd_range() watches for this case,
and does spin_unlock(pmd_lock) if so - just like page_vma_mapped_walk()
now does in the PVMW_SYNC case.  Not pretty, but safe.

Note that unmap_mapping_page() is doing a VM_BUG_ON(!PageLocked) to
assert its interface; but currently that's only used to make sure that
page->mapping is stable, and zap_pmd_range() doesn't care if the page is
locked or not.  Along these lines, in invalidate_inode_pages2_range()
move the initial unmap_mapping_range() out from under page lock, before
then calling unmap_mapping_page() under page lock if still mapped.

Fixes: fc127da085c2 ("truncate: handle file thp")
Signed-off-by: Hugh Dickins
Acked-by: Kirill A. Shutemov
Cc:
---
 include/linux/mm.h |  3 +++
 mm/memory.c        | 40 ++++++++++++++++++++++++++++++++++++++++
 mm/truncate.c      | 43 +++++++++++++++++++------------------------
 3 files changed, 62 insertions(+), 24 deletions(-)

diff --git a/include/linux/mm.h b/include/linux/mm.h
index c274f75efcf9..8ae31622deef 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -1719,6 +1719,7 @@ struct zap_details {
 	struct address_space *check_mapping;	/* Check page->mapping if set */
 	pgoff_t	first_index;			/* Lowest page->index to unmap */
 	pgoff_t	last_index;			/* Highest page->index to unmap */
+	struct page *single_page;		/* Locked page to be unmapped */
 };
 
 struct page *vm_normal_page(struct vm_area_struct *vma, unsigned long addr,
@@ -1766,6 +1767,7 @@ extern vm_fault_t handle_mm_fault(struct vm_area_struct *vma,
 extern int fixup_user_fault(struct mm_struct *mm,
 			    unsigned long address, unsigned int fault_flags,
 			    bool *unlocked);
+void unmap_mapping_page(struct page *page);
 void unmap_mapping_pages(struct address_space *mapping,
 		pgoff_t start, pgoff_t nr, bool even_cows);
 void unmap_mapping_range(struct address_space *mapping,
@@ -1786,6 +1788,7 @@ static inline int fixup_user_fault(struct mm_struct *mm, unsigned long address,
 	BUG();
 	return -EFAULT;
 }
+static inline void unmap_mapping_page(struct page *page) { }
 static inline void unmap_mapping_pages(struct address_space *mapping,
 		pgoff_t start, pgoff_t nr, bool even_cows) { }
 static inline void unmap_mapping_range(struct address_space *mapping,
diff --git a/mm/memory.c b/mm/memory.c
index f3ffab9b9e39..ee1163df3a53 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -1361,7 +1361,17 @@ static inline unsigned long zap_pmd_range(struct mmu_gather *tlb,
 			else if (zap_huge_pmd(tlb, vma, pmd, addr))
 				goto next;
 			/* fall through */
+		} else if (details && details->single_page &&
+			   PageTransCompound(details->single_page) &&
+			   next - addr == HPAGE_PMD_SIZE && pmd_none(*pmd)) {
+			/*
+			 * Take and drop THP pmd lock so that we cannot return
+			 * prematurely, while zap_huge_pmd() has cleared *pmd,
+			 * but not yet decremented compound_mapcount().
+			 */
+			spin_unlock(pmd_lock(tlb->mm, pmd));
 		}
+
 		/*
 		 * Here there can be other concurrent MADV_DONTNEED or
 		 * trans huge page faults running, and if the pmd is
@@ -3236,6 +3246,36 @@ static inline void unmap_mapping_range_tree(struct rb_root_cached *root,
 	}
 }
 
+/**
+ * unmap_mapping_page() - Unmap single page from processes.
+ * @page: The locked page to be unmapped.
+ *
+ * Unmap this page from any userspace process which still has it mmaped.
+ * Typically, for efficiency, the range of nearby pages has already been
+ * unmapped by unmap_mapping_pages() or unmap_mapping_range().  But once
+ * truncation or invalidation holds the lock on a page, it may find that
+ * the page has been remapped again: and then uses unmap_mapping_page()
+ * to unmap it finally.
+ */
+void unmap_mapping_page(struct page *page)
+{
+	struct address_space *mapping = page->mapping;
+	struct zap_details details = { };
+
+	VM_BUG_ON(!PageLocked(page));
+	VM_BUG_ON(PageTail(page));
+
+	details.check_mapping = mapping;
+	details.first_index = page->index;
+	details.last_index = page->index + thp_nr_pages(page) - 1;
+	details.single_page = page;
+
+	i_mmap_lock_write(mapping);
+	if (unlikely(!RB_EMPTY_ROOT(&mapping->i_mmap.rb_root)))
+		unmap_mapping_range_tree(&mapping->i_mmap, &details);
+	i_mmap_unlock_write(mapping);
+}
+
 /**
  * unmap_mapping_pages() - Unmap pages from processes.
  * @mapping: The address space containing pages to be unmapped.
diff --git a/mm/truncate.c b/mm/truncate.c
index 95af244b112a..234ddd879caa 100644
--- a/mm/truncate.c
+++ b/mm/truncate.c
@@ -167,13 +167,10 @@ void do_invalidatepage(struct page *page, unsigned int offset,
  * its lock, b) when a concurrent invalidate_mapping_pages got there first and
  * c) when tmpfs swizzles a page between a tmpfs inode and swapper_space.
  */
-static void
-truncate_cleanup_page(struct address_space *mapping, struct page *page)
+static void truncate_cleanup_page(struct page *page)
 {
-	if (page_mapped(page)) {
-		unsigned int nr = thp_nr_pages(page);
-		unmap_mapping_pages(mapping, page->index, nr, false);
-	}
+	if (page_mapped(page))
+		unmap_mapping_page(page);
 
 	if (page_has_private(page))
 		do_invalidatepage(page, 0, thp_size(page));
@@ -218,7 +215,7 @@ int truncate_inode_page(struct address_space *mapping, struct page *page)
 	if (page->mapping != mapping)
 		return -EIO;
 
-	truncate_cleanup_page(mapping, page);
+	truncate_cleanup_page(page);
 	delete_from_page_cache(page);
 	return 0;
 }
@@ -325,7 +322,7 @@ void truncate_inode_pages_range(struct address_space *mapping,
 		index = indices[pagevec_count(&pvec) - 1] + 1;
 		truncate_exceptional_pvec_entries(mapping, &pvec, indices);
 		for (i = 0; i < pagevec_count(&pvec); i++)
-			truncate_cleanup_page(mapping, pvec.pages[i]);
+			truncate_cleanup_page(pvec.pages[i]);
 		delete_from_page_cache_batch(mapping, &pvec);
 		for (i = 0; i < pagevec_count(&pvec); i++)
 			unlock_page(pvec.pages[i]);
@@ -639,6 +636,16 @@ int invalidate_inode_pages2_range(struct address_space *mapping,
 				continue;
 			}
 
+			if (!did_range_unmap && page_mapped(page)) {
+				/*
+				 * If page is mapped, before taking its lock,
+				 * zap the rest of the file in one hit.
+				 */
+				unmap_mapping_pages(mapping, index,
+						(1 + end - index), false);
+				did_range_unmap = 1;
+			}
+
 			lock_page(page);
 			WARN_ON(page_to_index(page) != index);
 			if (page->mapping != mapping) {
@@ -646,23 +653,11 @@ int invalidate_inode_pages2_range(struct address_space *mapping,
 				continue;
 			}
 			wait_on_page_writeback(page);
-			if (page_mapped(page)) {
-				if (!did_range_unmap) {
-					/*
-					 * Zap the rest of the file in one hit.
-					 */
-					unmap_mapping_pages(mapping, index,
-						(1 + end - index), false);
-					did_range_unmap = 1;
-				} else {
-					/*
-					 * Just zap this page
-					 */
-					unmap_mapping_pages(mapping, index,
-							1, false);
-				}
-			}
+
+			if (page_mapped(page))
+				unmap_mapping_page(page);
 			BUG_ON(page_mapped(page));
+
 			ret2 = do_launder_page(mapping, page);
 			if (ret2 == 0) {
 				if (!invalidate_complete_page2(mapping, page))
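In caller terms, the pattern invalidate_inode_pages2_range() ends up
with is (an illustrative sketch of the code above, not additional code):

	/* first, cheaply zap the whole range with no pages locked */
	unmap_mapping_pages(mapping, index, 1 + end - index, false);
	/* ... later, for each page found in the range ... */
	lock_page(page);			/* stabilizes page->mapping */
	if (page_mapped(page))			/* remapped meanwhile? */
		unmap_mapping_page(page);	/* zap just this locked page */
	BUG_ON(page_mapped(page));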
From patchwork Wed Jun 9 04:22:39 2021
X-Patchwork-Submitter: Hugh Dickins
X-Patchwork-Id: 12308839

Date: Tue, 8 Jun 2021 21:22:39 -0700 (PDT)
From: Hugh Dickins
To: Andrew Morton
Cc: Hugh Dickins, Kirill A. Shutemov, Yang Shi, Wang Yugui, Matthew Wilcox,
    Naoya Horiguchi, Alistair Popple, Ralph Campbell, Zi Yan, Miaohe Lin,
    Minchan Kim, Jue Wang, Peter Xu, Jan Kara, Shakeel Butt, Oscar Salvador,
    linux-mm@kvack.org, linux-kernel@vger.kernel.org
Subject: [PATCH v2 07/10] mm: thp: replace DEBUG_VM BUG with VM_WARN when unmap fails for split

From: Yang Shi

When debugging the bug reported by Wang Yugui [1], try_to_unmap() may
fail, but the first VM_BUG_ON_PAGE() only checks page_mapcount(): it
may miss the failure when the head page is unmapped but another subpage
is still mapped.  The second DEBUG_VM BUG(), which checks total
mapcount, would then catch it.  This can be confusing.  And since this
is not a fatal issue, consolidate the two DEBUG_VM checks into one
VM_WARN_ON_ONCE_PAGE().

[1] https://lore.kernel.org/linux-mm/20210412180659.B9E3.409509F4@e16-tech.com/

Signed-off-by: Yang Shi
Reviewed-by: Zi Yan
Acked-by: Kirill A. Shutemov
Signed-off-by: Hugh Dickins
Cc:
---
Patch inserted since the v1 series was posted.
v5: Rediffed by Hugh to fit after 6/7 in his mm/thp series; Cc stable.
v4: Updated the subject and commit log per Hugh.  Reordered the patches
    per Hugh.
v3: Incorporated the comments from Hugh.  Keep Zi Yan's reviewed-by tag
    since there is no fundamental change against v2.
v2: Removed dead code and updated the comment of try_to_unmap() per Zi Yan.
 mm/huge_memory.c | 24 +++++++-----------------
 1 file changed, 7 insertions(+), 17 deletions(-)

diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index 84ab735139dc..6d2a0119fc58 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -2352,15 +2352,15 @@ static void unmap_page(struct page *page)
 {
 	enum ttu_flags ttu_flags = TTU_IGNORE_MLOCK | TTU_SYNC |
 		TTU_RMAP_LOCKED | TTU_SPLIT_HUGE_PMD;
-	bool unmap_success;
 
 	VM_BUG_ON_PAGE(!PageHead(page), page);
 
 	if (PageAnon(page))
 		ttu_flags |= TTU_SPLIT_FREEZE;
 
-	unmap_success = try_to_unmap(page, ttu_flags);
-	VM_BUG_ON_PAGE(!unmap_success, page);
+	try_to_unmap(page, ttu_flags);
+
+	VM_WARN_ON_ONCE_PAGE(page_mapped(page), page);
 }
 
 static void remap_page(struct page *page, unsigned int nr)
@@ -2671,7 +2671,7 @@ int split_huge_page_to_list(struct page *page, struct list_head *list)
 	struct deferred_split *ds_queue = get_deferred_split_queue(head);
 	struct anon_vma *anon_vma = NULL;
 	struct address_space *mapping = NULL;
-	int count, mapcount, extra_pins, ret;
+	int extra_pins, ret;
 	pgoff_t end;
 
 	VM_BUG_ON_PAGE(is_huge_zero_page(head), head);
@@ -2730,7 +2730,6 @@ int split_huge_page_to_list(struct page *page, struct list_head *list)
 	}
 
 	unmap_page(head);
-	VM_BUG_ON_PAGE(compound_mapcount(head), head);
 
 	/* block interrupt reentry in xa_lock and spinlock */
 	local_irq_disable();
@@ -2748,9 +2747,7 @@ int split_huge_page_to_list(struct page *page, struct list_head *list)
 
 	/* Prevent deferred_split_scan() touching ->_refcount */
 	spin_lock(&ds_queue->split_queue_lock);
-	count = page_count(head);
-	mapcount = total_mapcount(head);
-	if (!mapcount && page_ref_freeze(head, 1 + extra_pins)) {
+	if (page_ref_freeze(head, 1 + extra_pins)) {
 		if (!list_empty(page_deferred_list(head))) {
 			ds_queue->split_queue_len--;
 			list_del(page_deferred_list(head));
@@ -2770,16 +2767,9 @@ int split_huge_page_to_list(struct page *page, struct list_head *list)
 		__split_huge_page(page, list, end);
 		ret = 0;
 	} else {
-		if (IS_ENABLED(CONFIG_DEBUG_VM) && mapcount) {
-			pr_alert("total_mapcount: %u, page_count(): %u\n",
-					mapcount, count);
-			if (PageTail(page))
-				dump_page(head, NULL);
-			dump_page(page, "total_mapcount(head) > 0");
-			BUG();
-		}
 		spin_unlock(&ds_queue->split_queue_lock);
-fail:		if (mapping)
+fail:
+		if (mapping)
 			xa_unlock(&mapping->i_pages);
 		local_irq_enable();
 		remap_page(head, thp_nr_pages(head));
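For context, an illustrative note (comment form only, not patch code)
on why the first check could miss a failure:

	/*
	 * For a PTE-mapped THP, page_mapcount(head) counts only mappings
	 * of the head page: it can read 0 while a tail subpage is still
	 * mapped.  total_mapcount() sums all subpages, at higher cost;
	 * page_mapped(), used by the new VM_WARN_ON_ONCE_PAGE() above,
	 * bails out as soon as it finds any mapped subpage.
	 */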
From patchwork Wed Jun 9 04:25:22 2021
X-Patchwork-Submitter: Hugh Dickins
X-Patchwork-Id: 12308841

Date: Tue, 8 Jun 2021 21:25:22 -0700 (PDT)
From: Hugh Dickins
To: Andrew Morton
Cc: Hugh Dickins, Kirill A. Shutemov, Yang Shi, Wang Yugui, Matthew Wilcox,
    Naoya Horiguchi, Alistair Popple, Ralph Campbell, Zi Yan, Miaohe Lin,
    Minchan Kim, Jue Wang, Peter Xu, Jan Kara, Shakeel Butt, Oscar Salvador,
    linux-mm@kvack.org, linux-kernel@vger.kernel.org
Subject: [PATCH v2 08/10] mm: rmap: make try_to_unmap() void function
Shutemov" , Yang Shi , Wang Yugui , Matthew Wilcox , Naoya Horiguchi , Alistair Popple , Ralph Campbell , Zi Yan , Miaohe Lin , Minchan Kim , Jue Wang , Peter Xu , Jan Kara , Shakeel Butt , Oscar Salvador , linux-mm@kvack.org, linux-kernel@vger.kernel.org Subject: [PATCH v2 08/10] mm: rmap: make try_to_unmap() void function In-Reply-To: Message-ID: References: MIME-Version: 1.0 Authentication-Results: imf16.hostedemail.com; dkim=pass header.d=google.com header.s=20161025 header.b=YOZb6ehB; dmarc=pass (policy=reject) header.from=google.com; spf=pass (imf16.hostedemail.com: domain of hughd@google.com designates 209.85.160.175 as permitted sender) smtp.mailfrom=hughd@google.com X-Rspamd-Server: rspam02 X-Stat-Signature: sgpg49stnaf94bc6bh5ypw417zuera34 X-Rspamd-Queue-Id: 00C108019365 X-HE-Tag: 1623212722-180724 X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: From: Yang Shi Currently try_to_unmap() return bool value by checking page_mapcount(), however this may return false positive since page_mapcount() doesn't check all subpages of compound page. The total_mapcount() could be used instead, but its cost is higher since it traverses all subpages. Actually the most callers of try_to_unmap() don't care about the return value at all. So just need check if page is still mapped by page_mapped() when necessary. And page_mapped() does bail out early when it finds mapped subpage. Suggested-by: Hugh Dickins Signed-off-by: Yang Shi Acked-by: Minchan Kim Reviewed-by: Shakeel Butt Acked-by: Kirill A. Shutemov Signed-off-by: Hugh Dickins Acked-by: Naoya Horiguchi --- Patch inserted since the v1 series was posted. v5: Rediffed by Hugh to fit before 7/7 in mm/thp series; akpm fixed grammar. v4: Updated the comment of try_to_unmap() per Minchan. Minor fix and patch reorder per Hugh. Collected ack tag from Hugh. 
 include/linux/rmap.h |  2 +-
 mm/memory-failure.c  | 15 +++++++--------
 mm/rmap.c            | 15 ++++-----------
 mm/vmscan.c          |  3 ++-
 4 files changed, 14 insertions(+), 21 deletions(-)

diff --git a/include/linux/rmap.h b/include/linux/rmap.h
index 8d04e7deedc6..ed31a559e857 100644
--- a/include/linux/rmap.h
+++ b/include/linux/rmap.h
@@ -195,7 +195,7 @@ static inline void page_dup_rmap(struct page *page, bool compound)
 int page_referenced(struct page *, int is_locked,
 			struct mem_cgroup *memcg, unsigned long *vm_flags);
 
-bool try_to_unmap(struct page *, enum ttu_flags flags);
+void try_to_unmap(struct page *, enum ttu_flags flags);
 
 /* Avoid racy checks */
 #define PVMW_SYNC		(1 << 0)
diff --git a/mm/memory-failure.c b/mm/memory-failure.c
index 85ad98c00fd9..b6806e446567 100644
--- a/mm/memory-failure.c
+++ b/mm/memory-failure.c
@@ -1063,7 +1063,7 @@ static bool hwpoison_user_mappings(struct page *p, unsigned long pfn,
 	enum ttu_flags ttu = TTU_IGNORE_MLOCK;
 	struct address_space *mapping;
 	LIST_HEAD(tokill);
-	bool unmap_success = true;
+	bool unmap_success;
 	int kill = 1, forcekill;
 	struct page *hpage = *hpagep;
 	bool mlocked = PageMlocked(hpage);
@@ -1126,7 +1126,7 @@ static bool hwpoison_user_mappings(struct page *p, unsigned long pfn,
 	collect_procs(hpage, &tokill, flags & MF_ACTION_REQUIRED);
 
 	if (!PageHuge(hpage)) {
-		unmap_success = try_to_unmap(hpage, ttu);
+		try_to_unmap(hpage, ttu);
 	} else {
 		if (!PageAnon(hpage)) {
 			/*
@@ -1138,17 +1138,16 @@ static bool hwpoison_user_mappings(struct page *p, unsigned long pfn,
 			 */
 			mapping = hugetlb_page_mapping_lock_write(hpage);
 			if (mapping) {
-				unmap_success = try_to_unmap(hpage,
-						ttu|TTU_RMAP_LOCKED);
+				try_to_unmap(hpage, ttu|TTU_RMAP_LOCKED);
 				i_mmap_unlock_write(mapping);
-			} else {
+			} else
 				pr_info("Memory failure: %#lx: could not lock mapping for mapped huge page\n", pfn);
-				unmap_success = false;
-			}
 		} else {
-			unmap_success = try_to_unmap(hpage, ttu);
+			try_to_unmap(hpage, ttu);
 		}
 	}
+
+	unmap_success = !page_mapped(hpage);
 	if (!unmap_success)
 		pr_err("Memory failure: %#lx: failed to unmap page (mapcount=%d)\n",
 		       pfn, page_mapcount(hpage));
diff --git a/mm/rmap.c b/mm/rmap.c
index e05c300048e6..f9fd5bc54f0a 100644
--- a/mm/rmap.c
+++ b/mm/rmap.c
@@ -1405,7 +1405,7 @@ static bool try_to_unmap_one(struct page *page, struct vm_area_struct *vma,
 	/*
 	 * When racing against e.g. zap_pte_range() on another cpu,
 	 * in between its ptep_get_and_clear_full() and page_remove_rmap(),
-	 * try_to_unmap() may return false when it is about to become true,
+	 * try_to_unmap() may return before page_mapped() has become false,
 	 * if page table locking is skipped: use TTU_SYNC to wait for that.
 	 */
 	if (flags & TTU_SYNC)
@@ -1756,9 +1756,10 @@ static int page_not_mapped(struct page *page)
  * Tries to remove all the page table entries which are mapping this
  * page, used in the pageout path.  Caller must hold the page lock.
  *
- * If unmap is successful, return true. Otherwise, false.
+ * It is the caller's responsibility to check if the page is still
+ * mapped when needed (use TTU_SYNC to prevent accounting races).
  */
-bool try_to_unmap(struct page *page, enum ttu_flags flags)
+void try_to_unmap(struct page *page, enum ttu_flags flags)
 {
 	struct rmap_walk_control rwc = {
 		.rmap_one = try_to_unmap_one,
@@ -1783,14 +1784,6 @@ bool try_to_unmap(struct page *page, enum ttu_flags flags)
 		rmap_walk_locked(page, &rwc);
 	else
 		rmap_walk(page, &rwc);
-
-	/*
-	 * When racing against e.g. zap_pte_range() on another cpu,
-	 * in between its ptep_get_and_clear_full() and page_remove_rmap(),
-	 * try_to_unmap() may return false when it is about to become true,
-	 * if page table locking is skipped: use TTU_SYNC to wait for that.
-	 */
-	return !page_mapcount(page);
 }
diff --git a/mm/vmscan.c b/mm/vmscan.c
index 5199b9696bab..db49cb1dc052 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -1499,7 +1499,8 @@ static unsigned int shrink_page_list(struct list_head *page_list,
 			if (unlikely(PageTransHuge(page)))
 				flags |= TTU_SPLIT_HUGE_PMD;
 
-			if (!try_to_unmap(page, flags)) {
+			try_to_unmap(page, flags);
+			if (page_mapped(page)) {
 				stat->nr_unmap_fail += nr_pages;
 				if (!was_swapbacked && PageSwapBacked(page))
 					stat->nr_lazyfree_fail += nr_pages;

From patchwork Wed Jun 9 04:27:45 2021
X-Patchwork-Submitter: Hugh Dickins
X-Patchwork-Id: 12308843
Date: Tue, 8 Jun 2021 21:27:45 -0700 (PDT)
From: Hugh Dickins
To: Andrew Morton
cc: Hugh Dickins, "Kirill A. Shutemov", Yang Shi, Wang Yugui, Matthew Wilcox,
    Naoya Horiguchi, Alistair Popple, Ralph Campbell, Zi Yan, Miaohe Lin,
    Minchan Kim, Jue Wang, Peter Xu, Jan Kara, Shakeel Butt, Oscar Salvador,
    linux-mm@kvack.org, linux-kernel@vger.kernel.org
Subject: [PATCH v2 09/10] mm/thp: remap_page() is only needed on anonymous THP

THP splitting's unmap_page() only sets TTU_SPLIT_FREEZE when PageAnon,
and migration entries are only inserted when TTU_MIGRATION (unused
here) or TTU_SPLIT_FREEZE is set: so it's just a waste of time for
remap_page() to search for migration entries to remove when !PageAnon.

Fixes: baa355fd3314 ("thp: file pages support for split_huge_page()")
Signed-off-by: Hugh Dickins
Reviewed-by: Yang Shi
Acked-by: Kirill A. Shutemov
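Condensed for illustration (this only restates the diff below, it is
not new code): only the anonymous path asks for migration entries, so
remap_page() has nothing to undo for file-backed THP:

	if (PageAnon(page))			/* in unmap_page() */
		ttu_flags |= TTU_SPLIT_FREEZE;	/* sole source of migration
						 * entries during split */
	...
	if (!PageAnon(page))			/* in remap_page() */
		return;				/* nothing to remove */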
---
 mm/huge_memory.c | 5 +++++
 1 file changed, 5 insertions(+)

diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index 6d2a0119fc58..319a1a078451 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -2355,6 +2355,7 @@ static void unmap_page(struct page *page)
 
 	VM_BUG_ON_PAGE(!PageHead(page), page);
 
+	/* If TTU_SPLIT_FREEZE is ever extended to file, update remap_page() */
 	if (PageAnon(page))
 		ttu_flags |= TTU_SPLIT_FREEZE;
 
@@ -2366,6 +2367,10 @@ static void unmap_page(struct page *page)
 static void remap_page(struct page *page, unsigned int nr)
 {
 	int i;
+
+	/* If TTU_SPLIT_FREEZE is ever extended to file, remove this check */
+	if (!PageAnon(page))
+		return;
 	if (PageTransHuge(page)) {
 		remove_migration_ptes(page, page, true);
 	} else {

From patchwork Wed Jun 9 04:30:00 2021
X-Patchwork-Submitter: Hugh Dickins
X-Patchwork-Id: 12308845
Date: Tue, 8 Jun 2021 21:30:00 -0700 (PDT)
From: Hugh Dickins
To: Andrew Morton
cc: Hugh Dickins, "Kirill A. Shutemov", Yang Shi, Wang Yugui, Matthew Wilcox,
    Naoya Horiguchi, Alistair Popple, Ralph Campbell, Zi Yan, Miaohe Lin,
    Minchan Kim, Jue Wang, Peter Xu, Jan Kara, Shakeel Butt, Oscar Salvador,
    linux-mm@kvack.org, linux-kernel@vger.kernel.org
Subject: [PATCH v2 10/10] mm: hwpoison_user_mappings() try_to_unmap() with TTU_SYNC
Message-ID: <329c28ed-95df-9a2c-8893-b444d8a6d340@google.com>

TTU_SYNC prevents an unlikely race, in which try_to_unmap() returns
shortly before the page is accounted as unmapped.  It is unlikely to
coincide with hwpoisoning, but now that we have the flag,
hwpoison_user_mappings() would do well to use it.

Signed-off-by: Hugh Dickins
Acked-by: Kirill A. Shutemov
Acked-by: Naoya Horiguchi
---
Patch added since the v1 series was posted.

 mm/memory-failure.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/mm/memory-failure.c b/mm/memory-failure.c
index b6806e446567..e16edefca523 100644
--- a/mm/memory-failure.c
+++ b/mm/memory-failure.c
@@ -1060,7 +1060,7 @@ static int get_hwpoison_page(struct page *p, unsigned long flags,
 static bool hwpoison_user_mappings(struct page *p, unsigned long pfn,
 				   int flags, struct page **hpagep)
 {
-	enum ttu_flags ttu = TTU_IGNORE_MLOCK;
+	enum ttu_flags ttu = TTU_IGNORE_MLOCK | TTU_SYNC;
 	struct address_space *mapping;
 	LIST_HEAD(tokill);
 	bool unmap_success;
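To illustrate the effect (a condensed sketch, not the patch itself;
hwpoison_unmap_sketch() is an invented name): with TTU_SYNC,
try_to_unmap() waits for a racing unmap to finish updating the
mapcount, so the page_mapped() check that follows sees a stable
answer:

	static bool hwpoison_unmap_sketch(struct page *hpage)
	{
		enum ttu_flags ttu = TTU_IGNORE_MLOCK | TTU_SYNC;

		try_to_unmap(hpage, ttu);	/* waits out a racing unmap */
		return !page_mapped(hpage);	/* no longer racy */
	}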