From patchwork Fri Apr 14 14:23:31 2023
X-Patchwork-Submitter: Chih-En Lin
X-Patchwork-Id: 13211633
From: Chih-En Lin
To: Andrew Morton, Qi Zheng, David Hildenbrand,
	"Matthew Wilcox (Oracle)", Christophe Leroy, John Hubbard,
	Nadav Amit, Barry Song, Pasha Tatashin
Cc: Thomas Gleixner, Ingo Molnar, Borislav Petkov, Dave Hansen,
Peter Anvin" , Steven Rostedt , Masami Hiramatsu , Peter Zijlstra , Arnaldo Carvalho de Melo , Mark Rutland , Alexander Shishkin , Jiri Olsa , Namhyung Kim , Ian Rogers , Adrian Hunter , Yu Zhao , Steven Barrett , Juergen Gross , Peter Xu , Kefeng Wang , Tong Tiangen , Christoph Hellwig , "Liam R. Howlett" , Yang Shi , Vlastimil Babka , Alex Sierra , Vincent Whitchurch , Anshuman Khandual , Li kunyu , Liu Shixin , Hugh Dickins , Minchan Kim , Joey Gouly , Chih-En Lin , Michal Hocko , Suren Baghdasaryan , "Zach O'Keefe" , Gautam Menghani , Catalin Marinas , Mark Brown , "Eric W. Biederman" , Andrei Vagin , Shakeel Butt , Daniel Bristot de Oliveira , "Jason A. Donenfeld" , Greg Kroah-Hartman , Alexey Gladkov , x86@kernel.org, linux-kernel@vger.kernel.org, linux-fsdevel@vger.kernel.org, linux-mm@kvack.org, linux-trace-kernel@vger.kernel.org, linux-perf-users@vger.kernel.org, Dinglan Peng , Pedro Fonseca , Jim Huang , Huichun Feng Subject: [PATCH v5 07/17] mm/khugepaged: Break COW PTE before scanning pte Date: Fri, 14 Apr 2023 22:23:31 +0800 Message-Id: <20230414142341.354556-8-shiyn.lin@gmail.com> X-Mailer: git-send-email 2.34.1 In-Reply-To: <20230414142341.354556-1-shiyn.lin@gmail.com> References: <20230414142341.354556-1-shiyn.lin@gmail.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-trace-kernel@vger.kernel.org We should not allow THP to collapse COW-ed PTE. So, break COW PTE before collapse_pte_mapped_thp() collapse to THP. Also, break COW PTE before khugepaged_scan_pmd() scan PTE. Signed-off-by: Chih-En Lin --- include/trace/events/huge_memory.h | 1 + mm/khugepaged.c | 35 +++++++++++++++++++++++++++++- 2 files changed, 35 insertions(+), 1 deletion(-) diff --git a/include/trace/events/huge_memory.h b/include/trace/events/huge_memory.h index 3e6fb05852f9..5f2c39f61521 100644 --- a/include/trace/events/huge_memory.h +++ b/include/trace/events/huge_memory.h @@ -13,6 +13,7 @@ EM( SCAN_PMD_NULL, "pmd_null") \ EM( SCAN_PMD_NONE, "pmd_none") \ EM( SCAN_PMD_MAPPED, "page_pmd_mapped") \ + EM( SCAN_COW_PTE, "cowed_pte") \ EM( SCAN_EXCEED_NONE_PTE, "exceed_none_pte") \ EM( SCAN_EXCEED_SWAP_PTE, "exceed_swap_pte") \ EM( SCAN_EXCEED_SHARED_PTE, "exceed_shared_pte") \ diff --git a/mm/khugepaged.c b/mm/khugepaged.c index 92e6f56a932d..3020fcb53691 100644 --- a/mm/khugepaged.c +++ b/mm/khugepaged.c @@ -31,6 +31,7 @@ enum scan_result { SCAN_PMD_NULL, SCAN_PMD_NONE, SCAN_PMD_MAPPED, + SCAN_COW_PTE, SCAN_EXCEED_NONE_PTE, SCAN_EXCEED_SWAP_PTE, SCAN_EXCEED_SHARED_PTE, @@ -886,7 +887,7 @@ static int find_pmd_or_thp_or_none(struct mm_struct *mm, return SCAN_PMD_MAPPED; if (pmd_devmap(pmde)) return SCAN_PMD_NULL; - if (pmd_bad(pmde)) + if (pmd_write(pmde) && pmd_bad(pmde)) return SCAN_PMD_NULL; return SCAN_SUCCEED; } @@ -937,6 +938,8 @@ static int __collapse_huge_page_swapin(struct mm_struct *mm, pte_unmap(vmf.pte); continue; } + if (break_cow_pte(vma, pmd, address)) + return SCAN_COW_PTE; ret = do_swap_page(&vmf); /* @@ -1049,6 +1052,9 @@ static int collapse_huge_page(struct mm_struct *mm, unsigned long address, if (result != SCAN_SUCCEED) goto out_up_write; + /* We should already handled COW-ed PTE. */ + VM_WARN_ON(test_bit(MMF_COW_PTE, &mm->flags) && !pmd_write(*pmd)); + anon_vma_lock_write(vma->anon_vma); mmu_notifier_range_init(&range, MMU_NOTIFY_CLEAR, 0, mm, address, @@ -1159,6 +1165,13 @@ static int hpage_collapse_scan_pmd(struct mm_struct *mm, memset(cc->node_load, 0, sizeof(cc->node_load)); nodes_clear(cc->alloc_nmask); + + /* Break COW PTE before we collapse the pages. 
diff --git a/include/trace/events/huge_memory.h b/include/trace/events/huge_memory.h
index 3e6fb05852f9..5f2c39f61521 100644
--- a/include/trace/events/huge_memory.h
+++ b/include/trace/events/huge_memory.h
@@ -13,6 +13,7 @@
 	EM( SCAN_PMD_NULL,		"pmd_null")			\
 	EM( SCAN_PMD_NONE,		"pmd_none")			\
 	EM( SCAN_PMD_MAPPED,		"page_pmd_mapped")		\
+	EM( SCAN_COW_PTE,		"cowed_pte")			\
 	EM( SCAN_EXCEED_NONE_PTE,	"exceed_none_pte")		\
 	EM( SCAN_EXCEED_SWAP_PTE,	"exceed_swap_pte")		\
 	EM( SCAN_EXCEED_SHARED_PTE,	"exceed_shared_pte")		\
diff --git a/mm/khugepaged.c b/mm/khugepaged.c
index 92e6f56a932d..3020fcb53691 100644
--- a/mm/khugepaged.c
+++ b/mm/khugepaged.c
@@ -31,6 +31,7 @@ enum scan_result {
 	SCAN_PMD_NULL,
 	SCAN_PMD_NONE,
 	SCAN_PMD_MAPPED,
+	SCAN_COW_PTE,
 	SCAN_EXCEED_NONE_PTE,
 	SCAN_EXCEED_SWAP_PTE,
 	SCAN_EXCEED_SHARED_PTE,
@@ -886,7 +887,7 @@ static int find_pmd_or_thp_or_none(struct mm_struct *mm,
 		return SCAN_PMD_MAPPED;
 	if (pmd_devmap(pmde))
 		return SCAN_PMD_NULL;
-	if (pmd_bad(pmde))
+	if (pmd_write(pmde) && pmd_bad(pmde))
 		return SCAN_PMD_NULL;
 	return SCAN_SUCCEED;
 }
@@ -937,6 +938,8 @@ static int __collapse_huge_page_swapin(struct mm_struct *mm,
 			pte_unmap(vmf.pte);
 			continue;
 		}
+		if (break_cow_pte(vma, pmd, address))
+			return SCAN_COW_PTE;
 		ret = do_swap_page(&vmf);
 
 		/*
@@ -1049,6 +1052,9 @@ static int collapse_huge_page(struct mm_struct *mm, unsigned long address,
 	if (result != SCAN_SUCCEED)
 		goto out_up_write;
 
+	/* We should have already handled the COW-ed PTE. */
+	VM_WARN_ON(test_bit(MMF_COW_PTE, &mm->flags) && !pmd_write(*pmd));
+
 	anon_vma_lock_write(vma->anon_vma);
 
 	mmu_notifier_range_init(&range, MMU_NOTIFY_CLEAR, 0, mm, address,
@@ -1159,6 +1165,13 @@ static int hpage_collapse_scan_pmd(struct mm_struct *mm,
 
 	memset(cc->node_load, 0, sizeof(cc->node_load));
 	nodes_clear(cc->alloc_nmask);
+
+	/* Break COW PTE before we collapse the pages. */
+	if (break_cow_pte(vma, pmd, address)) {
+		result = SCAN_COW_PTE;
+		goto out;
+	}
+
 	pte = pte_offset_map_lock(mm, pmd, address, &ptl);
 	for (_address = address, _pte = pte; _pte < pte + HPAGE_PMD_NR;
 	     _pte++, _address += PAGE_SIZE) {
@@ -1217,6 +1230,10 @@ static int hpage_collapse_scan_pmd(struct mm_struct *mm,
 			goto out_unmap;
 		}
 
+		/*
+		 * If we only triggered the COW PTE break, the page is
+		 * usually still in a COW mapping and may still be shared.
+		 */
 		if (page_mapcount(page) > 1) {
 			++shared;
 			if (cc->is_khugepaged &&
@@ -1512,6 +1529,11 @@ int collapse_pte_mapped_thp(struct mm_struct *mm, unsigned long addr,
 		goto drop_hpage;
 	}
 
+	/* We shouldn't let a COW-ed PTE collapse. */
+	if (break_cow_pte(vma, pmd, haddr))
+		goto drop_hpage;
+	VM_WARN_ON(test_bit(MMF_COW_PTE, &mm->flags) && !pmd_write(*pmd));
+
 	/*
 	 * We need to lock the mapping so that from here on, only GUP-fast and
 	 * hardware page walks can access the parts of the page tables that
@@ -1717,6 +1739,11 @@ static int retract_page_tables(struct address_space *mapping, pgoff_t pgoff,
 			result = SCAN_PTE_UFFD_WP;
 			goto unlock_next;
 		}
+		if (test_bit(MMF_COW_PTE, &mm->flags) &&
+		    !pmd_write(*pmd)) {
+			result = SCAN_COW_PTE;
+			goto unlock_next;
+		}
 		collapse_and_free_pmd(mm, vma, addr, pmd);
 		if (!cc->is_khugepaged && is_target)
 			result = set_huge_pmd(vma, addr, pmd, hpage);
@@ -2154,6 +2181,11 @@ static int hpage_collapse_scan_file(struct mm_struct *mm, unsigned long addr,
 	swap = 0;
 	memset(cc->node_load, 0, sizeof(cc->node_load));
 	nodes_clear(cc->alloc_nmask);
+	if (break_cow_pte(find_vma(mm, addr), NULL, addr)) {
+		result = SCAN_COW_PTE;
+		goto out;
+	}
+
 	rcu_read_lock();
 	xas_for_each(&xas, page, start + HPAGE_PMD_NR - 1) {
 		if (xas_retry(&xas, page))
@@ -2224,6 +2256,7 @@ static int hpage_collapse_scan_file(struct mm_struct *mm, unsigned long addr,
 	}
 	rcu_read_unlock();
 
+out:
 	if (result == SCAN_SUCCEED) {
 		if (cc->is_khugepaged &&
 		    present < HPAGE_PMD_NR - khugepaged_max_ptes_none) {