From patchwork Tue Jul 20 15:51:49 2021
X-Patchwork-Submitter: Peter Xu
X-Patchwork-Id: 12388723
From: Peter Xu
To: stable@vger.kernel.org, linux-mm@kvack.org, linux-kernel@vger.kernel.org
Cc: Hugh Dickins, Axel Rasmussen, Andrew Morton, peterx@redhat.com,
    Hillf Danton, Igor Raits
Subject: [PATCH stable 5.13.y/5.12.y 1/2] mm/thp: simplify copying of huge zero page pmd when fork
Date: Tue, 20 Jul 2021 11:51:49 -0400
Message-Id: <20210720155150.497148-2-peterx@redhat.com>
X-Mailer: git-send-email 2.31.1
In-Reply-To: <20210720155150.497148-1-peterx@redhat.com>
References: <796cbb7-5a1c-1ba0-dde5-479aba8224f2@google.com>
 <20210720155150.497148-1-peterx@redhat.com>

Patch series "mm/uffd: Misc fix for uffd-wp and one more test".

This series fixes some corner-case bugs for uffd-wp on either thp or
fork(), then introduces a new test using pagemap/pageout.

Patch layout:

Patch 1:   cleanup for THP; it slightly simplifies the follow-up patches
Patch 2-4: misc fixes for uffd-wp here and there; please refer to each patch
Patch 5:   add pagemap support for uffd-wp
Patch 6:   add pagemap/pageout test for uffd-wp

The newly introduced test can also verify some of the fixes in the
previous patches, as it fails without them.  However, it's not easy to
verify all the changes of patches 2-4 this way; hopefully they can still
be properly reviewed.

Note that considering the ongoing uffd-wp shmem & hugetlbfs work, patch 5
will be incomplete, as it is missing e.g. the hugetlbfs part and the
special swap pte detection.  However, that is not needed in this series,
and since that other series is still under review, this one does not
depend on it (the last test only runs with anonymous memory, not
file-backed).  So this series can be merged even before that one.

This patch (of 6):

The huge zero page is handled via a special path in copy_huge_pmd(), but
it should share most of the code with a normal thp page.
Share more code by removing that special path.  The only leftover so far
is the huge zero page refcounting (mm_get_huge_zero_page()), because
that is done separately with a global counter.

This prepares for a future patch to modify the huge pmd to be installed,
so that we don't need to duplicate the logic explicitly for the huge
zero page case too.

Link: https://lkml.kernel.org/r/20210428225030.9708-1-peterx@redhat.com
Link: https://lkml.kernel.org/r/20210428225030.9708-2-peterx@redhat.com
Signed-off-by: Peter Xu
Cc: Kirill A. Shutemov
Cc: Mike Kravetz
Cc: Mike Rapoport
Cc: Axel Rasmussen
Cc: Andrea Arcangeli
Cc: Hugh Dickins
Cc: Jerome Glisse
Cc: Alexander Viro
Cc: Brian Geffon
Cc: "Dr . David Alan Gilbert"
Cc: Joe Perches
Cc: Lokesh Gidra
Cc: Mina Almasry
Cc: Oliver Upton
Cc: Shaohua Li
Cc: Shuah Khan
Cc: Stephen Rothwell
Cc: Wang Qing
Signed-off-by: Andrew Morton
Signed-off-by: Linus Torvalds
(cherry picked from commit 5fc7a5f6fd04bc18f309d9f979b32ef7d1d0a997)
Signed-off-by: Peter Xu
---
 mm/huge_memory.c | 9 +++------
 1 file changed, 3 insertions(+), 6 deletions(-)

diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index 8857ef1543eb..4cea1e218b48 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -1088,17 +1088,13 @@ int copy_huge_pmd(struct mm_struct *dst_mm, struct mm_struct *src_mm,
 	 * a page table.
 	 */
 	if (is_huge_zero_pmd(pmd)) {
-		struct page *zero_page;
 		/*
 		 * get_huge_zero_page() will never allocate a new page here,
 		 * since we already have a zero page to copy. It just takes a
 		 * reference.
 		 */
-		zero_page = mm_get_huge_zero_page(dst_mm);
-		set_huge_zero_page(pgtable, dst_mm, vma, addr, dst_pmd,
-				zero_page);
-		ret = 0;
-		goto out_unlock;
+		mm_get_huge_zero_page(dst_mm);
+		goto out_zero_page;
 	}
 
 	src_page = pmd_page(pmd);
@@ -1122,6 +1118,7 @@ int copy_huge_pmd(struct mm_struct *dst_mm, struct mm_struct *src_mm,
 	get_page(src_page);
 	page_dup_rmap(src_page, true);
 	add_mm_counter(dst_mm, MM_ANONPAGES, HPAGE_PMD_NR);
+out_zero_page:
 	mm_inc_nr_ptes(dst_mm);
 	pgtable_trans_huge_deposit(dst_mm, dst_pmd, pgtable);
 
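Not part of the series, just an illustration: the branch being simplified
is the one taken when a process that has only read from an anonymous THP
region calls fork().  A minimal userspace sketch that should exercise it,
assuming x86_64 with THP and the huge zero page enabled (the 2M size and
the alignment trick are illustrative):

	#include <sys/mman.h>
	#include <sys/wait.h>
	#include <unistd.h>

	int main(void)
	{
		size_t sz = 2UL << 20;	/* pmd size on x86_64 */
		/* over-allocate so we can pick a pmd-aligned address */
		char *raw = mmap(NULL, 2 * sz, PROT_READ | PROT_WRITE,
				 MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
		char *p = (char *)(((unsigned long)raw + sz - 1) & ~(sz - 1));
		volatile char c;

		if (raw == MAP_FAILED || madvise(p, sz, MADV_HUGEPAGE))
			return 1;

		c = p[0];	/* read fault: can map the shared huge zero pmd */
		(void)c;

		/*
		 * fork() copies that pmd via copy_huge_pmd(); with this patch
		 * the zero-page case only takes the global reference
		 * (mm_get_huge_zero_page()) and falls through to the common
		 * out_zero_page bookkeeping instead of a separate return path.
		 */
		if (fork() == 0)
			_exit(0);
		wait(NULL);
		return 0;
	}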
From patchwork Tue Jul 20 15:51:50 2021
X-Patchwork-Submitter: Peter Xu
X-Patchwork-Id: 12388725

From: Peter Xu
To: stable@vger.kernel.org, linux-mm@kvack.org, linux-kernel@vger.kernel.org
Cc: Hugh Dickins, Axel Rasmussen, Andrew Morton, peterx@redhat.com,
    Hillf Danton, Igor Raits
Subject: [PATCH stable 5.13.y/5.12.y 2/2] mm/userfaultfd: fix uffd-wp special cases for fork()
Date: Tue, 20 Jul 2021 11:51:50 -0400
Message-Id: <20210720155150.497148-3-peterx@redhat.com>
X-Mailer: git-send-email 2.31.1
In-Reply-To: <20210720155150.497148-1-peterx@redhat.com>
References: <796cbb7-5a1c-1ba0-dde5-479aba8224f2@google.com>
 <20210720155150.497148-1-peterx@redhat.com>

We tried to do something similar in b569a1760782 ("userfaultfd: wp: drop
_PAGE_UFFD_WP properly when fork") previously, but it didn't get
everything right.  A few fixes around the code path:

1. We were referencing the VM_UFFD_WP vm_flags on the _old_ vma rather
   than the new vma.  That was overlooked in b569a1760782, so it won't
   work as expected.  Thanks to the recent rework of the fork code
   (7a4830c380f3a8b3), we can easily get the new vma now, so switch the
   checks to that.

2. Dropping the uffd-wp bit in copy_huge_pmd() could be wrong if the
   huge pmd is a migration huge pmd.  When that happens we should use
   pmd_swp_uffd_wp() instead of pmd_uffd_wp().  The fix is simply to
   handle the two cases separately.

3. We forgot to carry over the uffd-wp bit for a write migration huge
   pmd entry.  This also happens in copy_huge_pmd(), where we convert a
   write huge migration entry into a read one.

4. In copy_nonpresent_pte(), drop the uffd-wp bit if necessary for swap
   ptes.

5. In copy_present_page(), when COW is enforced during fork(), we also
   need to pass over the uffd-wp bit if VM_UFFD_WP is armed on the new
   vma and the pte to be copied has the uffd-wp bit set.

Remove the comment in copy_present_pte() about this.  It won't help much
to comment only there, and commenting everywhere would be overkill.
Let's assume the commit messages will help.
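As an aside (not part of the patch): the state these fixes are about can
be set up from userspace roughly as below -- register anonymous memory
for uffd-wp, write-protect it, then fork().  The sketch assumes a kernel
with uffd-wp support, trims error handling, and all names and sizes are
illustrative.

	#include <fcntl.h>
	#include <linux/userfaultfd.h>
	#include <stdlib.h>
	#include <string.h>
	#include <sys/ioctl.h>
	#include <sys/mman.h>
	#include <sys/syscall.h>
	#include <unistd.h>

	int main(void)
	{
		size_t len = 4096;
		long uffd = syscall(__NR_userfaultfd, O_CLOEXEC | O_NONBLOCK);
		struct uffdio_api api = {
			.api = UFFD_API,
			.features = UFFD_FEATURE_PAGEFAULT_FLAG_WP,
		};
		char *buf;

		if (uffd < 0 || ioctl(uffd, UFFDIO_API, &api))
			exit(1);

		buf = mmap(NULL, len, PROT_READ | PROT_WRITE,
			   MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
		memset(buf, 1, len);	/* populate the pte first */

		struct uffdio_register reg = {
			.range = { .start = (unsigned long)buf, .len = len },
			.mode = UFFDIO_REGISTER_MODE_WP,
		};
		struct uffdio_writeprotect wp = {
			.range = { .start = (unsigned long)buf, .len = len },
			.mode = UFFDIO_WRITEPROTECT_MODE_WP,
		};
		if (ioctl(uffd, UFFDIO_REGISTER, &reg) ||
		    ioctl(uffd, UFFDIO_WRITEPROTECT, &wp))
			exit(1);

		/*
		 * The pte now carries the uffd-wp bit.  The bugs fixed here
		 * are about whether that bit survives the pte/pmd copy that
		 * fork() performs below, and whether it is dropped when the
		 * child vma does not keep VM_UFFD_WP.
		 */
		if (fork() == 0)
			_exit(0);
		return 0;
	}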
[peterx@redhat.com: fix a few thp pmd missing uffd-wp bit]
Link: https://lkml.kernel.org/r/20210428225030.9708-4-peterx@redhat.com
Link: https://lkml.kernel.org/r/20210428225030.9708-3-peterx@redhat.com
Fixes: b569a1760782f ("userfaultfd: wp: drop _PAGE_UFFD_WP properly when fork")
Signed-off-by: Peter Xu
Cc: Jerome Glisse
Cc: Mike Rapoport
Cc: Alexander Viro
Cc: Andrea Arcangeli
Cc: Axel Rasmussen
Cc: Brian Geffon
Cc: "Dr . David Alan Gilbert"
Cc: Hugh Dickins
Cc: Joe Perches
Cc: Kirill A. Shutemov
Cc: Lokesh Gidra
Cc: Mike Kravetz
Cc: Mina Almasry
Cc: Oliver Upton
Cc: Shaohua Li
Cc: Shuah Khan
Cc: Stephen Rothwell
Cc: Wang Qing
Signed-off-by: Andrew Morton
Signed-off-by: Linus Torvalds
(cherry picked from commit 8f34f1eac3820fc2722e5159acceb22545b30b0d)
Signed-off-by: Peter Xu
---
 include/linux/huge_mm.h |  2 +-
 include/linux/swapops.h |  2 ++
 mm/huge_memory.c        | 27 ++++++++++++++-------------
 mm/memory.c             | 25 +++++++++++++------------
 4 files changed, 30 insertions(+), 26 deletions(-)

diff --git a/include/linux/huge_mm.h b/include/linux/huge_mm.h
index b4e1ebaae825..939f21b69ead 100644
--- a/include/linux/huge_mm.h
+++ b/include/linux/huge_mm.h
@@ -10,7 +10,7 @@ vm_fault_t do_huge_pmd_anonymous_page(struct vm_fault *vmf);
 int copy_huge_pmd(struct mm_struct *dst_mm, struct mm_struct *src_mm,
 		  pmd_t *dst_pmd, pmd_t *src_pmd, unsigned long addr,
-		  struct vm_area_struct *vma);
+		  struct vm_area_struct *dst_vma, struct vm_area_struct *src_vma);
 void huge_pmd_set_accessed(struct vm_fault *vmf, pmd_t orig_pmd);
 int copy_huge_pud(struct mm_struct *dst_mm, struct mm_struct *src_mm,
 		  pud_t *dst_pud, pud_t *src_pud, unsigned long addr,
diff --git a/include/linux/swapops.h b/include/linux/swapops.h
index 6430a94c6981..0d429a102d41 100644
--- a/include/linux/swapops.h
+++ b/include/linux/swapops.h
@@ -265,6 +265,8 @@ static inline swp_entry_t pmd_to_swp_entry(pmd_t pmd)
 
 	if (pmd_swp_soft_dirty(pmd))
 		pmd = pmd_swp_clear_soft_dirty(pmd);
+	if (pmd_swp_uffd_wp(pmd))
+		pmd = pmd_swp_clear_uffd_wp(pmd);
 	arch_entry = __pmd_to_swp_entry(pmd);
 	return swp_entry(__swp_type(arch_entry), __swp_offset(arch_entry));
 }
diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index 4cea1e218b48..9aaf4a8ebeeb 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -1026,7 +1026,7 @@ struct page *follow_devmap_pmd(struct vm_area_struct *vma, unsigned long addr,
 
 int copy_huge_pmd(struct mm_struct *dst_mm, struct mm_struct *src_mm,
 		  pmd_t *dst_pmd, pmd_t *src_pmd, unsigned long addr,
-		  struct vm_area_struct *vma)
+		  struct vm_area_struct *dst_vma, struct vm_area_struct *src_vma)
 {
 	spinlock_t *dst_ptl, *src_ptl;
 	struct page *src_page;
@@ -1035,7 +1035,7 @@ int copy_huge_pmd(struct mm_struct *dst_mm, struct mm_struct *src_mm,
 	int ret = -ENOMEM;
 
 	/* Skip if can be re-fill on fault */
-	if (!vma_is_anonymous(vma))
+	if (!vma_is_anonymous(dst_vma))
 		return 0;
 
 	pgtable = pte_alloc_one(dst_mm);
@@ -1049,14 +1049,6 @@ int copy_huge_pmd(struct mm_struct *dst_mm, struct mm_struct *src_mm,
 	ret = -EAGAIN;
 	pmd = *src_pmd;
 
-	/*
-	 * Make sure the _PAGE_UFFD_WP bit is cleared if the new VMA
-	 * does not have the VM_UFFD_WP, which means that the uffd
-	 * fork event is not enabled.
-	 */
-	if (!(vma->vm_flags & VM_UFFD_WP))
-		pmd = pmd_clear_uffd_wp(pmd);
-
 #ifdef CONFIG_ARCH_ENABLE_THP_MIGRATION
 	if (unlikely(is_swap_pmd(pmd))) {
 		swp_entry_t entry = pmd_to_swp_entry(pmd);
@@ -1067,11 +1059,15 @@ int copy_huge_pmd(struct mm_struct *dst_mm, struct mm_struct *src_mm,
 			pmd = swp_entry_to_pmd(entry);
 			if (pmd_swp_soft_dirty(*src_pmd))
 				pmd = pmd_swp_mksoft_dirty(pmd);
+			if (pmd_swp_uffd_wp(*src_pmd))
+				pmd = pmd_swp_mkuffd_wp(pmd);
 			set_pmd_at(src_mm, addr, src_pmd, pmd);
 		}
 		add_mm_counter(dst_mm, MM_ANONPAGES, HPAGE_PMD_NR);
 		mm_inc_nr_ptes(dst_mm);
 		pgtable_trans_huge_deposit(dst_mm, dst_pmd, pgtable);
+		if (!userfaultfd_wp(dst_vma))
+			pmd = pmd_swp_clear_uffd_wp(pmd);
 		set_pmd_at(dst_mm, addr, dst_pmd, pmd);
 		ret = 0;
 		goto out_unlock;
@@ -1107,11 +1103,11 @@ int copy_huge_pmd(struct mm_struct *dst_mm, struct mm_struct *src_mm,
 	 * best effort that the pinned pages won't be replaced by another
 	 * random page during the coming copy-on-write.
 	 */
-	if (unlikely(page_needs_cow_for_dma(vma, src_page))) {
+	if (unlikely(page_needs_cow_for_dma(src_vma, src_page))) {
 		pte_free(dst_mm, pgtable);
 		spin_unlock(src_ptl);
 		spin_unlock(dst_ptl);
-		__split_huge_pmd(vma, src_pmd, addr, false, NULL);
+		__split_huge_pmd(src_vma, src_pmd, addr, false, NULL);
 		return -EAGAIN;
 	}
 
@@ -1121,8 +1117,9 @@ int copy_huge_pmd(struct mm_struct *dst_mm, struct mm_struct *src_mm,
 out_zero_page:
 	mm_inc_nr_ptes(dst_mm);
 	pgtable_trans_huge_deposit(dst_mm, dst_pmd, pgtable);
-
 	pmdp_set_wrprotect(src_mm, addr, src_pmd);
+	if (!userfaultfd_wp(dst_vma))
+		pmd = pmd_clear_uffd_wp(pmd);
 	pmd = pmd_mkold(pmd_wrprotect(pmd));
 	set_pmd_at(dst_mm, addr, dst_pmd, pmd);
@@ -1838,6 +1835,8 @@ int change_huge_pmd(struct vm_area_struct *vma, pmd_t *pmd,
 			newpmd = swp_entry_to_pmd(entry);
 			if (pmd_swp_soft_dirty(*pmd))
 				newpmd = pmd_swp_mksoft_dirty(newpmd);
+			if (pmd_swp_uffd_wp(*pmd))
+				newpmd = pmd_swp_mkuffd_wp(newpmd);
 			set_pmd_at(mm, addr, pmd, newpmd);
 		}
 		goto unlock;
@@ -3248,6 +3247,8 @@ void remove_migration_pmd(struct page_vma_mapped_walk *pvmw, struct page *new)
 		pmde = pmd_mksoft_dirty(pmde);
 	if (is_write_migration_entry(entry))
 		pmde = maybe_pmd_mkwrite(pmde, vma);
+	if (pmd_swp_uffd_wp(*pvmw->pmd))
+		pmde = pmd_wrprotect(pmd_mkuffd_wp(pmde));
 
 	flush_cache_range(vma, mmun_start, mmun_start + HPAGE_PMD_SIZE);
 	if (PageAnon(new))
diff --git a/mm/memory.c b/mm/memory.c
index b15367c285bd..f8e8c99bba73 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -708,10 +708,10 @@ struct page *vm_normal_page_pmd(struct vm_area_struct *vma, unsigned long addr,
 
 static unsigned long
 copy_nonpresent_pte(struct mm_struct *dst_mm, struct mm_struct *src_mm,
-		pte_t *dst_pte, pte_t *src_pte, struct vm_area_struct *vma,
-		unsigned long addr, int *rss)
+		pte_t *dst_pte, pte_t *src_pte, struct vm_area_struct *dst_vma,
+		struct vm_area_struct *src_vma, unsigned long addr, int *rss)
 {
-	unsigned long vm_flags = vma->vm_flags;
+	unsigned long vm_flags = dst_vma->vm_flags;
 	pte_t pte = *src_pte;
 	struct page *page;
 	swp_entry_t entry = pte_to_swp_entry(pte);
@@ -780,6 +780,8 @@ copy_nonpresent_pte(struct mm_struct *dst_mm, struct mm_struct *src_mm,
 			set_pte_at(src_mm, addr, src_pte, pte);
 		}
 	}
+	if (!userfaultfd_wp(dst_vma))
+		pte = pte_swp_clear_uffd_wp(pte);
 	set_pte_at(dst_mm, addr, dst_pte, pte);
 	return 0;
 }
@@ -845,6 +847,9 @@ copy_present_page(struct vm_area_struct *dst_vma, struct vm_area_struct *src_vma
 	/* All done, just insert the new page copy in the child */
 	pte = mk_pte(new_page, dst_vma->vm_page_prot);
 	pte = maybe_mkwrite(pte_mkdirty(pte), dst_vma);
+	if (userfaultfd_pte_wp(dst_vma, *src_pte))
+		/* Uffd-wp needs to be delivered to dest pte as well */
+		pte = pte_wrprotect(pte_mkuffd_wp(pte));
 	set_pte_at(dst_vma->vm_mm, addr, dst_pte, pte);
 	return 0;
 }
@@ -894,12 +899,7 @@ copy_present_pte(struct vm_area_struct *dst_vma, struct vm_area_struct *src_vma,
 		pte = pte_mkclean(pte);
 	pte = pte_mkold(pte);
 
-	/*
-	 * Make sure the _PAGE_UFFD_WP bit is cleared if the new VMA
-	 * does not have the VM_UFFD_WP, which means that the uffd
-	 * fork event is not enabled.
-	 */
-	if (!(vm_flags & VM_UFFD_WP))
+	if (!userfaultfd_wp(dst_vma))
 		pte = pte_clear_uffd_wp(pte);
 	set_pte_at(dst_vma->vm_mm, addr, dst_pte, pte);
 
@@ -974,7 +974,8 @@ copy_pte_range(struct vm_area_struct *dst_vma, struct vm_area_struct *src_vma,
 		if (unlikely(!pte_present(*src_pte))) {
 			entry.val = copy_nonpresent_pte(dst_mm, src_mm,
 							dst_pte, src_pte,
-							src_vma, addr, rss);
+							dst_vma, src_vma,
+							addr, rss);
 			if (entry.val)
 				break;
 			progress += 8;
@@ -1051,8 +1052,8 @@ copy_pmd_range(struct vm_area_struct *dst_vma, struct vm_area_struct *src_vma,
 			|| pmd_devmap(*src_pmd)) {
 			int err;
 			VM_BUG_ON_VMA(next-addr != HPAGE_PMD_SIZE, src_vma);
-			err = copy_huge_pmd(dst_mm, src_mm,
-					    dst_pmd, src_pmd, addr, src_vma);
+			err = copy_huge_pmd(dst_mm, src_mm, dst_pmd, src_pmd,
+					    addr, dst_vma, src_vma);
 			if (err == -ENOMEM)
 				return -ENOMEM;
 			if (!err)