From patchwork Thu Dec 8 11:41:37 2022
X-Patchwork-Submitter: David Hildenbrand
X-Patchwork-Id: 13068308
From: David Hildenbrand
To: linux-kernel@vger.kernel.org
Cc: linux-mm@kvack.org, David Hildenbrand, Ives van Hoorne, Peter Xu,
    stable@vger.kernel.org, Andrew Morton, Hugh Dickins, Alistair Popple,
    Mike Rapoport, Nadav Amit, Andrea Arcangeli
Subject: [PATCH v1] mm/userfaultfd: enable writenotify while userfaultfd-wp is enabled for a VMA
Date: Thu, 8 Dec 2022 12:41:37 +0100
Message-Id: <20221208114137.35035-1-david@redhat.com>

Currently, we don't enable writenotify when enabling userfaultfd-wp on a
shared writable mapping (for now only shmem and hugetlb). The consequence
is that vma->vm_page_prot will still include write permissions, and will
be used as the default for all PTEs that get remapped (e.g., mprotect(),
NUMA hinting, page migration, ...).

So far, vma->vm_page_prot is assumed to be a safe default, meaning that
we only add permissions (e.g., mkwrite) but never remove permissions
(e.g., wrprotect). For example, when enabling softdirty tracking, we
enable writenotify. With uffd-wp on shared mappings, that assumption no
longer holds. More details on vma->vm_page_prot semantics were
summarized in [1].

This is problematic for uffd-wp: we'd have to manually check for uffd-wp
PTEs/PMDs and manually write-protect them, which is error prone. Any code
that uses vma->vm_page_prot to set PTE permissions is prone to such
issues: primarily pte_modify() and mk_pte().
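(Not part of the patch, for context: a simplified sketch of the
writenotify machinery this change builds on. vma_set_page_prot()
recomputes vma->vm_page_prot and, whenever vma_wants_writenotify()
reports that write faults are wanted, derives the default protection as
if the mapping were private, i.e. without the write bit. This is an
illustration only; the real mm/mmap.c code uses vm_pgprot_modify() and
WRITE_ONCE().)

/* Illustrative only; simplified from mm/mmap.c, not the literal code. */
static void vma_set_page_prot_sketch(struct vm_area_struct *vma)
{
	unsigned long vm_flags = vma->vm_flags;
	pgprot_t prot = vm_get_page_prot(vm_flags);

	if (vma_wants_writenotify(vma, prot)) {
		/* Drop VM_SHARED so the default protection lacks the write bit. */
		vm_flags &= ~VM_SHARED;
		prot = vm_get_page_prot(vm_flags);
	}
	vma->vm_page_prot = prot;
}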
Instead, let's enable writenotify such that PTEs/PMDs/... will be mapped
write-protected as default and we will only allow selected PTEs that are
definitely safe to be mapped without write-protection (see
can_change_pte_writable()) to be writable. In the future, we might want
to enable write-bit recovery (e.g., can_change_pte_writable()) at more
locations, for example, also when removing uffd-wp protection.

This fixes two known cases:

(a) remove_migration_pte() mapping uffd-wp'ed PTEs writable, resulting
    in uffd-wp not triggering on write access.
(b) do_numa_page() / do_huge_pmd_numa_page() mapping uffd-wp'ed
    PTEs/PMDs writable, resulting in uffd-wp not triggering on write
    access.

Note that do_numa_page() / do_huge_pmd_numa_page() can be reached even
without NUMA hinting (which currently doesn't seem to be applicable to
shmem), for example, by using uffd-wp with a PROT_WRITE shmem VMA. On
such a VMA, userfaultfd-wp is currently non-functional.

Note that when enabling userfaultfd-wp, there is no need to walk page
tables to enforce the new default protection for the PTEs: we know that
they cannot be uffd-wp'ed yet, because that can only happen after
enabling uffd-wp for the VMA in general.

Also note that this makes mprotect() on ranges with uffd-wp'ed PTEs not
accidentally set the write bit, which would result in uffd-wp not
triggering on later write access. This commit makes uffd-wp on shmem
behave just like uffd-wp on anonymous memory (iow, less special) in that
regard, even though mixing mprotect() with uffd-wp is controversial.

[1] https://lkml.kernel.org/r/92173bad-caa3-6b43-9d1e-9a471fdbc184@redhat.com

Reported-by: Ives van Hoorne
Debugged-by: Peter Xu
Fixes: b1f9e876862d ("mm/uffd: enable write protection for shmem & hugetlbfs")
Cc: stable@vger.kernel.org
Cc: Andrew Morton
Cc: Hugh Dickins
Cc: Alistair Popple
Cc: Mike Rapoport
Cc: Nadav Amit
Cc: Andrea Arcangeli
Signed-off-by: David Hildenbrand
Acked-by: Peter Xu
---
As discussed in [2], this is supposed to replace the fix by Peter:
    [PATCH v3 1/2] mm/migrate: Fix read-only page got writable when recover pte

This survives vm/selftests and my reproducers:
* migrating pages that are uffd-wp'ed using mbind() on a machine with
  2 NUMA nodes
* Using a PROT_WRITE mapping with uffd-wp
* Using a PROT_READ|PROT_WRITE mapping with uffd-wp'ed pages and
  mprotect()'ing it PROT_WRITE
* Using a PROT_READ|PROT_WRITE mapping with uffd-wp'ed pages and
  temporarily mprotect()'ing it PROT_READ

uffd-wp properly triggers in all cases. On v6.1-rc8, all my reproducers
fail.

It would be good to get some more testing feedback and review.

[2] https://lkml.kernel.org/r/20221202122748.113774-1-david@redhat.com
---
 fs/userfaultfd.c | 28 ++++++++++++++++++++++------
 mm/mmap.c        |  4 ++++
 2 files changed, 26 insertions(+), 6 deletions(-)

base-commit: 8ed710da2873c2aeb3bb805864a699affaf1d03b

diff --git a/fs/userfaultfd.c b/fs/userfaultfd.c
index 98ac37e34e3d..fb0733f2e623 100644
--- a/fs/userfaultfd.c
+++ b/fs/userfaultfd.c
@@ -108,6 +108,21 @@ static bool userfaultfd_is_initialized(struct userfaultfd_ctx *ctx)
 	return ctx->features & UFFD_FEATURE_INITIALIZED;
 }
 
+static void userfaultfd_set_vm_flags(struct vm_area_struct *vma,
+				     vm_flags_t flags)
+{
+	const bool uffd_wp = !!((vma->vm_flags | flags) & VM_UFFD_WP);
+
+	vma->vm_flags = flags;
+	/*
+	 * For shared mappings, we want to enable writenotify while
+	 * userfaultfd-wp is enabled (see vma_wants_writenotify()). We'll simply
+	 * recalculate vma->vm_page_prot whenever userfaultfd-wp is involved.
+	 */
+	if ((vma->vm_flags & VM_SHARED) && uffd_wp)
+		vma_set_page_prot(vma);
+}
+
 static int userfaultfd_wake_function(wait_queue_entry_t *wq, unsigned mode,
 				     int wake_flags, void *key)
 {
@@ -618,7 +633,8 @@ static void userfaultfd_event_wait_completion(struct userfaultfd_ctx *ctx,
 		for_each_vma(vmi, vma) {
 			if (vma->vm_userfaultfd_ctx.ctx == release_new_ctx) {
 				vma->vm_userfaultfd_ctx = NULL_VM_UFFD_CTX;
-				vma->vm_flags &= ~__VM_UFFD_FLAGS;
+				userfaultfd_set_vm_flags(vma,
+							 vma->vm_flags & ~__VM_UFFD_FLAGS);
 			}
 		}
 		mmap_write_unlock(mm);
@@ -652,7 +668,7 @@ int dup_userfaultfd(struct vm_area_struct *vma, struct list_head *fcs)
 	octx = vma->vm_userfaultfd_ctx.ctx;
 	if (!octx || !(octx->features & UFFD_FEATURE_EVENT_FORK)) {
 		vma->vm_userfaultfd_ctx = NULL_VM_UFFD_CTX;
-		vma->vm_flags &= ~__VM_UFFD_FLAGS;
+		userfaultfd_set_vm_flags(vma, vma->vm_flags & ~__VM_UFFD_FLAGS);
 		return 0;
 	}
 
@@ -733,7 +749,7 @@ void mremap_userfaultfd_prep(struct vm_area_struct *vma,
 	} else {
 		/* Drop uffd context if remap feature not enabled */
 		vma->vm_userfaultfd_ctx = NULL_VM_UFFD_CTX;
-		vma->vm_flags &= ~__VM_UFFD_FLAGS;
+		userfaultfd_set_vm_flags(vma, vma->vm_flags & ~__VM_UFFD_FLAGS);
 	}
 }
 
@@ -895,7 +911,7 @@ static int userfaultfd_release(struct inode *inode, struct file *file)
 			prev = vma;
 		}
 
-		vma->vm_flags = new_flags;
+		userfaultfd_set_vm_flags(vma, new_flags);
 		vma->vm_userfaultfd_ctx = NULL_VM_UFFD_CTX;
 	}
 	mmap_write_unlock(mm);
@@ -1463,7 +1479,7 @@ static int userfaultfd_register(struct userfaultfd_ctx *ctx,
 		 * the next vma was merged into the current one and
 		 * the current one has not been updated yet.
 		 */
-		vma->vm_flags = new_flags;
+		userfaultfd_set_vm_flags(vma, new_flags);
 		vma->vm_userfaultfd_ctx.ctx = ctx;
 
 		if (is_vm_hugetlb_page(vma) && uffd_disable_huge_pmd_share(vma))
@@ -1651,7 +1667,7 @@ static int userfaultfd_unregister(struct userfaultfd_ctx *ctx,
 		 * the next vma was merged into the current one and
 		 * the current one has not been updated yet.
 		 */
-		vma->vm_flags = new_flags;
+		userfaultfd_set_vm_flags(vma, new_flags);
 		vma->vm_userfaultfd_ctx = NULL_VM_UFFD_CTX;
 
 	skip:
diff --git a/mm/mmap.c b/mm/mmap.c
index a5eb2f175da0..6033d20198b0 100644
--- a/mm/mmap.c
+++ b/mm/mmap.c
@@ -1525,6 +1525,10 @@ int vma_wants_writenotify(struct vm_area_struct *vma, pgprot_t vm_page_prot)
 	if (vma_soft_dirty_enabled(vma) && !is_vm_hugetlb_page(vma))
 		return 1;
 
+	/* Do we need write faults for uffd-wp tracking? */
+	if (userfaultfd_wp(vma))
+		return 1;
+
 	/* Specialty mapping? */
 	if (vm_flags & VM_PFNMAP)
 		return 0;
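Not part of the patch, but for anyone who wants to recreate the scenarios
listed above, here is a minimal sketch of the uffd-wp setup they rely on:
write-protecting a page of a writable MAP_SHARED shmem mapping. It assumes
a kernel with userfaultfd-wp support for shmem (v5.19 or later) and
permission to use userfaultfd(2); the event-reader thread that would
observe the write-protect fault is omitted, and the memfd name is
arbitrary.

#define _GNU_SOURCE
#include <fcntl.h>
#include <linux/userfaultfd.h>
#include <stdio.h>
#include <sys/ioctl.h>
#include <sys/mman.h>
#include <sys/syscall.h>
#include <unistd.h>

int main(void)
{
	long page = sysconf(_SC_PAGESIZE);

	/* Open a userfaultfd and request the write-protect feature. */
	int uffd = syscall(__NR_userfaultfd, O_CLOEXEC | O_NONBLOCK);
	if (uffd < 0) { perror("userfaultfd"); return 1; }
	struct uffdio_api api = {
		.api = UFFD_API,
		.features = UFFD_FEATURE_PAGEFAULT_FLAG_WP,
	};
	if (ioctl(uffd, UFFDIO_API, &api)) { perror("UFFDIO_API"); return 1; }

	/* Writable MAP_SHARED shmem mapping, as in the reproducers above. */
	int fd = memfd_create("uffd-wp-demo", 0);	/* name is arbitrary */
	if (fd < 0 || ftruncate(fd, page)) { perror("memfd_create"); return 1; }
	char *addr = mmap(NULL, page, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
	if (addr == MAP_FAILED) { perror("mmap"); return 1; }
	addr[0] = 1;	/* touch the page so a PTE exists to write-protect */

	/* Register the range for uffd-wp and write-protect it. */
	struct uffdio_register reg = {
		.range = { .start = (unsigned long)addr, .len = page },
		.mode = UFFDIO_REGISTER_MODE_WP,
	};
	if (ioctl(uffd, UFFDIO_REGISTER, &reg)) { perror("UFFDIO_REGISTER"); return 1; }
	struct uffdio_writeprotect wp = {
		.range = { .start = (unsigned long)addr, .len = page },
		.mode = UFFDIO_WRITEPROTECT_MODE_WP,
	};
	if (ioctl(uffd, UFFDIO_WRITEPROTECT, &wp)) { perror("UFFDIO_WRITEPROTECT"); return 1; }

	/*
	 * From here on, a write to addr[0] should raise a UFFD_EVENT_PAGEFAULT
	 * with UFFD_PAGEFAULT_FLAG_WP on the uffd (read by a separate thread,
	 * omitted here). The bug fixed by this patch is that paths remapping
	 * the PTE from vma->vm_page_prot (NUMA hinting, migration, mprotect())
	 * could map it writable again, so that event never arrives.
	 */
	printf("uffd-wp armed on %p\n", (void *)addr);
	return 0;
}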