From patchwork Thu May 27 20:21:22 2021
X-Patchwork-Submitter: Peter Xu
X-Patchwork-Id: 12285263
From: Peter Xu <peterx@redhat.com>
To: linux-mm@kvack.org, linux-kernel@vger.kernel.org
Cc: Andrea Arcangeli, "Kirill A. Shutemov", Axel Rasmussen, Nadav Amit,
    Hugh Dickins, Jerome Glisse, Jason Gunthorpe, peterx@redhat.com,
    Andrew Morton, Miaohe Lin, Mike Rapoport, Matthew Wilcox, Mike Kravetz
Subject: [PATCH v3 06/27] shmem/userfaultfd: Handle uffd-wp special pte in page fault handler
Date: Thu, 27 May 2021 16:21:22 -0400
Message-Id: <20210527202122.30739-1-peterx@redhat.com>
In-Reply-To: <20210527201927.29586-1-peterx@redhat.com>
References: <20210527201927.29586-1-peterx@redhat.com>

File-backed memory is prone to being unmapped or swapped out, so its ptes
are always unstable.  This can cause the userfaultfd-wp information to be
lost when such memory (for example, shmem) is unmapped or swapped out.  To
keep this information persistent, we will start to use the newly
introduced swap-like special ptes in place of a none pte when those ptes
are removed.

Prepare for this by handling such a special pte in the general page fault
handler, before later patches start installing it.

Handling a page fault on this special pte is similar to handling a
missing fault, but it must happen after the pte-missing logic, since the
special pte is designed as a swap-like pte.  Meanwhile it must be handled
before do_swap_page(), so that the swap core logic won't be confused by
seeing such an illegal swap pte.

This is a slow path of uffd-wp handling, because unmapping wr-protected
shmem ptes should be rare.  So far it should only trigger under two
conditions:

  (1) When punching holes in shmem_fallocate(): there is a pre-unmap
      optimization before the page is evicted, which leaves unmapped
      shmem ptes where wr-protected pages were mapped.

  (2) When shmem pages are swapped out.

Because of this, the page fault handling is simplified too: the
wr-protect message is not sent at the first page fault; instead, the page
is installed read-only, so the message won't be generated until the next
write, which will trigger the do_wp_page() path of general uffd-wp
handling.

Disable fault-around for all uffd-wp registered ranges for extra safety,
and clean the code up a bit now that the MINOR fault mode has been
introduced.

Signed-off-by: Peter Xu <peterx@redhat.com>
---
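A minimal userspace sketch of the flow this series enables (not part of
the patch): it registers a shmem page for uffd-wp and write-protects it.
It assumes a kernel with the whole series applied, so that
UFFDIO_REGISTER_MODE_WP is accepted on a shmem mapping; the ioctls and
struct names are the real userfaultfd uapi, but the program itself is
only illustrative and omits error handling.

#define _GNU_SOURCE
#include <fcntl.h>
#include <linux/userfaultfd.h>
#include <stdint.h>
#include <sys/ioctl.h>
#include <sys/mman.h>
#include <sys/syscall.h>
#include <unistd.h>

int main(void)
{
	long psize = sysconf(_SC_PAGESIZE);
	int uffd = syscall(__NR_userfaultfd, O_CLOEXEC | O_NONBLOCK);

	struct uffdio_api api = { .api = UFFD_API };
	ioctl(uffd, UFFDIO_API, &api);

	/* A memfd gives us a shmem-backed file we can also punch holes in. */
	int fd = memfd_create("uffd-wp-test", 0);
	ftruncate(fd, psize);
	char *mem = mmap(NULL, psize, PROT_READ | PROT_WRITE,
			 MAP_SHARED, fd, 0);
	mem[0] = 1;		/* populate the shmem page */

	/* Register the range in write-protect mode... */
	struct uffdio_register reg = {
		.range = { .start = (uintptr_t)mem, .len = psize },
		.mode  = UFFDIO_REGISTER_MODE_WP,
	};
	ioctl(uffd, UFFDIO_REGISTER, &reg);

	/* ...and wr-protect the page. */
	struct uffdio_writeprotect wp = {
		.range = { .start = (uintptr_t)mem, .len = psize },
		.mode  = UFFDIO_WRITEPROTECT_MODE_WP,
	};
	ioctl(uffd, UFFDIO_WRITEPROTECT, &wp);

	/*
	 * From here on, the wr-protect state must survive the pte being
	 * zapped (hole punch, swap out) -- which is what the special pte
	 * introduced by this series provides for shmem.
	 */
	return 0;
}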
 include/linux/userfaultfd_k.h | 12 +++++
 mm/memory.c                   | 88 +++++++++++++++++++++++++++++++----
 2 files changed, 90 insertions(+), 10 deletions(-)

diff --git a/include/linux/userfaultfd_k.h b/include/linux/userfaultfd_k.h
index 93f932b53a71..ca3f794d07e9 100644
--- a/include/linux/userfaultfd_k.h
+++ b/include/linux/userfaultfd_k.h
@@ -94,6 +94,18 @@ static inline bool uffd_disable_huge_pmd_share(struct vm_area_struct *vma)
 	return vma->vm_flags & (VM_UFFD_WP | VM_UFFD_MINOR);
 }
 
+/*
+ * Don't do fault around for either WP or MINOR registered uffd range.  For
+ * MINOR registered range, fault around will be a total disaster and ptes can
+ * be installed without notifications; for WP it should mostly be fine as long
+ * as the fault around checks for pte_none() before the installation, however
+ * to be super safe we just forbid it.
+ */
+static inline bool uffd_disable_fault_around(struct vm_area_struct *vma)
+{
+	return vma->vm_flags & (VM_UFFD_WP | VM_UFFD_MINOR);
+}
+
 static inline bool userfaultfd_missing(struct vm_area_struct *vma)
 {
 	return vma->vm_flags & VM_UFFD_MISSING;
diff --git a/mm/memory.c b/mm/memory.c
index 2b24af4616df..45a2f71e447a 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -3917,6 +3917,7 @@ vm_fault_t do_set_pmd(struct vm_fault *vmf, struct page *page)
 void do_set_pte(struct vm_fault *vmf, struct page *page, unsigned long addr)
 {
 	struct vm_area_struct *vma = vmf->vma;
+	bool uffd_wp = pte_swp_uffd_wp_special(vmf->orig_pte);
 	bool write = vmf->flags & FAULT_FLAG_WRITE;
 	bool prefault = vmf->address != addr;
 	pte_t entry;
@@ -3929,6 +3930,8 @@ void do_set_pte(struct vm_fault *vmf, struct page *page, unsigned long addr)
 
 	if (write)
 		entry = maybe_mkwrite(pte_mkdirty(entry), vma);
+	if (unlikely(uffd_wp))
+		entry = pte_mkuffd_wp(pte_wrprotect(entry));
 	/* copy-on-write page */
 	if (write && !(vma->vm_flags & VM_SHARED)) {
 		inc_mm_counter_fast(vma->vm_mm, MM_ANONPAGES);
@@ -3996,8 +3999,12 @@ vm_fault_t finish_fault(struct vm_fault *vmf)
 	vmf->pte = pte_offset_map_lock(vma->vm_mm, vmf->pmd,
 				       vmf->address, &vmf->ptl);
 	ret = 0;
-	/* Re-check under ptl */
-	if (likely(pte_none(*vmf->pte)))
+
+	/*
+	 * Re-check under ptl.  Note: this will cover both none pte and
+	 * uffd-wp-special swap pte
+	 */
+	if (likely(pte_same(*vmf->pte, vmf->orig_pte)))
 		do_set_pte(vmf, page, vmf->address);
 	else
 		ret = VM_FAULT_NOPAGE;
@@ -4101,9 +4108,21 @@ static vm_fault_t do_fault_around(struct vm_fault *vmf)
 	return vmf->vma->vm_ops->map_pages(vmf, start_pgoff, end_pgoff);
 }
 
+/* Return true if we should do read fault-around, false otherwise */
+static inline bool should_fault_around(struct vm_fault *vmf)
+{
+	/* No ->map_pages?  No way to fault around... */
+	if (!vmf->vma->vm_ops->map_pages)
+		return false;
+
+	if (uffd_disable_fault_around(vmf->vma))
+		return false;
+
+	return fault_around_bytes >> PAGE_SHIFT > 1;
+}
+
 static vm_fault_t do_read_fault(struct vm_fault *vmf)
 {
-	struct vm_area_struct *vma = vmf->vma;
 	vm_fault_t ret = 0;
 
 	/*
@@ -4111,12 +4130,10 @@ static vm_fault_t do_read_fault(struct vm_fault *vmf)
 	 * if page by the offset is not ready to be mapped (cold cache or
 	 * something).
 	 */
-	if (vma->vm_ops->map_pages && fault_around_bytes >> PAGE_SHIFT > 1) {
-		if (likely(!userfaultfd_minor(vmf->vma))) {
-			ret = do_fault_around(vmf);
-			if (ret)
-				return ret;
-		}
+	if (should_fault_around(vmf)) {
+		ret = do_fault_around(vmf);
+		if (ret)
+			return ret;
 	}
 
 	ret = __do_fault(vmf);
@@ -4435,6 +4452,57 @@ static vm_fault_t wp_huge_pud(struct vm_fault *vmf, pud_t orig_pud)
 	return VM_FAULT_FALLBACK;
 }
 
+static vm_fault_t uffd_wp_clear_special(struct vm_fault *vmf)
+{
+	vmf->pte = pte_offset_map_lock(vmf->vma->vm_mm, vmf->pmd,
+				       vmf->address, &vmf->ptl);
+	/*
+	 * Be careful so that we will only recover a special uffd-wp pte into a
+	 * none pte.  Otherwise it means the pte could have changed, so retry.
+	 */
+	if (pte_swp_uffd_wp_special(*vmf->pte))
+		pte_clear(vmf->vma->vm_mm, vmf->address, vmf->pte);
+	pte_unmap_unlock(vmf->pte, vmf->ptl);
+	return 0;
+}
+
+/*
+ * This is actually a page-missing access, but with uffd-wp special pte
+ * installed.  It means this pte was wr-protected before being unmapped.
+ */
+static vm_fault_t uffd_wp_handle_special(struct vm_fault *vmf)
+{
+	/* Careful!  vmf->pte unmapped after return */
+	if (!pte_unmap_same(vmf))
+		return 0;
+
+	/*
+	 * Just in case there're leftover special ptes even after the region
+	 * got unregistered - we can simply clear them.
+	 */
+	if (unlikely(!userfaultfd_wp(vmf->vma) || vma_is_anonymous(vmf->vma)))
+		return uffd_wp_clear_special(vmf);
+
+	/*
+	 * Here we share most code with do_fault(), in which we can identify
+	 * whether this is "none pte fault" or "uffd-wp-special fault" by
+	 * checking the vmf->orig_pte.
+	 */
+	return do_fault(vmf);
+}
+
+static vm_fault_t do_swap_pte(struct vm_fault *vmf)
+{
+	/*
+	 * We need to handle special swap ptes before handling ptes that
+	 * contain swap entries, always.
+	 */
+	if (unlikely(pte_swp_uffd_wp_special(vmf->orig_pte)))
+		return uffd_wp_handle_special(vmf);
+
+	return do_swap_page(vmf);
+}
+
 /*
  * These routines also need to handle stuff like marking pages dirty
  * and/or accessed for architectures that don't do it in hardware (most
@@ -4509,7 +4577,7 @@ static vm_fault_t handle_pte_fault(struct vm_fault *vmf)
 	}
 
 	if (!pte_present(vmf->orig_pte))
-		return do_swap_page(vmf);
+		return do_swap_pte(vmf);
 
 	if (pte_protnone(vmf->orig_pte) && vma_is_accessible(vmf->vma))
 		return do_numa_page(vmf);
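
To make the slow path described in the commit message concrete, here is a
continuation of the sketch shown above the diff (again not part of the
patch, reusing its uffd, fd, mem and psize variables).  It exercises
trigger condition (1), hole punching, and shows that the wp message is
delayed until the first write; the kernel-internal call chain named in
the comments is the one added by this patch.

	/*
	 * Trigger condition (1): punching a hole pre-unmaps the
	 * wr-protected pte.  With this series, the pte is replaced by the
	 * uffd-wp special pte instead of a none pte.
	 */
	fallocate(fd, FALLOC_FL_PUNCH_HOLE | FALLOC_FL_KEEP_SIZE, 0, psize);

	/*
	 * A read fault now goes do_swap_pte() -> uffd_wp_handle_special()
	 * -> do_fault(): the new page is installed read-only and no
	 * message is sent yet.
	 */
	(void)*(volatile char *)mem;

	/*
	 * Only a subsequent write takes the do_wp_page() path and delivers
	 * a pagefault message with UFFD_PAGEFAULT_FLAG_WP set on uffd.
	 * Note: this write blocks until a monitor thread reads the
	 * uffd_msg and resolves it, e.g. with UFFDIO_WRITEPROTECT and
	 * mode 0 (un-protect); a single-threaded program would hang here.
	 */
	mem[0] = 2;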