From patchwork Mon May 3 18:07:28 2021
X-Patchwork-Submitter: Axel Rasmussen
X-Patchwork-Id: 12236705
Date: Mon, 3 May 2021 11:07:28 -0700
Message-Id: <20210503180737.2487560-2-axelrasmussen@google.com>
In-Reply-To: <20210503180737.2487560-1-axelrasmussen@google.com>
Subject: [PATCH v6 01/10] userfaultfd/hugetlbfs: avoid including userfaultfd_k.h in hugetlb.h
From: Axel Rasmussen
To: Alexander Viro, Andrea Arcangeli, Andrew Morton, Hugh Dickins,
    Jerome Glisse, Joe Perches, Lokesh Gidra, Mike Kravetz, Mike Rapoport,
    Peter Xu, Shaohua Li, Shuah Khan, Stephen Rothwell, Wang Qing
Cc: linux-api@vger.kernel.org, linux-fsdevel@vger.kernel.org,
    linux-kernel@vger.kernel.org, linux-kselftest@vger.kernel.org,
    linux-mm@kvack.org, Axel Rasmussen, Brian Geffon,
    "Dr. David Alan Gilbert", Mina Almasry, Oliver Upton

Minimizing header file inclusion is desirable. In this case, we can do
so just by forward declaring the enumeration our signature relies upon.

Reviewed-by: Peter Xu
Acked-by: Hugh Dickins
Signed-off-by: Axel Rasmussen
---
 include/linux/hugetlb.h | 2 +-
 mm/hugetlb.c            | 1 +
 2 files changed, 2 insertions(+), 1 deletion(-)

diff --git a/include/linux/hugetlb.h b/include/linux/hugetlb.h
index b92f25ccef58..c98269e61ff6 100644
--- a/include/linux/hugetlb.h
+++ b/include/linux/hugetlb.h
@@ -11,11 +11,11 @@
 #include
 #include
 #include
-#include <linux/userfaultfd_k.h>

 struct ctl_table;
 struct user_struct;
 struct mmu_gather;
+enum mcopy_atomic_mode;

 #ifndef is_hugepd
 typedef struct { unsigned long pd; } hugepd_t;
diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index 3db405dea3dc..d2212cafd335 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -40,6 +40,7 @@
 #include
 #include
 #include
+#include <linux/userfaultfd_k.h>

 #include "internal.h"

 int hugetlb_max_hstate __read_mostly;
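The change reads more generally as a pattern: a header can name an enum
in a prototype without including the header that defines it. Below is a
minimal sketch with invented names (widget.h/widget_mcopy_atomic_pte are
hypothetical); it relies on the GNU C extension, used throughout the
kernel, that permits an incomplete enum type in declarations:

/* widget.h -- hypothetical header, mirroring the hugetlb.h change */
#ifndef _WIDGET_H
#define _WIDGET_H

struct mm_struct;	/* forward declarations instead of #include */
enum mcopy_atomic_mode;	/* incomplete enum: OK in a declaration (GNU C) */

int widget_mcopy_atomic_pte(struct mm_struct *dst_mm,
			    enum mcopy_atomic_mode mode);

#endif /* _WIDGET_H */

The .c file that implements the function still includes the defining
header (here <linux/userfaultfd_k.h>) before using the enum's values,
exactly as mm/hugetlb.c does above.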
From patchwork Mon May 3 18:07:29 2021
X-Patchwork-Submitter: Axel Rasmussen
X-Patchwork-Id: 12236707
Date: Mon, 3 May 2021 11:07:29 -0700
Message-Id: <20210503180737.2487560-3-axelrasmussen@google.com>
In-Reply-To: <20210503180737.2487560-1-axelrasmussen@google.com>
Subject: [PATCH v6 02/10] userfaultfd/shmem: combine shmem_{mcopy_atomic,mfill_zeropage}_pte
From: Axel Rasmussen
To: Alexander Viro, Andrea Arcangeli, Andrew Morton, Hugh Dickins,
    Jerome Glisse, Joe Perches, Lokesh Gidra, Mike Kravetz, Mike Rapoport,
    Peter Xu, Shaohua Li, Shuah Khan, Stephen Rothwell, Wang Qing
Cc: linux-api@vger.kernel.org, linux-fsdevel@vger.kernel.org,
    linux-kernel@vger.kernel.org, linux-kselftest@vger.kernel.org,
    linux-mm@kvack.org, Axel Rasmussen, Brian Geffon,
    "Dr. David Alan Gilbert", Mina Almasry, Oliver Upton

Previously, we did a dance where we had one calling path in
userfaultfd.c (mfill_atomic_pte), but then we split it into two in
shmem_fs.h (shmem_{mcopy_atomic,mfill_zeropage}_pte), and then rejoined
into a single shared function in shmem.c (shmem_mfill_atomic_pte).

This is all a bit overly complex. Just call the single combined shmem
function directly, allowing us to clean up various branches,
boilerplate, etc.

While we're touching this function, two other small cleanup changes:

- offset is equivalent to pgoff, so we can get rid of offset entirely.
- Split two VM_BUG_ON cases into two statements. This means the line
  number reported when the BUG is hit specifies exactly which condition
  was true.
Reviewed-by: Peter Xu
Acked-by: Hugh Dickins
Signed-off-by: Axel Rasmussen
---
 include/linux/shmem_fs.h | 19 +++++++--------
 mm/shmem.c               | 52 +++++++++++++---------------------------
 mm/userfaultfd.c         | 10 +++-----
 3 files changed, 27 insertions(+), 54 deletions(-)

diff --git a/include/linux/shmem_fs.h b/include/linux/shmem_fs.h
index d82b6f396588..a69ea4d97fdd 100644
--- a/include/linux/shmem_fs.h
+++ b/include/linux/shmem_fs.h
@@ -122,21 +122,18 @@ static inline bool shmem_file(struct file *file)
 extern bool shmem_charge(struct inode *inode, long pages);
 extern void shmem_uncharge(struct inode *inode, long pages);

+#ifdef CONFIG_USERFAULTFD
 #ifdef CONFIG_SHMEM
-extern int shmem_mcopy_atomic_pte(struct mm_struct *dst_mm, pmd_t *dst_pmd,
+extern int shmem_mfill_atomic_pte(struct mm_struct *dst_mm, pmd_t *dst_pmd,
				  struct vm_area_struct *dst_vma,
				  unsigned long dst_addr,
				  unsigned long src_addr,
+				  bool zeropage,
				  struct page **pagep);
-extern int shmem_mfill_zeropage_pte(struct mm_struct *dst_mm,
-				    pmd_t *dst_pmd,
-				    struct vm_area_struct *dst_vma,
-				    unsigned long dst_addr);
-#else
-#define shmem_mcopy_atomic_pte(dst_mm, dst_pte, dst_vma, dst_addr, \
-			       src_addr, pagep) ({ BUG(); 0; })
-#define shmem_mfill_zeropage_pte(dst_mm, dst_pmd, dst_vma, \
-				 dst_addr) ({ BUG(); 0; })
-#endif
+#else /* !CONFIG_SHMEM */
+#define shmem_mfill_atomic_pte(dst_mm, dst_pmd, dst_vma, dst_addr, \
+			       src_addr, zeropage, pagep) ({ BUG(); 0; })
+#endif /* CONFIG_SHMEM */
+#endif /* CONFIG_USERFAULTFD */

 #endif
diff --git a/mm/shmem.c b/mm/shmem.c
index 0c8b160a781f..04de845b50b3 100644
--- a/mm/shmem.c
+++ b/mm/shmem.c
@@ -2354,13 +2354,14 @@ static struct inode *shmem_get_inode(struct super_block *sb, const struct inode
	return inode;
 }

-static int shmem_mfill_atomic_pte(struct mm_struct *dst_mm,
-				  pmd_t *dst_pmd,
-				  struct vm_area_struct *dst_vma,
-				  unsigned long dst_addr,
-				  unsigned long src_addr,
-				  bool zeropage,
-				  struct page **pagep)
+#ifdef CONFIG_USERFAULTFD
+int shmem_mfill_atomic_pte(struct mm_struct *dst_mm,
+			   pmd_t *dst_pmd,
+			   struct vm_area_struct *dst_vma,
+			   unsigned long dst_addr,
+			   unsigned long src_addr,
+			   bool zeropage,
+			   struct page **pagep)
 {
	struct inode *inode = file_inode(dst_vma->vm_file);
	struct shmem_inode_info *info = SHMEM_I(inode);
@@ -2372,7 +2373,7 @@ static int shmem_mfill_atomic_pte(struct mm_struct *dst_mm,
	struct page *page;
	pte_t _dst_pte, *dst_pte;
	int ret;
-	pgoff_t offset, max_off;
+	pgoff_t max_off;

	ret = -ENOMEM;
	if (!shmem_inode_acct_block(inode, 1)) {
@@ -2393,7 +2394,7 @@ static int shmem_mfill_atomic_pte(struct mm_struct *dst_mm,
		if (!page)
			goto out_unacct_blocks;

-		if (!zeropage) {	/* mcopy_atomic */
+		if (!zeropage) {	/* COPY */
			page_kaddr = kmap_atomic(page);
			ret = copy_from_user(page_kaddr,
					     (const void __user *)src_addr,
@@ -2407,7 +2408,7 @@ static int shmem_mfill_atomic_pte(struct mm_struct *dst_mm,
				/* don't free the page */
				return -ENOENT;
			}
-		} else {		/* mfill_zeropage_atomic */
+		} else {		/* ZEROPAGE */
			clear_highpage(page);
		}
	} else {
@@ -2415,15 +2416,15 @@ static int shmem_mfill_atomic_pte(struct mm_struct *dst_mm,
		*pagep = NULL;
	}

-	VM_BUG_ON(PageLocked(page) || PageSwapBacked(page));
+	VM_BUG_ON(PageLocked(page));
+	VM_BUG_ON(PageSwapBacked(page));
	__SetPageLocked(page);
	__SetPageSwapBacked(page);
	__SetPageUptodate(page);

	ret = -EFAULT;
-	offset = linear_page_index(dst_vma, dst_addr);
	max_off = DIV_ROUND_UP(i_size_read(inode), PAGE_SIZE);
-	if (unlikely(offset >= max_off))
+	if (unlikely(pgoff >= max_off))
		goto out_release;

	ret = shmem_add_to_page_cache(page, mapping, pgoff, NULL,
@@ -2449,7 +2450,7 @@ static int shmem_mfill_atomic_pte(struct mm_struct *dst_mm,

	ret = -EFAULT;
	max_off = DIV_ROUND_UP(i_size_read(inode), PAGE_SIZE);
-	if (unlikely(offset >= max_off))
+	if (unlikely(pgoff >= max_off))
		goto out_release_unlock;

	ret = -EEXIST;
@@ -2486,28 +2487,7 @@ static int shmem_mfill_atomic_pte(struct mm_struct *dst_mm,
	shmem_inode_unacct_blocks(inode, 1);
	goto out;
 }
-
-int shmem_mcopy_atomic_pte(struct mm_struct *dst_mm,
-			   pmd_t *dst_pmd,
-			   struct vm_area_struct *dst_vma,
-			   unsigned long dst_addr,
-			   unsigned long src_addr,
-			   struct page **pagep)
-{
-	return shmem_mfill_atomic_pte(dst_mm, dst_pmd, dst_vma,
-				      dst_addr, src_addr, false, pagep);
-}
-
-int shmem_mfill_zeropage_pte(struct mm_struct *dst_mm,
-			     pmd_t *dst_pmd,
-			     struct vm_area_struct *dst_vma,
-			     unsigned long dst_addr)
-{
-	struct page *page = NULL;
-
-	return shmem_mfill_atomic_pte(dst_mm, dst_pmd, dst_vma,
-				      dst_addr, 0, true, &page);
-}
+#endif /* CONFIG_USERFAULTFD */

 #ifdef CONFIG_TMPFS
 static const struct inode_operations shmem_symlink_inode_operations;
diff --git a/mm/userfaultfd.c b/mm/userfaultfd.c
index e14b3820c6a8..242b7a04c16d 100644
--- a/mm/userfaultfd.c
+++ b/mm/userfaultfd.c
@@ -440,13 +440,9 @@ static __always_inline ssize_t mfill_atomic_pte(struct mm_struct *dst_mm,
						dst_vma, dst_addr);
	} else {
		VM_WARN_ON_ONCE(wp_copy);
-		if (!zeropage)
-			err = shmem_mcopy_atomic_pte(dst_mm, dst_pmd,
-						     dst_vma, dst_addr,
-						     src_addr, page);
-		else
-			err = shmem_mfill_zeropage_pte(dst_mm, dst_pmd,
-						       dst_vma, dst_addr);
+		err = shmem_mfill_atomic_pte(dst_mm, dst_pmd, dst_vma,
+					     dst_addr, src_addr, zeropage,
+					     page);
	}

	return err;
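The VM_BUG_ON split called out in the commit message is worth spelling
out, since it applies anywhere a BUG report's file:line is the main
debugging clue. A minimal before/after sketch of just that change:

/* Before: one reported line number covers two possible causes. */
VM_BUG_ON(PageLocked(page) || PageSwapBacked(page));

/* After: the reported line number pinpoints which check failed. */
VM_BUG_ON(PageLocked(page));
VM_BUG_ON(PageSwapBacked(page));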
From patchwork Mon May 3 18:07:30 2021
X-Patchwork-Submitter: Axel Rasmussen
X-Patchwork-Id: 12236709
Date: Mon, 3 May 2021 11:07:30 -0700
Message-Id: <20210503180737.2487560-4-axelrasmussen@google.com>
In-Reply-To: <20210503180737.2487560-1-axelrasmussen@google.com>
Subject: [PATCH v6 03/10] userfaultfd/shmem: support minor fault registration for shmem
From: Axel Rasmussen
To: Alexander Viro, Andrea Arcangeli, Andrew Morton, Hugh Dickins,
    Jerome Glisse, Joe Perches, Lokesh Gidra, Mike Kravetz, Mike Rapoport,
    Peter Xu, Shaohua Li, Shuah Khan, Stephen Rothwell, Wang Qing
Cc: linux-api@vger.kernel.org, linux-fsdevel@vger.kernel.org,
    linux-kernel@vger.kernel.org, linux-kselftest@vger.kernel.org,
    linux-mm@kvack.org, Axel Rasmussen, Brian Geffon,
    "Dr. David Alan Gilbert", Mina Almasry, Oliver Upton

This patch allows shmem-backed VMAs to be registered for minor faults.
Minor faults are appropriately relayed to userspace in the fault path,
for VMAs with the relevant flag.

This commit doesn't hook up the UFFDIO_CONTINUE ioctl for shmem-backed
minor faults, though, so userspace doesn't yet have a way to resolve
such faults. Because of this, we also don't yet advertise this as a
supported feature. That will be done in a separate commit when the
feature is fully implemented.

Acked-by: Peter Xu
Acked-by: Hugh Dickins
Signed-off-by: Axel Rasmussen
---
 fs/userfaultfd.c |  3 +--
 mm/memory.c      |  8 +++++---
 mm/shmem.c       | 12 +++++++++++-
 3 files changed, 17 insertions(+), 6 deletions(-)

diff --git a/fs/userfaultfd.c b/fs/userfaultfd.c
index 14f92285d04f..468556fb04a9 100644
--- a/fs/userfaultfd.c
+++ b/fs/userfaultfd.c
@@ -1267,8 +1267,7 @@ static inline bool vma_can_userfault(struct vm_area_struct *vma,
	}

	if (vm_flags & VM_UFFD_MINOR) {
-		/* FIXME: Add minor fault interception for shmem. */
-		if (!is_vm_hugetlb_page(vma))
+		if (!(is_vm_hugetlb_page(vma) || vma_is_shmem(vma)))
			return false;
	}

diff --git a/mm/memory.c b/mm/memory.c
index 86ba6c1f6821..9a536cfde7c8 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -3972,9 +3972,11 @@ static vm_fault_t do_read_fault(struct vm_fault *vmf)
	 * something).
	 */
	if (vma->vm_ops->map_pages && fault_around_bytes >> PAGE_SHIFT > 1) {
-		ret = do_fault_around(vmf);
-		if (ret)
-			return ret;
+		if (likely(!userfaultfd_minor(vmf->vma))) {
+			ret = do_fault_around(vmf);
+			if (ret)
+				return ret;
+		}
	}

	ret = __do_fault(vmf);
diff --git a/mm/shmem.c b/mm/shmem.c
index 04de845b50b3..e361f1d81c8d 100644
--- a/mm/shmem.c
+++ b/mm/shmem.c
@@ -1785,7 +1785,7 @@ static int shmem_swapin_page(struct inode *inode, pgoff_t index,
  * vm. If we swap it in we mark it dirty since we also free the swap
  * entry since a page cannot live in both the swap and page cache.
  *
- * vmf and fault_type are only supplied by shmem_fault:
+ * vma, vmf, and fault_type are only supplied by shmem_fault:
  * otherwise they are NULL.
  */
 static int shmem_getpage_gfp(struct inode *inode, pgoff_t index,
@@ -1820,6 +1820,16 @@ static int shmem_getpage_gfp(struct inode *inode, pgoff_t index,

	page = pagecache_get_page(mapping, index,
				  FGP_ENTRY | FGP_HEAD | FGP_LOCK, 0);
+
+	if (page && vma && userfaultfd_minor(vma)) {
+		if (!xa_is_value(page)) {
+			unlock_page(page);
+			put_page(page);
+		}
+		*fault_type = handle_userfault(vmf, VM_UFFD_MINOR);
+		return 0;
+	}
+
	if (xa_is_value(page)) {
		error = shmem_swapin_page(inode, index, &page,
					  sgp, gfp, vma, fault_type);
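From userspace, the registration this patch enables looks roughly like
the sketch below -- a hypothetical helper with error handling elided,
where addr/len is a page-aligned range inside an shmem (tmpfs or memfd)
mapping:

#include <fcntl.h>
#include <linux/userfaultfd.h>
#include <sys/ioctl.h>
#include <sys/syscall.h>
#include <unistd.h>

static int register_minor(void *addr, size_t len)
{
	int uffd = syscall(__NR_userfaultfd, O_CLOEXEC | O_NONBLOCK);
	struct uffdio_api api = { .api = UFFD_API, .features = 0 };
	struct uffdio_register reg = {
		.range = { .start = (unsigned long)addr, .len = len },
		.mode = UFFDIO_REGISTER_MODE_MINOR,
	};

	ioctl(uffd, UFFDIO_API, &api);		/* handshake first */
	ioctl(uffd, UFFDIO_REGISTER, &reg);	/* minor faults now reported */
	return uffd;
}

As the commit message notes, a fault delivered this way cannot be
resolved until the next patch hooks up UFFDIO_CONTINUE for shmem, which
is why the feature is not advertised at this point.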
From patchwork Mon May 3 18:07:31 2021
X-Patchwork-Submitter: Axel Rasmussen
X-Patchwork-Id: 12236711
Date: Mon, 3 May 2021 11:07:31 -0700
Message-Id: <20210503180737.2487560-5-axelrasmussen@google.com>
In-Reply-To: <20210503180737.2487560-1-axelrasmussen@google.com>
Subject: [PATCH v6 04/10] userfaultfd/shmem: support UFFDIO_CONTINUE for shmem
From: Axel Rasmussen
To: Alexander Viro, Andrea Arcangeli, Andrew Morton, Hugh Dickins,
    Jerome Glisse, Joe Perches, Lokesh Gidra, Mike Kravetz, Mike Rapoport,
    Peter Xu, Shaohua Li, Shuah Khan, Stephen Rothwell, Wang Qing
Cc: linux-api@vger.kernel.org, linux-fsdevel@vger.kernel.org,
    linux-kernel@vger.kernel.org, linux-kselftest@vger.kernel.org,
    linux-mm@kvack.org, Axel Rasmussen, Brian Geffon,
    "Dr. David Alan Gilbert", Mina Almasry, Oliver Upton

With this change, userspace can resolve a minor fault within a
shmem-backed area with a UFFDIO_CONTINUE ioctl. The semantics for this
match those for hugetlbfs - we look up the existing page in the page
cache, and install a PTE for it.

This commit introduces a new helper: mfill_atomic_install_pte.

Why handle UFFDIO_CONTINUE for shmem in mm/userfaultfd.c, instead of in
shmem.c? The existing userfault implementation only relies on shmem.c
for VM_SHARED VMAs. However, minor fault handling / CONTINUE work just
fine for !VM_SHARED VMAs as well. We'd prefer to handle CONTINUE for
shmem in one place, regardless of shared/private (to reduce code
duplication).

Why add a new mfill_atomic_install_pte helper? A problem we have with
CONTINUE is that shmem_mfill_atomic_pte() and mcopy_atomic_pte() are
*close* to what we want, but not exactly. We do want to set up the PTEs
in a CONTINUE operation, but we don't want to e.g. allocate a new page,
charge it (e.g. to the shmem inode), manipulate various flags, etc.
Also we have the problem stated above: shmem_mfill_atomic_pte() and
mcopy_atomic_pte() each handle one half of the problem (shared /
private) that CONTINUE cares about. So, introduce mcontinue_atomic_pte()
to handle all of the shmem CONTINUE cases.
Introduce the helper so it doesn't duplicate code with
mcopy_atomic_pte(). In a future commit, shmem_mfill_atomic_pte() will
also be modified to use this new helper. However, since this is a
bigger refactor, it seems most clear to do it as a separate change.

Acked-by: Hugh Dickins
Acked-by: Peter Xu
Signed-off-by: Axel Rasmussen
---
 mm/userfaultfd.c | 172 ++++++++++++++++++++++++++++++++++-------------
 1 file changed, 127 insertions(+), 45 deletions(-)

diff --git a/mm/userfaultfd.c b/mm/userfaultfd.c
index 242b7a04c16d..d1ac73a0d2a9 100644
--- a/mm/userfaultfd.c
+++ b/mm/userfaultfd.c
@@ -48,6 +48,83 @@ struct vm_area_struct *find_dst_vma(struct mm_struct *dst_mm,
	return dst_vma;
 }

+/*
+ * Install PTEs, to map dst_addr (within dst_vma) to page.
+ *
+ * This function handles MCOPY_ATOMIC_CONTINUE (which is always file-backed),
+ * whether or not dst_vma is VM_SHARED. It also handles the more general
+ * MCOPY_ATOMIC_NORMAL case, when dst_vma is *not* VM_SHARED (it may be file
+ * backed, or not).
+ *
+ * Note that MCOPY_ATOMIC_NORMAL for a VM_SHARED dst_vma is handled by
+ * shmem_mcopy_atomic_pte instead.
+ */
+static int mfill_atomic_install_pte(struct mm_struct *dst_mm, pmd_t *dst_pmd,
+				    struct vm_area_struct *dst_vma,
+				    unsigned long dst_addr, struct page *page,
+				    bool newly_allocated, bool wp_copy)
+{
+	int ret;
+	pte_t _dst_pte, *dst_pte;
+	bool writable = dst_vma->vm_flags & VM_WRITE;
+	bool vm_shared = dst_vma->vm_flags & VM_SHARED;
+	bool page_in_cache = page->mapping;
+	spinlock_t *ptl;
+	struct inode *inode;
+	pgoff_t offset, max_off;
+
+	_dst_pte = mk_pte(page, dst_vma->vm_page_prot);
+	if (page_in_cache && !vm_shared)
+		writable = false;
+	if (writable || !page_in_cache)
+		_dst_pte = pte_mkdirty(_dst_pte);
+	if (writable) {
+		if (wp_copy)
+			_dst_pte = pte_mkuffd_wp(_dst_pte);
+		else
+			_dst_pte = pte_mkwrite(_dst_pte);
+	}
+
+	dst_pte = pte_offset_map_lock(dst_mm, dst_pmd, dst_addr, &ptl);
+
+	if (vma_is_shmem(dst_vma)) {
+		/* serialize against truncate with the page table lock */
+		inode = dst_vma->vm_file->f_inode;
+		offset = linear_page_index(dst_vma, dst_addr);
+		max_off = DIV_ROUND_UP(i_size_read(inode), PAGE_SIZE);
+		ret = -EFAULT;
+		if (unlikely(offset >= max_off))
+			goto out_unlock;
+	}
+
+	ret = -EEXIST;
+	if (!pte_none(*dst_pte))
+		goto out_unlock;
+
+	if (page_in_cache)
+		page_add_file_rmap(page, false);
+	else
+		page_add_new_anon_rmap(page, dst_vma, dst_addr, false);
+
+	/*
+	 * Must happen after rmap, as mm_counter() checks mapping (via
+	 * PageAnon()), which is set by __page_set_anon_rmap().
+	 */
+	inc_mm_counter(dst_mm, mm_counter(page));
+
+	if (newly_allocated)
+		lru_cache_add_inactive_or_unevictable(page, dst_vma);
+
+	set_pte_at(dst_mm, dst_addr, dst_pte, _dst_pte);
+
+	/* No need to invalidate - it was non-present before */
+	update_mmu_cache(dst_vma, dst_addr, dst_pte);
+	ret = 0;
+out_unlock:
+	pte_unmap_unlock(dst_pte, ptl);
+	return ret;
+}
+
 static int mcopy_atomic_pte(struct mm_struct *dst_mm,
			    pmd_t *dst_pmd,
			    struct vm_area_struct *dst_vma,
@@ -56,13 +133,9 @@ static int mcopy_atomic_pte(struct mm_struct *dst_mm,
			    struct page **pagep,
			    bool wp_copy)
 {
-	pte_t _dst_pte, *dst_pte;
-	spinlock_t *ptl;
	void *page_kaddr;
	int ret;
	struct page *page;
-	pgoff_t offset, max_off;
-	struct inode *inode;

	if (!*pagep) {
		ret = -ENOMEM;
@@ -99,43 +172,12 @@ static int mcopy_atomic_pte(struct mm_struct *dst_mm,
	if (mem_cgroup_charge(page, dst_mm, GFP_KERNEL))
		goto out_release;

-	_dst_pte = pte_mkdirty(mk_pte(page, dst_vma->vm_page_prot));
-	if (dst_vma->vm_flags & VM_WRITE) {
-		if (wp_copy)
-			_dst_pte = pte_mkuffd_wp(_dst_pte);
-		else
-			_dst_pte = pte_mkwrite(_dst_pte);
-	}
-
-	dst_pte = pte_offset_map_lock(dst_mm, dst_pmd, dst_addr, &ptl);
-	if (dst_vma->vm_file) {
-		/* the shmem MAP_PRIVATE case requires checking the i_size */
-		inode = dst_vma->vm_file->f_inode;
-		offset = linear_page_index(dst_vma, dst_addr);
-		max_off = DIV_ROUND_UP(i_size_read(inode), PAGE_SIZE);
-		ret = -EFAULT;
-		if (unlikely(offset >= max_off))
-			goto out_release_uncharge_unlock;
-	}
-	ret = -EEXIST;
-	if (!pte_none(*dst_pte))
-		goto out_release_uncharge_unlock;
-
-	inc_mm_counter(dst_mm, MM_ANONPAGES);
-	page_add_new_anon_rmap(page, dst_vma, dst_addr, false);
-	lru_cache_add_inactive_or_unevictable(page, dst_vma);
-
-	set_pte_at(dst_mm, dst_addr, dst_pte, _dst_pte);
-
-	/* No need to invalidate - it was non-present before */
-	update_mmu_cache(dst_vma, dst_addr, dst_pte);
-
-	pte_unmap_unlock(dst_pte, ptl);
-	ret = 0;
+	ret = mfill_atomic_install_pte(dst_mm, dst_pmd, dst_vma, dst_addr,
+				       page, true, wp_copy);
+	if (ret)
+		goto out_release;
 out:
	return ret;
-out_release_uncharge_unlock:
-	pte_unmap_unlock(dst_pte, ptl);
 out_release:
	put_page(page);
	goto out;
@@ -176,6 +218,41 @@ static int mfill_zeropage_pte(struct mm_struct *dst_mm,
	return ret;
 }

+/* Handles UFFDIO_CONTINUE for all shmem VMAs (shared or private). */
+static int mcontinue_atomic_pte(struct mm_struct *dst_mm,
+				pmd_t *dst_pmd,
+				struct vm_area_struct *dst_vma,
+				unsigned long dst_addr,
+				bool wp_copy)
+{
+	struct inode *inode = file_inode(dst_vma->vm_file);
+	pgoff_t pgoff = linear_page_index(dst_vma, dst_addr);
+	struct page *page;
+	int ret;
+
+	ret = shmem_getpage(inode, pgoff, &page, SGP_READ);
+	if (ret)
+		goto out;
+	if (!page) {
+		ret = -EFAULT;
+		goto out;
+	}
+
+	ret = mfill_atomic_install_pte(dst_mm, dst_pmd, dst_vma, dst_addr,
+				       page, false, wp_copy);
+	if (ret)
+		goto out_release;
+
+	unlock_page(page);
+	ret = 0;
+out:
+	return ret;
+out_release:
+	unlock_page(page);
+	put_page(page);
+	goto out;
+}
+
 static pmd_t *mm_alloc_pmd(struct mm_struct *mm, unsigned long address)
 {
	pgd_t *pgd;
@@ -415,11 +492,16 @@ static __always_inline ssize_t mfill_atomic_pte(struct mm_struct *dst_mm,
						unsigned long dst_addr,
						unsigned long src_addr,
						struct page **page,
-						bool zeropage,
+						enum mcopy_atomic_mode mode,
						bool wp_copy)
 {
	ssize_t err;

+	if (mode == MCOPY_ATOMIC_CONTINUE) {
+		return mcontinue_atomic_pte(dst_mm, dst_pmd, dst_vma, dst_addr,
+					    wp_copy);
+	}
+
	/*
	 * The normal page fault path for a shmem will invoke the
	 * fault, fill the hole in the file and COW it right away. The
@@ -431,7 +513,7 @@ static __always_inline ssize_t mfill_atomic_pte(struct mm_struct *dst_mm,
	 * and not in the radix tree.
	 */
	if (!(dst_vma->vm_flags & VM_SHARED)) {
-		if (!zeropage)
+		if (mode == MCOPY_ATOMIC_NORMAL)
			err = mcopy_atomic_pte(dst_mm, dst_pmd, dst_vma,
					       dst_addr, src_addr, page,
					       wp_copy);
@@ -441,7 +523,8 @@ static __always_inline ssize_t mfill_atomic_pte(struct mm_struct *dst_mm,
	} else {
		VM_WARN_ON_ONCE(wp_copy);
		err = shmem_mfill_atomic_pte(dst_mm, dst_pmd, dst_vma,
-					     dst_addr, src_addr, zeropage,
+					     dst_addr, src_addr,
+					     mode != MCOPY_ATOMIC_NORMAL,
					     page);
	}

@@ -463,7 +546,6 @@ static __always_inline ssize_t __mcopy_atomic(struct mm_struct *dst_mm,
	long copied;
	struct page *page;
	bool wp_copy;
-	bool zeropage = (mcopy_mode == MCOPY_ATOMIC_ZEROPAGE);

	/*
	 * Sanitize the command parameters:
@@ -526,7 +608,7 @@ static __always_inline ssize_t __mcopy_atomic(struct mm_struct *dst_mm,
	if (!vma_is_anonymous(dst_vma) && !vma_is_shmem(dst_vma))
		goto out_unlock;

-	if (mcopy_mode == MCOPY_ATOMIC_CONTINUE)
+	if (!vma_is_shmem(dst_vma) && mcopy_mode == MCOPY_ATOMIC_CONTINUE)
		goto out_unlock;

	/*
@@ -574,7 +656,7 @@ static __always_inline ssize_t __mcopy_atomic(struct mm_struct *dst_mm,
	BUG_ON(pmd_trans_huge(*dst_pmd));

	err = mfill_atomic_pte(dst_mm, dst_pmd, dst_vma, dst_addr,
-			       src_addr, &page, zeropage, wp_copy);
+			       src_addr, &page, mcopy_mode, wp_copy);
	cond_resched();

	if (unlikely(err == -ENOENT)) {
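On the userspace side, resolving one of these faults follows the
hugetlbfs pattern: read the event, make sure the page cache contents at
that offset are correct (e.g. by writing through a second mapping of the
same file range), then issue UFFDIO_CONTINUE so the kernel installs PTEs
for the existing page. A hypothetical handler, error handling elided:

#include <linux/userfaultfd.h>
#include <sys/ioctl.h>
#include <unistd.h>

static void handle_one_minor_fault(int uffd, unsigned long page_size)
{
	struct uffd_msg msg;
	struct uffdio_continue cont;

	read(uffd, &msg, sizeof(msg));	/* one event per read */
	if (msg.event != UFFD_EVENT_PAGEFAULT ||
	    !(msg.arg.pagefault.flags & UFFD_PAGEFAULT_FLAG_MINOR))
		return;

	/* ... fix up page cache contents via an alias mapping here ... */

	cont.range.start = msg.arg.pagefault.address & ~(page_size - 1);
	cont.range.len = page_size;
	cont.mode = 0;				/* 0 = also wake the faulter */
	ioctl(uffd, UFFDIO_CONTINUE, &cont);	/* just installs PTEs */
}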
From patchwork Mon May 3 18:07:32 2021
X-Patchwork-Submitter: Axel Rasmussen
X-Patchwork-Id: 12236713
Date: Mon, 3 May 2021 11:07:32 -0700
Message-Id: <20210503180737.2487560-6-axelrasmussen@google.com>
In-Reply-To: <20210503180737.2487560-1-axelrasmussen@google.com>
Subject: [PATCH v6 05/10] userfaultfd/shmem: advertise shmem minor fault support
From: Axel Rasmussen
To: Alexander Viro, Andrea Arcangeli, Andrew Morton, Hugh Dickins,
    Jerome Glisse, Joe Perches, Lokesh Gidra, Mike Kravetz, Mike Rapoport,
    Peter Xu, Shaohua Li, Shuah Khan, Stephen Rothwell, Wang Qing
Cc: linux-api@vger.kernel.org, linux-fsdevel@vger.kernel.org,
    linux-kernel@vger.kernel.org, linux-kselftest@vger.kernel.org,
    linux-mm@kvack.org, Axel Rasmussen, Brian Geffon,
    "Dr. David Alan Gilbert", Mina Almasry, Oliver Upton

Now that the feature is fully implemented (the faulting path hooks
exist so userspace is notified, and the ioctl to resolve such faults is
available), advertise this as a supported feature.
Acked-by: Hugh Dickins
Acked-by: Peter Xu
Signed-off-by: Axel Rasmussen
---
 Documentation/admin-guide/mm/userfaultfd.rst | 3 ++-
 fs/userfaultfd.c                             | 3 ++-
 include/uapi/linux/userfaultfd.h             | 7 ++++++-
 3 files changed, 10 insertions(+), 3 deletions(-)

diff --git a/Documentation/admin-guide/mm/userfaultfd.rst b/Documentation/admin-guide/mm/userfaultfd.rst
index 3aa38e8b8361..6528036093e1 100644
--- a/Documentation/admin-guide/mm/userfaultfd.rst
+++ b/Documentation/admin-guide/mm/userfaultfd.rst
@@ -77,7 +77,8 @@ events, except page fault notifications, may be generated:

 - ``UFFD_FEATURE_MINOR_HUGETLBFS`` indicates that the kernel supports
   ``UFFDIO_REGISTER_MODE_MINOR`` registration for hugetlbfs virtual memory
-  areas.
+  areas. ``UFFD_FEATURE_MINOR_SHMEM`` is the analogous feature indicating
+  support for shmem virtual memory areas.

 The userland application should set the feature flags it intends to use
 when invoking the ``UFFDIO_API`` ioctl, to request that those features be
diff --git a/fs/userfaultfd.c b/fs/userfaultfd.c
index 468556fb04a9..9f3b8684cf3c 100644
--- a/fs/userfaultfd.c
+++ b/fs/userfaultfd.c
@@ -1940,7 +1940,8 @@ static int userfaultfd_api(struct userfaultfd_ctx *ctx,
	/* report all available features and ioctls to userland */
	uffdio_api.features = UFFD_API_FEATURES;
 #ifndef CONFIG_HAVE_ARCH_USERFAULTFD_MINOR
-	uffdio_api.features &= ~UFFD_FEATURE_MINOR_HUGETLBFS;
+	uffdio_api.features &=
+		~(UFFD_FEATURE_MINOR_HUGETLBFS | UFFD_FEATURE_MINOR_SHMEM);
 #endif
	uffdio_api.ioctls = UFFD_API_IOCTLS;
	ret = -EFAULT;
diff --git a/include/uapi/linux/userfaultfd.h b/include/uapi/linux/userfaultfd.h
index bafbeb1a2624..159a74e9564f 100644
--- a/include/uapi/linux/userfaultfd.h
+++ b/include/uapi/linux/userfaultfd.h
@@ -31,7 +31,8 @@
			   UFFD_FEATURE_MISSING_SHMEM |		\
			   UFFD_FEATURE_SIGBUS |		\
			   UFFD_FEATURE_THREAD_ID |		\
-			   UFFD_FEATURE_MINOR_HUGETLBFS)
+			   UFFD_FEATURE_MINOR_HUGETLBFS |	\
+			   UFFD_FEATURE_MINOR_SHMEM)
 #define UFFD_API_IOCTLS				\
	((__u64)1 << _UFFDIO_REGISTER |		\
	 (__u64)1 << _UFFDIO_UNREGISTER |	\
@@ -185,6 +186,9 @@ struct uffdio_api {
	 * UFFD_FEATURE_MINOR_HUGETLBFS indicates that minor faults
	 * can be intercepted (via REGISTER_MODE_MINOR) for
	 * hugetlbfs-backed pages.
+	 *
+	 * UFFD_FEATURE_MINOR_SHMEM indicates the same support as
+	 * UFFD_FEATURE_MINOR_HUGETLBFS, but for shmem-backed pages instead.
	 */
 #define UFFD_FEATURE_PAGEFAULT_FLAG_WP		(1<<0)
 #define UFFD_FEATURE_EVENT_FORK		(1<<1)
@@ -196,6 +200,7 @@ struct uffdio_api {
 #define UFFD_FEATURE_SIGBUS			(1<<7)
 #define UFFD_FEATURE_THREAD_ID			(1<<8)
 #define UFFD_FEATURE_MINOR_HUGETLBFS		(1<<9)
+#define UFFD_FEATURE_MINOR_SHMEM		(1<<10)
	__u64 features;

	__u64 ioctls;
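With the flag advertised, a program can probe for shmem minor fault
support at runtime. A minimal sketch (hypothetical helper, error
handling elided); the UFFDIO_API handshake fills in api.features with
everything the running kernel supports:

#include <fcntl.h>
#include <linux/userfaultfd.h>
#include <sys/ioctl.h>
#include <sys/syscall.h>
#include <unistd.h>

static int have_minor_shmem(void)
{
	int uffd = syscall(__NR_userfaultfd, O_CLOEXEC);
	struct uffdio_api api = { .api = UFFD_API, .features = 0 };
	int ok;

	ioctl(uffd, UFFDIO_API, &api);	/* kernel reports its features */
	ok = !!(api.features & UFFD_FEATURE_MINOR_SHMEM);
	close(uffd);
	return ok;
}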
From patchwork Mon May 3 18:07:33 2021
X-Patchwork-Submitter: Axel Rasmussen
X-Patchwork-Id: 12236715
Date: Mon, 3 May 2021 11:07:33 -0700
Message-Id: <20210503180737.2487560-7-axelrasmussen@google.com>
In-Reply-To: <20210503180737.2487560-1-axelrasmussen@google.com>
Subject: [PATCH v6 06/10] userfaultfd/shmem: modify shmem_mfill_atomic_pte to use install_pte()
From: Axel Rasmussen
To: Alexander Viro, Andrea Arcangeli, Andrew Morton, Hugh Dickins,
    Jerome Glisse, Joe Perches, Lokesh Gidra, Mike Kravetz, Mike Rapoport,
    Peter Xu, Shaohua Li, Shuah Khan, Stephen Rothwell, Wang Qing
Cc: linux-api@vger.kernel.org, linux-fsdevel@vger.kernel.org,
    linux-kernel@vger.kernel.org, linux-kselftest@vger.kernel.org,
    linux-mm@kvack.org, Axel Rasmussen, Brian Geffon,
    "Dr. David Alan Gilbert", Mina Almasry, Oliver Upton

In a previous commit, we added the mfill_atomic_install_pte() helper.
This helper does the job of setting up PTEs for an existing page, to
map it into a given VMA. It deals with both the anon and shmem cases,
as well as the shared and private cases.

In other words, shmem_mfill_atomic_pte() duplicates a case the new
helper already handles. So, expose it, and let shmem_mfill_atomic_pte()
use it directly, to reduce code duplication.

This requires that we refactor shmem_mfill_atomic_pte() a bit: instead
of doing accounting (shmem_recalc_inode() et al) part-way through the
PTE setup, do it afterward. This frees up mfill_atomic_install_pte()
from having to care about this accounting, and means we don't need to
e.g. shmem_uncharge() in the error path.

A side effect is this switches shmem_mfill_atomic_pte() to use
lru_cache_add_inactive_or_unevictable() instead of just
lru_cache_add(). This wrapper does some extra accounting in an
exceptional case, if appropriate, so it's actually the more correct
thing to use.
Signed-off-by: Axel Rasmussen
Reviewed-by: Peter Xu
Acked-by: Hugh Dickins
---
 include/linux/userfaultfd_k.h |  5 +++
 mm/shmem.c                    | 58 ++++++++---------------------------
 mm/userfaultfd.c              | 17 ++++------
 3 files changed, 23 insertions(+), 57 deletions(-)

diff --git a/include/linux/userfaultfd_k.h b/include/linux/userfaultfd_k.h
index 794d1538b8ba..331d2ccf0bcc 100644
--- a/include/linux/userfaultfd_k.h
+++ b/include/linux/userfaultfd_k.h
@@ -53,6 +53,11 @@ enum mcopy_atomic_mode {
	MCOPY_ATOMIC_CONTINUE,
 };

+extern int mfill_atomic_install_pte(struct mm_struct *dst_mm, pmd_t *dst_pmd,
+				    struct vm_area_struct *dst_vma,
+				    unsigned long dst_addr, struct page *page,
+				    bool newly_allocated, bool wp_copy);
+
 extern ssize_t mcopy_atomic(struct mm_struct *dst_mm, unsigned long dst_start,
			    unsigned long src_start, unsigned long len,
			    bool *mmap_changing, __u64 mode);
diff --git a/mm/shmem.c b/mm/shmem.c
index e361f1d81c8d..2e9f56c83489 100644
--- a/mm/shmem.c
+++ b/mm/shmem.c
@@ -2378,14 +2378,11 @@ int shmem_mfill_atomic_pte(struct mm_struct *dst_mm,
	struct address_space *mapping = inode->i_mapping;
	gfp_t gfp = mapping_gfp_mask(mapping);
	pgoff_t pgoff = linear_page_index(dst_vma, dst_addr);
-	spinlock_t *ptl;
	void *page_kaddr;
	struct page *page;
-	pte_t _dst_pte, *dst_pte;
	int ret;
	pgoff_t max_off;

-	ret = -ENOMEM;
	if (!shmem_inode_acct_block(inode, 1)) {
		/*
		 * We may have got a page, returned -ENOENT triggering a retry,
@@ -2396,10 +2393,11 @@ int shmem_mfill_atomic_pte(struct mm_struct *dst_mm,
			put_page(*pagep);
			*pagep = NULL;
		}
-		goto out;
+		return -ENOMEM;
	}

	if (!*pagep) {
+		ret = -ENOMEM;
		page = shmem_alloc_page(gfp, info, pgoff);
		if (!page)
			goto out_unacct_blocks;
@@ -2414,9 +2412,9 @@ int shmem_mfill_atomic_pte(struct mm_struct *dst_mm,
			/* fallback to copy_from_user outside mmap_lock */
			if (unlikely(ret)) {
				*pagep = page;
-				shmem_inode_unacct_blocks(inode, 1);
+				ret = -ENOENT;
				/* don't free the page */
-				return -ENOENT;
+				goto out_unacct_blocks;
			}
		} else {		/* ZEROPAGE */
			clear_highpage(page);
@@ -2442,32 +2440,10 @@ int shmem_mfill_atomic_pte(struct mm_struct *dst_mm,
	if (ret)
		goto out_release;

-	_dst_pte = mk_pte(page, dst_vma->vm_page_prot);
-	if (dst_vma->vm_flags & VM_WRITE)
-		_dst_pte = pte_mkwrite(pte_mkdirty(_dst_pte));
-	else {
-		/*
-		 * We don't set the pte dirty if the vma has no
-		 * VM_WRITE permission, so mark the page dirty or it
-		 * could be freed from under us. We could do it
-		 * unconditionally before unlock_page(), but doing it
-		 * only if VM_WRITE is not set is faster.
-		 */
-		set_page_dirty(page);
-	}
-
-	dst_pte = pte_offset_map_lock(dst_mm, dst_pmd, dst_addr, &ptl);
-
-	ret = -EFAULT;
-	max_off = DIV_ROUND_UP(i_size_read(inode), PAGE_SIZE);
-	if (unlikely(pgoff >= max_off))
-		goto out_release_unlock;
-
-	ret = -EEXIST;
-	if (!pte_none(*dst_pte))
-		goto out_release_unlock;
-
-	lru_cache_add(page);
+	ret = mfill_atomic_install_pte(dst_mm, dst_pmd, dst_vma, dst_addr,
+				       page, true, false);
+	if (ret)
+		goto out_delete_from_cache;

	spin_lock_irq(&info->lock);
	info->alloced++;
@@ -2475,27 +2451,17 @@ int shmem_mfill_atomic_pte(struct mm_struct *dst_mm,
	shmem_recalc_inode(inode);
	spin_unlock_irq(&info->lock);

-	inc_mm_counter(dst_mm, mm_counter_file(page));
-	page_add_file_rmap(page, false);
-	set_pte_at(dst_mm, dst_addr, dst_pte, _dst_pte);
-
-	/* No need to invalidate - it was non-present before */
-	update_mmu_cache(dst_vma, dst_addr, dst_pte);
-	pte_unmap_unlock(dst_pte, ptl);
+	SetPageDirty(page);
	unlock_page(page);
-	ret = 0;
-out:
-	return ret;
-out_release_unlock:
-	pte_unmap_unlock(dst_pte, ptl);
-	ClearPageDirty(page);
+	return 0;
+out_delete_from_cache:
	delete_from_page_cache(page);
 out_release:
	unlock_page(page);
	put_page(page);
 out_unacct_blocks:
	shmem_inode_unacct_blocks(inode, 1);
-	goto out;
+	return ret;
 }
 #endif /* CONFIG_USERFAULTFD */
diff --git a/mm/userfaultfd.c b/mm/userfaultfd.c
index d1ac73a0d2a9..5508f7d9e2dc 100644
--- a/mm/userfaultfd.c
+++ b/mm/userfaultfd.c
@@ -51,18 +51,13 @@ struct vm_area_struct *find_dst_vma(struct mm_struct *dst_mm,
 /*
  * Install PTEs, to map dst_addr (within dst_vma) to page.
  *
- * This function handles MCOPY_ATOMIC_CONTINUE (which is always file-backed),
- * whether or not dst_vma is VM_SHARED. It also handles the more general
- * MCOPY_ATOMIC_NORMAL case, when dst_vma is *not* VM_SHARED (it may be file
- * backed, or not).
- *
- * Note that MCOPY_ATOMIC_NORMAL for a VM_SHARED dst_vma is handled by
- * shmem_mcopy_atomic_pte instead.
+ * This function handles both MCOPY_ATOMIC_NORMAL and _CONTINUE for both shmem
+ * and anon, and for both shared and private VMAs.
  */
-static int mfill_atomic_install_pte(struct mm_struct *dst_mm, pmd_t *dst_pmd,
-				    struct vm_area_struct *dst_vma,
-				    unsigned long dst_addr, struct page *page,
-				    bool newly_allocated, bool wp_copy)
+int mfill_atomic_install_pte(struct mm_struct *dst_mm, pmd_t *dst_pmd,
+			     struct vm_area_struct *dst_vma,
+			     unsigned long dst_addr, struct page *page,
+			     bool newly_allocated, bool wp_copy)
 {
	int ret;
	pte_t _dst_pte, *dst_pte;
From patchwork Mon May 3 18:07:34 2021
X-Patchwork-Submitter: Axel Rasmussen
X-Patchwork-Id: 12236719
Date: Mon, 3 May 2021 11:07:34 -0700
Message-Id: <20210503180737.2487560-8-axelrasmussen@google.com>
In-Reply-To: <20210503180737.2487560-1-axelrasmussen@google.com>
Subject: [PATCH v6 07/10] userfaultfd/selftests: use memfd_create for shmem test type
From: Axel Rasmussen
To: Alexander Viro, Andrea Arcangeli, Andrew Morton, Hugh Dickins,
    Jerome Glisse, Joe Perches, Lokesh Gidra, Mike Kravetz, Mike Rapoport,
    Peter Xu, Shaohua Li, Shuah Khan, Stephen Rothwell, Wang Qing
Cc: linux-api@vger.kernel.org, linux-fsdevel@vger.kernel.org,
    linux-kernel@vger.kernel.org, linux-kselftest@vger.kernel.org,
    linux-mm@kvack.org, Axel Rasmussen, Brian Geffon,
    "Dr. David Alan Gilbert", Mina Almasry, Oliver Upton

This is a preparatory commit. In the future, we want to be able to set
up alias mappings for area_src and area_dst in the shmem test, like we
do in the hugetlb_shared test. With a VMA obtained via
mmap(MAP_ANONYMOUS | MAP_SHARED), it isn't clear how to do this.

So, mmap() with an fd, so we can create alias mappings. Use
memfd_create instead of actually passing in a tmpfs path like hugetlb
does, since it's more convenient / simpler to run, and works just as
well.

Future commits will:

1. Set up the alias mappings.
2. Extend our tests to actually take advantage of this, to test new
   userfaultfd behavior being introduced in this series.

Also, a small fix in the area we're changing: when the hugetlb setup
fails in main(), pass in the right argv[] so we actually print out the
hugetlb file path.

Reviewed-by: Peter Xu
Signed-off-by: Axel Rasmussen
---
 tools/testing/selftests/vm/userfaultfd.c | 16 +++++++++++++++-
 1 file changed, 15 insertions(+), 1 deletion(-)

diff --git a/tools/testing/selftests/vm/userfaultfd.c b/tools/testing/selftests/vm/userfaultfd.c
index 6339aeaeeff8..fc40831f818f 100644
--- a/tools/testing/selftests/vm/userfaultfd.c
+++ b/tools/testing/selftests/vm/userfaultfd.c
@@ -85,6 +85,7 @@ static bool test_uffdio_wp = false;
 static bool test_uffdio_minor = false;

 static bool map_shared;
+static int shm_fd;
 static int huge_fd;
 static char *huge_fd_off0;
 static unsigned long long *count_verify;
@@ -277,8 +278,11 @@ static void shmem_release_pages(char *rel_area)

 static void shmem_allocate_area(void **alloc_area)
 {
+	unsigned long offset =
0 : nr_pages * page_size; + *alloc_area = mmap(NULL, nr_pages * page_size, PROT_READ | PROT_WRITE, - MAP_ANONYMOUS | MAP_SHARED, -1, 0); + MAP_SHARED, shm_fd, offset); if (*alloc_area == MAP_FAILED) err("mmap of memfd failed"); } @@ -1448,6 +1452,16 @@ int main(int argc, char **argv) err("Open of %s failed", argv[4]); if (ftruncate(huge_fd, 0)) err("ftruncate %s to size 0 failed", argv[4]); + } else if (test_type == TEST_SHMEM) { + shm_fd = memfd_create(argv[0], 0); + if (shm_fd < 0) + err("memfd_create"); + if (ftruncate(shm_fd, nr_pages * page_size * 2)) + err("ftruncate"); + if (fallocate(shm_fd, + FALLOC_FL_PUNCH_HOLE | FALLOC_FL_KEEP_SIZE, 0, + nr_pages * page_size * 2)) + err("fallocate"); } printf("nr_pages: %lu, nr_pages_per_cpu: %lu\n", nr_pages, nr_pages_per_cpu); From patchwork Mon May 3 18:07:35 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Axel Rasmussen X-Patchwork-Id: 12236717 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-26.3 required=3.0 tests=BAYES_00,DKIMWL_WL_MED, DKIM_SIGNED,DKIM_VALID,DKIM_VALID_AU,HEADER_FROM_DIFFERENT_DOMAINS, INCLUDES_CR_TRAILER,INCLUDES_PATCH,MAILING_LIST_MULTI,SPF_HELO_NONE,SPF_PASS, USER_AGENT_GIT,USER_IN_DEF_DKIM_WL autolearn=unavailable autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 9C233C43600 for ; Mon, 3 May 2021 18:08:24 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id 6341A613BA for ; Mon, 3 May 2021 18:08:24 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S231964AbhECSJQ (ORCPT ); Mon, 3 May 2021 14:09:16 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:42682 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S231995AbhECSJB (ORCPT ); Mon, 3 May 2021 14:09:01 -0400 Received: from mail-yb1-xb4a.google.com (mail-yb1-xb4a.google.com [IPv6:2607:f8b0:4864:20::b4a]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id CDF5BC061352 for ; Mon, 3 May 2021 11:07:58 -0700 (PDT) Received: by mail-yb1-xb4a.google.com with SMTP id j63-20020a25d2420000b02904d9818b80e8so4278189ybg.14 for ; Mon, 03 May 2021 11:07:58 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20161025; h=date:in-reply-to:message-id:mime-version:references:subject:from:to :cc; bh=DoWXx87b+Num+Reqv5sSHd1wfcUc2x8Dquy/B9+1IoU=; b=nvwXd3Q7vwLtpi5zok2xV+bzBipiCzcjLFfLaLjrW3LnM2BxxbcRTwblxJc2oN8Nsj k9449XsNR7OXw68xhaizvXpCfBzb5QfIg3A6TRWZi7/5lg9FTL3cHLk3FOvP/X0ZV0Is onU3vCJUkeqyL4f/kSilwcEhU95cN5Ud2/M7kRjXHAGJT5zoatWaSYmiRU1dZLLqIrHP J3tLvtFtiYW82BuflVjKfw9mtaPXfyBk2ouzNWsdlJj/KkTgLRS+wwn9pU1EnP5nqIry hGIYPD3PVyvS3WmKXKbOVxUgXehclcwOqRNYhvSF8K4yp9IUT7EEcH3Dxt/7Jvwdr0g4 hfDA== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:date:in-reply-to:message-id:mime-version :references:subject:from:to:cc; bh=DoWXx87b+Num+Reqv5sSHd1wfcUc2x8Dquy/B9+1IoU=; b=pK2I1Zk2Ddy3DXXHV1FDdOqDtuGwogJ9GodYMzBsillb/otq+vPxnvFlQeMcgv/K+c jjCBZu6xagTPvdvChes9fK1Vnoey6ZNTFr4jHGLmhVPEdtm/1UGf/Eb2EYTbhvaF93cM hy5RvM+3NcV4jMF9aILQsm//HFsICzmjnuoUyST/eOU0YEuEscCoXagGIwXIR/O1U7YX QdMTRFDXc1EuMaNtvvUFGDZQqjLFJorkano7ZYX8T6GbV9JGPNpwlI0jkNF6K7be5v5k 
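The memfd pattern above — one file sized for both areas, holes punched so
pages are only allocated on first touch, and each area mmap()-ed at its own
offset — can be shown in isolation. The following is a minimal standalone
sketch, not the selftest itself; the helper name, page count, and hard-coded
page size are invented for illustration:

	/* Sketch of the memfd-backed area setup (hypothetical names). */
	#define _GNU_SOURCE
	#include <fcntl.h>
	#include <stdio.h>
	#include <stdlib.h>
	#include <sys/mman.h>
	#include <unistd.h>

	static void *map_area(int shm_fd, size_t size, off_t offset)
	{
		/* MAP_SHARED against the fd (vs. MAP_ANONYMOUS) means a
		 * later mmap() of the same offset can alias these pages. */
		void *area = mmap(NULL, size, PROT_READ | PROT_WRITE,
				  MAP_SHARED, shm_fd, offset);
		if (area == MAP_FAILED) {
			perror("mmap");
			exit(1);
		}
		return area;
	}

	int main(void)
	{
		size_t area_size = 16 * 4096;	/* stand-in for nr_pages * page_size */
		int shm_fd = memfd_create("uffd-shmem-demo", 0);

		if (shm_fd < 0)
			exit(1);
		/* Size the file for both areas... */
		if (ftruncate(shm_fd, area_size * 2))
			exit(1);
		/* ...but punch holes so no pages are allocated yet. */
		if (fallocate(shm_fd, FALLOC_FL_PUNCH_HOLE | FALLOC_FL_KEEP_SIZE,
			      0, area_size * 2))
			exit(1);

		char *src = map_area(shm_fd, area_size, 0);
		char *dst = map_area(shm_fd, area_size, area_size);

		src[0] = 1;	/* faults in one page, in src only */
		printf("src=%p dst=%p\n", (void *)src, (void *)dst);
		return 0;
	}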
From patchwork Mon May  3 18:07:35 2021
X-Patchwork-Submitter: Axel Rasmussen
X-Patchwork-Id: 12236717
Date: Mon, 3 May 2021 11:07:35 -0700
In-Reply-To: <20210503180737.2487560-1-axelrasmussen@google.com>
Message-Id: <20210503180737.2487560-9-axelrasmussen@google.com>
Subject: [PATCH v6 08/10] userfaultfd/selftests: create alias mappings in the shmem test
From: Axel Rasmussen <axelrasmussen@google.com>

Previously, we just allocated two shm areas: area_src and area_dst. With
this commit, we also allocate area_src_alias and area_dst_alias.
area_*_alias and area_* (respectively) point to the same underlying
physical pages, but are different VMAs. In a future commit in this series,
we'll leverage this setup to exercise minor fault handling support for
shmem, just like we do in the hugetlb_shared test.

Reviewed-by: Peter Xu <peterx@redhat.com>
Signed-off-by: Axel Rasmussen <axelrasmussen@google.com>
---
 tools/testing/selftests/vm/userfaultfd.c | 22 +++++++++++++++++++---
 1 file changed, 19 insertions(+), 3 deletions(-)

diff --git a/tools/testing/selftests/vm/userfaultfd.c b/tools/testing/selftests/vm/userfaultfd.c
index fc40831f818f..1f65c4ab7994 100644
--- a/tools/testing/selftests/vm/userfaultfd.c
+++ b/tools/testing/selftests/vm/userfaultfd.c
@@ -278,13 +278,29 @@ static void shmem_release_pages(char *rel_area)
 
 static void shmem_allocate_area(void **alloc_area)
 {
-	unsigned long offset =
-		alloc_area == (void **)&area_src ? 0 : nr_pages * page_size;
+	void *area_alias = NULL;
+	bool is_src = alloc_area == (void **)&area_src;
+	unsigned long offset = is_src ? 0 : nr_pages * page_size;
 
 	*alloc_area = mmap(NULL, nr_pages * page_size, PROT_READ | PROT_WRITE,
 			   MAP_SHARED, shm_fd, offset);
 	if (*alloc_area == MAP_FAILED)
 		err("mmap of memfd failed");
+
+	area_alias = mmap(NULL, nr_pages * page_size, PROT_READ | PROT_WRITE,
+			  MAP_SHARED, shm_fd, offset);
+	if (area_alias == MAP_FAILED)
+		err("mmap of memfd alias failed");
+
+	if (is_src)
+		area_src_alias = area_alias;
+	else
+		area_dst_alias = area_alias;
+}
+
+static void shmem_alias_mapping(__u64 *start, size_t len, unsigned long offset)
+{
+	*start = (unsigned long)area_dst_alias + offset;
 }
 
 struct uffd_test_ops {
@@ -314,7 +330,7 @@ static struct uffd_test_ops shmem_uffd_test_ops = {
 	.expected_ioctls = SHMEM_EXPECTED_IOCTLS,
 	.allocate_area	= shmem_allocate_area,
 	.release_pages	= shmem_release_pages,
-	.alias_mapping = noop_alias_mapping,
+	.alias_mapping = shmem_alias_mapping,
 };
 
 static struct uffd_test_ops hugetlb_uffd_test_ops = {
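The property the aliases provide — two distinct VMAs backed by the same
physical pages — is easy to demonstrate on its own. A minimal sketch (again
not the selftest; the names here are invented):

	#define _GNU_SOURCE
	#include <assert.h>
	#include <string.h>
	#include <sys/mman.h>
	#include <unistd.h>

	int main(void)
	{
		size_t len = 4096;
		int fd = memfd_create("alias-demo", 0);

		if (fd < 0 || ftruncate(fd, len))
			return 1;

		/* Two mmap() calls at the same offset: distinct VMAs,
		 * same underlying pages. */
		char *a = mmap(NULL, len, PROT_READ | PROT_WRITE,
			       MAP_SHARED, fd, 0);
		char *b = mmap(NULL, len, PROT_READ | PROT_WRITE,
			       MAP_SHARED, fd, 0);
		assert(a != MAP_FAILED && b != MAP_FAILED && a != b);

		strcpy(a, "written via the first VMA");
		/* The write is immediately visible through the alias. */
		assert(strcmp(b, "written via the first VMA") == 0);
		return 0;
	}

This is what lets a minor fault test populate page contents through one
mapping while the other mapping, registered with userfaultfd, still takes
minor faults on first access.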
From patchwork Mon May  3 18:07:36 2021
X-Patchwork-Submitter: Axel Rasmussen
X-Patchwork-Id: 12236723
Date: Mon, 3 May 2021 11:07:36 -0700
In-Reply-To: <20210503180737.2487560-1-axelrasmussen@google.com>
Message-Id: <20210503180737.2487560-10-axelrasmussen@google.com>
Subject: [PATCH v6 09/10] userfaultfd/selftests: reinitialize test context in each test
From: Axel Rasmussen <axelrasmussen@google.com>

Currently, the context (fds, mmap-ed areas, etc.) is global. Each test
mutates this state in some way, in some cases really "clobbering" it
(e.g., the events test mremap-ing area_dst over the top of area_src, or
the minor fault tests overwriting the count_verify values in the test
areas). We run the tests in a particular order, and each test is careful
to make the right assumptions about its starting state. But this is
fragile: it's better for a test's success or failure not to depend on what
some prior test case did to the global state.

To that end, clear and reinitialize the test context at the start of each
test case, so whatever prior test cases did doesn't affect future tests.

This is particularly relevant to this series because the events test's
mremap of area_dst screws up assumptions the minor fault test was relying
on. This wasn't a problem for hugetlb, as we don't mremap in that case.
Reviewed-by: Peter Xu <peterx@redhat.com>
Signed-off-by: Axel Rasmussen <axelrasmussen@google.com>
---
 tools/testing/selftests/vm/userfaultfd.c | 215 ++++++++++++-----------
 1 file changed, 116 insertions(+), 99 deletions(-)

diff --git a/tools/testing/selftests/vm/userfaultfd.c b/tools/testing/selftests/vm/userfaultfd.c
index 1f65c4ab7994..3fbc69f513dc 100644
--- a/tools/testing/selftests/vm/userfaultfd.c
+++ b/tools/testing/selftests/vm/userfaultfd.c
@@ -89,7 +89,8 @@ static int shm_fd;
 static int huge_fd;
 static char *huge_fd_off0;
 static unsigned long long *count_verify;
-static int uffd, uffd_flags, finished, *pipefd;
+static int uffd = -1;
+static int uffd_flags, finished, *pipefd;
 static char *area_src, *area_src_alias, *area_dst, *area_dst_alias;
 static char *zeropage;
 pthread_attr_t attr;
@@ -342,6 +343,111 @@ static struct uffd_test_ops hugetlb_uffd_test_ops = {
 
 static struct uffd_test_ops *uffd_test_ops;
 
+static void userfaultfd_open(uint64_t *features)
+{
+	struct uffdio_api uffdio_api;
+
+	uffd = syscall(__NR_userfaultfd, O_CLOEXEC | O_NONBLOCK | UFFD_USER_MODE_ONLY);
+	if (uffd < 0)
+		err("userfaultfd syscall not available in this kernel");
+	uffd_flags = fcntl(uffd, F_GETFD, NULL);
+
+	uffdio_api.api = UFFD_API;
+	uffdio_api.features = *features;
+	if (ioctl(uffd, UFFDIO_API, &uffdio_api))
+		err("UFFDIO_API failed.\nPlease make sure to "
+		    "run with either root or ptrace capability.");
+	if (uffdio_api.api != UFFD_API)
+		err("UFFDIO_API error: %" PRIu64, (uint64_t)uffdio_api.api);
+
+	*features = uffdio_api.features;
+}
+
+static inline void munmap_area(void **area)
+{
+	if (*area)
+		if (munmap(*area, nr_pages * page_size))
+			err("munmap");
+
+	*area = NULL;
+}
+
+static void uffd_test_ctx_clear(void)
+{
+	size_t i;
+
+	if (pipefd) {
+		for (i = 0; i < nr_cpus * 2; ++i) {
+			if (close(pipefd[i]))
+				err("close pipefd");
+		}
+		free(pipefd);
+		pipefd = NULL;
+	}
+
+	if (count_verify) {
+		free(count_verify);
+		count_verify = NULL;
+	}
+
+	if (uffd != -1) {
+		if (close(uffd))
+			err("close uffd");
+		uffd = -1;
+	}
+
+	huge_fd_off0 = NULL;
+	munmap_area((void **)&area_src);
+	munmap_area((void **)&area_src_alias);
+	munmap_area((void **)&area_dst);
+	munmap_area((void **)&area_dst_alias);
+}
+
+static void uffd_test_ctx_init_ext(uint64_t *features)
+{
+	unsigned long nr, cpu;
+
+	uffd_test_ctx_clear();
+
+	uffd_test_ops->allocate_area((void **)&area_src);
+	uffd_test_ops->allocate_area((void **)&area_dst);
+
+	uffd_test_ops->release_pages(area_src);
+	uffd_test_ops->release_pages(area_dst);
+
+	userfaultfd_open(features);
+
+	count_verify = malloc(nr_pages * sizeof(unsigned long long));
+	if (!count_verify)
+		err("count_verify");
+
+	for (nr = 0; nr < nr_pages; nr++) {
+		*area_mutex(area_src, nr) =
+			(pthread_mutex_t)PTHREAD_MUTEX_INITIALIZER;
+		count_verify[nr] = *area_count(area_src, nr) = 1;
+		/*
+		 * In the transition between 255 to 256, powerpc will
+		 * read out of order in my_bcmp and see both bytes as
+		 * zero, so leave a placeholder below always non-zero
+		 * after the count, to avoid my_bcmp to trigger false
+		 * positives.
+		 */
+		*(area_count(area_src, nr) + 1) = 1;
+	}
+
+	pipefd = malloc(sizeof(int) * nr_cpus * 2);
+	if (!pipefd)
+		err("pipefd");
+	for (cpu = 0; cpu < nr_cpus; cpu++)
+		if (pipe2(&pipefd[cpu * 2], O_CLOEXEC | O_NONBLOCK))
+			err("pipe");
+}
+
+static inline void uffd_test_ctx_init(uint64_t features)
+{
+	uffd_test_ctx_init_ext(&features);
+}
+
 static int my_bcmp(char *str1, char *str2, size_t n)
 {
 	unsigned long i;
@@ -726,40 +832,6 @@ static int stress(struct uffd_stats *uffd_stats)
 	return 0;
 }
 
-static int userfaultfd_open_ext(uint64_t *features)
-{
-	struct uffdio_api uffdio_api;
-
-	uffd = syscall(__NR_userfaultfd, O_CLOEXEC | O_NONBLOCK | UFFD_USER_MODE_ONLY);
-	if (uffd < 0) {
-		fprintf(stderr,
-			"userfaultfd syscall not available in this kernel\n");
-		return 1;
-	}
-	uffd_flags = fcntl(uffd, F_GETFD, NULL);
-
-	uffdio_api.api = UFFD_API;
-	uffdio_api.features = *features;
-	if (ioctl(uffd, UFFDIO_API, &uffdio_api)) {
-		fprintf(stderr, "UFFDIO_API failed.\nPlease make sure to "
-			"run with either root or ptrace capability.\n");
-		return 1;
-	}
-	if (uffdio_api.api != UFFD_API) {
-		fprintf(stderr, "UFFDIO_API error: %" PRIu64 "\n",
-			(uint64_t)uffdio_api.api);
-		return 1;
-	}
-
-	*features = uffdio_api.features;
-	return 0;
-}
-
-static int userfaultfd_open(uint64_t features)
-{
-	return userfaultfd_open_ext(&features);
-}
-
 sigjmp_buf jbuf, *sigbuf;
 
 static void sighndl(int sig, siginfo_t *siginfo, void *ptr)
@@ -868,6 +940,8 @@ static int faulting_process(int signal_test)
 				  MREMAP_MAYMOVE | MREMAP_FIXED, area_src);
 		if (area_dst == MAP_FAILED)
 			err("mremap");
+		/* Reset area_src since we just clobbered it */
+		area_src = NULL;
 
 		for (; nr < nr_pages; nr++) {
 			count = *area_count(area_dst, nr);
@@ -961,10 +1035,8 @@ static int userfaultfd_zeropage_test(void)
 	printf("testing UFFDIO_ZEROPAGE: ");
 	fflush(stdout);
 
-	uffd_test_ops->release_pages(area_dst);
+	uffd_test_ctx_init(0);
 
-	if (userfaultfd_open(0))
-		return 1;
 	uffdio_register.range.start = (unsigned long) area_dst;
 	uffdio_register.range.len = nr_pages * page_size;
 	uffdio_register.mode = UFFDIO_REGISTER_MODE_MISSING;
@@ -981,7 +1053,6 @@ static int userfaultfd_zeropage_test(void)
 		if (my_bcmp(area_dst, zeropage, page_size))
 			err("zeropage is not zero");
 
-	close(uffd);
 	printf("done.\n");
 	return 0;
 }
@@ -999,12 +1070,10 @@ static int userfaultfd_events_test(void)
 	printf("testing events (fork, remap, remove): ");
 	fflush(stdout);
 
-	uffd_test_ops->release_pages(area_dst);
-
 	features = UFFD_FEATURE_EVENT_FORK | UFFD_FEATURE_EVENT_REMAP |
 		UFFD_FEATURE_EVENT_REMOVE;
-	if (userfaultfd_open(features))
-		return 1;
+	uffd_test_ctx_init(features);
+
 	fcntl(uffd, F_SETFL, uffd_flags | O_NONBLOCK);
 
 	uffdio_register.range.start = (unsigned long) area_dst;
@@ -1037,8 +1106,6 @@ static int userfaultfd_events_test(void)
 	if (pthread_join(uffd_mon, NULL))
 		return 1;
 
-	close(uffd);
-
 	uffd_stats_report(&stats, 1);
 
 	return stats.missing_faults != nr_pages;
@@ -1058,11 +1125,9 @@ static int userfaultfd_sig_test(void)
 	printf("testing signal delivery: ");
 	fflush(stdout);
 
-	uffd_test_ops->release_pages(area_dst);
-
 	features = UFFD_FEATURE_EVENT_FORK|UFFD_FEATURE_SIGBUS;
-	if (userfaultfd_open(features))
-		return 1;
+	uffd_test_ctx_init(features);
+
 	fcntl(uffd, F_SETFL, uffd_flags | O_NONBLOCK);
 
 	uffdio_register.range.start = (unsigned long) area_dst;
@@ -1103,7 +1168,6 @@ static int userfaultfd_sig_test(void)
 	printf("done.\n");
 	if (userfaults)
 		err("Signal test failed, userfaults: %ld", userfaults);
-	close(uffd);
 
 	return userfaults != 0;
 }
@@ -1126,10 +1190,7 @@ static int userfaultfd_minor_test(void)
 	printf("testing minor faults: ");
 	fflush(stdout);
 
-	uffd_test_ops->release_pages(area_dst);
-
-	if (userfaultfd_open_ext(&features))
-		return 1;
+	uffd_test_ctx_init_ext(&features);
 	/* If kernel reports the feature isn't supported, skip the test. */
 	if (!(features & UFFD_FEATURE_MINOR_HUGETLBFS)) {
 		printf("skipping test due to lack of feature support\n");
@@ -1183,8 +1244,6 @@ static int userfaultfd_minor_test(void)
 	if (pthread_join(uffd_mon, NULL))
 		return 1;
 
-	close(uffd);
-
 	uffd_stats_report(&stats, 1);
 
 	return stats.missing_faults != 0 || stats.minor_faults != nr_pages;
@@ -1196,50 +1255,9 @@ static int userfaultfd_stress(void)
 	char *tmp_area;
 	unsigned long nr;
 	struct uffdio_register uffdio_register;
-	unsigned long cpu;
 	struct uffd_stats uffd_stats[nr_cpus];
 
-	uffd_test_ops->allocate_area((void **)&area_src);
-	if (!area_src)
-		return 1;
-	uffd_test_ops->allocate_area((void **)&area_dst);
-	if (!area_dst)
-		return 1;
-
-	if (userfaultfd_open(0))
-		return 1;
-
-	count_verify = malloc(nr_pages * sizeof(unsigned long long));
-	if (!count_verify) {
-		perror("count_verify");
-		return 1;
-	}
-
-	for (nr = 0; nr < nr_pages; nr++) {
-		*area_mutex(area_src, nr) = (pthread_mutex_t)
-					    PTHREAD_MUTEX_INITIALIZER;
-		count_verify[nr] = *area_count(area_src, nr) = 1;
-		/*
-		 * In the transition between 255 to 256, powerpc will
-		 * read out of order in my_bcmp and see both bytes as
-		 * zero, so leave a placeholder below always non-zero
-		 * after the count, to avoid my_bcmp to trigger false
-		 * positives.
-		 */
-		*(area_count(area_src, nr) + 1) = 1;
-	}
-
-	pipefd = malloc(sizeof(int) * nr_cpus * 2);
-	if (!pipefd) {
-		perror("pipefd");
-		return 1;
-	}
-	for (cpu = 0; cpu < nr_cpus; cpu++) {
-		if (pipe2(&pipefd[cpu*2], O_CLOEXEC | O_NONBLOCK)) {
-			perror("pipe");
-			return 1;
-		}
-	}
+	uffd_test_ctx_init(0);
 
 	if (posix_memalign(&area, page_size, page_size))
 		err("out of memory");
@@ -1360,7 +1378,6 @@ static int userfaultfd_stress(void)
 		uffd_stats_report(uffd_stats, nr_cpus);
 	}
 
-	close(uffd);
 	return userfaultfd_zeropage_test() || userfaultfd_sig_test() ||
 		userfaultfd_events_test() || userfaultfd_minor_test();
 }
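The shape of the refactor is worth calling out: a clear function that is
safe against any prior state (every resource carries a sentinel value —
uffd starts at -1, pointers at NULL — so teardown is idempotent), and an
init function whose first act is to call clear. In distilled form (a
simplified sketch of the pattern, not the selftest code):

	#include <stdlib.h>
	#include <unistd.h>

	static int ctx_fd = -1;	/* sentinel: "not open" */
	static void *ctx_area;	/* NULL: "not allocated" */

	static void ctx_clear(void)
	{
		/* Idempotent: safe even if nothing was initialized. */
		if (ctx_fd != -1) {
			close(ctx_fd);
			ctx_fd = -1;
		}
		free(ctx_area);
		ctx_area = NULL;
	}

	static void ctx_init(void)
	{
		ctx_clear();	/* undo whatever the last test did */
		ctx_area = malloc(4096);
		/* ... open fds, register ranges, seed test data ... */
	}

Each test case then starts with ctx_init(), so no case inherits a
predecessor's side effects.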
From patchwork Mon May  3 18:07:37 2021
X-Patchwork-Submitter: Axel Rasmussen
X-Patchwork-Id: 12236721
Date: Mon, 3 May 2021 11:07:37 -0700
In-Reply-To: <20210503180737.2487560-1-axelrasmussen@google.com>
Message-Id: <20210503180737.2487560-11-axelrasmussen@google.com>
Subject: [PATCH v6 10/10] userfaultfd/selftests: exercise minor fault handling shmem support
From: Axel Rasmussen <axelrasmussen@google.com>

Enable test_uffdio_minor for test_type == TEST_SHMEM, and modify the test
slightly to pass in / check for the right feature flags.
Reviewed-by: Peter Xu <peterx@redhat.com>
Signed-off-by: Axel Rasmussen <axelrasmussen@google.com>
---
 tools/testing/selftests/vm/userfaultfd.c | 29 ++++++++++++++++++++----
 1 file changed, 25 insertions(+), 4 deletions(-)

diff --git a/tools/testing/selftests/vm/userfaultfd.c b/tools/testing/selftests/vm/userfaultfd.c
index 3fbc69f513dc..a7ecc9993439 100644
--- a/tools/testing/selftests/vm/userfaultfd.c
+++ b/tools/testing/selftests/vm/userfaultfd.c
@@ -474,6 +474,7 @@ static void wp_range(int ufd, __u64 start, __u64 len, bool wp)
 static void continue_range(int ufd, __u64 start, __u64 len)
 {
 	struct uffdio_continue req;
+	int ret;
 
 	req.range.start = start;
 	req.range.len = len;
@@ -482,6 +483,17 @@ static void continue_range(int ufd, __u64 start, __u64 len)
 	if (ioctl(ufd, UFFDIO_CONTINUE, &req))
 		err("UFFDIO_CONTINUE failed for address 0x%" PRIx64,
 		    (uint64_t)start);
+
+	/*
+	 * Error handling within the kernel for continue is subtly different
+	 * from copy or zeropage, so it may be a source of bugs. Trigger an
+	 * error (-EEXIST) on purpose, to verify doing so doesn't cause a BUG.
+	 */
+	req.mapped = 0;
+	ret = ioctl(ufd, UFFDIO_CONTINUE, &req);
+	if (ret >= 0 || req.mapped != -EEXIST)
+		err("failed to exercise UFFDIO_CONTINUE error handling, ret=%d, mapped=%" PRId64,
+		    ret, (int64_t) req.mapped);
 }
 
 static void *locking_thread(void *arg)
@@ -1182,7 +1194,7 @@ static int userfaultfd_minor_test(void)
 	void *expected_page;
 	char c;
 	struct uffd_stats stats = { 0 };
-	uint64_t features = UFFD_FEATURE_MINOR_HUGETLBFS;
+	uint64_t req_features, features_out;
 
 	if (!test_uffdio_minor)
 		return 0;
@@ -1190,9 +1202,17 @@ static int userfaultfd_minor_test(void)
 	printf("testing minor faults: ");
 	fflush(stdout);
 
-	uffd_test_ctx_init_ext(&features);
-	/* If kernel reports the feature isn't supported, skip the test. */
-	if (!(features & UFFD_FEATURE_MINOR_HUGETLBFS)) {
+	if (test_type == TEST_HUGETLB)
+		req_features = UFFD_FEATURE_MINOR_HUGETLBFS;
+	else if (test_type == TEST_SHMEM)
+		req_features = UFFD_FEATURE_MINOR_SHMEM;
+	else
+		return 1;
+
+	features_out = req_features;
+	uffd_test_ctx_init_ext(&features_out);
+	/* If kernel reports required features aren't supported, skip test. */
+	if ((features_out & req_features) != req_features) {
 		printf("skipping test due to lack of feature support\n");
 		fflush(stdout);
 		return 0;
@@ -1426,6 +1446,7 @@ static void set_test_type(const char *type)
 		map_shared = true;
 		test_type = TEST_SHMEM;
 		uffd_test_ops = &shmem_uffd_test_ops;
+		test_uffdio_minor = true;
 	} else {
 		err("Unknown test type: %s", type);
 	}
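The error-path exercise added to continue_range() generalizes to a small
helper. The sketch below assumes `uffd' already has a minor-fault
registration covering the range and that the page contents already exist
(e.g., populated through an alias mapping); the helper name is invented:

	#include <errno.h>
	#include <stdio.h>
	#include <linux/userfaultfd.h>
	#include <sys/ioctl.h>

	static int continue_then_expect_eexist(int uffd, __u64 start, __u64 len)
	{
		struct uffdio_continue req;
		int ret;

		req.range.start = start;
		req.range.len = len;
		req.mode = 0;
		/* First CONTINUE installs the PTE and should succeed. */
		if (ioctl(uffd, UFFDIO_CONTINUE, &req)) {
			perror("UFFDIO_CONTINUE");
			return -1;
		}

		/*
		 * Re-issue the identical request: the PTE is already
		 * installed, so the kernel should report -EEXIST via
		 * req.mapped rather than succeed or crash.
		 */
		req.mapped = 0;
		ret = ioctl(uffd, UFFDIO_CONTINUE, &req);
		if (ret >= 0 || req.mapped != -EEXIST) {
			fprintf(stderr, "CONTINUE error handling broken\n");
			return -1;
		}
		return 0;
	}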