From patchwork Wed May 17 19:25:33 2023
X-Patchwork-Submitter: Lorenzo Stoakes
X-Patchwork-Id: 13245599
From: Lorenzo Stoakes
To: linux-mm@kvack.org, linux-kernel@vger.kernel.org, Andrew Morton
Cc: Matthew Wilcox, David Hildenbrand, x86@kernel.org, linux-sgx@vger.kernel.org,
    amd-gfx@lists.freedesktop.org, dri-devel@lists.freedesktop.org,
    kvm@vger.kernel.org, Thomas Gleixner, Ingo Molnar, Borislav Petkov,
    Jarkko Sakkinen, "H. Peter Anvin", Xinhui Pan, David Airlie, Daniel Vetter,
    Dimitri Sivanich, Arnd Bergmann, Greg Kroah-Hartman, Paolo Bonzini,
    Jens Axboe, Pavel Begunkov, Jason Gunthorpe, John Hubbard, Christian König,
    Lorenzo Stoakes, Jason Gunthorpe, Christoph Hellwig, Sean Christopherson
Subject: [PATCH v6 1/6] mm/gup: remove unused vmas parameter from get_user_pages()
Date: Wed, 17 May 2023 20:25:33 +0100
Message-Id: <589e0c64794668ffc799651e8d85e703262b1e9d.1684350871.git.lstoakes@gmail.com>

No invocation of get_user_pages() uses the vmas parameter, so remove it.

The GUP API is confusing and caveated. Recent changes have done much to
improve that, however there is more we can do. Exporting vmas is a prime
target as the caller has to be extremely careful to preclude their use
after the mmap_lock has expired or otherwise be left with dangling
pointers.

Removing the vmas parameter focuses the GUP functions upon their primary
purpose - pinning (and outputting) pages as well as performing the actions
implied by the input flags.

This is part of a patch series aiming to remove the vmas parameter
altogether.
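[Editorial note, not part of the patch: a minimal sketch of a caller after this
change, modelled on the sgx hunk below. The user address "src" and the
surrounding error handling are assumptions taken from that caller; only the
dropped trailing vmas argument differs from before.]

	struct page *page;
	long ret;

	mmap_read_lock(current->mm);
	/* Pin a single page at user address src for read; no vmas array. */
	ret = get_user_pages(src, 1, 0, &page);
	mmap_read_unlock(current->mm);
	if (ret < 1)
		return -EFAULT;

	/* ... use the page ... */
	put_page(page);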
Suggested-by: Matthew Wilcox (Oracle)
Acked-by: Greg Kroah-Hartman
Acked-by: David Hildenbrand
Reviewed-by: Jason Gunthorpe
Acked-by: Christian König (for radeon parts)
Acked-by: Jarkko Sakkinen
Reviewed-by: Christoph Hellwig
Acked-by: Sean Christopherson (KVM)
Signed-off-by: Lorenzo Stoakes
---
 arch/x86/kernel/cpu/sgx/ioctl.c     | 2 +-
 drivers/gpu/drm/radeon/radeon_ttm.c | 2 +-
 drivers/misc/sgi-gru/grufault.c     | 2 +-
 include/linux/mm.h                  | 3 +--
 mm/gup.c                            | 9 +++------
 mm/gup_test.c                       | 5 ++---
 virt/kvm/kvm_main.c                 | 2 +-
 7 files changed, 10 insertions(+), 15 deletions(-)

diff --git a/arch/x86/kernel/cpu/sgx/ioctl.c b/arch/x86/kernel/cpu/sgx/ioctl.c
index 21ca0a831b70..5d390df21440 100644
--- a/arch/x86/kernel/cpu/sgx/ioctl.c
+++ b/arch/x86/kernel/cpu/sgx/ioctl.c
@@ -214,7 +214,7 @@ static int __sgx_encl_add_page(struct sgx_encl *encl,
 	if (!(vma->vm_flags & VM_MAYEXEC))
 		return -EACCES;
 
-	ret = get_user_pages(src, 1, 0, &src_page, NULL);
+	ret = get_user_pages(src, 1, 0, &src_page);
 	if (ret < 1)
 		return -EFAULT;
 
diff --git a/drivers/gpu/drm/radeon/radeon_ttm.c b/drivers/gpu/drm/radeon/radeon_ttm.c
index 2220cdf6a3f6..3a9db030f98f 100644
--- a/drivers/gpu/drm/radeon/radeon_ttm.c
+++ b/drivers/gpu/drm/radeon/radeon_ttm.c
@@ -359,7 +359,7 @@ static int radeon_ttm_tt_pin_userptr(struct ttm_device *bdev, struct ttm_tt *ttm
 		struct page **pages = ttm->pages + pinned;
 
 		r = get_user_pages(userptr, num_pages, write ? FOLL_WRITE : 0,
-				   pages, NULL);
+				   pages);
 		if (r < 0)
 			goto release_pages;
 
diff --git a/drivers/misc/sgi-gru/grufault.c b/drivers/misc/sgi-gru/grufault.c
index b836936e9747..378cf02a2aa1 100644
--- a/drivers/misc/sgi-gru/grufault.c
+++ b/drivers/misc/sgi-gru/grufault.c
@@ -185,7 +185,7 @@ static int non_atomic_pte_lookup(struct vm_area_struct *vma,
 #else
 	*pageshift = PAGE_SHIFT;
 #endif
-	if (get_user_pages(vaddr, 1, write ? FOLL_WRITE : 0, &page, NULL) <= 0)
+	if (get_user_pages(vaddr, 1, write ? FOLL_WRITE : 0, &page) <= 0)
 		return -EFAULT;
 	*paddr = page_to_phys(page);
 	put_page(page);
diff --git a/include/linux/mm.h b/include/linux/mm.h
index db3f66ed2f32..2c1a92bf5626 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -2382,8 +2382,7 @@ long pin_user_pages_remote(struct mm_struct *mm,
 			   unsigned int gup_flags, struct page **pages,
 			   struct vm_area_struct **vmas, int *locked);
 long get_user_pages(unsigned long start, unsigned long nr_pages,
-		    unsigned int gup_flags, struct page **pages,
-		    struct vm_area_struct **vmas);
+		    unsigned int gup_flags, struct page **pages);
 long pin_user_pages(unsigned long start, unsigned long nr_pages,
 		    unsigned int gup_flags, struct page **pages,
 		    struct vm_area_struct **vmas);
diff --git a/mm/gup.c b/mm/gup.c
index 90d9b65ff35c..b8189396f435 100644
--- a/mm/gup.c
+++ b/mm/gup.c
@@ -2294,8 +2294,6 @@ long get_user_pages_remote(struct mm_struct *mm,
 * @pages:      array that receives pointers to the pages pinned.
 *              Should be at least nr_pages long. Or NULL, if caller
 *              only intends to ensure the pages are faulted in.
- * @vmas:       array of pointers to vmas corresponding to each page.
- *              Or NULL if the caller does not require them.
 *
 * This is the same as get_user_pages_remote(), just with a less-flexible
 * calling convention where we assume that the mm being operated on belongs to
@@ -2303,16 +2301,15 @@ long get_user_pages_remote(struct mm_struct *mm,
 * obviously don't pass FOLL_REMOTE in here.
 */
 long get_user_pages(unsigned long start, unsigned long nr_pages,
-		    unsigned int gup_flags, struct page **pages,
-		    struct vm_area_struct **vmas)
+		    unsigned int gup_flags, struct page **pages)
 {
 	int locked = 1;
 
-	if (!is_valid_gup_args(pages, vmas, NULL, &gup_flags, FOLL_TOUCH))
+	if (!is_valid_gup_args(pages, NULL, NULL, &gup_flags, FOLL_TOUCH))
 		return -EINVAL;
 
 	return __get_user_pages_locked(current->mm, start, nr_pages, pages,
-				       vmas, &locked, gup_flags);
+				       NULL, &locked, gup_flags);
 }
 EXPORT_SYMBOL(get_user_pages);
 
diff --git a/mm/gup_test.c b/mm/gup_test.c
index 8ae7307a1bb6..9ba8ea23f84e 100644
--- a/mm/gup_test.c
+++ b/mm/gup_test.c
@@ -139,8 +139,7 @@ static int __gup_test_ioctl(unsigned int cmd,
 					    pages + i);
 			break;
 		case GUP_BASIC_TEST:
-			nr = get_user_pages(addr, nr, gup->gup_flags, pages + i,
-					    NULL);
+			nr = get_user_pages(addr, nr, gup->gup_flags, pages + i);
 			break;
 		case PIN_FAST_BENCHMARK:
 			nr = pin_user_pages_fast(addr, nr, gup->gup_flags,
@@ -161,7 +160,7 @@ static int __gup_test_ioctl(unsigned int cmd,
 						    pages + i, NULL);
 			else
 				nr = get_user_pages(addr, nr, gup->gup_flags,
-						    pages + i, NULL);
+						    pages + i);
 			break;
 		default:
 			ret = -EINVAL;
diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
index cb5c13eee193..eaa5bb8dbadc 100644
--- a/virt/kvm/kvm_main.c
+++ b/virt/kvm/kvm_main.c
@@ -2477,7 +2477,7 @@ static inline int check_user_page_hwpoison(unsigned long addr)
 {
 	int rc, flags = FOLL_HWPOISON | FOLL_WRITE;
 
-	rc = get_user_pages(addr, 1, flags, NULL, NULL);
+	rc = get_user_pages(addr, 1, flags, NULL);
 	return rc == -EHWPOISON;
 }

From patchwork Wed May 17 19:25:36 2023
X-Patchwork-Submitter: Lorenzo Stoakes
X-Patchwork-Id: 13245600
From: Lorenzo Stoakes
To: linux-mm@kvack.org, linux-kernel@vger.kernel.org, Andrew Morton, iommu@lists.linux.dev
Cc: Matthew Wilcox, David Hildenbrand, kvm@vger.kernel.org, Jason Gunthorpe,
    Kevin Tian, Joerg Roedel, Will Deacon, Robin Murphy, Alex Williamson,
    Jens Axboe, Pavel Begunkov, John Hubbard, Lorenzo Stoakes, Jason Gunthorpe,
    Christoph Hellwig
Subject: [PATCH v6 2/6] mm/gup: remove unused vmas parameter from pin_user_pages_remote()
Date: Wed, 17 May 2023 20:25:36 +0100
Message-Id: <28f000beb81e45bf538a2aaa77c90f5482b67a32.1684350871.git.lstoakes@gmail.com>

No invocation of pin_user_pages_remote() uses the vmas parameter, so
remove it.

This forms part of a larger patch set eliminating the use of the vmas
parameter altogether.
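[Editorial note, not part of the patch: a minimal sketch of a remote pin
after this change, mirroring the process_vm_access.c hunk below. The mm and
vaddr variables are assumed to be in scope; the locked protocol is unchanged,
only the vmas argument disappears.]

	int locked = 1;
	struct page *page;
	long pinned;

	mmap_read_lock(mm);
	/* Pin one page of the remote mm for writing; no vmas argument. */
	pinned = pin_user_pages_remote(mm, vaddr, 1, FOLL_WRITE, &page, &locked);
	if (locked)
		mmap_read_unlock(mm);
	if (pinned <= 0)
		return -EFAULT;

	/* ... access the page ... */
	unpin_user_page(page);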
Acked-by: David Hildenbrand
Reviewed-by: Jason Gunthorpe
Reviewed-by: Christoph Hellwig
Signed-off-by: Lorenzo Stoakes
---
 drivers/iommu/iommufd/pages.c    | 4 ++--
 drivers/vfio/vfio_iommu_type1.c  | 2 +-
 include/linux/mm.h               | 2 +-
 kernel/trace/trace_events_user.c | 2 +-
 mm/gup.c                         | 8 +++-----
 mm/process_vm_access.c           | 2 +-
 6 files changed, 9 insertions(+), 11 deletions(-)

diff --git a/drivers/iommu/iommufd/pages.c b/drivers/iommu/iommufd/pages.c
index 3c47846cc5ef..412ca96be128 100644
--- a/drivers/iommu/iommufd/pages.c
+++ b/drivers/iommu/iommufd/pages.c
@@ -786,7 +786,7 @@ static int pfn_reader_user_pin(struct pfn_reader_user *user,
 			user->locked = 1;
 		}
 		rc = pin_user_pages_remote(pages->source_mm, uptr, npages,
-					   user->gup_flags, user->upages, NULL,
+					   user->gup_flags, user->upages,
 					   &user->locked);
 	}
 	if (rc <= 0) {
@@ -1799,7 +1799,7 @@ static int iopt_pages_rw_page(struct iopt_pages *pages, unsigned long index,
 	rc = pin_user_pages_remote(
 		pages->source_mm, (uintptr_t)(pages->uptr + index * PAGE_SIZE),
 		1, (flags & IOMMUFD_ACCESS_RW_WRITE) ? FOLL_WRITE : 0, &page,
-		NULL, NULL);
+		NULL);
 	mmap_read_unlock(pages->source_mm);
 	if (rc != 1) {
 		if (WARN_ON(rc >= 0))
diff --git a/drivers/vfio/vfio_iommu_type1.c b/drivers/vfio/vfio_iommu_type1.c
index 3d4dd9420c30..3d2d9a944906 100644
--- a/drivers/vfio/vfio_iommu_type1.c
+++ b/drivers/vfio/vfio_iommu_type1.c
@@ -562,7 +562,7 @@ static int vaddr_get_pfns(struct mm_struct *mm, unsigned long vaddr,
 
 	mmap_read_lock(mm);
 	ret = pin_user_pages_remote(mm, vaddr, npages, flags | FOLL_LONGTERM,
-				    pages, NULL, NULL);
+				    pages, NULL);
 	if (ret > 0) {
 		int i;
 
diff --git a/include/linux/mm.h b/include/linux/mm.h
index 2c1a92bf5626..8ea82e9e7719 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -2380,7 +2380,7 @@ long get_user_pages_remote(struct mm_struct *mm,
 long pin_user_pages_remote(struct mm_struct *mm,
 			   unsigned long start, unsigned long nr_pages,
 			   unsigned int gup_flags, struct page **pages,
-			   struct vm_area_struct **vmas, int *locked);
+			   int *locked);
 long get_user_pages(unsigned long start, unsigned long nr_pages,
 		    unsigned int gup_flags, struct page **pages);
 long pin_user_pages(unsigned long start, unsigned long nr_pages,
diff --git a/kernel/trace/trace_events_user.c b/kernel/trace/trace_events_user.c
index b1ecd7677642..bdc2666e8d39 100644
--- a/kernel/trace/trace_events_user.c
+++ b/kernel/trace/trace_events_user.c
@@ -406,7 +406,7 @@ static int user_event_enabler_write(struct user_event_mm *mm,
 		return -EBUSY;
 
 	ret = pin_user_pages_remote(mm->mm, uaddr, 1, FOLL_WRITE | FOLL_NOFAULT,
-				    &page, NULL, NULL);
+				    &page, NULL);
 
 	if (unlikely(ret <= 0)) {
 		if (!fixup_fault)
diff --git a/mm/gup.c b/mm/gup.c
index b8189396f435..ce78a5186dbb 100644
--- a/mm/gup.c
+++ b/mm/gup.c
@@ -3243,8 +3243,6 @@ EXPORT_SYMBOL_GPL(pin_user_pages_fast);
 * @gup_flags:	flags modifying lookup behaviour
 * @pages:	array that receives pointers to the pages pinned.
 *		Should be at least nr_pages long.
- * @vmas:	array of pointers to vmas corresponding to each page.
- *		Or NULL if the caller does not require them.
 * @locked:	pointer to lock flag indicating whether lock is held and
 *		subsequently whether VM_FAULT_RETRY functionality can be
 *		utilised. Lock must initially be held.
@@ -3259,14 +3257,14 @@ EXPORT_SYMBOL_GPL(pin_user_pages_fast);
 long pin_user_pages_remote(struct mm_struct *mm,
 			   unsigned long start, unsigned long nr_pages,
 			   unsigned int gup_flags, struct page **pages,
-			   struct vm_area_struct **vmas, int *locked)
+			   int *locked)
 {
 	int local_locked = 1;
 
-	if (!is_valid_gup_args(pages, vmas, locked, &gup_flags,
+	if (!is_valid_gup_args(pages, NULL, locked, &gup_flags,
 			       FOLL_PIN | FOLL_TOUCH | FOLL_REMOTE))
 		return 0;
 
-	return __gup_longterm_locked(mm, start, nr_pages, pages, vmas,
+	return __gup_longterm_locked(mm, start, nr_pages, pages, NULL,
 				     locked ? locked : &local_locked, gup_flags);
 }
diff --git a/mm/process_vm_access.c b/mm/process_vm_access.c
index 78dfaf9e8990..0523edab03a6 100644
--- a/mm/process_vm_access.c
+++ b/mm/process_vm_access.c
@@ -104,7 +104,7 @@ static int process_vm_rw_single_vec(unsigned long addr,
 		mmap_read_lock(mm);
 		pinned_pages = pin_user_pages_remote(mm, pa, pinned_pages,
 						     flags, process_pages,
-						     NULL, &locked);
+						     &locked);
 		if (locked)
 			mmap_read_unlock(mm);
 		if (pinned_pages <= 0)

From patchwork Wed May 17 19:25:39 2023
X-Patchwork-Submitter: Lorenzo Stoakes
X-Patchwork-Id: 13245601
From: Lorenzo Stoakes
To: linux-mm@kvack.org, linux-kernel@vger.kernel.org, Andrew Morton
Cc: Matthew Wilcox, David Hildenbrand, linux-arm-kernel@lists.infradead.org,
    kvm@vger.kernel.org, linux-s390@vger.kernel.org, linux-fsdevel@vger.kernel.org,
    linux-perf-users@vger.kernel.org, linux-security-module@vger.kernel.org,
    Catalin Marinas, Will Deacon, Christian Borntraeger, Janosch Frank,
    Claudio Imbrenda, Heiko Carstens, Vasily Gorbik, Alexander Gordeev,
    Sven Schnelle, Eric Biederman, Kees Cook, Alexander Viro, Christian Brauner,
    Peter Zijlstra, Ingo Molnar, Arnaldo Carvalho de Melo, Mark Rutland,
    Alexander Shishkin, Jiri Olsa, Namhyung Kim, Ian Rogers, Adrian Hunter,
    Kentaro Takeda, Tetsuo Handa, Paul Moore, James Morris, "Serge E. Hallyn",
    Paolo Bonzini, Jens Axboe, Pavel Begunkov, Jason Gunthorpe, John Hubbard,
    Dan Carpenter, Lorenzo Stoakes, Christoph Hellwig
Subject: [PATCH v6 3/6] mm/gup: remove vmas parameter from get_user_pages_remote()
Date: Wed, 17 May 2023 20:25:39 +0100

The only instances of get_user_pages_remote() invocations which used the
vmas parameter were for a single page which can instead simply look up the
VMA directly. In particular:

- __update_ref_ctr() looked up the VMA but did nothing with it so we
  simply remove it.

- __access_remote_vm() was already using vma_lookup() when the original
  lookup failed so by doing the lookup directly this also de-duplicates
  the code.

We are able to perform these VMA operations as we already hold the
mmap_lock in order to be able to call get_user_pages_remote().

As part of this work we add get_user_page_vma_remote() which abstracts the
VMA lookup, error handling and decrementing the page reference count
should the VMA lookup fail.

This forms part of a broader set of patches intended to eliminate the vmas
parameter altogether.
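[Editorial note, not part of the patch: a minimal sketch of the new helper in
use, mirroring the arm64 mte.c hunk below. The mm, addr and gup_flags
variables are assumed to be in scope and the mmap_lock held for read.]

	struct vm_area_struct *vma;
	struct page *page;

	/* Pins the page and looks up the VMA covering addr in one step. */
	page = get_user_page_vma_remote(mm, addr, gup_flags, &vma);
	if (IS_ERR_OR_NULL(page))
		return page == NULL ? -EIO : PTR_ERR(page);

	/* page and vma are both valid here, while mmap_lock remains held. */
	/* ... use page and vma ... */
	put_page(page);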
Reviewed-by: Catalin Marinas (for arm64)
Acked-by: David Hildenbrand
Reviewed-by: Janosch Frank (for s390)
Reviewed-by: Christoph Hellwig
Signed-off-by: Lorenzo Stoakes
---
 arch/arm64/kernel/mte.c   | 17 +++++++++--------
 arch/s390/kvm/interrupt.c |  2 +-
 fs/exec.c                 |  2 +-
 include/linux/mm.h        | 34 +++++++++++++++++++++++++++++++---
 kernel/events/uprobes.c   | 13 +++++--------
 mm/gup.c                  | 12 ++++--------
 mm/memory.c               | 20 ++++++++++----------
 mm/rmap.c                 |  2 +-
 security/tomoyo/domain.c  |  2 +-
 virt/kvm/async_pf.c       |  3 +--
 10 files changed, 64 insertions(+), 43 deletions(-)

diff --git a/arch/arm64/kernel/mte.c b/arch/arm64/kernel/mte.c
index f5bcb0dc6267..cc793c246653 100644
--- a/arch/arm64/kernel/mte.c
+++ b/arch/arm64/kernel/mte.c
@@ -419,10 +419,9 @@ long get_mte_ctrl(struct task_struct *task)
 static int __access_remote_tags(struct mm_struct *mm, unsigned long addr,
 				struct iovec *kiov, unsigned int gup_flags)
 {
-	struct vm_area_struct *vma;
 	void __user *buf = kiov->iov_base;
 	size_t len = kiov->iov_len;
-	int ret;
+	int err = 0;
 	int write = gup_flags & FOLL_WRITE;
 
 	if (!access_ok(buf, len))
@@ -432,14 +431,16 @@ static int __access_remote_tags(struct mm_struct *mm, unsigned long addr,
 		return -EIO;
 
 	while (len) {
+		struct vm_area_struct *vma;
 		unsigned long tags, offset;
 		void *maddr;
-		struct page *page = NULL;
+		struct page *page = get_user_page_vma_remote(mm, addr,
+							     gup_flags, &vma);
 
-		ret = get_user_pages_remote(mm, addr, 1, gup_flags, &page,
-					    &vma, NULL);
-		if (ret <= 0)
+		if (IS_ERR_OR_NULL(page)) {
+			err = page == NULL ? -EIO : PTR_ERR(page);
 			break;
+		}
 
 		/*
 		 * Only copy tags if the page has been mapped as PROT_MTE
@@ -449,7 +450,7 @@ static int __access_remote_tags(struct mm_struct *mm, unsigned long addr,
 		 * was never mapped with PROT_MTE.
 		 */
 		if (!(vma->vm_flags & VM_MTE)) {
-			ret = -EOPNOTSUPP;
+			err = -EOPNOTSUPP;
 			put_page(page);
 			break;
 		}
@@ -482,7 +483,7 @@ static int __access_remote_tags(struct mm_struct *mm, unsigned long addr,
 	kiov->iov_len = buf - kiov->iov_base;
 	if (!kiov->iov_len) {
 		/* check for error accessing the tracee's address space */
-		if (ret <= 0)
+		if (err)
 			return -EIO;
 		else
 			return -EFAULT;
diff --git a/arch/s390/kvm/interrupt.c b/arch/s390/kvm/interrupt.c
index da6dac36e959..9bd0a873f3b1 100644
--- a/arch/s390/kvm/interrupt.c
+++ b/arch/s390/kvm/interrupt.c
@@ -2777,7 +2777,7 @@ static struct page *get_map_page(struct kvm *kvm, u64 uaddr)
 
 	mmap_read_lock(kvm->mm);
 	get_user_pages_remote(kvm->mm, uaddr, 1, FOLL_WRITE,
-			      &page, NULL, NULL);
+			      &page, NULL);
 	mmap_read_unlock(kvm->mm);
 	return page;
 }
diff --git a/fs/exec.c b/fs/exec.c
index a466e797c8e2..25c65b64544b 100644
--- a/fs/exec.c
+++ b/fs/exec.c
@@ -220,7 +220,7 @@ static struct page *get_arg_page(struct linux_binprm *bprm, unsigned long pos,
 	 */
 	mmap_read_lock(bprm->mm);
 	ret = get_user_pages_remote(bprm->mm, pos, 1, gup_flags,
-				    &page, NULL, NULL);
+				    &page, NULL);
 	mmap_read_unlock(bprm->mm);
 	if (ret <= 0)
 		return NULL;
diff --git a/include/linux/mm.h b/include/linux/mm.h
index 8ea82e9e7719..679b41ef7a6d 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -2366,6 +2366,9 @@ static inline void unmap_shared_mapping_range(struct address_space *mapping,
 	unmap_mapping_range(mapping, holebegin, holelen, 0);
 }
 
+static inline struct vm_area_struct *vma_lookup(struct mm_struct *mm,
+						unsigned long addr);
+
 extern int access_process_vm(struct task_struct *tsk, unsigned long addr,
 		void *buf, int len, unsigned int gup_flags);
 extern int access_remote_vm(struct mm_struct *mm, unsigned long addr,
@@ -2374,13 +2377,38 @@ extern int __access_remote_vm(struct mm_struct *mm, unsigned long addr,
 		void *buf, int len, unsigned int gup_flags);
 
 long get_user_pages_remote(struct mm_struct *mm,
-			   unsigned long start, unsigned long nr_pages,
-			   unsigned int gup_flags, struct page **pages,
-			   struct vm_area_struct **vmas, int *locked);
+			   unsigned long start, unsigned long nr_pages,
+			   unsigned int gup_flags, struct page **pages,
+			   int *locked);
 long pin_user_pages_remote(struct mm_struct *mm,
 			   unsigned long start, unsigned long nr_pages,
 			   unsigned int gup_flags, struct page **pages,
 			   int *locked);
+
+static inline struct page *get_user_page_vma_remote(struct mm_struct *mm,
+						     unsigned long addr,
+						     int gup_flags,
+						     struct vm_area_struct **vmap)
+{
+	struct page *page;
+	struct vm_area_struct *vma;
+	int got = get_user_pages_remote(mm, addr, 1, gup_flags, &page, NULL);
+
+	if (got < 0)
+		return ERR_PTR(got);
+	if (got == 0)
+		return NULL;
+
+	vma = vma_lookup(mm, addr);
+	if (WARN_ON_ONCE(!vma)) {
+		put_page(page);
+		return ERR_PTR(-EINVAL);
+	}
+
+	*vmap = vma;
+	return page;
+}
+
 long get_user_pages(unsigned long start, unsigned long nr_pages,
 		    unsigned int gup_flags, struct page **pages);
 long pin_user_pages(unsigned long start, unsigned long nr_pages,
diff --git a/kernel/events/uprobes.c b/kernel/events/uprobes.c
index 59887c69d54c..cac3aef7c6f7 100644
--- a/kernel/events/uprobes.c
+++ b/kernel/events/uprobes.c
@@ -365,7 +365,6 @@ __update_ref_ctr(struct mm_struct *mm, unsigned long vaddr, short d)
 {
 	void *kaddr;
 	struct page *page;
-	struct vm_area_struct *vma;
 	int ret;
 	short *ptr;
 
@@ -373,7 +372,7 @@ __update_ref_ctr(struct mm_struct *mm, unsigned long vaddr, short d)
 		return -EINVAL;
 
 	ret = get_user_pages_remote(mm, vaddr, 1,
-			FOLL_WRITE, &page, &vma, NULL);
+			FOLL_WRITE, &page, NULL);
 	if (unlikely(ret <= 0)) {
 		/*
 		 * We are asking for 1 page. If get_user_pages_remote() fails,
@@ -474,10 +473,9 @@ int uprobe_write_opcode(struct arch_uprobe *auprobe, struct mm_struct *mm,
 	if (is_register)
 		gup_flags |= FOLL_SPLIT_PMD;
 	/* Read the page with vaddr into memory */
-	ret = get_user_pages_remote(mm, vaddr, 1, gup_flags,
-				    &old_page, &vma, NULL);
-	if (ret <= 0)
-		return ret;
+	old_page = get_user_page_vma_remote(mm, vaddr, gup_flags, &vma);
+	if (IS_ERR_OR_NULL(old_page))
+		return PTR_ERR(old_page);
 
 	ret = verify_opcode(old_page, vaddr, &opcode);
 	if (ret <= 0)
@@ -2027,8 +2025,7 @@ static int is_trap_at_addr(struct mm_struct *mm, unsigned long vaddr)
 	 * but we treat this as a 'remote' access since it is
 	 * essentially a kernel access to the memory.
 	 */
-	result = get_user_pages_remote(mm, vaddr, 1, FOLL_FORCE, &page,
-				       NULL, NULL);
+	result = get_user_pages_remote(mm, vaddr, 1, FOLL_FORCE, &page, NULL);
 	if (result < 0)
 		return result;
diff --git a/mm/gup.c b/mm/gup.c
index ce78a5186dbb..1493cc8dd526 100644
--- a/mm/gup.c
+++ b/mm/gup.c
@@ -2208,8 +2208,6 @@ static bool is_valid_gup_args(struct page **pages, struct vm_area_struct **vmas,
 * @pages:	array that receives pointers to the pages pinned.
 *		Should be at least nr_pages long. Or NULL, if caller
 *		only intends to ensure the pages are faulted in.
- * @vmas:	array of pointers to vmas corresponding to each page.
- *		Or NULL if the caller does not require them.
 * @locked:	pointer to lock flag indicating whether lock is held and
 *		subsequently whether VM_FAULT_RETRY functionality can be
 *		utilised. Lock must initially be held.
@@ -2224,8 +2222,6 @@ static bool is_valid_gup_args(struct page **pages, struct vm_area_struct **vmas,
 *
 * The caller is responsible for releasing returned @pages, via put_page().
 *
- * @vmas are valid only as long as mmap_lock is held.
- *
 * Must be called with mmap_lock held for read or write.
 *
 * get_user_pages_remote walks a process's page tables and takes a reference
@@ -2262,15 +2258,15 @@ static bool is_valid_gup_args(struct page **pages, struct vm_area_struct **vmas,
 long get_user_pages_remote(struct mm_struct *mm,
 			   unsigned long start, unsigned long nr_pages,
 			   unsigned int gup_flags, struct page **pages,
-			   struct vm_area_struct **vmas, int *locked)
+			   int *locked)
 {
 	int local_locked = 1;
 
-	if (!is_valid_gup_args(pages, vmas, locked, &gup_flags,
+	if (!is_valid_gup_args(pages, NULL, locked, &gup_flags,
 			       FOLL_TOUCH | FOLL_REMOTE))
 		return -EINVAL;
 
-	return __get_user_pages_locked(mm, start, nr_pages, pages, vmas,
+	return __get_user_pages_locked(mm, start, nr_pages, pages, NULL,
 				       locked ? locked : &local_locked,
 				       gup_flags);
 }
@@ -2280,7 +2276,7 @@ EXPORT_SYMBOL(get_user_pages_remote);
 long get_user_pages_remote(struct mm_struct *mm,
 			   unsigned long start, unsigned long nr_pages,
 			   unsigned int gup_flags, struct page **pages,
-			   struct vm_area_struct **vmas, int *locked)
+			   int *locked)
 {
 	return 0;
 }
diff --git a/mm/memory.c b/mm/memory.c
index 146bb94764f8..8358f3b853f2 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -5590,7 +5590,6 @@ EXPORT_SYMBOL_GPL(generic_access_phys);
 int __access_remote_vm(struct mm_struct *mm, unsigned long addr, void *buf,
 		       int len, unsigned int gup_flags)
 {
-	struct vm_area_struct *vma;
 	void *old_buf = buf;
 	int write = gup_flags & FOLL_WRITE;
 
@@ -5599,29 +5598,30 @@ int __access_remote_vm(struct mm_struct *mm, unsigned long addr, void *buf,
 	/* ignore errors, just check how much was successfully transferred */
 	while (len) {
-		int bytes, ret, offset;
+		int bytes, offset;
 		void *maddr;
-		struct page *page = NULL;
+		struct vm_area_struct *vma = NULL;
+		struct page *page = get_user_page_vma_remote(mm, addr,
+							     gup_flags, &vma);
 
-		ret = get_user_pages_remote(mm, addr, 1,
-				gup_flags, &page, &vma, NULL);
-		if (ret <= 0) {
+		if (IS_ERR_OR_NULL(page)) {
 #ifndef CONFIG_HAVE_IOREMAP_PROT
 			break;
 #else
+			int res = 0;
+
 			/*
 			 * Check if this is a VM_IO | VM_PFNMAP VMA, which
 			 * we can access using slightly different code.
 			 */
-			vma = vma_lookup(mm, addr);
 			if (!vma)
 				break;
 			if (vma->vm_ops && vma->vm_ops->access)
-				ret = vma->vm_ops->access(vma, addr, buf,
+				res = vma->vm_ops->access(vma, addr, buf,
 							  len, write);
-			if (ret <= 0)
+			if (res <= 0)
 				break;
-			bytes = ret;
+			bytes = res;
 #endif
 		} else {
 			bytes = len;
diff --git a/mm/rmap.c b/mm/rmap.c
index b42fc0389c24..ae127f60a4fb 100644
--- a/mm/rmap.c
+++ b/mm/rmap.c
@@ -2328,7 +2328,7 @@ int make_device_exclusive_range(struct mm_struct *mm, unsigned long start,
 
 	npages = get_user_pages_remote(mm, start, npages,
 				       FOLL_GET | FOLL_WRITE | FOLL_SPLIT_PMD,
-				       pages, NULL, NULL);
+				       pages, NULL);
 	if (npages < 0)
 		return npages;
 
diff --git a/security/tomoyo/domain.c b/security/tomoyo/domain.c
index 31af29f669d2..ac20c0bdff9d 100644
--- a/security/tomoyo/domain.c
+++ b/security/tomoyo/domain.c
@@ -916,7 +916,7 @@ bool tomoyo_dump_page(struct linux_binprm *bprm, unsigned long pos,
 	 */
 	mmap_read_lock(bprm->mm);
 	ret = get_user_pages_remote(bprm->mm, pos, 1,
-				    FOLL_FORCE, &page, NULL, NULL);
+				    FOLL_FORCE, &page, NULL);
 	mmap_read_unlock(bprm->mm);
 	if (ret <= 0)
 		return false;
diff --git a/virt/kvm/async_pf.c b/virt/kvm/async_pf.c
index 9bfe1d6f6529..e033c79d528e 100644
--- a/virt/kvm/async_pf.c
+++ b/virt/kvm/async_pf.c
@@ -61,8 +61,7 @@ static void async_pf_execute(struct work_struct *work)
 	 * access remotely.
 	 */
 	mmap_read_lock(mm);
-	get_user_pages_remote(mm, addr, 1, FOLL_WRITE, NULL, NULL,
-			      &locked);
+	get_user_pages_remote(mm, addr, 1, FOLL_WRITE, NULL, &locked);
 	if (locked)
 		mmap_read_unlock(mm);

From patchwork Wed May 17 19:25:42 2023
X-Patchwork-Submitter: Lorenzo Stoakes
X-Patchwork-Id: 13245602
From: Lorenzo Stoakes
To: linux-mm@kvack.org, linux-kernel@vger.kernel.org, Andrew Morton
Cc: Matthew Wilcox, David Hildenbrand, Jens Axboe, Pavel Begunkov,
    io-uring@vger.kernel.org, Jason Gunthorpe, John Hubbard, Lorenzo Stoakes,
    Christoph Hellwig
Subject: [PATCH v6 4/6] io_uring: rsrc: delegate VMA file-backed check to GUP
Date: Wed, 17 May 2023 20:25:42 +0100

Now that GUP itself explicitly checks FOLL_LONGTERM pin_user_pages() calls
against broken file-backed mappings (see "mm/gup: disallow FOLL_LONGTERM
GUP-nonfast writing to file-backed mappings"), there is no need to
explicitly check VMAs for this condition, so simply remove this logic from
io_uring altogether.
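[Editorial note, not part of the patch: a minimal sketch of the simplified
io_uring pin path, using the names from io_pin_pages() in the diff below.
With the earlier GUP change in place the pin call itself fails for disallowed
file-backed mappings, so no per-VMA walk is needed afterwards.]

	mmap_read_lock(current->mm);
	/* GUP rejects broken file-backed mappings for FOLL_LONGTERM, so a
	 * short or negative result already covers the old per-VMA check. */
	pret = pin_user_pages(ubuf, nr_pages, FOLL_WRITE | FOLL_LONGTERM,
			      pages, NULL);
	if (pret == nr_pages)
		*npages = nr_pages;
	else
		ret = pret < 0 ? pret : -EFAULT;
	mmap_read_unlock(current->mm);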
Reviewed-by: Christoph Hellwig
Reviewed-by: Jens Axboe
Reviewed-by: David Hildenbrand
Signed-off-by: Lorenzo Stoakes
---
 io_uring/rsrc.c | 34 ++++++----------------------------
 1 file changed, 6 insertions(+), 28 deletions(-)

diff --git a/io_uring/rsrc.c b/io_uring/rsrc.c
index d46f72a5ef73..b6451f8bc5d5 100644
--- a/io_uring/rsrc.c
+++ b/io_uring/rsrc.c
@@ -1030,9 +1030,8 @@ static int io_buffer_account_pin(struct io_ring_ctx *ctx, struct page **pages,
 struct page **io_pin_pages(unsigned long ubuf, unsigned long len, int *npages)
 {
 	unsigned long start, end, nr_pages;
-	struct vm_area_struct **vmas = NULL;
 	struct page **pages = NULL;
-	int i, pret, ret = -ENOMEM;
+	int pret, ret = -ENOMEM;
 
 	end = (ubuf + len + PAGE_SIZE - 1) >> PAGE_SHIFT;
 	start = ubuf >> PAGE_SHIFT;
@@ -1042,45 +1041,24 @@ struct page **io_pin_pages(unsigned long ubuf, unsigned long len, int *npages)
 	if (!pages)
 		goto done;
 
-	vmas = kvmalloc_array(nr_pages, sizeof(struct vm_area_struct *),
-			      GFP_KERNEL);
-	if (!vmas)
-		goto done;
-
 	ret = 0;
 	mmap_read_lock(current->mm);
 	pret = pin_user_pages(ubuf, nr_pages, FOLL_WRITE | FOLL_LONGTERM,
-			      pages, vmas);
-	if (pret == nr_pages) {
-		/* don't support file backed memory */
-		for (i = 0; i < nr_pages; i++) {
-			struct vm_area_struct *vma = vmas[i];
-
-			if (vma_is_shmem(vma))
-				continue;
-			if (vma->vm_file &&
-			    !is_file_hugepages(vma->vm_file)) {
-				ret = -EOPNOTSUPP;
-				break;
-			}
-		}
+			      pages, NULL);
+	if (pret == nr_pages)
 		*npages = nr_pages;
-	} else {
+	else
 		ret = pret < 0 ? pret : -EFAULT;
-	}
+
 	mmap_read_unlock(current->mm);
 	if (ret) {
-		/*
-		 * if we did partial map, or found file backed vmas,
-		 * release any pages we did get
-		 */
+		/* if we did partial map, release any pages we did get */
 		if (pret > 0)
 			unpin_user_pages(pages, pret);
 		goto done;
 	}
 	ret = 0;
 done:
-	kvfree(vmas);
 	if (ret < 0) {
 		kvfree(pages);
 		pages = ERR_PTR(ret);

From patchwork Wed May 17 19:25:45 2023
X-Patchwork-Submitter: Lorenzo Stoakes
X-Patchwork-Id: 13245603
From: Lorenzo Stoakes
To: linux-mm@kvack.org, linux-kernel@vger.kernel.org, Andrew Morton
Cc: Matthew Wilcox, David Hildenbrand, Michael Ellerman, Nicholas Piggin, Christophe Leroy, Dennis Dalessandro, Jason Gunthorpe, Leon Romanovsky, Christian Benvenuti, Nelson Escobar, Bernard Metzler, Mauro Carvalho Chehab, "Michael S. Tsirkin", Jason Wang, Jens Axboe, Pavel Begunkov, Bjorn Topel, Magnus Karlsson, Maciej Fijalkowski, Jonathan Lemon, "David S. Miller", Eric Dumazet, Jakub Kicinski, Paolo Abeni, Alexei Starovoitov, Daniel Borkmann, Jesper Dangaard Brouer, John Fastabend, linuxppc-dev@lists.ozlabs.org, linux-rdma@vger.kernel.org, linux-media@vger.kernel.org, virtualization@lists.linux-foundation.org, kvm@vger.kernel.org, netdev@vger.kernel.org, io-uring@vger.kernel.org, bpf@vger.kernel.org, John Hubbard, Lorenzo Stoakes, Christoph Hellwig, Sakari Ailus
Subject: [PATCH v6 5/6] mm/gup: remove vmas parameter from pin_user_pages()
Date: Wed, 17 May 2023 20:25:45 +0100
Message-Id: <195a99ae949c9f5cb589d2222b736ced96ec199a.1684350871.git.lstoakes@gmail.com>
X-Mailer: git-send-email 2.40.1

We are now in a position where no caller of pin_user_pages() requires the vmas parameter at all, so eliminate this parameter from the function and all callers.
This clears the way to removing the vmas parameter from GUP altogether. Acked-by: David Hildenbrand Acked-by: Dennis Dalessandro (for qib) Reviewed-by: Christoph Hellwig Acked-by: Sakari Ailus (for drivers/media) Signed-off-by: Lorenzo Stoakes --- arch/powerpc/mm/book3s64/iommu_api.c | 2 +- drivers/infiniband/hw/qib/qib_user_pages.c | 2 +- drivers/infiniband/hw/usnic/usnic_uiom.c | 2 +- drivers/infiniband/sw/siw/siw_mem.c | 2 +- drivers/media/v4l2-core/videobuf-dma-sg.c | 2 +- drivers/vdpa/vdpa_user/vduse_dev.c | 2 +- drivers/vhost/vdpa.c | 2 +- include/linux/mm.h | 3 +-- io_uring/rsrc.c | 2 +- mm/gup.c | 9 +++------ mm/gup_test.c | 9 ++++----- net/xdp/xdp_umem.c | 2 +- 12 files changed, 17 insertions(+), 22 deletions(-) diff --git a/arch/powerpc/mm/book3s64/iommu_api.c b/arch/powerpc/mm/book3s64/iommu_api.c index 81d7185e2ae8..d19fb1f3007d 100644 --- a/arch/powerpc/mm/book3s64/iommu_api.c +++ b/arch/powerpc/mm/book3s64/iommu_api.c @@ -105,7 +105,7 @@ static long mm_iommu_do_alloc(struct mm_struct *mm, unsigned long ua, ret = pin_user_pages(ua + (entry << PAGE_SHIFT), n, FOLL_WRITE | FOLL_LONGTERM, - mem->hpages + entry, NULL); + mem->hpages + entry); if (ret == n) { pinned += n; continue; diff --git a/drivers/infiniband/hw/qib/qib_user_pages.c b/drivers/infiniband/hw/qib/qib_user_pages.c index f693bc753b6b..1bb7507325bc 100644 --- a/drivers/infiniband/hw/qib/qib_user_pages.c +++ b/drivers/infiniband/hw/qib/qib_user_pages.c @@ -111,7 +111,7 @@ int qib_get_user_pages(unsigned long start_page, size_t num_pages, ret = pin_user_pages(start_page + got * PAGE_SIZE, num_pages - got, FOLL_LONGTERM | FOLL_WRITE, - p + got, NULL); + p + got); if (ret < 0) { mmap_read_unlock(current->mm); goto bail_release; diff --git a/drivers/infiniband/hw/usnic/usnic_uiom.c b/drivers/infiniband/hw/usnic/usnic_uiom.c index 2a5cac2658ec..84e0f41e7dfa 100644 --- a/drivers/infiniband/hw/usnic/usnic_uiom.c +++ b/drivers/infiniband/hw/usnic/usnic_uiom.c @@ -140,7 +140,7 @@ static int usnic_uiom_get_pages(unsigned long addr, size_t size, int writable, ret = pin_user_pages(cur_base, min_t(unsigned long, npages, PAGE_SIZE / sizeof(struct page *)), - gup_flags, page_list, NULL); + gup_flags, page_list); if (ret < 0) goto out; diff --git a/drivers/infiniband/sw/siw/siw_mem.c b/drivers/infiniband/sw/siw/siw_mem.c index f51ab2ccf151..e6e25f15567d 100644 --- a/drivers/infiniband/sw/siw/siw_mem.c +++ b/drivers/infiniband/sw/siw/siw_mem.c @@ -422,7 +422,7 @@ struct siw_umem *siw_umem_get(u64 start, u64 len, bool writable) umem->page_chunk[i].plist = plist; while (nents) { rv = pin_user_pages(first_page_va, nents, foll_flags, - plist, NULL); + plist); if (rv < 0) goto out_sem_up; diff --git a/drivers/media/v4l2-core/videobuf-dma-sg.c b/drivers/media/v4l2-core/videobuf-dma-sg.c index 53001532e8e3..405b89ea1054 100644 --- a/drivers/media/v4l2-core/videobuf-dma-sg.c +++ b/drivers/media/v4l2-core/videobuf-dma-sg.c @@ -180,7 +180,7 @@ static int videobuf_dma_init_user_locked(struct videobuf_dmabuf *dma, data, size, dma->nr_pages); err = pin_user_pages(data & PAGE_MASK, dma->nr_pages, gup_flags, - dma->pages, NULL); + dma->pages); if (err != dma->nr_pages) { dma->nr_pages = (err >= 0) ? 
err : 0; diff --git a/drivers/vdpa/vdpa_user/vduse_dev.c b/drivers/vdpa/vdpa_user/vduse_dev.c index de97e38c3b82..4d4405f058e8 100644 --- a/drivers/vdpa/vdpa_user/vduse_dev.c +++ b/drivers/vdpa/vdpa_user/vduse_dev.c @@ -1052,7 +1052,7 @@ static int vduse_dev_reg_umem(struct vduse_dev *dev, goto out; pinned = pin_user_pages(uaddr, npages, FOLL_LONGTERM | FOLL_WRITE, - page_list, NULL); + page_list); if (pinned != npages) { ret = pinned < 0 ? pinned : -ENOMEM; goto out; diff --git a/drivers/vhost/vdpa.c b/drivers/vhost/vdpa.c index 8c1aefc865f0..61223fcbe82b 100644 --- a/drivers/vhost/vdpa.c +++ b/drivers/vhost/vdpa.c @@ -983,7 +983,7 @@ static int vhost_vdpa_pa_map(struct vhost_vdpa *v, while (npages) { sz2pin = min_t(unsigned long, npages, list_size); pinned = pin_user_pages(cur_base, sz2pin, - gup_flags, page_list, NULL); + gup_flags, page_list); if (sz2pin != pinned) { if (pinned < 0) { ret = pinned; diff --git a/include/linux/mm.h b/include/linux/mm.h index 679b41ef7a6d..db09c7062965 100644 --- a/include/linux/mm.h +++ b/include/linux/mm.h @@ -2412,8 +2412,7 @@ static inline struct page *get_user_page_vma_remote(struct mm_struct *mm, long get_user_pages(unsigned long start, unsigned long nr_pages, unsigned int gup_flags, struct page **pages); long pin_user_pages(unsigned long start, unsigned long nr_pages, - unsigned int gup_flags, struct page **pages, - struct vm_area_struct **vmas); + unsigned int gup_flags, struct page **pages); long get_user_pages_unlocked(unsigned long start, unsigned long nr_pages, struct page **pages, unsigned int gup_flags); long pin_user_pages_unlocked(unsigned long start, unsigned long nr_pages, diff --git a/io_uring/rsrc.c b/io_uring/rsrc.c index b6451f8bc5d5..b56bda46a9eb 100644 --- a/io_uring/rsrc.c +++ b/io_uring/rsrc.c @@ -1044,7 +1044,7 @@ struct page **io_pin_pages(unsigned long ubuf, unsigned long len, int *npages) ret = 0; mmap_read_lock(current->mm); pret = pin_user_pages(ubuf, nr_pages, FOLL_WRITE | FOLL_LONGTERM, - pages, NULL); + pages); if (pret == nr_pages) *npages = nr_pages; else diff --git a/mm/gup.c b/mm/gup.c index 1493cc8dd526..36701b5f0123 100644 --- a/mm/gup.c +++ b/mm/gup.c @@ -3274,8 +3274,6 @@ EXPORT_SYMBOL(pin_user_pages_remote); * @gup_flags: flags modifying lookup behaviour * @pages: array that receives pointers to the pages pinned. * Should be at least nr_pages long. - * @vmas: array of pointers to vmas corresponding to each page. - * Or NULL if the caller does not require them. * * Nearly the same as get_user_pages(), except that FOLL_TOUCH is not set, and * FOLL_PIN is set. @@ -3284,15 +3282,14 @@ EXPORT_SYMBOL(pin_user_pages_remote); * see Documentation/core-api/pin_user_pages.rst for details. 
*/ long pin_user_pages(unsigned long start, unsigned long nr_pages, - unsigned int gup_flags, struct page **pages, - struct vm_area_struct **vmas) + unsigned int gup_flags, struct page **pages) { int locked = 1; - if (!is_valid_gup_args(pages, vmas, NULL, &gup_flags, FOLL_PIN)) + if (!is_valid_gup_args(pages, NULL, NULL, &gup_flags, FOLL_PIN)) return 0; return __gup_longterm_locked(current->mm, start, nr_pages, - pages, vmas, &locked, gup_flags); + pages, NULL, &locked, gup_flags); } EXPORT_SYMBOL(pin_user_pages); diff --git a/mm/gup_test.c b/mm/gup_test.c index 9ba8ea23f84e..1668ce0e0783 100644 --- a/mm/gup_test.c +++ b/mm/gup_test.c @@ -146,18 +146,17 @@ static int __gup_test_ioctl(unsigned int cmd, pages + i); break; case PIN_BASIC_TEST: - nr = pin_user_pages(addr, nr, gup->gup_flags, pages + i, - NULL); + nr = pin_user_pages(addr, nr, gup->gup_flags, pages + i); break; case PIN_LONGTERM_BENCHMARK: nr = pin_user_pages(addr, nr, gup->gup_flags | FOLL_LONGTERM, - pages + i, NULL); + pages + i); break; case DUMP_USER_PAGES_TEST: if (gup->test_flags & GUP_TEST_FLAG_DUMP_PAGES_USE_PIN) nr = pin_user_pages(addr, nr, gup->gup_flags, - pages + i, NULL); + pages + i); else nr = get_user_pages(addr, nr, gup->gup_flags, pages + i); @@ -270,7 +269,7 @@ static inline int pin_longterm_test_start(unsigned long arg) gup_flags, pages); else cur_pages = pin_user_pages(addr, remaining_pages, - gup_flags, pages, NULL); + gup_flags, pages); if (cur_pages < 0) { pin_longterm_test_stop(); ret = cur_pages; diff --git a/net/xdp/xdp_umem.c b/net/xdp/xdp_umem.c index 02207e852d79..06cead2b8e34 100644 --- a/net/xdp/xdp_umem.c +++ b/net/xdp/xdp_umem.c @@ -103,7 +103,7 @@ static int xdp_umem_pin_pages(struct xdp_umem *umem, unsigned long address) mmap_read_lock(current->mm); npgs = pin_user_pages(address, umem->npgs, - gup_flags | FOLL_LONGTERM, &umem->pgs[0], NULL); + gup_flags | FOLL_LONGTERM, &umem->pgs[0]); mmap_read_unlock(current->mm); if (npgs != umem->npgs) {

From patchwork Wed May 17 19:25:48 2023
X-Patchwork-Submitter: Lorenzo Stoakes
X-Patchwork-Id: 13245604
From: Lorenzo Stoakes
To: linux-mm@kvack.org, linux-kernel@vger.kernel.org, Andrew Morton
Cc: Matthew Wilcox, David Hildenbrand, Mike Kravetz, Muchun Song, Jens Axboe, Pavel Begunkov, Jason Gunthorpe, John Hubbard, Lorenzo Stoakes, Christoph Hellwig
Subject: [PATCH v6 6/6] mm/gup: remove vmas array from internal GUP functions
Date: Wed, 17 May 2023 20:25:48 +0100
Message-Id: <6811b4b2b4b3baf3dd07f422bb18853bb2cd09fb.1684350871.git.lstoakes@gmail.com>
X-Mailer: git-send-email 2.40.1

Now that we have eliminated all callers of the GUP APIs which used the vmas parameter, eliminate it altogether. This removes a class of bugs where a vmas array could be kept around longer than the mmap_lock, so we no longer need to worry about the lock being dropped during the operation and leaving dangling VMA pointers behind. It also simplifies the GUP API and makes its purpose considerably clearer: follow flags are applied and, if pinning, an array of pages is returned.
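To make the resulting API concrete: after this patch the exported prototype (per the include/linux/mm.h hunk in the previous patch) no longer has a vmas out-parameter, and the only thing handed back to callers is the pinned pages array, whose entries remain usable after mmap_lock is dropped because each holds a pin reference. A rough caller sketch under the final signature follows; the helper name pin_for_dma() is invented for illustration and is not part of the patch.

#include <linux/mm.h>          /* pin_user_pages(), unpin_user_pages() */
#include <linux/sched.h>       /* current */

/*
 * Post-series prototype:
 *   long pin_user_pages(unsigned long start, unsigned long nr_pages,
 *                       unsigned int gup_flags, struct page **pages);
 */
static long pin_for_dma(unsigned long uaddr, unsigned long nr_pages,
                        struct page **pages)
{
        long pinned;

        mmap_read_lock(current->mm);
        pinned = pin_user_pages(uaddr, nr_pages, FOLL_WRITE | FOLL_LONGTERM,
                                pages);
        mmap_read_unlock(current->mm);

        if (pinned < 0)
                return pinned;
        if (pinned != nr_pages) {
                /* Short pin: drop what we got, as io_pin_pages() does. */
                unpin_user_pages(pages, pinned);
                return -EFAULT;
        }

        /*
         * pages[] stays valid here even though mmap_lock has been released,
         * because each entry carries a FOLL_PIN reference; the old vmas[]
         * pointers had no such guarantee once the lock was dropped.
         */
        return pinned;
}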
Acked-by: David Hildenbrand Reviewed-by: Christoph Hellwig Signed-off-by: Lorenzo Stoakes --- include/linux/hugetlb.h | 10 ++--- mm/gup.c | 83 +++++++++++++++-------------------------- mm/hugetlb.c | 24 +++++------- 3 files changed, 45 insertions(+), 72 deletions(-) diff --git a/include/linux/hugetlb.h b/include/linux/hugetlb.h index 6d041aa9f0fe..b2b698f9a2ec 100644 --- a/include/linux/hugetlb.h +++ b/include/linux/hugetlb.h @@ -133,9 +133,8 @@ int copy_hugetlb_page_range(struct mm_struct *, struct mm_struct *, struct page *hugetlb_follow_page_mask(struct vm_area_struct *vma, unsigned long address, unsigned int flags); long follow_hugetlb_page(struct mm_struct *, struct vm_area_struct *, - struct page **, struct vm_area_struct **, - unsigned long *, unsigned long *, long, unsigned int, - int *); + struct page **, unsigned long *, unsigned long *, + long, unsigned int, int *); void unmap_hugepage_range(struct vm_area_struct *, unsigned long, unsigned long, struct page *, zap_flags_t); @@ -306,9 +305,8 @@ static inline struct page *hugetlb_follow_page_mask(struct vm_area_struct *vma, static inline long follow_hugetlb_page(struct mm_struct *mm, struct vm_area_struct *vma, struct page **pages, - struct vm_area_struct **vmas, unsigned long *position, - unsigned long *nr_pages, long i, unsigned int flags, - int *nonblocking) + unsigned long *position, unsigned long *nr_pages, + long i, unsigned int flags, int *nonblocking) { BUG(); return 0; diff --git a/mm/gup.c b/mm/gup.c index 36701b5f0123..dbe96d266670 100644 --- a/mm/gup.c +++ b/mm/gup.c @@ -1067,8 +1067,6 @@ static int check_vma_flags(struct vm_area_struct *vma, unsigned long gup_flags) * @pages: array that receives pointers to the pages pinned. * Should be at least nr_pages long. Or NULL, if caller * only intends to ensure the pages are faulted in. - * @vmas: array of pointers to vmas corresponding to each page. - * Or NULL if the caller does not require them. * @locked: whether we're still with the mmap_lock held * * Returns either number of pages pinned (which may be less than the @@ -1082,8 +1080,6 @@ static int check_vma_flags(struct vm_area_struct *vma, unsigned long gup_flags) * * The caller is responsible for releasing returned @pages, via put_page(). * - * @vmas are valid only as long as mmap_lock is held. - * * Must be called with mmap_lock held. It may be released. See below. 
* * __get_user_pages walks a process's page tables and takes a reference to @@ -1119,7 +1115,7 @@ static int check_vma_flags(struct vm_area_struct *vma, unsigned long gup_flags) static long __get_user_pages(struct mm_struct *mm, unsigned long start, unsigned long nr_pages, unsigned int gup_flags, struct page **pages, - struct vm_area_struct **vmas, int *locked) + int *locked) { long ret = 0, i = 0; struct vm_area_struct *vma = NULL; @@ -1159,9 +1155,9 @@ static long __get_user_pages(struct mm_struct *mm, goto out; if (is_vm_hugetlb_page(vma)) { - i = follow_hugetlb_page(mm, vma, pages, vmas, - &start, &nr_pages, i, - gup_flags, locked); + i = follow_hugetlb_page(mm, vma, pages, + &start, &nr_pages, i, + gup_flags, locked); if (!*locked) { /* * We've got a VM_FAULT_RETRY @@ -1226,10 +1222,6 @@ static long __get_user_pages(struct mm_struct *mm, ctx.page_mask = 0; } next_page: - if (vmas) { - vmas[i] = vma; - ctx.page_mask = 0; - } page_increm = 1 + (~(start >> PAGE_SHIFT) & ctx.page_mask); if (page_increm > nr_pages) page_increm = nr_pages; @@ -1384,7 +1376,6 @@ static __always_inline long __get_user_pages_locked(struct mm_struct *mm, unsigned long start, unsigned long nr_pages, struct page **pages, - struct vm_area_struct **vmas, int *locked, unsigned int flags) { @@ -1422,7 +1413,7 @@ static __always_inline long __get_user_pages_locked(struct mm_struct *mm, pages_done = 0; for (;;) { ret = __get_user_pages(mm, start, nr_pages, flags, pages, - vmas, locked); + locked); if (!(flags & FOLL_UNLOCKABLE)) { /* VM_FAULT_RETRY couldn't trigger, bypass */ pages_done = ret; @@ -1486,7 +1477,7 @@ static __always_inline long __get_user_pages_locked(struct mm_struct *mm, *locked = 1; ret = __get_user_pages(mm, start, 1, flags | FOLL_TRIED, - pages, NULL, locked); + pages, locked); if (!*locked) { /* Continue to retry until we succeeded */ BUG_ON(ret != 0); @@ -1584,7 +1575,7 @@ long populate_vma_page_range(struct vm_area_struct *vma, * not result in a stack expansion that recurses back here. */ ret = __get_user_pages(mm, start, nr_pages, gup_flags, - NULL, NULL, locked ? locked : &local_locked); + NULL, locked ? locked : &local_locked); lru_add_drain(); return ret; } @@ -1642,7 +1633,7 @@ long faultin_vma_page_range(struct vm_area_struct *vma, unsigned long start, return -EINVAL; ret = __get_user_pages(mm, start, nr_pages, gup_flags, - NULL, NULL, locked); + NULL, locked); lru_add_drain(); return ret; } @@ -1710,8 +1701,7 @@ int __mm_populate(unsigned long start, unsigned long len, int ignore_errors) #else /* CONFIG_MMU */ static long __get_user_pages_locked(struct mm_struct *mm, unsigned long start, unsigned long nr_pages, struct page **pages, - struct vm_area_struct **vmas, int *locked, - unsigned int foll_flags) + int *locked, unsigned int foll_flags) { struct vm_area_struct *vma; bool must_unlock = false; @@ -1755,8 +1745,7 @@ static long __get_user_pages_locked(struct mm_struct *mm, unsigned long start, if (pages[i]) get_page(pages[i]); } - if (vmas) - vmas[i] = vma; + start = (start + PAGE_SIZE) & PAGE_MASK; } @@ -1937,8 +1926,7 @@ struct page *get_dump_page(unsigned long addr) int locked = 0; int ret; - ret = __get_user_pages_locked(current->mm, addr, 1, &page, NULL, - &locked, + ret = __get_user_pages_locked(current->mm, addr, 1, &page, &locked, FOLL_FORCE | FOLL_DUMP | FOLL_GET); return (ret == 1) ? 
page : NULL; } @@ -2111,7 +2099,6 @@ static long __gup_longterm_locked(struct mm_struct *mm, unsigned long start, unsigned long nr_pages, struct page **pages, - struct vm_area_struct **vmas, int *locked, unsigned int gup_flags) { @@ -2119,13 +2106,13 @@ static long __gup_longterm_locked(struct mm_struct *mm, long rc, nr_pinned_pages; if (!(gup_flags & FOLL_LONGTERM)) - return __get_user_pages_locked(mm, start, nr_pages, pages, vmas, + return __get_user_pages_locked(mm, start, nr_pages, pages, locked, gup_flags); flags = memalloc_pin_save(); do { nr_pinned_pages = __get_user_pages_locked(mm, start, nr_pages, - pages, vmas, locked, + pages, locked, gup_flags); if (nr_pinned_pages <= 0) { rc = nr_pinned_pages; @@ -2143,9 +2130,8 @@ static long __gup_longterm_locked(struct mm_struct *mm, * Check that the given flags are valid for the exported gup/pup interface, and * update them with the required flags that the caller must have set. */ -static bool is_valid_gup_args(struct page **pages, struct vm_area_struct **vmas, - int *locked, unsigned int *gup_flags_p, - unsigned int to_set) +static bool is_valid_gup_args(struct page **pages, int *locked, + unsigned int *gup_flags_p, unsigned int to_set) { unsigned int gup_flags = *gup_flags_p; @@ -2187,13 +2173,6 @@ static bool is_valid_gup_args(struct page **pages, struct vm_area_struct **vmas, (gup_flags & FOLL_PCI_P2PDMA))) return false; - /* - * Can't use VMAs with locked, as locked allows GUP to unlock - * which invalidates the vmas array - */ - if (WARN_ON_ONCE(vmas && (gup_flags & FOLL_UNLOCKABLE))) - return false; - *gup_flags_p = gup_flags; return true; } @@ -2262,11 +2241,11 @@ long get_user_pages_remote(struct mm_struct *mm, { int local_locked = 1; - if (!is_valid_gup_args(pages, NULL, locked, &gup_flags, + if (!is_valid_gup_args(pages, locked, &gup_flags, FOLL_TOUCH | FOLL_REMOTE)) return -EINVAL; - return __get_user_pages_locked(mm, start, nr_pages, pages, NULL, + return __get_user_pages_locked(mm, start, nr_pages, pages, locked ? locked : &local_locked, gup_flags); } @@ -2301,11 +2280,11 @@ long get_user_pages(unsigned long start, unsigned long nr_pages, { int locked = 1; - if (!is_valid_gup_args(pages, NULL, NULL, &gup_flags, FOLL_TOUCH)) + if (!is_valid_gup_args(pages, NULL, &gup_flags, FOLL_TOUCH)) return -EINVAL; return __get_user_pages_locked(current->mm, start, nr_pages, pages, - NULL, &locked, gup_flags); + &locked, gup_flags); } EXPORT_SYMBOL(get_user_pages); @@ -2329,12 +2308,12 @@ long get_user_pages_unlocked(unsigned long start, unsigned long nr_pages, { int locked = 0; - if (!is_valid_gup_args(pages, NULL, NULL, &gup_flags, + if (!is_valid_gup_args(pages, NULL, &gup_flags, FOLL_TOUCH | FOLL_UNLOCKABLE)) return -EINVAL; return __get_user_pages_locked(current->mm, start, nr_pages, pages, - NULL, &locked, gup_flags); + &locked, gup_flags); } EXPORT_SYMBOL(get_user_pages_unlocked); @@ -3124,7 +3103,7 @@ static int internal_get_user_pages_fast(unsigned long start, start += nr_pinned << PAGE_SHIFT; pages += nr_pinned; ret = __gup_longterm_locked(current->mm, start, nr_pages - nr_pinned, - pages, NULL, &locked, + pages, &locked, gup_flags | FOLL_TOUCH | FOLL_UNLOCKABLE); if (ret < 0) { /* @@ -3166,7 +3145,7 @@ int get_user_pages_fast_only(unsigned long start, int nr_pages, * FOLL_FAST_ONLY is required in order to match the API description of * this routine: no fall back to regular ("slow") GUP. 
*/ - if (!is_valid_gup_args(pages, NULL, NULL, &gup_flags, + if (!is_valid_gup_args(pages, NULL, &gup_flags, FOLL_GET | FOLL_FAST_ONLY)) return -EINVAL; @@ -3199,7 +3178,7 @@ int get_user_pages_fast(unsigned long start, int nr_pages, * FOLL_GET, because gup fast is always a "pin with a +1 page refcount" * request. */ - if (!is_valid_gup_args(pages, NULL, NULL, &gup_flags, FOLL_GET)) + if (!is_valid_gup_args(pages, NULL, &gup_flags, FOLL_GET)) return -EINVAL; return internal_get_user_pages_fast(start, nr_pages, gup_flags, pages); } @@ -3224,7 +3203,7 @@ EXPORT_SYMBOL_GPL(get_user_pages_fast); int pin_user_pages_fast(unsigned long start, int nr_pages, unsigned int gup_flags, struct page **pages) { - if (!is_valid_gup_args(pages, NULL, NULL, &gup_flags, FOLL_PIN)) + if (!is_valid_gup_args(pages, NULL, &gup_flags, FOLL_PIN)) return -EINVAL; return internal_get_user_pages_fast(start, nr_pages, gup_flags, pages); } @@ -3257,10 +3236,10 @@ long pin_user_pages_remote(struct mm_struct *mm, { int local_locked = 1; - if (!is_valid_gup_args(pages, NULL, locked, &gup_flags, + if (!is_valid_gup_args(pages, locked, &gup_flags, FOLL_PIN | FOLL_TOUCH | FOLL_REMOTE)) return 0; - return __gup_longterm_locked(mm, start, nr_pages, pages, NULL, + return __gup_longterm_locked(mm, start, nr_pages, pages, locked ? locked : &local_locked, gup_flags); } @@ -3286,10 +3265,10 @@ long pin_user_pages(unsigned long start, unsigned long nr_pages, { int locked = 1; - if (!is_valid_gup_args(pages, NULL, NULL, &gup_flags, FOLL_PIN)) + if (!is_valid_gup_args(pages, NULL, &gup_flags, FOLL_PIN)) return 0; return __gup_longterm_locked(current->mm, start, nr_pages, - pages, NULL, &locked, gup_flags); + pages, &locked, gup_flags); } EXPORT_SYMBOL(pin_user_pages); @@ -3303,11 +3282,11 @@ long pin_user_pages_unlocked(unsigned long start, unsigned long nr_pages, { int locked = 0; - if (!is_valid_gup_args(pages, NULL, NULL, &gup_flags, + if (!is_valid_gup_args(pages, NULL, &gup_flags, FOLL_PIN | FOLL_TOUCH | FOLL_UNLOCKABLE)) return 0; - return __gup_longterm_locked(current->mm, start, nr_pages, pages, NULL, + return __gup_longterm_locked(current->mm, start, nr_pages, pages, &locked, gup_flags); } EXPORT_SYMBOL(pin_user_pages_unlocked); diff --git a/mm/hugetlb.c b/mm/hugetlb.c index f154019e6b84..ea24718db4af 100644 --- a/mm/hugetlb.c +++ b/mm/hugetlb.c @@ -6425,17 +6425,14 @@ int hugetlb_mfill_atomic_pte(pte_t *dst_pte, } #endif /* CONFIG_USERFAULTFD */ -static void record_subpages_vmas(struct page *page, struct vm_area_struct *vma, - int refs, struct page **pages, - struct vm_area_struct **vmas) +static void record_subpages(struct page *page, struct vm_area_struct *vma, + int refs, struct page **pages) { int nr; for (nr = 0; nr < refs; nr++) { if (likely(pages)) pages[nr] = nth_page(page, nr); - if (vmas) - vmas[nr] = vma; } } @@ -6508,9 +6505,9 @@ struct page *hugetlb_follow_page_mask(struct vm_area_struct *vma, } long follow_hugetlb_page(struct mm_struct *mm, struct vm_area_struct *vma, - struct page **pages, struct vm_area_struct **vmas, - unsigned long *position, unsigned long *nr_pages, - long i, unsigned int flags, int *locked) + struct page **pages, unsigned long *position, + unsigned long *nr_pages, long i, unsigned int flags, + int *locked) { unsigned long pfn_offset; unsigned long vaddr = *position; @@ -6638,7 +6635,7 @@ long follow_hugetlb_page(struct mm_struct *mm, struct vm_area_struct *vma, * If subpage information not requested, update counters * and skip the same_page loop below. 
*/ - if (!pages && !vmas && !pfn_offset && + if (!pages && !pfn_offset && (vaddr + huge_page_size(h) < vma->vm_end) && (remainder >= pages_per_huge_page(h))) { vaddr += huge_page_size(h); @@ -6653,11 +6650,10 @@ long follow_hugetlb_page(struct mm_struct *mm, struct vm_area_struct *vma, refs = min3(pages_per_huge_page(h) - pfn_offset, remainder, (vma->vm_end - ALIGN_DOWN(vaddr, PAGE_SIZE)) >> PAGE_SHIFT); - if (pages || vmas) - record_subpages_vmas(nth_page(page, pfn_offset), - vma, refs, - likely(pages) ? pages + i : NULL, - vmas ? vmas + i : NULL); + if (pages) + record_subpages(nth_page(page, pfn_offset), + vma, refs, + likely(pages) ? pages + i : NULL); if (pages) { /*