From patchwork Wed Aug 21 04:03:52 2019
X-Patchwork-Submitter: John Hubbard
X-Patchwork-Id: 11105375
From: John Hubbard
To: Andrew Morton
CC: Christoph Hellwig, Dan Williams, Dave Chinner, Ira Weiny, Jan Kara,
    Jason Gunthorpe, Jérôme Glisse, Vlastimil Babka, LKML, John Hubbard,
    Andy Whitcroft, Joe Perches, Gilad Ben-Yossef, Ofir Drang
Subject: [PATCH 1/4] checkpatch: revert broken NOTIFIER_HEAD check
Date: Tue, 20 Aug 2019 21:03:52 -0700
Message-ID: <20190821040355.19566-1-jhubbard@nvidia.com>

commit 1a47005dd5aa ("checkpatch: add *_NOTIFIER_HEAD as var definition")
causes the following warning when run on some patches:

    Unescaped left brace in regex
    is passed through in regex; marked by <-- HERE in
    m/(?: ... [238 lines of appalling perl output, mercifully not
    included] ... )/ at ./scripts/checkpatch.pl line 3889.

This is broken, so revert it until a better solution is found. (The
likely culprit is the unescaped "{" in the new pattern: Perl parses the
"{...}" that follows "^." as a quantifier, and the interpolated $Ident
is not a valid one.)

Fixes: 1a47005dd5aa ("checkpatch: add *_NOTIFIER_HEAD as var definition")
Cc: Andy Whitcroft
Cc: Joe Perches
Cc: Gilad Ben-Yossef
Cc: Ofir Drang
Cc: Andrew Morton
Signed-off-by: John Hubbard
---
 scripts/checkpatch.pl | 1 -
 1 file changed, 1 deletion(-)

diff --git a/scripts/checkpatch.pl b/scripts/checkpatch.pl
index 5c00151cdee8..284eb4bd84aa 100755
--- a/scripts/checkpatch.pl
+++ b/scripts/checkpatch.pl
@@ -3891,7 +3891,6 @@ sub process {
 		    ^.DEFINE_$Ident\(\Q$name\E\)|
 		    ^.DECLARE_$Ident\(\Q$name\E\)|
 		    ^.LIST_HEAD\(\Q$name\E\)|
-		    ^.{$Ident}_NOTIFIER_HEAD\(\Q$name\E\)|
 		    ^.(?:$Storage\s+)?$Type\s*\(\s*\*\s*\Q$name\E\s*\)\s*\(|
 		    \b\Q$name\E(?:\s+$Attribute)*\s*(?:;|=|\[|\()
 		)/x) {
From patchwork Wed Aug 21 04:03:53 2019
X-Patchwork-Submitter: John Hubbard
X-Patchwork-Id: 11105371
From: John Hubbard
To: Andrew Morton
CC: Christoph Hellwig, Dan Williams, Dave Chinner, Ira Weiny, Jan Kara,
    Jason Gunthorpe, Jérôme Glisse, Vlastimil Babka, LKML, John Hubbard
Subject: [PATCH 2/4] For Ira: tiny formatting tweak to kerneldoc
Date: Tue, 20 Aug 2019 21:03:53 -0700
Message-ID: <20190821040355.19566-2-jhubbard@nvidia.com>
In-Reply-To: <20190821040355.19566-1-jhubbard@nvidia.com>
References: <20190821040355.19566-1-jhubbard@nvidia.com>

For your vaddr_pin_pages() and vaddr_unpin_pages(). Just merge it into
wherever it goes, please. Didn't want to cause merge problems, so it's a
separate patch-let.

Signed-off-by: John Hubbard
---
 mm/gup.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/mm/gup.c b/mm/gup.c
index 56421b880325..e49096d012ea 100644
--- a/mm/gup.c
+++ b/mm/gup.c
@@ -2465,7 +2465,7 @@ int get_user_pages_fast(unsigned long start, int nr_pages,
 EXPORT_SYMBOL_GPL(get_user_pages_fast);
 
 /**
- * vaddr_pin_pages pin pages by virtual address and return the pages to the
+ * vaddr_pin_pages() - pin pages by virtual address and return the pages to the
  * user.
  *
  * @addr: start address
@@ -2505,7 +2505,7 @@ long vaddr_pin_pages(unsigned long addr, unsigned long nr_pages,
 EXPORT_SYMBOL(vaddr_pin_pages);
 
 /**
- * vaddr_unpin_pages - counterpart to vaddr_pin_pages
+ * vaddr_unpin_pages() - counterpart to vaddr_pin_pages
  *
  * @pages: array of pages returned
  * @nr_pages: number of pages in pages
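For reference, the convention being applied above is the kernel-doc
comment layout: the function name, a "()" pair, and a dash before the
one-line summary. A minimal sketch, using a hypothetical
my_example_func() just to show the shape:

/**
 * my_example_func() - one-line summary of what the function does.
 * @arg1: meaning of the first argument
 * @arg2: meaning of the second argument
 *
 * Optional longer description goes here. Without the "() -" marker on
 * the first line, scripts/kernel-doc cannot cleanly split the function
 * name from the summary, which is what the patch-let above fixes.
 *
 * Return: what the function returns, if anything.
 */
int my_example_func(int arg1, int arg2);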
From patchwork Wed Aug 21 04:03:54 2019
X-Patchwork-Submitter: John Hubbard
X-Patchwork-Id: 11105391
From: John Hubbard
To: Andrew Morton
CC: Christoph Hellwig, Dan Williams, Dave Chinner, Ira Weiny, Jan Kara,
    Jason Gunthorpe, Jérôme Glisse, Vlastimil Babka, LKML, John Hubbard,
    Michal Hocko
Subject: [PATCH 3/4] mm/gup: introduce FOLL_PIN flag for get_user_pages()
Date: Tue, 20 Aug 2019 21:03:54 -0700
Message-ID: <20190821040355.19566-3-jhubbard@nvidia.com>
In-Reply-To: <20190821040355.19566-1-jhubbard@nvidia.com>
References: <20190821040355.19566-1-jhubbard@nvidia.com>

FOLL_PIN is set by callers of vaddr_pin_pages(). This is different from
FOLL_LONGTERM, because even short-term page pins need a new kind of
tracking if those pinned pages' data is potentially going to be
modified. This situation is described in more detail in commit
fc1d8e7cca2d ("mm: introduce put_user_page*(), placeholder versions").

FOLL_PIN is added now, rather than waiting until there is code that
takes action based on FOLL_PIN. That's because having FOLL_PIN in the
code helps to highlight the differences between:

a) get_user_pages(): soon to be deprecated. Used to pin pages, but
   without awareness of file systems that might use those pages, and

b) the original vaddr_pin_pages(): intended only for FOLL_LONGTERM and
   DAX use cases. This assumes direct IO and is therefore not applicable
   to most of the other callers of get_user_pages().

Also add fairly extensive documentation of the meaning and use of both
FOLL_PIN and FOLL_LONGTERM. Thanks to Jan Kara and Vlastimil Babka for
explaining the four cases in this documentation. (I've reworded it and
expanded on it slightly.)
Cc: Vlastimil Babka
Cc: Jan Kara
Cc: Michal Hocko
Cc: Ira Weiny
Signed-off-by: John Hubbard
---
 include/linux/mm.h | 56 +++++++++++++++++++++++++++++++++++++++++-----
 1 file changed, 50 insertions(+), 6 deletions(-)

diff --git a/include/linux/mm.h b/include/linux/mm.h
index bc675e94ddf8..6e7de424bf5e 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -2644,6 +2644,8 @@ static inline vm_fault_t vmf_error(int err)
 struct page *follow_page(struct vm_area_struct *vma, unsigned long address,
 			 unsigned int foll_flags);
 
+/* Flags for follow_page(), get_user_pages ("GUP"), and vaddr_pin_pages(): */
+
 #define FOLL_WRITE	0x01	/* check pte is writable */
 #define FOLL_TOUCH	0x02	/* mark page accessed */
 #define FOLL_GET	0x04	/* do get_page on page */
@@ -2663,13 +2665,15 @@ struct page *follow_page(struct vm_area_struct *vma, unsigned long address,
 #define FOLL_ANON	0x8000	/* don't do file mappings */
 #define FOLL_LONGTERM	0x10000	/* mapping lifetime is indefinite: see below */
 #define FOLL_SPLIT_PMD	0x20000	/* split huge pmd before returning */
+#define FOLL_PIN	0x40000	/* pages must be released via put_user_page() */
 
 /*
- * NOTE on FOLL_LONGTERM:
+ * FOLL_PIN and FOLL_LONGTERM may be used in various combinations with each
+ * other. Here is what they mean, and how to use them:
  *
  * FOLL_LONGTERM indicates that the page will be held for an indefinite time
- * period _often_ under userspace control. This is contrasted with
- * iov_iter_get_pages() where usages which are transient.
+ * period _often_ under userspace control. This is in contrast to
+ * iov_iter_get_pages(), where usages are transient.
 *
 * FIXME: For pages which are part of a filesystem, mappings are subject to the
 * lifetime enforced by the filesystem and we need guarantees that longterm
@@ -2684,11 +2688,51 @@ struct page *follow_page(struct vm_area_struct *vma, unsigned long address,
 * Currently only get_user_pages() and get_user_pages_fast() support this flag
 * and calls to get_user_pages_[un]locked are specifically not allowed. This
 * is due to an incompatibility with the FS DAX check and
- * FAULT_FLAG_ALLOW_RETRY
+ * FAULT_FLAG_ALLOW_RETRY.
 *
- * In the CMA case: longterm pins in a CMA region would unnecessarily fragment
- * that region. And so CMA attempts to migrate the page before pinning when
+ * In the CMA case: long term pins in a CMA region would unnecessarily fragment
+ * that region. And so, CMA attempts to migrate the page before pinning, when
 * FOLL_LONGTERM is specified.
+ *
+ * FOLL_PIN indicates that a special kind of tracking (not just page->_refcount,
+ * but an additional pin counting system) will be invoked. This is intended for
+ * anything that gets a page reference and then touches page data (for example,
+ * Direct IO). This lets the filesystem know that some non-file-system entity is
+ * potentially changing the pages' data. FOLL_PIN pages must be released,
+ * ultimately, by a call to put_user_page(). Typically that will be via one of
+ * the vaddr_unpin_pages() variants.
+ *
+ * FIXME: note that this special tracking is not in place yet. However, the
+ * pages should still be released by put_user_page().
+ *
+ * When and where to use each flag:
+ *
+ * CASE 1: Direct IO (DIO). There are GUP references to pages that are serving
+ * as DIO buffers. These buffers are needed for a relatively short time (so they
+ * are not "long term"). No special synchronization with page_mkclean() or
+ * munmap() is provided. Therefore, flags to set at the call site are:
+ *
+ *     FOLL_PIN
+ *
+ * CASE 2: RDMA. There are GUP references to pages that are serving as DMA
+ * buffers. These buffers are needed for a long time ("long term"). No special
+ * synchronization with page_mkclean() or munmap() is provided. Therefore, flags
+ * to set at the call site are:
+ *
+ *     FOLL_PIN | FOLL_LONGTERM
+ *
+ * There is also a special case when the pages are DAX pages: in addition to the
+ * above flags, the caller needs a file lease. This is provided via the struct
+ * vaddr_pin argument to vaddr_pin_pages().
+ *
+ * CASE 3: ODP (Mellanox/Infiniband On Demand Paging: the hardware supports
+ * replayable page faulting). There are GUP references to pages serving as DMA
+ * buffers. For ODP, MMU notifiers are used to synchronize with page_mkclean()
+ * and munmap(). Therefore, normal GUP calls are sufficient, so neither flag
+ * needs to be set.
+ *
+ * CASE 4: pinning for struct page manipulation only. Here, normal GUP calls are
+ * sufficient, so neither flag needs to be set.
 */
 
 static inline int vm_fault_to_errno(vm_fault_t vm_fault, int foll_flags)
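To make the four cases concrete, here is a minimal sketch of a CASE 1
(Direct IO style) caller. example_dio_fill() is hypothetical, built only
from the vaddr_pin_pages()/vaddr_unpin_pages() signatures used in this
series; the allocation and error handling are illustrative only:

#include <linux/mm.h>
#include <linux/slab.h>

/* Hypothetical CASE 1 user: short-term pin, and the page data gets written. */
static long example_dio_fill(unsigned long user_addr, unsigned long nr_pages)
{
	struct page **pages;
	long pinned;

	pages = kcalloc(nr_pages, sizeof(*pages), GFP_KERNEL);
	if (!pages)
		return -ENOMEM;

	/*
	 * CASE 1: FOLL_PIN, but not FOLL_LONGTERM. FOLL_WRITE is added
	 * because the pages will be modified. No file lease is needed,
	 * so the vaddr_pin argument is NULL.
	 */
	pinned = vaddr_pin_pages(user_addr, nr_pages,
				 FOLL_PIN | FOLL_WRITE, pages, NULL);
	if (pinned <= 0) {
		kfree(pages);
		return pinned ? pinned : -EFAULT;
	}

	/* ... DMA or copy data into the pinned pages here ... */

	/* Release via the put_user_page() path, marking the pages dirty. */
	vaddr_unpin_pages(pages, pinned, NULL, true);
	kfree(pages);
	return 0;
}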
From patchwork Wed Aug 21 04:03:55 2019
X-Patchwork-Submitter: John Hubbard
X-Patchwork-Id: 11105387
From: John Hubbard
To: Andrew Morton
CC: Christoph Hellwig, Dan Williams, Dave Chinner, Ira Weiny, Jan Kara,
    Jason Gunthorpe, Jérôme Glisse, Vlastimil Babka, LKML, John Hubbard
Subject: [PATCH 4/4] mm/gup: introduce vaddr_pin_pages_remote(), and invoke it
Date: Tue, 20 Aug 2019 21:03:55 -0700
Message-ID: <20190821040355.19566-4-jhubbard@nvidia.com>
In-Reply-To: <20190821040355.19566-1-jhubbard@nvidia.com>
References: <20190821040355.19566-1-jhubbard@nvidia.com>

vaddr_pin_user_pages_remote() is the vaddr_pin_pages() counterpart to
get_user_pages_remote(): it adds the ability to handle FOLL_PIN,
FOLL_LONGTERM, or both.

Note that releasing pages via put_user_page*() won't be strictly
required until all of the call sites have been converted, and the
tracking of pages is activated.

Also, change process_vm_rw_single_vec() to invoke the new function.

Signed-off-by: John Hubbard
---
 include/linux/mm.h     |  5 +++++
 mm/gup.c               | 33 +++++++++++++++++++++++++++++++++
 mm/process_vm_access.c | 23 ++++++++++++++---------
 3 files changed, 52 insertions(+), 9 deletions(-)

diff --git a/include/linux/mm.h b/include/linux/mm.h
index 6e7de424bf5e..849b509e9f89 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -1606,6 +1606,11 @@ int __account_locked_vm(struct mm_struct *mm, unsigned long pages, bool inc,
 long vaddr_pin_pages(unsigned long addr, unsigned long nr_pages,
 		     unsigned int gup_flags, struct page **pages,
 		     struct vaddr_pin *vaddr_pin);
+long vaddr_pin_user_pages_remote(struct task_struct *tsk, struct mm_struct *mm,
+				 unsigned long start, unsigned long nr_pages,
+				 unsigned int gup_flags, struct page **pages,
+				 struct vm_area_struct **vmas, int *locked,
+				 struct vaddr_pin *vaddr_pin);
 void vaddr_unpin_pages(struct page **pages, unsigned long nr_pages,
 		       struct vaddr_pin *vaddr_pin, bool make_dirty);
 bool mapping_inode_has_layout(struct vaddr_pin *vaddr_pin, struct page *page);

diff --git a/mm/gup.c b/mm/gup.c
index e49096d012ea..d7ce9b38178f 100644
--- a/mm/gup.c
+++ b/mm/gup.c
@@ -2522,3 +2522,36 @@ void vaddr_unpin_pages(struct page **pages, unsigned long nr_pages,
 	__put_user_pages_dirty_lock(pages, nr_pages, make_dirty, vaddr_pin);
 }
 EXPORT_SYMBOL(vaddr_unpin_pages);
+
+/**
+ * vaddr_pin_user_pages_remote() - pin pages by virtual address and return
+ * the pages to the user.
+ *
+ * @tsk: the task_struct to use for page fault accounting, or
+ *       NULL if faults are not to be recorded.
+ * @mm: mm_struct of target mm
+ * @start: start address
+ * @nr_pages: number of pages to pin
+ * @gup_flags: flags to use for the pin. Please see FOLL_* documentation in
+ *             mm.h.
+ * @pages: array of pages returned
+ * @vaddr_pin: If FOLL_LONGTERM is set, then vaddr_pin should point to an
+ * initialized struct that contains the owning mm and file. Otherwise,
+ * vaddr_pin should be set to NULL.
+ *
+ * This is the vaddr_pin_pages() variant corresponding to
+ * get_user_pages_remote(), but with the ability to handle FOLL_PIN,
+ * FOLL_LONGTERM, or both.
+ */
+long vaddr_pin_user_pages_remote(struct task_struct *tsk, struct mm_struct *mm,
+				 unsigned long start, unsigned long nr_pages,
+				 unsigned int gup_flags, struct page **pages,
+				 struct vm_area_struct **vmas, int *locked,
+				 struct vaddr_pin *vaddr_pin)
+{
+	gup_flags |= FOLL_TOUCH | FOLL_REMOTE;
+
+	return __get_user_pages_locked(tsk, mm, start, nr_pages, pages, vmas,
+				       locked, gup_flags, vaddr_pin);
+}
+EXPORT_SYMBOL(vaddr_pin_user_pages_remote);

diff --git a/mm/process_vm_access.c b/mm/process_vm_access.c
index 357aa7bef6c0..e08c1f760ad4 100644
--- a/mm/process_vm_access.c
+++ b/mm/process_vm_access.c
@@ -96,7 +96,7 @@ static int process_vm_rw_single_vec(unsigned long addr,
 		flags |= FOLL_WRITE;
 
 	while (!rc && nr_pages && iov_iter_count(iter)) {
-		int pages = min(nr_pages, max_pages_per_loop);
+		int pinned_pages = min(nr_pages, max_pages_per_loop);
 		int locked = 1;
 		size_t bytes;
 
@@ -106,14 +106,18 @@ static int process_vm_rw_single_vec(unsigned long addr,
 		 * current/current->mm
 		 */
 		down_read(&mm->mmap_sem);
-		pages = get_user_pages_remote(task, mm, pa, pages, flags,
-					      process_pages, NULL, &locked);
+
+		flags |= FOLL_PIN;
+		pinned_pages = vaddr_pin_user_pages_remote(task, mm, pa,
+							   pinned_pages, flags,
+							   process_pages, NULL,
+							   &locked, NULL);
 		if (locked)
 			up_read(&mm->mmap_sem);
-		if (pages <= 0)
+		if (pinned_pages <= 0)
 			return -EFAULT;
 
-		bytes = pages * PAGE_SIZE - start_offset;
+		bytes = pinned_pages * PAGE_SIZE - start_offset;
 		if (bytes > len)
 			bytes = len;
 
@@ -122,10 +126,11 @@ static int process_vm_rw_single_vec(unsigned long addr,
 					     vm_write);
 		len -= bytes;
 		start_offset = 0;
-		nr_pages -= pages;
-		pa += pages * PAGE_SIZE;
-		while (pages)
-			put_page(process_pages[--pages]);
+		nr_pages -= pinned_pages;
+		pa += pinned_pages * PAGE_SIZE;
+
+		/* If vm_write is set, the pages need to be made dirty: */
+		vaddr_unpin_pages(process_pages, pinned_pages, NULL, vm_write);
 	}
 
 	return rc;
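For reference, the remote pin/unpin lifecycle that the
process_vm_access.c conversion follows can be distilled into a short
sketch. read_remote_pages() below is hypothetical, built only from the
signatures introduced in this series; locking and error handling are
abbreviated:

#include <linux/mm.h>
#include <linux/sched.h>

/* Hypothetical helper: pin pages in another process's address space. */
static long read_remote_pages(struct task_struct *task, struct mm_struct *mm,
			      unsigned long addr, unsigned long nr_pages,
			      struct page **pages, bool will_write)
{
	/* FOLL_TOUCH and FOLL_REMOTE are added internally by the callee. */
	unsigned int flags = FOLL_PIN | (will_write ? FOLL_WRITE : 0);
	int locked = 1;
	long pinned;

	down_read(&mm->mmap_sem);
	/* NULL vaddr_pin: FOLL_LONGTERM is not set, so no file lease needed. */
	pinned = vaddr_pin_user_pages_remote(task, mm, addr, nr_pages, flags,
					     pages, NULL, &locked, NULL);
	if (locked)
		up_read(&mm->mmap_sem);
	if (pinned <= 0)
		return pinned ? pinned : -EFAULT;

	/* ... copy data to or from the pinned pages here ... */

	/* Release through the put_user_page() path; dirty only on writes. */
	vaddr_unpin_pages(pages, pinned, NULL, will_write);
	return pinned;
}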