From patchwork Thu Nov 21 07:13:39 2019
X-Patchwork-Submitter: John Hubbard
X-Patchwork-Id: 11255749
From: John Hubbard
To: Andrew Morton
CC: Al Viro, Alex Williamson, Benjamin Herrenschmidt, Björn Töpel,
    Christoph Hellwig, Dan Williams, Daniel Vetter, Dave Chinner,
    David Airlie, David S. Miller, Ira Weiny, Jan Kara, Jason Gunthorpe,
    Jens Axboe, Jonathan Corbet, Jérôme Glisse, Magnus Karlsson,
    Mauro Carvalho Chehab, Michael Ellerman, Michal Hocko, Mike Kravetz,
    Paul Mackerras, Shuah Khan, Vlastimil Babka, LKML, John Hubbard,
    Jason Gunthorpe
Subject: [PATCH v7 09/24] vfio, mm: fix get_user_pages_remote() and FOLL_LONGTERM
Date: Wed, 20 Nov 2019 23:13:39 -0800
Message-ID: <20191121071354.456618-10-jhubbard@nvidia.com>
X-Mailer: git-send-email 2.24.0
In-Reply-To: <20191121071354.456618-1-jhubbard@nvidia.com>
References: <20191121071354.456618-1-jhubbard@nvidia.com>

As it says in the updated comment in gup.c: current FOLL_LONGTERM
behavior is incompatible with FAULT_FLAG_ALLOW_RETRY because of the
FS DAX check requirement on vmas.

However, the corresponding restriction in get_user_pages_remote() was
slightly stricter than is actually required: it forbade all
FOLL_LONGTERM callers, but we can actually allow FOLL_LONGTERM callers
that do not set the "locked" arg.

Update the code and comments accordingly, and update the VFIO caller
to take advantage of this, fixing a bug as a result: the VFIO caller
is logically a FOLL_LONGTERM user.

Also, remove an unnecessary pair of calls that were releasing and
reacquiring the mmap_sem. There is no need to avoid holding mmap_sem
just in order to call page_to_pfn().

Also, move the DAX check ("if a VMA is DAX, don't allow long term
pinning") from the VFIO call site, all the way into the internals of
get_user_pages_remote() and __gup_longterm_locked(). That is:
get_user_pages_remote() calls __gup_longterm_locked(), which in turn
calls check_dax_vmas(). This is lightly explained in the comments as
well.

Thanks to Jason Gunthorpe for pointing out a clean way to fix this,
and to Dan Williams for helping clarify the DAX refactoring.
Reviewed-by: Jason Gunthorpe
Reviewed-by: Ira Weiny
Suggested-by: Jason Gunthorpe
Cc: Dan Williams
Cc: Jerome Glisse
Signed-off-by: John Hubbard
Tested-by: Alex Williamson
Acked-by: Alex Williamson
---
 drivers/vfio/vfio_iommu_type1.c | 30 +++++-------------------------
 mm/gup.c                        | 27 ++++++++++++++++++++++-----
 2 files changed, 27 insertions(+), 30 deletions(-)

diff --git a/drivers/vfio/vfio_iommu_type1.c b/drivers/vfio/vfio_iommu_type1.c
index d864277ea16f..c7a111ad9975 100644
--- a/drivers/vfio/vfio_iommu_type1.c
+++ b/drivers/vfio/vfio_iommu_type1.c
@@ -340,7 +340,6 @@ static int vaddr_get_pfn(struct mm_struct *mm, unsigned long vaddr,
 {
 	struct page *page[1];
 	struct vm_area_struct *vma;
-	struct vm_area_struct *vmas[1];
 	unsigned int flags = 0;
 	int ret;
 
@@ -348,33 +347,14 @@ static int vaddr_get_pfn(struct mm_struct *mm, unsigned long vaddr,
 		flags |= FOLL_WRITE;
 
 	down_read(&mm->mmap_sem);
-	if (mm == current->mm) {
-		ret = get_user_pages(vaddr, 1, flags | FOLL_LONGTERM, page,
-				     vmas);
-	} else {
-		ret = get_user_pages_remote(NULL, mm, vaddr, 1, flags, page,
-					    vmas, NULL);
-		/*
-		 * The lifetime of a vaddr_get_pfn() page pin is
-		 * userspace-controlled. In the fs-dax case this could
-		 * lead to indefinite stalls in filesystem operations.
-		 * Disallow attempts to pin fs-dax pages via this
-		 * interface.
-		 */
-		if (ret > 0 && vma_is_fsdax(vmas[0])) {
-			ret = -EOPNOTSUPP;
-			put_page(page[0]);
-		}
-	}
-	up_read(&mm->mmap_sem);
-
+	ret = get_user_pages_remote(NULL, mm, vaddr, 1, flags | FOLL_LONGTERM,
+				    page, NULL, NULL);
 	if (ret == 1) {
 		*pfn = page_to_pfn(page[0]);
-		return 0;
+		ret = 0;
+		goto done;
 	}
 
-	down_read(&mm->mmap_sem);
-
 	vaddr = untagged_addr(vaddr);
 
 	vma = find_vma_intersection(mm, vaddr, vaddr + 1);
@@ -384,7 +364,7 @@ static int vaddr_get_pfn(struct mm_struct *mm, unsigned long vaddr,
 		if (is_invalid_reserved_pfn(*pfn))
 			ret = 0;
 	}
-
+done:
 	up_read(&mm->mmap_sem);
 	return ret;
 }
diff --git a/mm/gup.c b/mm/gup.c
index 14fcdc502166..cce2c9676853 100644
--- a/mm/gup.c
+++ b/mm/gup.c
@@ -29,6 +29,13 @@ struct follow_page_context {
 	unsigned int page_mask;
 };
 
+static __always_inline long __gup_longterm_locked(struct task_struct *tsk,
+						  struct mm_struct *mm,
+						  unsigned long start,
+						  unsigned long nr_pages,
+						  struct page **pages,
+						  struct vm_area_struct **vmas,
+						  unsigned int flags);
 /*
  * Return the compound head page with ref appropriately incremented,
  * or NULL if that failed.
@@ -1167,13 +1174,23 @@ long get_user_pages_remote(struct task_struct *tsk, struct mm_struct *mm,
 		struct vm_area_struct **vmas, int *locked)
 {
 	/*
-	 * FIXME: Current FOLL_LONGTERM behavior is incompatible with
+	 * Parts of FOLL_LONGTERM behavior are incompatible with
 	 * FAULT_FLAG_ALLOW_RETRY because of the FS DAX check requirement on
-	 * vmas. As there are no users of this flag in this call we simply
-	 * disallow this option for now.
+	 * vmas. However, this only comes up if locked is set, and there are
+	 * callers that do request FOLL_LONGTERM, but do not set locked. So,
+	 * allow what we can.
 	 */
-	if (WARN_ON_ONCE(gup_flags & FOLL_LONGTERM))
-		return -EINVAL;
+	if (gup_flags & FOLL_LONGTERM) {
+		if (WARN_ON_ONCE(locked))
+			return -EINVAL;
+		/*
+		 * This will check the vmas (even if our vmas arg is NULL)
+		 * and return -ENOTSUPP if DAX isn't allowed in this case:
+		 */
+		return __gup_longterm_locked(tsk, mm, start, nr_pages, pages,
+					     vmas, gup_flags | FOLL_TOUCH |
+					     FOLL_REMOTE);
+	}
 
 	return __get_user_pages_locked(tsk, mm, start, nr_pages, pages, vmas,
 				       locked,
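For readers following the DAX refactoring: the check_dax_vmas() helper
that __gup_longterm_locked() invokes is roughly shaped like the sketch
below (a paraphrase of the mm/gup.c code of this era, not part of this
patch's diff). It walks the vmas backing the pinned pages and refuses
long-term pins on fs-dax mappings:

/*
 * Sketch (paraphrased, not from this patch) of the DAX check applied by
 * __gup_longterm_locked(): refuse long-term pins on fs-dax mappings,
 * because a userspace-controlled pin lifetime could stall filesystem
 * operations indefinitely.
 */
static bool check_dax_vmas(struct vm_area_struct **vmas, long nr_pages)
{
	struct vm_area_struct *vma_prev = NULL;
	long i;

	for (i = 0; i < nr_pages; i++) {
		struct vm_area_struct *vma = vmas[i];

		/* consecutive pages often share a vma; check each vma once */
		if (vma == vma_prev)
			continue;
		vma_prev = vma;

		if (vma_is_fsdax(vma))
			return true;
	}
	return false;
}

When this check fires, __gup_longterm_locked() drops the page references
it took and fails the whole request, which is what the VFIO call site
used to do by hand with its local vmas[] array and vma_is_fsdax() test.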