From patchwork Fri Jan 31 06:12:39 2020
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-Patchwork-Submitter: Andrew Morton
X-Patchwork-Id: 11359175
Date: Thu, 30 Jan 2020 22:12:39 -0800
From: Andrew Morton
To: akpm@linux-foundation.org, alex.williamson@redhat.com,
 aneesh.kumar@linux.ibm.com, axboe@kernel.dk, bjorn.topel@intel.com,
 corbet@lwn.net, dan.j.williams@intel.com, daniel.vetter@ffwll.ch,
 hch@lst.de, hverkuil-cisco@xs4all.nl, ira.weiny@intel.com, jack@suse.cz,
 jgg@mellanox.com, jgg@ziepe.ca, jglisse@redhat.com, jhubbard@nvidia.com,
 kirill@shutemov.name, leonro@mellanox.com, linux-mm@kvack.org,
 mchehab@kernel.org, mm-commits@vger.kernel.org, rppt@linux.ibm.com,
 torvalds@linux-foundation.org
Subject: [patch 030/118] vfio: fix FOLL_LONGTERM use, simplify get_user_pages_remote() call
Message-ID: <20200131061239.1_xurFxAv%akpm@linux-foundation.org>
In-Reply-To: <20200130221021.5f0211c56346d5485af07923@linux-foundation.org>
User-Agent: s-nail v14.8.16

From: John Hubbard
Subject: vfio: fix FOLL_LONGTERM use, simplify get_user_pages_remote() call

Update VFIO to take advantage of the recently loosened restriction on
FOLL_LONGTERM with get_user_pages_remote().  Also, now it is possible to
fix a bug: the VFIO caller is logically a FOLL_LONGTERM user, but it
wasn't setting FOLL_LONGTERM.

Also, remove an unnecessary pair of calls that were releasing and
reacquiring the mmap_sem.  There is no need to avoid holding mmap_sem
just in order to call page_to_pfn().

Also, now that the DAX check ("if a VMA is DAX, don't allow long term
pinning") is in the internals of get_user_pages_remote() and
__gup_longterm_locked(), there's no need for it at the VFIO call site.
So remove it.

Link: http://lkml.kernel.org/r/20200107224558.2362728-8-jhubbard@nvidia.com
Signed-off-by: John Hubbard
Tested-by: Alex Williamson
Acked-by: Alex Williamson
Reviewed-by: Jason Gunthorpe
Reviewed-by: Ira Weiny
Suggested-by: Jason Gunthorpe
Cc: Dan Williams
Cc: Jerome Glisse
Cc: Aneesh Kumar K.V
Cc: Björn Töpel
Cc: Christoph Hellwig
Cc: Daniel Vetter
Cc: Hans Verkuil
Cc: Jan Kara
Cc: Jens Axboe
Cc: Jonathan Corbet
Cc: Kirill A. Shutemov
Cc: Leon Romanovsky
Cc: Mauro Carvalho Chehab
Cc: Mike Rapoport
Signed-off-by: Andrew Morton
---

 drivers/vfio/vfio_iommu_type1.c |   30 +++++-------------------------
 1 file changed, 5 insertions(+), 25 deletions(-)

--- a/drivers/vfio/vfio_iommu_type1.c~vfio-fix-foll_longterm-use-simplify-get_user_pages_remote-call
+++ a/drivers/vfio/vfio_iommu_type1.c
@@ -322,7 +322,6 @@ static int vaddr_get_pfn(struct mm_struc
 {
         struct page *page[1];
         struct vm_area_struct *vma;
-        struct vm_area_struct *vmas[1];
         unsigned int flags = 0;
         int ret;
 
@@ -330,33 +329,14 @@ static int vaddr_get_pfn(struct mm_struc
                 flags |= FOLL_WRITE;
 
         down_read(&mm->mmap_sem);
-        if (mm == current->mm) {
-                ret = get_user_pages(vaddr, 1, flags | FOLL_LONGTERM, page,
-                                     vmas);
-        } else {
-                ret = get_user_pages_remote(NULL, mm, vaddr, 1, flags, page,
-                                            vmas, NULL);
-                /*
-                 * The lifetime of a vaddr_get_pfn() page pin is
-                 * userspace-controlled. In the fs-dax case this could
-                 * lead to indefinite stalls in filesystem operations.
-                 * Disallow attempts to pin fs-dax pages via this
-                 * interface.
-                 */
-                if (ret > 0 && vma_is_fsdax(vmas[0])) {
-                        ret = -EOPNOTSUPP;
-                        put_page(page[0]);
-                }
-        }
-        up_read(&mm->mmap_sem);
-
+        ret = get_user_pages_remote(NULL, mm, vaddr, 1, flags | FOLL_LONGTERM,
+                                    page, NULL, NULL);
         if (ret == 1) {
                 *pfn = page_to_pfn(page[0]);
-                return 0;
+                ret = 0;
+                goto done;
         }
 
-        down_read(&mm->mmap_sem);
-
         vaddr = untagged_addr(vaddr);
 
         vma = find_vma_intersection(mm, vaddr, vaddr + 1);
@@ -366,7 +346,7 @@ static int vaddr_get_pfn(struct mm_struc
                 if (is_invalid_reserved_pfn(*pfn))
                         ret = 0;
         }
-
+done:
         up_read(&mm->mmap_sem);
         return ret;
 }
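
For readability, here is a sketch of how vaddr_get_pfn() reads once the hunks
above are applied.  It is assembled from the diff's context and added lines;
the function signature, the IOMMU_WRITE test, and the VM_PFNMAP pfn
computation are not shown in this patch and are filled in here as assumptions
for illustration only.

/*
 * Sketch of the post-patch helper (assumed signature and assumed lines
 * marked below; not an authoritative copy of the file).
 */
static int vaddr_get_pfn(struct mm_struct *mm, unsigned long vaddr,
                         int prot, unsigned long *pfn)
{
        struct page *page[1];
        struct vm_area_struct *vma;
        unsigned int flags = 0;
        int ret;

        if (prot & IOMMU_WRITE)                 /* assumed context line */
                flags |= FOLL_WRITE;

        down_read(&mm->mmap_sem);
        /* One FOLL_LONGTERM remote call now covers both the current->mm
         * and the remote-mm cases; the DAX check lives inside gup. */
        ret = get_user_pages_remote(NULL, mm, vaddr, 1, flags | FOLL_LONGTERM,
                                    page, NULL, NULL);
        if (ret == 1) {
                /* page_to_pfn() is fine under mmap_sem, so no drop/retake. */
                *pfn = page_to_pfn(page[0]);
                ret = 0;
                goto done;
        }

        /* Fall back to a PFN lookup for VM_PFNMAP vmas. */
        vaddr = untagged_addr(vaddr);
        vma = find_vma_intersection(mm, vaddr, vaddr + 1);
        if (vma && vma->vm_flags & VM_PFNMAP) {
                /* assumed context line */
                *pfn = ((vaddr - vma->vm_start) >> PAGE_SHIFT) + vma->vm_pgoff;
                if (is_invalid_reserved_pfn(*pfn))
                        ret = 0;
        }
done:
        up_read(&mm->mmap_sem);
        return ret;
}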