From patchwork Mon Dec 9 22:53:37 2019
X-Patchwork-Id: 11280827
From: John Hubbard
To: Andrew Morton
CC: Al Viro, Alex Williamson, Benjamin Herrenschmidt, Björn Töpel,
    Christoph Hellwig, Dan Williams, Daniel Vetter, Dave Chinner,
    David Airlie, David S.
Miller" , Ira Weiny , Jan Kara , Jason Gunthorpe , Jens Axboe , Jonathan Corbet , =?utf-8?b?SsOpcsO0bWUgR2xpc3Nl?= , Magnus Karlsson , Mauro Carvalho Chehab , Michael Ellerman , Michal Hocko , Mike Kravetz , Paul Mackerras , Shuah Khan , Vlastimil Babka , , , , , , , , , , , , , LKML , John Hubbard Subject: [PATCH v8 19/26] vfio, mm: pin_user_pages (FOLL_PIN) and put_user_page() conversion Date: Mon, 9 Dec 2019 14:53:37 -0800 Message-ID: <20191209225344.99740-20-jhubbard@nvidia.com> X-Mailer: git-send-email 2.24.0 In-Reply-To: <20191209225344.99740-1-jhubbard@nvidia.com> References: <20191209225344.99740-1-jhubbard@nvidia.com> MIME-Version: 1.0 X-NVConfidentiality: public DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=nvidia.com; s=n1; t=1575932041; bh=wk2NwG+UPRzhGeGA4o6oMgrCb1pvQrHxv3rBUNt8i3I=; h=X-PGP-Universal:From:To:CC:Subject:Date:Message-ID:X-Mailer: In-Reply-To:References:MIME-Version:X-NVConfidentiality: Content-Transfer-Encoding:Content-Type; b=C/fTUzxTzc3p4YYS/Nv5gvqXhsAxxoWv8tZ974VATw38uiWGEbLNaDGmwbNqYtLK1 K59KCCk/dyawtHZe5FCoi/kvtcW4tuPSy7Y/1aWeU93Ir6XhWJweBeyDa6HOSUR7Mt 9ZlEDQlQeMhgvVQC7Ax4Htw9EfR2wJmO49/8vDlTMQQsyJc+VQSQNddYpablP3ZCSe UCSPMjSZHDwfu46LDOlrFiKiyN3j+N9FuQDUdnf7LKiQ9XM83hMEstSKoHfrGDbtKw M7fn1Bq5IxACM89YgHuTaBgLNc2aX9ruMDTWtsGw6NIfxaiiHBwfAv8XXXEt9BBtqm im/CQZKpFUklQ== Sender: linux-fsdevel-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: linux-fsdevel@vger.kernel.org 1. Change vfio from get_user_pages_remote(), to pin_user_pages_remote(). 2. Because all FOLL_PIN-acquired pages must be released via put_user_page(), also convert the put_page() call over to put_user_pages_dirty_lock(). Note that this effectively changes the code's behavior in vfio_iommu_type1.c: put_pfn(): it now ultimately calls set_page_dirty_lock(), instead of set_page_dirty(). This is probably more accurate. As Christoph Hellwig put it, "set_page_dirty() is only safe if we are dealing with a file backed page where we have reference on the inode it hangs off." [1] [1] https://lore.kernel.org/r/20190723153640.GB720@lst.de Tested-by: Alex Williamson Acked-by: Alex Williamson Signed-off-by: John Hubbard --- drivers/vfio/vfio_iommu_type1.c | 7 +++---- 1 file changed, 3 insertions(+), 4 deletions(-) diff --git a/drivers/vfio/vfio_iommu_type1.c b/drivers/vfio/vfio_iommu_type1.c index b800fc9a0251..18bfc2fc8e6d 100644 --- a/drivers/vfio/vfio_iommu_type1.c +++ b/drivers/vfio/vfio_iommu_type1.c @@ -309,9 +309,8 @@ static int put_pfn(unsigned long pfn, int prot) { if (!is_invalid_reserved_pfn(pfn)) { struct page *page = pfn_to_page(pfn); - if (prot & IOMMU_WRITE) - SetPageDirty(page); - put_page(page); + + put_user_pages_dirty_lock(&page, 1, prot & IOMMU_WRITE); return 1; } return 0; @@ -329,7 +328,7 @@ static int vaddr_get_pfn(struct mm_struct *mm, unsigned long vaddr, flags |= FOLL_WRITE; down_read(&mm->mmap_sem); - ret = get_user_pages_remote(NULL, mm, vaddr, 1, flags | FOLL_LONGTERM, + ret = pin_user_pages_remote(NULL, mm, vaddr, 1, flags | FOLL_LONGTERM, page, NULL, NULL); if (ret == 1) { *pfn = page_to_pfn(page[0]);