From patchwork Mon Dec 16 22:25:23 2019
X-Patchwork-Submitter: John Hubbard
X-Patchwork-Id: 11295825
From: John Hubbard
To: Andrew Morton
CC: Al Viro, Alex Williamson, Benjamin Herrenschmidt, Björn Töpel,
 Christoph Hellwig, Dan Williams, Daniel Vetter, Dave Chinner,
 David Airlie, David S. Miller,
Miller" , Ira Weiny , Jan Kara , Jason Gunthorpe , Jens Axboe , Jonathan Corbet , =?utf-8?b?SsOpcsO0bWUgR2xpc3Nl?= , Magnus Karlsson , Mauro Carvalho Chehab , Michael Ellerman , Michal Hocko , Mike Kravetz , Paul Mackerras , Shuah Khan , Vlastimil Babka , , , , , , , , , , , , , LKML , John Hubbard Subject: [PATCH v11 11/25] goldish_pipe: convert to pin_user_pages() and put_user_page() Date: Mon, 16 Dec 2019 14:25:23 -0800 Message-ID: <20191216222537.491123-12-jhubbard@nvidia.com> X-Mailer: git-send-email 2.24.1 In-Reply-To: <20191216222537.491123-1-jhubbard@nvidia.com> References: <20191216222537.491123-1-jhubbard@nvidia.com> MIME-Version: 1.0 X-NVConfidentiality: public DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=nvidia.com; s=n1; t=1576535132; bh=XIijceIHWSW9TrSkDewdB+tU5E9JpJGXsRb1kC3VbQM=; h=X-PGP-Universal:From:To:CC:Subject:Date:Message-ID:X-Mailer: In-Reply-To:References:MIME-Version:X-NVConfidentiality: Content-Transfer-Encoding:Content-Type; b=LDXDwMrFYu6AWK6uD/mvIYN6EuuGbtT/SyMiWGv23Qfa4BKHvlSPjdYQy2/2dtVnq dYROSFoaEkY1uU7PfbAc7ixO1MCQVgO4fUwIxrG39q4RcSYiUGNCWbHt+WakJUXHEV 1nLbRiAYlNCE3KwIybAfesI/WAWg4Es7pr0Y+fxx8p1liK0c90h789l8cauGWutkbz JC5bBJ+SAAe5/RHuVmKj9dcH5NHBXrNhL73GErXQoNr01HC39JTGvnXxdI/KUeualP yw0BKWKorFCC3bPbDIfQUZPX2gFElH+XmzSS1ksvBb5ABx088UchXighe+dvW73O/F AQqqbpjG6pBgw== Sender: linux-kselftest-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: linux-kselftest@vger.kernel.org 1. Call the new global pin_user_pages_fast(), from pin_goldfish_pages(). 2. As required by pin_user_pages(), release these pages via put_user_page(). In this case, do so via put_user_pages_dirty_lock(). That has the side effect of calling set_page_dirty_lock(), instead of set_page_dirty(). This is probably more accurate. As Christoph Hellwig put it, "set_page_dirty() is only safe if we are dealing with a file backed page where we have reference on the inode it hangs off." [1] Another side effect is that the release code is simplified because the page[] loop is now in gup.c instead of here, so just delete the local release_user_pages() entirely, and call put_user_pages_dirty_lock() directly, instead. [1] https://lore.kernel.org/r/20190723153640.GB720@lst.de Reviewed-by: Jan Kara Reviewed-by: Ira Weiny Signed-off-by: John Hubbard --- drivers/platform/goldfish/goldfish_pipe.c | 17 +++-------------- 1 file changed, 3 insertions(+), 14 deletions(-) diff --git a/drivers/platform/goldfish/goldfish_pipe.c b/drivers/platform/goldfish/goldfish_pipe.c index ef50c264db71..2a5901efecde 100644 --- a/drivers/platform/goldfish/goldfish_pipe.c +++ b/drivers/platform/goldfish/goldfish_pipe.c @@ -274,7 +274,7 @@ static int goldfish_pin_pages(unsigned long first_page, *iter_last_page_size = last_page_size; } - ret = get_user_pages_fast(first_page, requested_pages, + ret = pin_user_pages_fast(first_page, requested_pages, !is_write ? 
diff --git a/drivers/platform/goldfish/goldfish_pipe.c b/drivers/platform/goldfish/goldfish_pipe.c
index ef50c264db71..2a5901efecde 100644
--- a/drivers/platform/goldfish/goldfish_pipe.c
+++ b/drivers/platform/goldfish/goldfish_pipe.c
@@ -274,7 +274,7 @@ static int goldfish_pin_pages(unsigned long first_page,
 		*iter_last_page_size = last_page_size;
 	}
 
-	ret = get_user_pages_fast(first_page, requested_pages,
+	ret = pin_user_pages_fast(first_page, requested_pages,
 				  !is_write ? FOLL_WRITE : 0,
 				  pages);
 	if (ret <= 0)
@@ -285,18 +285,6 @@ static int goldfish_pin_pages(unsigned long first_page,
 	return ret;
 }
 
-static void release_user_pages(struct page **pages, int pages_count,
-			       int is_write, s32 consumed_size)
-{
-	int i;
-
-	for (i = 0; i < pages_count; i++) {
-		if (!is_write && consumed_size > 0)
-			set_page_dirty(pages[i]);
-		put_page(pages[i]);
-	}
-}
-
 /* Populate the call parameters, merging adjacent pages together */
 static void populate_rw_params(struct page **pages,
 			       int pages_count,
@@ -372,7 +360,8 @@ static int transfer_max_buffers(struct goldfish_pipe *pipe,
 
 	*consumed_size = pipe->command_buffer->rw_params.consumed_size;
 
-	release_user_pages(pipe->pages, pages_count, is_write, *consumed_size);
+	put_user_pages_dirty_lock(pipe->pages, pages_count,
+				  !is_write && *consumed_size > 0);
 
 	mutex_unlock(&pipe->lock);
 	return 0;
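
[Editorial note, not part of the patch: the new call is roughly
equivalent to the deleted release_user_pages() loop, with
set_page_dirty() swapped for set_page_dirty_lock() and put_page() for
put_user_page(). A simplified model of that equivalence follows; the
real gup.c implementation short-circuits the non-dirty case and handles
additional corner cases.]

#include <linux/mm.h>	/* set_page_dirty_lock(), put_user_page() */

/* Simplified model of put_user_pages_dirty_lock(); not the gup.c code. */
static void put_user_pages_dirty_lock_sketch(struct page **pages,
					     unsigned long npages,
					     bool make_dirty)
{
	unsigned long i;

	for (i = 0; i < npages; i++) {
		if (make_dirty)
			set_page_dirty_lock(pages[i]);
		put_user_page(pages[i]);
	}
}

In this driver, make_dirty is "!is_write && *consumed_size > 0": the
pages are marked dirty only when the device actually wrote data back
into the user buffer, mirroring the condition in the deleted loop.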