From patchwork Fri Aug  2 02:19:41 2019
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: john.hubbard@gmail.com
X-Patchwork-Id: 11072415
From: john.hubbard@gmail.com
X-Google-Original-From: jhubbard@nvidia.com
To: Andrew Morton
Cc: Christoph Hellwig, Dan Williams, Dave Chinner, Dave Hansen,
    Ira Weiny, Jan Kara, Jason Gunthorpe, Jérôme Glisse, LKML,
    amd-gfx@lists.freedesktop.org, ceph-devel@vger.kernel.org,
    devel@driverdev.osuosl.org, devel@lists.orangefs.org,
    dri-devel@lists.freedesktop.org, intel-gfx@lists.freedesktop.org,
    kvm@vger.kernel.org, linux-arm-kernel@lists.infradead.org,
    linux-block@vger.kernel.org, linux-crypto@vger.kernel.org,
    linux-fbdev@vger.kernel.org, linux-fsdevel@vger.kernel.org,
    linux-media@vger.kernel.org, linux-mm@kvack.org,
    linux-nfs@vger.kernel.org, linux-rdma@vger.kernel.org,
    linux-rpi-kernel@lists.infradead.org, linux-xfs@vger.kernel.org,
    netdev@vger.kernel.org, rds-devel@oss.oracle.com,
    sparclinux@vger.kernel.org, x86@kernel.org,
    xen-devel@lists.xenproject.org, John Hubbard, Frank Haverkamp,
    "Guilherme G. Piccoli", Arnd Bergmann, Greg Kroah-Hartman
Subject: [PATCH 10/34] genwqe: convert put_page() to put_user_page*()
Date: Thu, 1 Aug 2019 19:19:41 -0700
Message-Id: <20190802022005.5117-11-jhubbard@nvidia.com>
X-Mailer: git-send-email 2.22.0
In-Reply-To: <20190802022005.5117-1-jhubbard@nvidia.com>
References: <20190802022005.5117-1-jhubbard@nvidia.com>

From: John Hubbard <jhubbard@nvidia.com>

For pages that were retained via get_user_pages*(), release those pages
via the new put_user_page*() routines, instead of via put_page() or
release_pages().

This is part of a tree-wide conversion, as described in commit
fc1d8e7cca2d ("mm: introduce put_user_page*(), placeholder versions").

This changes the release code slightly, because each page slot in the
page_list[] array is no longer checked for NULL. However, that check
was wrong anyway, because the get_user_pages() pattern of usage here
never allowed for NULL entries within a range of pinned pages.

Cc: Frank Haverkamp
Cc: "Guilherme G. Piccoli"
Cc: Arnd Bergmann
Cc: Greg Kroah-Hartman
Signed-off-by: John Hubbard <jhubbard@nvidia.com>
Acked-by: Greg Kroah-Hartman
---
 drivers/misc/genwqe/card_utils.c | 17 +++--------------
 1 file changed, 3 insertions(+), 14 deletions(-)

diff --git a/drivers/misc/genwqe/card_utils.c b/drivers/misc/genwqe/card_utils.c
index 2e1c4d2905e8..2a888f31d2c5 100644
--- a/drivers/misc/genwqe/card_utils.c
+++ b/drivers/misc/genwqe/card_utils.c
@@ -517,24 +517,13 @@ int genwqe_free_sync_sgl(struct genwqe_dev *cd, struct genwqe_sgl *sgl)
 
 /**
  * genwqe_free_user_pages() - Give pinned pages back
  *
- * Documentation of get_user_pages is in mm/gup.c:
- *
- * If the page is written to, set_page_dirty (or set_page_dirty_lock,
- * as appropriate) must be called after the page is finished with, and
- * before put_page is called.
+ * The pages may have been written to, so we call put_user_pages_dirty_lock(),
+ * rather than put_user_pages().
  */
 static int genwqe_free_user_pages(struct page **page_list,
 				  unsigned int nr_pages, int dirty)
 {
-	unsigned int i;
-
-	for (i = 0; i < nr_pages; i++) {
-		if (page_list[i] != NULL) {
-			if (dirty)
-				set_page_dirty_lock(page_list[i]);
-			put_page(page_list[i]);
-		}
-	}
+	put_user_pages_dirty_lock(page_list, nr_pages, dirty);
 	return 0;
 }
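
For background, the overall pin/release pattern that this series converts
looks roughly like the sketch below. It is illustrative only, not code from
this patch: example_dma_rw() and its surroundings are hypothetical, and it
assumes the three-argument form of put_user_pages_dirty_lock() added earlier
in this series.

#include <linux/mm.h>

/*
 * Illustrative sketch (not part of this patch): pin user pages for
 * device access, then release them via the new put_user_page*() API.
 * The function name and error handling here are hypothetical.
 */
static int example_dma_rw(unsigned long uaddr, int nr_pages,
			  struct page **page_list)
{
	int pinned;

	/* Pin the user pages; FOLL_WRITE because the device writes them. */
	pinned = get_user_pages_fast(uaddr, nr_pages, FOLL_WRITE, page_list);
	if (pinned < 0)
		return pinned;

	/* ... device DMA to/from the pinned pages happens here ... */

	/*
	 * Old pattern: loop over page_list[], calling set_page_dirty_lock()
	 * and then put_page() on each entry. New pattern: a single call
	 * that marks the pages dirty (when requested) and drops the
	 * get_user_pages() reference on each page.
	 */
	put_user_pages_dirty_lock(page_list, pinned, true);
	return 0;
}

As commit fc1d8e7cca2d notes, the put_user_page*() routines are placeholders
for now (they simply call put_page()), but converting call sites like this
lets the implementation later distinguish gup-pinned pages from ordinary
page references.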