From patchwork Fri Sep 28 05:39:46 2018
From: john.hubbard@gmail.com
X-Google-Original-From: jhubbard@nvidia.com
To: Matthew Wilcox, Michal Hocko, Christopher Lameter, Jason Gunthorpe, Dan Williams, Jan Kara, Al Viro
Cc: linux-mm@kvack.org, LKML, linux-rdma, linux-fsdevel@vger.kernel.org, John Hubbard
Subject: [PATCH 1/4] mm: get_user_pages: consolidate error handling
Date: Thu, 27 Sep 2018 22:39:46 -0700
Message-Id: <20180928053949.5381-2-jhubbard@nvidia.com>
In-Reply-To: <20180928053949.5381-1-jhubbard@nvidia.com>
References: <20180928053949.5381-1-jhubbard@nvidia.com>

From: John Hubbard

An upcoming patch requires a way to operate on each page that any of the
get_user_pages_*() variants returns. In preparation for that, consolidate
the error handling for __get_user_pages(). This provides a single
location (the "out:" label) for operating on the collected set of pages
that are about to be returned.

As long as every use of the "ret" variable is being edited, rename
"ret" --> "err", so that its name matches its true role. This also gets
rid of two shadowed variable declarations, as a tiny beneficial side
effect.
Reviewed-by: Jan Kara
Signed-off-by: John Hubbard
---
 mm/gup.c | 37 ++++++++++++++++++++++---------------
 1 file changed, 22 insertions(+), 15 deletions(-)

diff --git a/mm/gup.c b/mm/gup.c
index 1abc8b4afff6..05ee7c18e59a 100644
--- a/mm/gup.c
+++ b/mm/gup.c
@@ -660,6 +660,7 @@ static long __get_user_pages(struct task_struct *tsk, struct mm_struct *mm,
 		struct vm_area_struct **vmas, int *nonblocking)
 {
 	long i = 0;
+	int err = 0;
 	unsigned int page_mask;
 	struct vm_area_struct *vma = NULL;
 
@@ -685,18 +686,19 @@ static long __get_user_pages(struct task_struct *tsk, struct mm_struct *mm,
 		if (!vma || start >= vma->vm_end) {
 			vma = find_extend_vma(mm, start);
 			if (!vma && in_gate_area(mm, start)) {
-				int ret;
-				ret = get_gate_page(mm, start & PAGE_MASK,
+				err = get_gate_page(mm, start & PAGE_MASK,
 						gup_flags, &vma,
 						pages ? &pages[i] : NULL);
-				if (ret)
-					return i ? : ret;
+				if (err)
+					goto out;
 				page_mask = 0;
 				goto next_page;
 			}
-			if (!vma || check_vma_flags(vma, gup_flags))
-				return i ? : -EFAULT;
+			if (!vma || check_vma_flags(vma, gup_flags)) {
+				err = -EFAULT;
+				goto out;
+			}
 			if (is_vm_hugetlb_page(vma)) {
 				i = follow_hugetlb_page(mm, vma, pages, vmas,
 						&start, &nr_pages, i,
@@ -709,23 +711,25 @@ static long __get_user_pages(struct task_struct *tsk, struct mm_struct *mm,
 		 * If we have a pending SIGKILL, don't keep faulting pages and
 		 * potentially allocating memory.
 		 */
-		if (unlikely(fatal_signal_pending(current)))
-			return i ? i : -ERESTARTSYS;
+		if (unlikely(fatal_signal_pending(current))) {
+			err = -ERESTARTSYS;
+			goto out;
+		}
 		cond_resched();
 		page = follow_page_mask(vma, start, foll_flags, &page_mask);
 		if (!page) {
-			int ret;
-			ret = faultin_page(tsk, vma, start, &foll_flags,
+			err = faultin_page(tsk, vma, start, &foll_flags,
 					nonblocking);
-			switch (ret) {
+			switch (err) {
 			case 0:
 				goto retry;
 			case -EFAULT:
 			case -ENOMEM:
 			case -EHWPOISON:
-				return i ? i : ret;
+				goto out;
 			case -EBUSY:
-				return i;
+				err = 0;
+				goto out;
 			case -ENOENT:
 				goto next_page;
 			}
@@ -737,7 +741,8 @@ static long __get_user_pages(struct task_struct *tsk, struct mm_struct *mm,
 			 */
 			goto next_page;
 		} else if (IS_ERR(page)) {
-			return i ? i : PTR_ERR(page);
+			err = PTR_ERR(page);
+			goto out;
 		}
 		if (pages) {
 			pages[i] = page;
@@ -757,7 +762,9 @@ static long __get_user_pages(struct task_struct *tsk, struct mm_struct *mm,
 		start += page_increm * PAGE_SIZE;
 		nr_pages -= page_increm;
 	} while (nr_pages);
-	return i;
+
+out:
+	return i ? i : err;
 }
 
 static bool vma_permits_fault(struct vm_area_struct *vma,

From patchwork Fri Sep 28 05:39:48 2018
From: john.hubbard@gmail.com
X-Google-Original-From: jhubbard@nvidia.com
To: Matthew Wilcox, Michal Hocko, Christopher Lameter, Jason Gunthorpe, Dan Williams, Jan Kara, Al Viro
Cc: linux-mm@kvack.org, LKML, linux-rdma, linux-fsdevel@vger.kernel.org, John Hubbard
Subject: [PATCH 2/4] mm: introduce put_user_page(), placeholder version
Date: Thu, 27 Sep 2018 22:39:48 -0700
Message-Id: <20180928053949.5381-4-jhubbard@nvidia.com>
In-Reply-To: <20180928053949.5381-1-jhubbard@nvidia.com>
References: <20180928053949.5381-1-jhubbard@nvidia.com>

From: John Hubbard

Introduces put_user_page(), which simply calls put_page(). This provides
a way to update all get_user_pages*() callers, so that they call
put_user_page() instead of put_page().

Also adds release_user_pages(), a drop-in replacement for
release_pages(). This is intended to be easily grep-able, for later
performance improvements: release_user_pages() is not batched the way
release_pages() is, and is significantly slower.

Also: rename goldfish_pipe.c's release_user_pages(), in order to avoid a
naming conflict with the new external function of the same name.

This prepares for eventually fixing the problem described in [1], and is
following a plan listed in [2].

[1] https://lwn.net/Articles/753027/ : "The Trouble with get_user_pages()"
[2] https://lkml.kernel.org/r/20180709080554.21931-1-jhubbard@nvidia.com
    Proposed steps for fixing get_user_pages() + DMA problems.
CC: Matthew Wilcox
CC: Michal Hocko
CC: Christopher Lameter
CC: Jason Gunthorpe
CC: Dan Williams
CC: Jan Kara
CC: Al Viro
Signed-off-by: John Hubbard
---
 drivers/platform/goldfish/goldfish_pipe.c |  4 ++--
 include/linux/mm.h                        | 14 ++++++++++++++
 2 files changed, 16 insertions(+), 2 deletions(-)

diff --git a/drivers/platform/goldfish/goldfish_pipe.c b/drivers/platform/goldfish/goldfish_pipe.c
index 2da567540c2d..fad0345376e0 100644
--- a/drivers/platform/goldfish/goldfish_pipe.c
+++ b/drivers/platform/goldfish/goldfish_pipe.c
@@ -332,7 +332,7 @@ static int pin_user_pages(unsigned long first_page, unsigned long last_page,
 }
 
-static void release_user_pages(struct page **pages, int pages_count,
+static void __release_user_pages(struct page **pages, int pages_count,
 	int is_write, s32 consumed_size)
 {
 	int i;
@@ -410,7 +410,7 @@ static int transfer_max_buffers(struct goldfish_pipe *pipe,
 
 	*consumed_size = pipe->command_buffer->rw_params.consumed_size;
 
-	release_user_pages(pages, pages_count, is_write, *consumed_size);
+	__release_user_pages(pages, pages_count, is_write, *consumed_size);
 
 	mutex_unlock(&pipe->lock);
 
diff --git a/include/linux/mm.h b/include/linux/mm.h
index a61ebe8ad4ca..72caf803115f 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -943,6 +943,20 @@ static inline void put_page(struct page *page)
 		__put_page(page);
 }
 
+/* Placeholder version, until all get_user_pages*() callers are updated. */
+static inline void put_user_page(struct page *page)
+{
+	put_page(page);
+}
+
+/* A drop-in replacement for release_pages(): */
+static inline void release_user_pages(struct page **pages,
+				      unsigned long npages)
+{
+	while (npages)
+		put_user_page(pages[--npages]);
+}
+
 #if defined(CONFIG_SPARSEMEM) && !defined(CONFIG_SPARSEMEM_VMEMMAP)
 #define SECTION_IN_PAGE_FLAGS
 #endif

From patchwork Fri Sep 28 05:39:47 2018
From: john.hubbard@gmail.com
X-Google-Original-From: jhubbard@nvidia.com
To: Matthew Wilcox, Michal Hocko, Christopher Lameter, Jason Gunthorpe, Dan Williams, Jan Kara, Al Viro
Cc: linux-mm@kvack.org, LKML, linux-rdma, linux-fsdevel@vger.kernel.org, John Hubbard, Doug Ledford, Mike Marciniszyn, Dennis Dalessandro, Christian Benvenuti
Subject: [PATCH 3/4] infiniband/mm: convert to the new put_user_page() call
Date: Thu, 27 Sep 2018 22:39:47 -0700
Message-Id: <20180928053949.5381-3-jhubbard@nvidia.com>
In-Reply-To: <20180928053949.5381-1-jhubbard@nvidia.com>
References: <20180928053949.5381-1-jhubbard@nvidia.com>

From: John Hubbard

For code that retains pages via get_user_pages*(), release those pages
via the new put_user_page(), instead of put_page().

This prepares for eventually fixing the problem described in [1], and is
following a plan listed in [2].

[1] https://lwn.net/Articles/753027/ : "The Trouble with get_user_pages()"
[2] https://lkml.kernel.org/r/20180709080554.21931-1-jhubbard@nvidia.com
    Proposed steps for fixing get_user_pages() + DMA problems.
CC: Doug Ledford
CC: Jason Gunthorpe
CC: Mike Marciniszyn
CC: Dennis Dalessandro
CC: Christian Benvenuti
CC: linux-rdma@vger.kernel.org
CC: linux-kernel@vger.kernel.org
CC: linux-mm@kvack.org
Signed-off-by: John Hubbard
Acked-by: Jason Gunthorpe
Reviewed-by: Dennis Dalessandro
---
 drivers/infiniband/core/umem.c              | 2 +-
 drivers/infiniband/core/umem_odp.c          | 2 +-
 drivers/infiniband/hw/hfi1/user_pages.c     | 2 +-
 drivers/infiniband/hw/mthca/mthca_memfree.c | 6 +++---
 drivers/infiniband/hw/qib/qib_user_pages.c  | 2 +-
 drivers/infiniband/hw/qib/qib_user_sdma.c   | 8 ++++----
 drivers/infiniband/hw/usnic/usnic_uiom.c    | 2 +-
 7 files changed, 12 insertions(+), 12 deletions(-)

diff --git a/drivers/infiniband/core/umem.c b/drivers/infiniband/core/umem.c
index a41792dbae1f..9430d697cb9f 100644
--- a/drivers/infiniband/core/umem.c
+++ b/drivers/infiniband/core/umem.c
@@ -60,7 +60,7 @@ static void __ib_umem_release(struct ib_device *dev, struct ib_umem *umem, int d
 		page = sg_page(sg);
 		if (!PageDirty(page) && umem->writable && dirty)
 			set_page_dirty_lock(page);
-		put_page(page);
+		put_user_page(page);
 	}
 
 	sg_free_table(&umem->sg_head);
diff --git a/drivers/infiniband/core/umem_odp.c b/drivers/infiniband/core/umem_odp.c
index 6ec748eccff7..6227b89cf05c 100644
--- a/drivers/infiniband/core/umem_odp.c
+++ b/drivers/infiniband/core/umem_odp.c
@@ -717,7 +717,7 @@ int ib_umem_odp_map_dma_pages(struct ib_umem *umem, u64 user_virt, u64 bcnt,
 				ret = -EFAULT;
 				break;
 			}
-			put_page(local_page_list[j]);
+			put_user_page(local_page_list[j]);
 			continue;
 		}
diff --git a/drivers/infiniband/hw/hfi1/user_pages.c b/drivers/infiniband/hw/hfi1/user_pages.c
index e341e6dcc388..c7516029af33 100644
--- a/drivers/infiniband/hw/hfi1/user_pages.c
+++ b/drivers/infiniband/hw/hfi1/user_pages.c
@@ -126,7 +126,7 @@ void hfi1_release_user_pages(struct mm_struct *mm, struct page **p,
 	for (i = 0; i < npages; i++) {
 		if (dirty)
 			set_page_dirty_lock(p[i]);
-		put_page(p[i]);
+		put_user_page(p[i]);
 	}
 
 	if (mm) { /* during close after signal, mm can be NULL */
diff --git a/drivers/infiniband/hw/mthca/mthca_memfree.c b/drivers/infiniband/hw/mthca/mthca_memfree.c
index cc9c0c8ccba3..b8b12effd009 100644
--- a/drivers/infiniband/hw/mthca/mthca_memfree.c
+++ b/drivers/infiniband/hw/mthca/mthca_memfree.c
@@ -481,7 +481,7 @@ int mthca_map_user_db(struct mthca_dev *dev, struct mthca_uar *uar,
 	ret = pci_map_sg(dev->pdev, &db_tab->page[i].mem, 1, PCI_DMA_TODEVICE);
 	if (ret < 0) {
-		put_page(pages[0]);
+		put_user_page(pages[0]);
 		goto out;
 	}
 
@@ -489,7 +489,7 @@ int mthca_map_user_db(struct mthca_dev *dev, struct mthca_uar *uar,
 			 mthca_uarc_virt(dev, uar, i));
 	if (ret) {
 		pci_unmap_sg(dev->pdev, &db_tab->page[i].mem, 1, PCI_DMA_TODEVICE);
-		put_page(sg_page(&db_tab->page[i].mem));
+		put_user_page(sg_page(&db_tab->page[i].mem));
 		goto out;
 	}
 
@@ -555,7 +555,7 @@ void mthca_cleanup_user_db_tab(struct mthca_dev *dev, struct mthca_uar *uar,
 		if (db_tab->page[i].uvirt) {
 			mthca_UNMAP_ICM(dev, mthca_uarc_virt(dev, uar, i), 1);
 			pci_unmap_sg(dev->pdev, &db_tab->page[i].mem, 1, PCI_DMA_TODEVICE);
-			put_page(sg_page(&db_tab->page[i].mem));
+			put_user_page(sg_page(&db_tab->page[i].mem));
 		}
 	}
diff --git a/drivers/infiniband/hw/qib/qib_user_pages.c b/drivers/infiniband/hw/qib/qib_user_pages.c
index 16543d5e80c3..3f8fd42dd7fc 100644
--- a/drivers/infiniband/hw/qib/qib_user_pages.c
+++ b/drivers/infiniband/hw/qib/qib_user_pages.c
@@ -45,7 +45,7 @@ static void __qib_release_user_pages(struct page **p, size_t num_pages,
 	for (i = 0; i < num_pages; i++) {
 		if (dirty)
 			set_page_dirty_lock(p[i]);
-		put_page(p[i]);
+		put_user_page(p[i]);
 	}
 }
diff --git a/drivers/infiniband/hw/qib/qib_user_sdma.c b/drivers/infiniband/hw/qib/qib_user_sdma.c
index 926f3c8eba69..14f94d823907 100644
--- a/drivers/infiniband/hw/qib/qib_user_sdma.c
+++ b/drivers/infiniband/hw/qib/qib_user_sdma.c
@@ -266,7 +266,7 @@ static void qib_user_sdma_init_frag(struct qib_user_sdma_pkt *pkt,
 	pkt->addr[i].length = len;
 	pkt->addr[i].first_desc = first_desc;
 	pkt->addr[i].last_desc = last_desc;
-	pkt->addr[i].put_page = put_page;
+	pkt->addr[i].put_page = put_user_page;
 	pkt->addr[i].dma_mapped = dma_mapped;
 	pkt->addr[i].page = page;
 	pkt->addr[i].kvaddr = kvaddr;
@@ -321,7 +321,7 @@ static int qib_user_sdma_page_to_frags(const struct qib_devdata *dd,
 		 * the caller can ignore this page.
 		 */
 		if (put) {
-			put_page(page);
+			put_user_page(page);
 		} else {
 			/* coalesce case */
 			kunmap(page);
@@ -635,7 +635,7 @@ static void qib_user_sdma_free_pkt_frag(struct device *dev,
 				kunmap(pkt->addr[i].page);
 
 			if (pkt->addr[i].put_page)
-				put_page(pkt->addr[i].page);
+				put_user_page(pkt->addr[i].page);
 			else
 				__free_page(pkt->addr[i].page);
 		} else if (pkt->addr[i].kvaddr) {
@@ -710,7 +710,7 @@ static int qib_user_sdma_pin_pages(const struct qib_devdata *dd,
 	/* if error, return all pages not managed by pkt */
 free_pages:
 	while (i < j)
-		put_page(pages[i++]);
+		put_user_page(pages[i++]);
 
 done:
 	return ret;
diff --git a/drivers/infiniband/hw/usnic/usnic_uiom.c b/drivers/infiniband/hw/usnic/usnic_uiom.c
index 9dd39daa602b..0f607f31c262 100644
--- a/drivers/infiniband/hw/usnic/usnic_uiom.c
+++ b/drivers/infiniband/hw/usnic/usnic_uiom.c
@@ -91,7 +91,7 @@ static void usnic_uiom_put_pages(struct list_head *chunk_list, int dirty)
 			pa = sg_phys(sg);
 			if (!PageDirty(page) && dirty)
 				set_page_dirty_lock(page);
-			put_page(page);
+			put_user_page(page);
 			usnic_dbg("pa: %pa\n", &pa);
 		}
 		kfree(chunk);

From patchwork Fri Sep 28 05:39:49 2018
From: john.hubbard@gmail.com
X-Google-Original-From: jhubbard@nvidia.com
To: Matthew Wilcox, Michal Hocko, Christopher Lameter, Jason Gunthorpe, Dan Williams, Jan Kara, Al Viro
Cc: linux-mm@kvack.org, LKML, linux-rdma, linux-fsdevel@vger.kernel.org, John Hubbard
Subject: [PATCH 4/4] goldfish_pipe/mm: convert to the new release_user_pages() call
Date: Thu, 27 Sep 2018 22:39:49 -0700
Message-Id: <20180928053949.5381-5-jhubbard@nvidia.com>
In-Reply-To: <20180928053949.5381-1-jhubbard@nvidia.com>
References: <20180928053949.5381-1-jhubbard@nvidia.com>

From: John Hubbard

For code that retains pages via get_user_pages*(), release those pages
via the new release_user_pages(), instead of calling put_page().

This prepares for eventually fixing the problem described in [1], and is
following a plan listed in [2].
[1] https://lwn.net/Articles/753027/ : "The Trouble with get_user_pages()"
[2] https://lkml.kernel.org/r/20180709080554.21931-1-jhubbard@nvidia.com
    Proposed steps for fixing get_user_pages() + DMA problems.

CC: Al Viro
Signed-off-by: John Hubbard
---
 drivers/platform/goldfish/goldfish_pipe.c | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/drivers/platform/goldfish/goldfish_pipe.c b/drivers/platform/goldfish/goldfish_pipe.c
index fad0345376e0..1e9455a86698 100644
--- a/drivers/platform/goldfish/goldfish_pipe.c
+++ b/drivers/platform/goldfish/goldfish_pipe.c
@@ -340,8 +340,9 @@ static void __release_user_pages(struct page **pages, int pages_count,
 	for (i = 0; i < pages_count; i++) {
 		if (!is_write && consumed_size > 0)
 			set_page_dirty(pages[i]);
-		put_page(pages[i]);
 	}
+
+	release_user_pages(pages, pages_count);
 }
 
 /* Populate the call parameters, merging adjacent pages together */