From patchwork Fri Sep 28 05:39:46 2018
X-Patchwork-Submitter: john.hubbard@gmail.com
X-Patchwork-Id: 10618937
From: john.hubbard@gmail.com
X-Google-Original-From: jhubbard@nvidia.com
To: Matthew Wilcox, Michal Hocko, Christopher Lameter, Jason Gunthorpe, Dan Williams, Jan Kara, Al Viro
Cc: linux-mm@kvack.org, LKML, linux-rdma, linux-fsdevel@vger.kernel.org, John Hubbard
Subject: [PATCH 1/4] mm: get_user_pages: consolidate error handling
Date: Thu, 27 Sep 2018 22:39:46 -0700
Message-Id: <20180928053949.5381-2-jhubbard@nvidia.com>
In-Reply-To: <20180928053949.5381-1-jhubbard@nvidia.com>
References: <20180928053949.5381-1-jhubbard@nvidia.com>

From: John Hubbard

An upcoming patch requires a way to operate on each page that any of the
get_user_pages_*() variants returns. In preparation for that, consolidate
the error handling for __get_user_pages(). This provides a single location
(the "out:" label) for operating on the collected set of pages that are
about to be returned.

As long as every use of the "ret" variable is being edited, rename
"ret" --> "err", so that its name matches its true role. This also gets
rid of two shadowed variable declarations, as a tiny beneficial side
effect.
Reviewed-by: Jan Kara
Signed-off-by: John Hubbard
---
 mm/gup.c | 37 ++++++++++++++++++++++---------------
 1 file changed, 22 insertions(+), 15 deletions(-)

diff --git a/mm/gup.c b/mm/gup.c
index 1abc8b4afff6..05ee7c18e59a 100644
--- a/mm/gup.c
+++ b/mm/gup.c
@@ -660,6 +660,7 @@ static long __get_user_pages(struct task_struct *tsk, struct mm_struct *mm,
 		struct vm_area_struct **vmas, int *nonblocking)
 {
 	long i = 0;
+	int err = 0;
 	unsigned int page_mask;
 	struct vm_area_struct *vma = NULL;
@@ -685,18 +686,19 @@ static long __get_user_pages(struct task_struct *tsk, struct mm_struct *mm,
 		if (!vma || start >= vma->vm_end) {
 			vma = find_extend_vma(mm, start);
 			if (!vma && in_gate_area(mm, start)) {
-				int ret;
-				ret = get_gate_page(mm, start & PAGE_MASK,
+				err = get_gate_page(mm, start & PAGE_MASK,
 						gup_flags, &vma,
 						pages ? &pages[i] : NULL);
-				if (ret)
-					return i ? : ret;
+				if (err)
+					goto out;
 				page_mask = 0;
 				goto next_page;
 			}
-			if (!vma || check_vma_flags(vma, gup_flags))
-				return i ? : -EFAULT;
+			if (!vma || check_vma_flags(vma, gup_flags)) {
+				err = -EFAULT;
+				goto out;
+			}
 			if (is_vm_hugetlb_page(vma)) {
 				i = follow_hugetlb_page(mm, vma, pages, vmas,
 						&start, &nr_pages, i,
@@ -709,23 +711,25 @@ static long __get_user_pages(struct task_struct *tsk, struct mm_struct *mm,
 		 * If we have a pending SIGKILL, don't keep faulting pages and
 		 * potentially allocating memory.
 		 */
-		if (unlikely(fatal_signal_pending(current)))
-			return i ? i : -ERESTARTSYS;
+		if (unlikely(fatal_signal_pending(current))) {
+			err = -ERESTARTSYS;
+			goto out;
+		}
 		cond_resched();
 		page = follow_page_mask(vma, start, foll_flags, &page_mask);
 		if (!page) {
-			int ret;
-			ret = faultin_page(tsk, vma, start, &foll_flags,
+			err = faultin_page(tsk, vma, start, &foll_flags,
 					nonblocking);
-			switch (ret) {
+			switch (err) {
 			case 0:
 				goto retry;
 			case -EFAULT:
 			case -ENOMEM:
 			case -EHWPOISON:
-				return i ? i : ret;
+				goto out;
 			case -EBUSY:
-				return i;
+				err = 0;
+				goto out;
 			case -ENOENT:
 				goto next_page;
 			}
@@ -737,7 +741,8 @@ static long __get_user_pages(struct task_struct *tsk, struct mm_struct *mm,
 			 */
 			goto next_page;
 		} else if (IS_ERR(page)) {
-			return i ? i : PTR_ERR(page);
+			err = PTR_ERR(page);
+			goto out;
 		}
 		if (pages) {
 			pages[i] = page;
@@ -757,7 +762,9 @@ static long __get_user_pages(struct task_struct *tsk, struct mm_struct *mm,
 		start += page_increm * PAGE_SIZE;
 		nr_pages -= page_increm;
 	} while (nr_pages);
-	return i;
+
+out:
+	return i ? i : err;
 }
 
 static bool vma_permits_fault(struct vm_area_struct *vma,

From patchwork Fri Sep 28 05:39:48 2018
X-Patchwork-Submitter: john.hubbard@gmail.com
X-Patchwork-Id: 10618945
From: john.hubbard@gmail.com
X-Google-Original-From: jhubbard@nvidia.com
To: Matthew Wilcox, Michal Hocko, Christopher Lameter, Jason Gunthorpe, Dan Williams, Jan Kara, Al Viro
Cc: linux-mm@kvack.org, LKML, linux-rdma, linux-fsdevel@vger.kernel.org, John Hubbard
Subject: [PATCH 2/4] mm: introduce put_user_page(), placeholder version
Date: Thu, 27 Sep 2018 22:39:48 -0700
Message-Id: <20180928053949.5381-4-jhubbard@nvidia.com>
In-Reply-To: <20180928053949.5381-1-jhubbard@nvidia.com>
References: <20180928053949.5381-1-jhubbard@nvidia.com>

From: John Hubbard

Introduces put_user_page(), which simply calls put_page(). This provides a
way to update all get_user_pages*() callers, so that they call
put_user_page() instead of put_page().

Also adds release_user_pages(), a drop-in replacement for release_pages().
This is intended to be easily grep-able for later performance improvements,
since release_user_pages() is not batched like release_pages() is, and is
therefore significantly slower.

Also: rename goldfish_pipe.c's release_user_pages(), in order to avoid a
naming conflict with the new external function of the same name.

This prepares for eventually fixing the problem described in [1], and is
following a plan listed in [2].

[1] https://lwn.net/Articles/753027/ : "The Trouble with get_user_pages()"

[2] https://lkml.kernel.org/r/20180709080554.21931-1-jhubbard@nvidia.com
    Proposed steps for fixing get_user_pages() + DMA problems.
CC: Matthew Wilcox
CC: Michal Hocko
CC: Christopher Lameter
CC: Jason Gunthorpe
CC: Dan Williams
CC: Jan Kara
CC: Al Viro
Signed-off-by: John Hubbard
---
 drivers/platform/goldfish/goldfish_pipe.c |  4 ++--
 include/linux/mm.h                        | 14 ++++++++++++++
 2 files changed, 16 insertions(+), 2 deletions(-)

diff --git a/drivers/platform/goldfish/goldfish_pipe.c b/drivers/platform/goldfish/goldfish_pipe.c
index 2da567540c2d..fad0345376e0 100644
--- a/drivers/platform/goldfish/goldfish_pipe.c
+++ b/drivers/platform/goldfish/goldfish_pipe.c
@@ -332,7 +332,7 @@ static int pin_user_pages(unsigned long first_page, unsigned long last_page,
 }
 
-static void release_user_pages(struct page **pages, int pages_count,
+static void __release_user_pages(struct page **pages, int pages_count,
 	int is_write, s32 consumed_size)
 {
 	int i;
@@ -410,7 +410,7 @@ static int transfer_max_buffers(struct goldfish_pipe *pipe,
 
 	*consumed_size = pipe->command_buffer->rw_params.consumed_size;
 
-	release_user_pages(pages, pages_count, is_write, *consumed_size);
+	__release_user_pages(pages, pages_count, is_write, *consumed_size);
 
 	mutex_unlock(&pipe->lock);

diff --git a/include/linux/mm.h b/include/linux/mm.h
index a61ebe8ad4ca..72caf803115f 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -943,6 +943,20 @@ static inline void put_page(struct page *page)
 		__put_page(page);
 }
 
+/* Placeholder version, until all get_user_pages*() callers are updated. */
+static inline void put_user_page(struct page *page)
+{
+	put_page(page);
+}
+
+/* A drop-in replacement for release_pages(): */
+static inline void release_user_pages(struct page **pages,
+				      unsigned long npages)
+{
+	while (npages)
+		put_user_page(pages[--npages]);
+}
+
 #if defined(CONFIG_SPARSEMEM) && !defined(CONFIG_SPARSEMEM_VMEMMAP)
 #define SECTION_IN_PAGE_FLAGS
 #endif

From patchwork Fri Sep 28 05:39:47 2018
X-Patchwork-Submitter: john.hubbard@gmail.com
X-Patchwork-Id: 10618941
From: john.hubbard@gmail.com
X-Google-Original-From: jhubbard@nvidia.com
To: Matthew Wilcox, Michal Hocko, Christopher Lameter, Jason Gunthorpe, Dan Williams, Jan Kara, Al Viro
Cc: linux-mm@kvack.org, LKML, linux-rdma, linux-fsdevel@vger.kernel.org, John Hubbard, Doug Ledford, Mike Marciniszyn, Dennis Dalessandro, Christian Benvenuti
Subject: [PATCH 3/4] infiniband/mm: convert to the new put_user_page() call
Date: Thu, 27 Sep 2018 22:39:47 -0700
Message-Id: <20180928053949.5381-3-jhubbard@nvidia.com>
In-Reply-To: <20180928053949.5381-1-jhubbard@nvidia.com>
References: <20180928053949.5381-1-jhubbard@nvidia.com>

From: John Hubbard

For code that retains pages via get_user_pages*(), release those pages via
the new put_user_page(), instead of put_page().

This prepares for eventually fixing the problem described in [1], and is
following a plan listed in [2].

[1] https://lwn.net/Articles/753027/ : "The Trouble with get_user_pages()"

[2] https://lkml.kernel.org/r/20180709080554.21931-1-jhubbard@nvidia.com
    Proposed steps for fixing get_user_pages() + DMA problems.
CC: Doug Ledford CC: Jason Gunthorpe CC: Mike Marciniszyn CC: Dennis Dalessandro CC: Christian Benvenuti CC: linux-rdma@vger.kernel.org CC: linux-kernel@vger.kernel.org CC: linux-mm@kvack.org Signed-off-by: John Hubbard Acked-by: Jason Gunthorpe Reviewed-by: Dennis Dalessandro --- drivers/infiniband/core/umem.c | 2 +- drivers/infiniband/core/umem_odp.c | 2 +- drivers/infiniband/hw/hfi1/user_pages.c | 2 +- drivers/infiniband/hw/mthca/mthca_memfree.c | 6 +++--- drivers/infiniband/hw/qib/qib_user_pages.c | 2 +- drivers/infiniband/hw/qib/qib_user_sdma.c | 8 ++++---- drivers/infiniband/hw/usnic/usnic_uiom.c | 2 +- 7 files changed, 12 insertions(+), 12 deletions(-) diff --git a/drivers/infiniband/core/umem.c b/drivers/infiniband/core/umem.c index a41792dbae1f..9430d697cb9f 100644 --- a/drivers/infiniband/core/umem.c +++ b/drivers/infiniband/core/umem.c @@ -60,7 +60,7 @@ static void __ib_umem_release(struct ib_device *dev, struct ib_umem *umem, int d page = sg_page(sg); if (!PageDirty(page) && umem->writable && dirty) set_page_dirty_lock(page); - put_page(page); + put_user_page(page); } sg_free_table(&umem->sg_head); diff --git a/drivers/infiniband/core/umem_odp.c b/drivers/infiniband/core/umem_odp.c index 6ec748eccff7..6227b89cf05c 100644 --- a/drivers/infiniband/core/umem_odp.c +++ b/drivers/infiniband/core/umem_odp.c @@ -717,7 +717,7 @@ int ib_umem_odp_map_dma_pages(struct ib_umem *umem, u64 user_virt, u64 bcnt, ret = -EFAULT; break; } - put_page(local_page_list[j]); + put_user_page(local_page_list[j]); continue; } diff --git a/drivers/infiniband/hw/hfi1/user_pages.c b/drivers/infiniband/hw/hfi1/user_pages.c index e341e6dcc388..c7516029af33 100644 --- a/drivers/infiniband/hw/hfi1/user_pages.c +++ b/drivers/infiniband/hw/hfi1/user_pages.c @@ -126,7 +126,7 @@ void hfi1_release_user_pages(struct mm_struct *mm, struct page **p, for (i = 0; i < npages; i++) { if (dirty) set_page_dirty_lock(p[i]); - put_page(p[i]); + put_user_page(p[i]); } if (mm) { /* during close after 
signal, mm can be NULL */ diff --git a/drivers/infiniband/hw/mthca/mthca_memfree.c b/drivers/infiniband/hw/mthca/mthca_memfree.c index cc9c0c8ccba3..b8b12effd009 100644 --- a/drivers/infiniband/hw/mthca/mthca_memfree.c +++ b/drivers/infiniband/hw/mthca/mthca_memfree.c @@ -481,7 +481,7 @@ int mthca_map_user_db(struct mthca_dev *dev, struct mthca_uar *uar, ret = pci_map_sg(dev->pdev, &db_tab->page[i].mem, 1, PCI_DMA_TODEVICE); if (ret < 0) { - put_page(pages[0]); + put_user_page(pages[0]); goto out; } @@ -489,7 +489,7 @@ int mthca_map_user_db(struct mthca_dev *dev, struct mthca_uar *uar, mthca_uarc_virt(dev, uar, i)); if (ret) { pci_unmap_sg(dev->pdev, &db_tab->page[i].mem, 1, PCI_DMA_TODEVICE); - put_page(sg_page(&db_tab->page[i].mem)); + put_user_page(sg_page(&db_tab->page[i].mem)); goto out; } @@ -555,7 +555,7 @@ void mthca_cleanup_user_db_tab(struct mthca_dev *dev, struct mthca_uar *uar, if (db_tab->page[i].uvirt) { mthca_UNMAP_ICM(dev, mthca_uarc_virt(dev, uar, i), 1); pci_unmap_sg(dev->pdev, &db_tab->page[i].mem, 1, PCI_DMA_TODEVICE); - put_page(sg_page(&db_tab->page[i].mem)); + put_user_page(sg_page(&db_tab->page[i].mem)); } } diff --git a/drivers/infiniband/hw/qib/qib_user_pages.c b/drivers/infiniband/hw/qib/qib_user_pages.c index 16543d5e80c3..3f8fd42dd7fc 100644 --- a/drivers/infiniband/hw/qib/qib_user_pages.c +++ b/drivers/infiniband/hw/qib/qib_user_pages.c @@ -45,7 +45,7 @@ static void __qib_release_user_pages(struct page **p, size_t num_pages, for (i = 0; i < num_pages; i++) { if (dirty) set_page_dirty_lock(p[i]); - put_page(p[i]); + put_user_page(p[i]); } } diff --git a/drivers/infiniband/hw/qib/qib_user_sdma.c b/drivers/infiniband/hw/qib/qib_user_sdma.c index 926f3c8eba69..14f94d823907 100644 --- a/drivers/infiniband/hw/qib/qib_user_sdma.c +++ b/drivers/infiniband/hw/qib/qib_user_sdma.c @@ -266,7 +266,7 @@ static void qib_user_sdma_init_frag(struct qib_user_sdma_pkt *pkt, pkt->addr[i].length = len; pkt->addr[i].first_desc = first_desc; 
 	pkt->addr[i].last_desc = last_desc;
-	pkt->addr[i].put_page = put_page;
+	pkt->addr[i].put_page = put_user_page;
 	pkt->addr[i].dma_mapped = dma_mapped;
 	pkt->addr[i].page = page;
 	pkt->addr[i].kvaddr = kvaddr;

@@ -321,7 +321,7 @@ static int qib_user_sdma_page_to_frags(const struct qib_devdata *dd,
 		 * the caller can ignore this page.
 		 */
 		if (put) {
-			put_page(page);
+			put_user_page(page);
 		} else {
 			/* coalesce case */
 			kunmap(page);

@@ -635,7 +635,7 @@ static void qib_user_sdma_free_pkt_frag(struct device *dev,
 			kunmap(pkt->addr[i].page);

 		if (pkt->addr[i].put_page)
-			put_page(pkt->addr[i].page);
+			put_user_page(pkt->addr[i].page);
 		else
 			__free_page(pkt->addr[i].page);
 	} else if (pkt->addr[i].kvaddr) {

@@ -710,7 +710,7 @@ static int qib_user_sdma_pin_pages(const struct qib_devdata *dd,
 	/* if error, return all pages not managed by pkt */
 free_pages:
 	while (i < j)
-		put_page(pages[i++]);
+		put_user_page(pages[i++]);

 done:
 	return ret;

diff --git a/drivers/infiniband/hw/usnic/usnic_uiom.c b/drivers/infiniband/hw/usnic/usnic_uiom.c
index 9dd39daa602b..0f607f31c262 100644
--- a/drivers/infiniband/hw/usnic/usnic_uiom.c
+++ b/drivers/infiniband/hw/usnic/usnic_uiom.c
@@ -91,7 +91,7 @@ static void usnic_uiom_put_pages(struct list_head *chunk_list, int dirty)
 			pa = sg_phys(sg);
 			if (!PageDirty(page) && dirty)
 				set_page_dirty_lock(page);
-			put_page(page);
+			put_user_page(page);
 			usnic_dbg("pa: %pa\n", &pa);
 		}
 		kfree(chunk);

From patchwork Fri Sep 28 05:39:49 2018
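For context: put_user_page() itself is introduced by an earlier patch in this series (not included in this excerpt). A minimal userspace sketch of the pattern being adopted — the `struct page` and refcount here are hypothetical stand-ins, not the kernel's actual types, and the real wrapper's first-step implementation simply forwards to put_page():

```c
#include <assert.h>

/* Hypothetical stand-in for the kernel's struct page. */
struct page {
	int refcount;
};

/* Stand-in for the kernel's put_page(): drop one reference. */
static void put_page(struct page *page)
{
	page->refcount--;
}

/*
 * Pages obtained via get_user_pages*() are released through this
 * wrapper instead of put_page(), so that gup-pinned pages can later
 * be accounted for separately. Initially it only forwards the call;
 * the point of this series is to route all such call sites here.
 */
static void put_user_page(struct page *page)
{
	put_page(page);
}
```

The conversion is therefore behavior-preserving today; it creates a single choke point that a later patch can change without touching every driver again.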
From: john.hubbard@gmail.com
X-Google-Original-From: jhubbard@nvidia.com
To: Matthew Wilcox, Michal Hocko, Christopher Lameter, Jason Gunthorpe, Dan Williams, Jan Kara, Al Viro
Cc: linux-mm@kvack.org, LKML, linux-rdma, linux-fsdevel@vger.kernel.org, John Hubbard
Subject: [PATCH 4/4] goldfish_pipe/mm: convert to the new release_user_pages() call
Date: Thu, 27 Sep 2018 22:39:49 -0700
Message-Id: <20180928053949.5381-5-jhubbard@nvidia.com>
In-Reply-To: <20180928053949.5381-1-jhubbard@nvidia.com>
References: <20180928053949.5381-1-jhubbard@nvidia.com>

From: John Hubbard

For code that retains pages via get_user_pages*(), release those pages
via the new release_user_pages(), instead of calling put_page().

This prepares for eventually fixing the problem described in [1], and
is following a plan listed in [2].

[1] https://lwn.net/Articles/753027/ : "The Trouble with get_user_pages()"

[2] https://lkml.kernel.org/r/20180709080554.21931-1-jhubbard@nvidia.com
    Proposed steps for fixing get_user_pages() + DMA problems.
CC: Al Viro
Signed-off-by: John Hubbard
---
 drivers/platform/goldfish/goldfish_pipe.c | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/drivers/platform/goldfish/goldfish_pipe.c b/drivers/platform/goldfish/goldfish_pipe.c
index fad0345376e0..1e9455a86698 100644
--- a/drivers/platform/goldfish/goldfish_pipe.c
+++ b/drivers/platform/goldfish/goldfish_pipe.c
@@ -340,8 +340,9 @@ static void __release_user_pages(struct page **pages, int pages_count,
 	for (i = 0; i < pages_count; i++) {
 		if (!is_write && consumed_size > 0)
 			set_page_dirty(pages[i]);
-		put_page(pages[i]);
 	}
+
+	release_user_pages(pages, pages_count);
 }

 /* Populate the call parameters, merging adjacent pages together */
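The release_user_pages() helper used above comes from an earlier patch in this series (not shown here); conceptually it is just a loop applying put_user_page() to an array of gup-pinned pages. A userspace sketch with stand-in types — the real kernel signatures differ — illustrates the batched release that goldfish_pipe now performs in one call:

```c
#include <assert.h>
#include <stddef.h>

/* Hypothetical stand-in for the kernel's struct page. */
struct page {
	int refcount;
};

/* Stand-in for put_user_page(): release one gup-pinned page. */
static void put_user_page(struct page *page)
{
	page->refcount--;
}

/*
 * Release an array of pages obtained via get_user_pages*().
 * Callers like __release_user_pages() above replace their per-page
 * put_page() loop with a single call to this helper.
 */
static void release_user_pages(struct page **pages, size_t npages)
{
	size_t i;

	for (i = 0; i < npages; i++)
		put_user_page(pages[i]);
}
```

Note the dirtying loop in the goldfish diff still runs per page; only the reference drop is hoisted out into the array-wide helper.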