From patchwork Fri Oct 5 04:02:23 2018
X-Patchwork-Submitter: john.hubbard@gmail.com
X-Patchwork-Id: 10627307
From: john.hubbard@gmail.com
X-Google-Original-From: jhubbard@nvidia.com
To: Matthew Wilcox, Michal Hocko, Christopher Lameter, Jason Gunthorpe,
    Dan Williams, Jan Kara
Cc: linux-mm@kvack.org, LKML, linux-rdma, linux-fsdevel@vger.kernel.org,
    John Hubbard
Subject: [PATCH v2 1/3] mm: get_user_pages: consolidate error handling
Date: Thu, 4 Oct 2018 21:02:23 -0700
Message-Id: <20181005040225.14292-2-jhubbard@nvidia.com>
In-Reply-To: <20181005040225.14292-1-jhubbard@nvidia.com>
References: <20181005040225.14292-1-jhubbard@nvidia.com>

From: John Hubbard

An upcoming patch requires a way to operate on each page that any of the
get_user_pages_*() variants returns. In preparation for that, consolidate
the error handling for __get_user_pages(). This provides a single
location (the "out:" label) for operating on the collected set of pages
that are about to be returned.

As long as every use of the "ret" variable is being edited anyway, rename
"ret" --> "err", so that its name matches its true role. This also gets
rid of two shadowed variable declarations, as a tiny beneficial side
effect.

Reviewed-by: Jan Kara
Signed-off-by: John Hubbard
---
 mm/gup.c | 37 ++++++++++++++++++++++---------------
 1 file changed, 22 insertions(+), 15 deletions(-)

diff --git a/mm/gup.c b/mm/gup.c
index 1abc8b4afff6..05ee7c18e59a 100644
--- a/mm/gup.c
+++ b/mm/gup.c
@@ -660,6 +660,7 @@ static long __get_user_pages(struct task_struct *tsk, struct mm_struct *mm,
 		struct vm_area_struct **vmas, int *nonblocking)
 {
 	long i = 0;
+	int err = 0;
 	unsigned int page_mask;
 	struct vm_area_struct *vma = NULL;
 
@@ -685,18 +686,19 @@ static long __get_user_pages(struct task_struct *tsk, struct mm_struct *mm,
 		if (!vma || start >= vma->vm_end) {
 			vma = find_extend_vma(mm, start);
 			if (!vma && in_gate_area(mm, start)) {
-				int ret;
-				ret = get_gate_page(mm, start & PAGE_MASK,
+				err = get_gate_page(mm, start & PAGE_MASK,
 						gup_flags, &vma,
 						pages ? &pages[i] : NULL);
-				if (ret)
-					return i ? : ret;
+				if (err)
+					goto out;
 				page_mask = 0;
 				goto next_page;
 			}
 
-			if (!vma || check_vma_flags(vma, gup_flags))
-				return i ? : -EFAULT;
+			if (!vma || check_vma_flags(vma, gup_flags)) {
+				err = -EFAULT;
+				goto out;
+			}
 			if (is_vm_hugetlb_page(vma)) {
 				i = follow_hugetlb_page(mm, vma, pages, vmas,
 						&start, &nr_pages, i,
@@ -709,23 +711,25 @@ static long __get_user_pages(struct task_struct *tsk, struct mm_struct *mm,
 		 * If we have a pending SIGKILL, don't keep faulting pages and
 		 * potentially allocating memory.
 		 */
-		if (unlikely(fatal_signal_pending(current)))
-			return i ? i : -ERESTARTSYS;
+		if (unlikely(fatal_signal_pending(current))) {
+			err = -ERESTARTSYS;
+			goto out;
+		}
 		cond_resched();
 		page = follow_page_mask(vma, start, foll_flags, &page_mask);
 		if (!page) {
-			int ret;
-			ret = faultin_page(tsk, vma, start, &foll_flags,
+			err = faultin_page(tsk, vma, start, &foll_flags,
 					nonblocking);
-			switch (ret) {
+			switch (err) {
 			case 0:
 				goto retry;
 			case -EFAULT:
 			case -ENOMEM:
 			case -EHWPOISON:
-				return i ? i : ret;
+				goto out;
 			case -EBUSY:
-				return i;
+				err = 0;
+				goto out;
 			case -ENOENT:
 				goto next_page;
 			}
@@ -737,7 +741,8 @@ static long __get_user_pages(struct task_struct *tsk, struct mm_struct *mm,
 			 */
 			goto next_page;
 		} else if (IS_ERR(page)) {
-			return i ? i : PTR_ERR(page);
+			err = PTR_ERR(page);
+			goto out;
 		}
 		if (pages) {
 			pages[i] = page;
@@ -757,7 +762,9 @@ static long __get_user_pages(struct task_struct *tsk, struct mm_struct *mm,
 		start += page_increm * PAGE_SIZE;
 		nr_pages -= page_increm;
 	} while (nr_pages);
-	return i;
+
+out:
+	return i ? i : err;
 }
 
 static bool vma_permits_fault(struct vm_area_struct *vma,
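
The invariant that the new "out:" exit preserves is worth spelling out: a
walk that pinned at least one page reports how many it pinned, and only a
walk that pinned nothing reports the error. A minimal stand-alone sketch
of that convention (ordinary user-space C, not kernel code; every name
here is invented for illustration):

	#include <stdio.h>

	/*
	 * Gather up to "requested" items; fail partway through when fewer
	 * than that are available. Mirrors the "i ? i : err" return at
	 * __get_user_pages()'s single exit label.
	 */
	static long gather_items(long requested, long available)
	{
		long i = 0;
		int err = 0;

		while (i < requested) {
			if (i >= available) {
				err = -14;	/* stand-in for -EFAULT */
				goto out;
			}
			i++;	/* "pin" one more item */
		}
	out:
		return i ? i : err;	/* partial success wins over the error */
	}

	int main(void)
	{
		printf("%ld\n", gather_items(4, 8));	/* 4: full success */
		printf("%ld\n", gather_items(8, 5));	/* 5: partial success */
		printf("%ld\n", gather_items(8, 0));	/* -14: nothing pinned */
		return 0;
	}
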
From patchwork Fri Oct 5 04:02:24 2018
X-Patchwork-Submitter: john.hubbard@gmail.com
X-Patchwork-Id: 10627309
From: john.hubbard@gmail.com
X-Google-Original-From: jhubbard@nvidia.com
To: Matthew Wilcox, Michal Hocko, Christopher Lameter, Jason Gunthorpe,
    Dan Williams, Jan Kara
Cc: linux-mm@kvack.org, LKML, linux-rdma, linux-fsdevel@vger.kernel.org,
    John Hubbard, Al Viro, Jerome Glisse, Christoph Hellwig
Subject: [PATCH v2 2/3] mm: introduce put_user_page[s](), placeholder versions
Date: Thu, 4 Oct 2018 21:02:24 -0700
Message-Id: <20181005040225.14292-3-jhubbard@nvidia.com>
In-Reply-To: <20181005040225.14292-1-jhubbard@nvidia.com>
References: <20181005040225.14292-1-jhubbard@nvidia.com>

From: John Hubbard

Introduces put_user_page(), which simply calls put_page(). This provides
a way to update all get_user_pages*() callers, so that they call
put_user_page(), instead of put_page().

Also introduces put_user_pages(), and a few dirty/locked variations, as a
replacement for release_pages(), for the same reasons. These may be used
for subsequent performance improvements, via batching of pages to be
released.

This prepares for eventually fixing the problem described in [1], and is
following a plan listed in [2], [3], [4].

[1] https://lwn.net/Articles/753027/ : "The Trouble with get_user_pages()"

[2] https://lkml.kernel.org/r/20180709080554.21931-1-jhubbard@nvidia.com
    Proposed steps for fixing get_user_pages() + DMA problems.

[3] https://lkml.kernel.org/r/20180710082100.mkdwngdv5kkrcz6n@quack2.suse.cz
    Bounce buffers (otherwise [2] is not really viable).

[4] https://lkml.kernel.org/r/20181003162115.GG24030@quack2.suse.cz
    Follow-up discussions.

CC: Matthew Wilcox
CC: Michal Hocko
CC: Christopher Lameter
CC: Jason Gunthorpe
CC: Dan Williams
CC: Jan Kara
CC: Al Viro
CC: Jerome Glisse
CC: Christoph Hellwig
Signed-off-by: John Hubbard
---
 include/linux/mm.h | 42 ++++++++++++++++++++++++++++++++++++++++--
 1 file changed, 40 insertions(+), 2 deletions(-)

diff --git a/include/linux/mm.h b/include/linux/mm.h
index a61ebe8ad4ca..1a9aae7c659f 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -137,6 +137,8 @@ extern int overcommit_ratio_handler(struct ctl_table *, int, void __user *,
 				    size_t *, loff_t *);
 extern int overcommit_kbytes_handler(struct ctl_table *, int, void __user *,
 				    size_t *, loff_t *);
+int set_page_dirty(struct page *page);
+int set_page_dirty_lock(struct page *page);
 
 #define nth_page(page,n) pfn_to_page(page_to_pfn((page)) + (n))
 
@@ -943,6 +945,44 @@ static inline void put_page(struct page *page)
 		__put_page(page);
 }
 
+/* Placeholder version, until all get_user_pages*() callers are updated. */
+static inline void put_user_page(struct page *page)
+{
+	put_page(page);
+}
+
+/* For get_user_pages*()-pinned pages, use these variants instead of
+ * release_pages():
+ */
+static inline void put_user_pages_dirty(struct page **pages,
+					unsigned long npages)
+{
+	while (npages) {
+		--npages;
+		set_page_dirty(pages[npages]);
+		put_user_page(pages[npages]);
+	}
+}
+
+static inline void put_user_pages_dirty_lock(struct page **pages,
+					     unsigned long npages)
+{
+	while (npages) {
+		--npages;
+		set_page_dirty_lock(pages[npages]);
+		put_user_page(pages[npages]);
+	}
+}
+
+static inline void put_user_pages(struct page **pages,
+				  unsigned long npages)
+{
+	while (npages) {
+		--npages;
+		put_user_page(pages[npages]);
+	}
+}
+
 #if defined(CONFIG_SPARSEMEM) && !defined(CONFIG_SPARSEMEM_VMEMMAP)
 #define SECTION_IN_PAGE_FLAGS
 #endif
 
@@ -1534,8 +1574,6 @@ int redirty_page_for_writepage(struct writeback_control *wbc,
 void account_page_dirtied(struct page *page, struct address_space *mapping);
 void account_page_cleaned(struct page *page, struct address_space *mapping,
 			  struct bdi_writeback *wb);
-int set_page_dirty(struct page *page);
-int set_page_dirty_lock(struct page *page);
 void __cancel_dirty_page(struct page *page);
 static inline void cancel_dirty_page(struct page *page)
 {
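
To make the intended call-site shape concrete before the real conversions
in patch 3/3, here is a hypothetical driver fragment (illustrative only,
not from any tree; demo_dma_transaction() and its parameters are invented
names, while get_user_pages_fast() and the put_user_pages*() helpers are
the real interfaces, assuming the get_user_pages_fast() signature of this
era):

	/* Pin user pages, let the device DMA into them, then release
	 * them through the new helpers instead of open-coded put_page().
	 */
	static int demo_dma_transaction(unsigned long user_addr, int npages,
					struct page **pages, bool device_wrote)
	{
		int got = get_user_pages_fast(user_addr, npages, 1, pages);

		if (got < 0)
			return got;

		/* ... program the device and wait for the DMA to finish ... */

		if (device_wrote)
			/* pages were written to: mark dirty, then release */
			put_user_pages_dirty_lock(pages, got);
		else
			put_user_pages(pages, got);

		return got == npages ? 0 : -EFAULT;
	}

The design point this enables: once every release goes through one helper
rather than a per-driver put_page() loop, a future change can batch or
otherwise track these releases without touching each call site again.
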
From patchwork Fri Oct 5 04:02:25 2018
X-Patchwork-Submitter: john.hubbard@gmail.com
X-Patchwork-Id: 10627313
From: john.hubbard@gmail.com
X-Google-Original-From: jhubbard@nvidia.com
To: Matthew Wilcox, Michal Hocko, Christopher Lameter, Jason Gunthorpe,
    Dan Williams, Jan Kara
Cc: linux-mm@kvack.org, LKML, linux-rdma, linux-fsdevel@vger.kernel.org,
    John Hubbard, Doug Ledford, Mike Marciniszyn, Dennis Dalessandro,
    Christian Benvenuti
Subject: [PATCH v2 3/3] infiniband/mm: convert to the new put_user_page[s]() calls
Date: Thu, 4 Oct 2018 21:02:25 -0700
Message-Id: <20181005040225.14292-4-jhubbard@nvidia.com>
In-Reply-To: <20181005040225.14292-1-jhubbard@nvidia.com>
References: <20181005040225.14292-1-jhubbard@nvidia.com>

From: John Hubbard

For code that retains pages via get_user_pages*(), release those pages
via the new put_user_page(), instead of put_page().

This prepares for eventually fixing the problem described in [1], and is
following a plan listed in [2], [3], [4].

[1] https://lwn.net/Articles/753027/ : "The Trouble with get_user_pages()"

[2] https://lkml.kernel.org/r/20180709080554.21931-1-jhubbard@nvidia.com
    Proposed steps for fixing get_user_pages() + DMA problems.

[3] https://lkml.kernel.org/r/20180710082100.mkdwngdv5kkrcz6n@quack2.suse.cz
    Bounce buffers (otherwise [2] is not really viable).

[4] https://lkml.kernel.org/r/20181003162115.GG24030@quack2.suse.cz
    Follow-up discussions.
CC: Doug Ledford
CC: Jason Gunthorpe
CC: Mike Marciniszyn
CC: Dennis Dalessandro
CC: Christian Benvenuti
CC: linux-rdma@vger.kernel.org
CC: linux-kernel@vger.kernel.org
CC: linux-mm@kvack.org
Signed-off-by: John Hubbard
---
 drivers/infiniband/core/umem.c              |  2 +-
 drivers/infiniband/core/umem_odp.c          |  2 +-
 drivers/infiniband/hw/hfi1/user_pages.c     | 11 ++++-------
 drivers/infiniband/hw/mthca/mthca_memfree.c |  6 +++---
 drivers/infiniband/hw/qib/qib_user_pages.c  | 11 ++++-------
 drivers/infiniband/hw/qib/qib_user_sdma.c   |  8 ++++----
 drivers/infiniband/hw/usnic/usnic_uiom.c    |  2 +-
 7 files changed, 18 insertions(+), 24 deletions(-)

diff --git a/drivers/infiniband/core/umem.c b/drivers/infiniband/core/umem.c
index a41792dbae1f..9430d697cb9f 100644
--- a/drivers/infiniband/core/umem.c
+++ b/drivers/infiniband/core/umem.c
@@ -60,7 +60,7 @@ static void __ib_umem_release(struct ib_device *dev, struct ib_umem *umem, int d
 		page = sg_page(sg);
 		if (!PageDirty(page) && umem->writable && dirty)
 			set_page_dirty_lock(page);
-		put_page(page);
+		put_user_page(page);
 	}
 
 	sg_free_table(&umem->sg_head);
diff --git a/drivers/infiniband/core/umem_odp.c b/drivers/infiniband/core/umem_odp.c
index 6ec748eccff7..6227b89cf05c 100644
--- a/drivers/infiniband/core/umem_odp.c
+++ b/drivers/infiniband/core/umem_odp.c
@@ -717,7 +717,7 @@ int ib_umem_odp_map_dma_pages(struct ib_umem *umem, u64 user_virt, u64 bcnt,
 				ret = -EFAULT;
 				break;
 			}
-			put_page(local_page_list[j]);
+			put_user_page(local_page_list[j]);
 			continue;
 		}
diff --git a/drivers/infiniband/hw/hfi1/user_pages.c b/drivers/infiniband/hw/hfi1/user_pages.c
index e341e6dcc388..99ccc0483711 100644
--- a/drivers/infiniband/hw/hfi1/user_pages.c
+++ b/drivers/infiniband/hw/hfi1/user_pages.c
@@ -121,13 +121,10 @@ int hfi1_acquire_user_pages(struct mm_struct *mm, unsigned long vaddr, size_t np
 void hfi1_release_user_pages(struct mm_struct *mm, struct page **p,
 			     size_t npages, bool dirty)
 {
-	size_t i;
-
-	for (i = 0; i < npages; i++) {
-		if (dirty)
-			set_page_dirty_lock(p[i]);
-		put_page(p[i]);
-	}
+	if (dirty)
+		put_user_pages_dirty_lock(p, npages);
+	else
+		put_user_pages(p, npages);
 
 	if (mm) { /* during close after signal, mm can be NULL */
 		down_write(&mm->mmap_sem);
diff --git a/drivers/infiniband/hw/mthca/mthca_memfree.c b/drivers/infiniband/hw/mthca/mthca_memfree.c
index cc9c0c8ccba3..b8b12effd009 100644
--- a/drivers/infiniband/hw/mthca/mthca_memfree.c
+++ b/drivers/infiniband/hw/mthca/mthca_memfree.c
@@ -481,7 +481,7 @@ int mthca_map_user_db(struct mthca_dev *dev, struct mthca_uar *uar,
 	ret = pci_map_sg(dev->pdev, &db_tab->page[i].mem, 1, PCI_DMA_TODEVICE);
 	if (ret < 0) {
-		put_page(pages[0]);
+		put_user_page(pages[0]);
 		goto out;
 	}
 
@@ -489,7 +489,7 @@ int mthca_map_user_db(struct mthca_dev *dev, struct mthca_uar *uar,
 			 mthca_uarc_virt(dev, uar, i));
 	if (ret) {
 		pci_unmap_sg(dev->pdev, &db_tab->page[i].mem, 1, PCI_DMA_TODEVICE);
-		put_page(sg_page(&db_tab->page[i].mem));
+		put_user_page(sg_page(&db_tab->page[i].mem));
 		goto out;
 	}
 
@@ -555,7 +555,7 @@ void mthca_cleanup_user_db_tab(struct mthca_dev *dev, struct mthca_uar *uar,
 		if (db_tab->page[i].uvirt) {
 			mthca_UNMAP_ICM(dev, mthca_uarc_virt(dev, uar, i), 1);
 			pci_unmap_sg(dev->pdev, &db_tab->page[i].mem, 1, PCI_DMA_TODEVICE);
-			put_page(sg_page(&db_tab->page[i].mem));
+			put_user_page(sg_page(&db_tab->page[i].mem));
 		}
 	}
diff --git a/drivers/infiniband/hw/qib/qib_user_pages.c b/drivers/infiniband/hw/qib/qib_user_pages.c
index 16543d5e80c3..1a5c64c8695f 100644
--- a/drivers/infiniband/hw/qib/qib_user_pages.c
+++ b/drivers/infiniband/hw/qib/qib_user_pages.c
@@ -40,13 +40,10 @@
 static void __qib_release_user_pages(struct page **p, size_t num_pages,
 				     int dirty)
 {
-	size_t i;
-
-	for (i = 0; i < num_pages; i++) {
-		if (dirty)
-			set_page_dirty_lock(p[i]);
-		put_page(p[i]);
-	}
+	if (dirty)
+		put_user_pages_dirty_lock(p, num_pages);
+	else
+		put_user_pages(p, num_pages);
 }
 
 /*
diff --git a/drivers/infiniband/hw/qib/qib_user_sdma.c b/drivers/infiniband/hw/qib/qib_user_sdma.c
index 926f3c8eba69..14f94d823907 100644
--- a/drivers/infiniband/hw/qib/qib_user_sdma.c
+++ b/drivers/infiniband/hw/qib/qib_user_sdma.c
@@ -266,7 +266,7 @@ static void qib_user_sdma_init_frag(struct qib_user_sdma_pkt *pkt,
 	pkt->addr[i].length = len;
 	pkt->addr[i].first_desc = first_desc;
 	pkt->addr[i].last_desc = last_desc;
-	pkt->addr[i].put_page = put_page;
+	pkt->addr[i].put_page = put_user_page;
 	pkt->addr[i].dma_mapped = dma_mapped;
 	pkt->addr[i].page = page;
 	pkt->addr[i].kvaddr = kvaddr;
@@ -321,7 +321,7 @@ static int qib_user_sdma_page_to_frags(const struct qib_devdata *dd,
 			 * the caller can ignore this page.
 			 */
 			if (put) {
-				put_page(page);
+				put_user_page(page);
 			} else {
 				/* coalesce case */
 				kunmap(page);
@@ -635,7 +635,7 @@ static void qib_user_sdma_free_pkt_frag(struct device *dev,
 			kunmap(pkt->addr[i].page);
 
 		if (pkt->addr[i].put_page)
-			put_page(pkt->addr[i].page);
+			put_user_page(pkt->addr[i].page);
 		else
 			__free_page(pkt->addr[i].page);
 	} else if (pkt->addr[i].kvaddr) {
@@ -710,7 +710,7 @@ static int qib_user_sdma_pin_pages(const struct qib_devdata *dd,
 	/* if error, return all pages not managed by pkt */
 free_pages:
 	while (i < j)
-		put_page(pages[i++]);
+		put_user_page(pages[i++]);
 
 done:
 	return ret;
diff --git a/drivers/infiniband/hw/usnic/usnic_uiom.c b/drivers/infiniband/hw/usnic/usnic_uiom.c
index 9dd39daa602b..0f607f31c262 100644
--- a/drivers/infiniband/hw/usnic/usnic_uiom.c
+++ b/drivers/infiniband/hw/usnic/usnic_uiom.c
@@ -91,7 +91,7 @@ static void usnic_uiom_put_pages(struct list_head *chunk_list, int dirty)
 			pa = sg_phys(sg);
 			if (!PageDirty(page) && dirty)
 				set_page_dirty_lock(page);
-			put_page(page);
+			put_user_page(page);
 			usnic_dbg("pa: %pa\n", &pa);
 		}
 		kfree(chunk);
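
The qib_user_sdma_pin_pages() hunk above also illustrates the
partial-failure rule that survives the conversion: on error, only the
pages that were pinned but not yet handed off get released, and they are
released with put_user_page() because they came from get_user_pages*().
A condensed sketch of that pattern (hypothetical code, not from any
driver; it assumes the 2018-era get_user_pages_fast() signature):

	static int demo_pin_partial(unsigned long addr, int npages,
				    struct page **pages)
	{
		int i = 0, j = 0;
		int ret = get_user_pages_fast(addr, npages, 0, pages);

		if (ret < npages) {
			j = ret < 0 ? 0 : ret;	/* pages actually pinned */
			ret = -ENOMEM;
			goto free_pages;
		}

		/* ... hand pages[0..npages) off, advancing i per page ... */
		return 0;

	free_pages:
		/* release only the pinned-but-unconsumed pages */
		while (i < j)
			put_user_page(pages[i++]);
		return ret;
	}
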