From patchwork Mon Oct 8 21:16:21 2018
X-Patchwork-Id: 10631441
From: john.hubbard@gmail.com
X-Google-Original-From: jhubbard@nvidia.com
To: Matthew Wilcox, Michal Hocko, Christopher Lameter, Jason Gunthorpe,
    Dan Williams, Jan Kara
Cc: linux-mm@kvack.org, LKML, linux-rdma, linux-fsdevel@vger.kernel.org,
    John Hubbard
Subject: [PATCH v4 1/3] mm: get_user_pages: consolidate error handling
Date: Mon, 8 Oct 2018 14:16:21 -0700
Message-Id: <20181008211623.30796-2-jhubbard@nvidia.com>
X-Mailer: git-send-email 2.19.0
In-Reply-To: <20181008211623.30796-1-jhubbard@nvidia.com>
References: <20181008211623.30796-1-jhubbard@nvidia.com>

From: John Hubbard

An upcoming patch requires a way to operate on each page that any of the
get_user_pages_*() variants returns. In preparation for that, consolidate
the error handling for __get_user_pages(). This provides a single
location (the "out:" label) for operating on the collected set of pages
that are about to be returned.

As long as every use of the "ret" variable is being edited anyway, rename
"ret" --> "err", so that its name matches its true role. This also gets
rid of two shadowed variable declarations, as a tiny beneficial side
effect.

Reviewed-by: Jan Kara
Signed-off-by: John Hubbard
Reviewed-by: Andrew Morton
---
 mm/gup.c | 37 ++++++++++++++++++++++---------------
 1 file changed, 22 insertions(+), 15 deletions(-)

diff --git a/mm/gup.c b/mm/gup.c
index 1abc8b4afff6..05ee7c18e59a 100644
--- a/mm/gup.c
+++ b/mm/gup.c
@@ -660,6 +660,7 @@ static long __get_user_pages(struct task_struct *tsk, struct mm_struct *mm,
 		struct vm_area_struct **vmas, int *nonblocking)
 {
 	long i = 0;
+	int err = 0;
 	unsigned int page_mask;
 	struct vm_area_struct *vma = NULL;
 
@@ -685,18 +686,19 @@ static long __get_user_pages(struct task_struct *tsk, struct mm_struct *mm,
 		if (!vma || start >= vma->vm_end) {
 			vma = find_extend_vma(mm, start);
 			if (!vma && in_gate_area(mm, start)) {
-				int ret;
-				ret = get_gate_page(mm, start & PAGE_MASK,
+				err = get_gate_page(mm, start & PAGE_MASK,
 						gup_flags, &vma,
 						pages ? &pages[i] : NULL);
-				if (ret)
-					return i ? : ret;
+				if (err)
+					goto out;
 				page_mask = 0;
 				goto next_page;
 			}
 
-			if (!vma || check_vma_flags(vma, gup_flags))
-				return i ? : -EFAULT;
+			if (!vma || check_vma_flags(vma, gup_flags)) {
+				err = -EFAULT;
+				goto out;
+			}
 			if (is_vm_hugetlb_page(vma)) {
 				i = follow_hugetlb_page(mm, vma, pages, vmas,
 						&start, &nr_pages, i,
@@ -709,23 +711,25 @@ static long __get_user_pages(struct task_struct *tsk, struct mm_struct *mm,
 		 * If we have a pending SIGKILL, don't keep faulting pages and
 		 * potentially allocating memory.
 		 */
-		if (unlikely(fatal_signal_pending(current)))
-			return i ? i : -ERESTARTSYS;
+		if (unlikely(fatal_signal_pending(current))) {
+			err = -ERESTARTSYS;
+			goto out;
+		}
 		cond_resched();
 		page = follow_page_mask(vma, start, foll_flags, &page_mask);
 		if (!page) {
-			int ret;
-			ret = faultin_page(tsk, vma, start, &foll_flags,
+			err = faultin_page(tsk, vma, start, &foll_flags,
 					nonblocking);
-			switch (ret) {
+			switch (err) {
 			case 0:
 				goto retry;
 			case -EFAULT:
 			case -ENOMEM:
 			case -EHWPOISON:
-				return i ? i : ret;
+				goto out;
 			case -EBUSY:
-				return i;
+				err = 0;
+				goto out;
 			case -ENOENT:
 				goto next_page;
 			}
@@ -737,7 +741,8 @@ static long __get_user_pages(struct task_struct *tsk, struct mm_struct *mm,
 			 */
 			goto next_page;
 		} else if (IS_ERR(page)) {
-			return i ? i : PTR_ERR(page);
+			err = PTR_ERR(page);
+			goto out;
 		}
 		if (pages) {
 			pages[i] = page;
@@ -757,7 +762,9 @@ static long __get_user_pages(struct task_struct *tsk, struct mm_struct *mm,
 		start += page_increm * PAGE_SIZE;
 		nr_pages -= page_increm;
 	} while (nr_pages);
-	return i;
+
+out:
+	return i ? i : err;
 }
 
 static bool vma_permits_fault(struct vm_area_struct *vma,
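[Editorial note, not part of the original posting.] For readers tracking
the new control flow: the "out:" label preserves the function's
long-standing return contract, "i ? i : err". Partial success wins: if any
pages were pinned before a failure, their count is returned and the error
is dropped. Below is a caller-side sketch of that contract, using the
get_user_pages() signature of this kernel generation; the function
gup_example() is a hypothetical name for illustration only.

/*
 * Hypothetical caller, for illustration. The contract it relies on:
 * a return > 0 means that many pages are now pinned (possibly fewer
 * than requested), and a return < 0 is an errno with nothing pinned.
 * The caller must release exactly the returned number of pages.
 */
static int gup_example(unsigned long start, unsigned long nr_pages,
		       struct page **pages)
{
	long pinned = get_user_pages(start, nr_pages, FOLL_WRITE,
				     pages, NULL);

	if (pinned < 0)
		return pinned;		/* hard failure, nothing to release */

	/* Only pages[0 .. pinned-1] are valid here. */

	while (pinned--)
		put_page(pages[pinned]);	/* pre-put_user_page() idiom */
	return 0;
}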
From patchwork Mon Oct 8 21:16:22 2018
X-Patchwork-Id: 10631445
From: john.hubbard@gmail.com
X-Google-Original-From: jhubbard@nvidia.com
To: Matthew Wilcox, Michal Hocko, Christopher Lameter, Jason Gunthorpe,
    Dan Williams, Jan Kara
Cc: linux-mm@kvack.org, LKML, linux-rdma, linux-fsdevel@vger.kernel.org,
    John Hubbard, Al Viro, Jerome Glisse, Christoph Hellwig, Ralph Campbell
Subject: [PATCH v4 2/3] mm: introduce put_user_page*(), placeholder versions
Date: Mon, 8 Oct 2018 14:16:22 -0700
Message-Id: <20181008211623.30796-3-jhubbard@nvidia.com>
X-Mailer: git-send-email 2.19.0
In-Reply-To: <20181008211623.30796-1-jhubbard@nvidia.com>
References: <20181008211623.30796-1-jhubbard@nvidia.com>

From: John Hubbard

Introduces put_user_page(), which simply calls put_page(). This provides
a way to update all get_user_pages*() callers, so that they call
put_user_page() instead of put_page().

Also introduces put_user_pages(), and a few dirty/locked variations, as a
replacement for release_pages(), and also as a replacement for open-coded
loops that release multiple pages. These may be used for subsequent
performance improvements, via batching of pages to be released.

This prepares for eventually fixing the problem described in [1], and is
following a plan listed in [2], [3], [4].

[1] https://lwn.net/Articles/753027/ : "The Trouble with get_user_pages()"
[2] https://lkml.kernel.org/r/20180709080554.21931-1-jhubbard@nvidia.com
    Proposed steps for fixing get_user_pages() + DMA problems.
[3] https://lkml.kernel.org/r/20180710082100.mkdwngdv5kkrcz6n@quack2.suse.cz
    Bounce buffers (otherwise [2] is not really viable).
[4] https://lkml.kernel.org/r/20181003162115.GG24030@quack2.suse.cz
    Follow-up discussions.

CC: Matthew Wilcox
CC: Michal Hocko
CC: Christopher Lameter
CC: Jason Gunthorpe
CC: Dan Williams
CC: Jan Kara
CC: Al Viro
CC: Jerome Glisse
CC: Christoph Hellwig
CC: Ralph Campbell
Reviewed-by: Jan Kara
Signed-off-by: John Hubbard
---
 include/linux/mm.h | 49 +++++++++++++++++++++++++++++++++++++++++++++++--
 1 file changed, 47 insertions(+), 2 deletions(-)

diff --git a/include/linux/mm.h b/include/linux/mm.h
index 0416a7204be3..0490f4a71b9c 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -137,6 +137,8 @@ extern int overcommit_ratio_handler(struct ctl_table *, int, void __user *,
 				    size_t *, loff_t *);
 extern int overcommit_kbytes_handler(struct ctl_table *, int, void __user *,
 				    size_t *, loff_t *);
+int set_page_dirty(struct page *page);
+int set_page_dirty_lock(struct page *page);
 
 #define nth_page(page,n) pfn_to_page(page_to_pfn((page)) + (n))
 
@@ -943,6 +945,51 @@ static inline void put_page(struct page *page)
 		__put_page(page);
 }
 
+/*
+ * Pages that were pinned via get_user_pages*() should be released via
+ * either put_user_page(), or one of the put_user_pages*() routines
+ * below.
+ */
+static inline void put_user_page(struct page *page)
+{
+	put_page(page);
+}
+
+static inline void put_user_pages_dirty(struct page **pages,
+					unsigned long npages)
+{
+	unsigned long index;
+
+	for (index = 0; index < npages; index++) {
+		if (!PageDirty(pages[index]))
+			set_page_dirty(pages[index]);
+
+		put_user_page(pages[index]);
+	}
+}
+
+static inline void put_user_pages_dirty_lock(struct page **pages,
+					     unsigned long npages)
+{
+	unsigned long index;
+
+	for (index = 0; index < npages; index++) {
+		if (!PageDirty(pages[index]))
+			set_page_dirty_lock(pages[index]);
+
+		put_user_page(pages[index]);
+	}
+}
+
+static inline void put_user_pages(struct page **pages,
+				  unsigned long npages)
+{
+	unsigned long index;
+
+	for (index = 0; index < npages; index++)
+		put_user_page(pages[index]);
+}
+
 #if defined(CONFIG_SPARSEMEM) && !defined(CONFIG_SPARSEMEM_VMEMMAP)
 #define SECTION_IN_PAGE_FLAGS
 #endif
@@ -1534,8 +1581,6 @@ int redirty_page_for_writepage(struct writeback_control *wbc,
 void account_page_dirtied(struct page *page, struct address_space *mapping);
 void account_page_cleaned(struct page *page, struct address_space *mapping,
 			  struct bdi_writeback *wb);
-int set_page_dirty(struct page *page);
-int set_page_dirty_lock(struct page *page);
 void __cancel_dirty_page(struct page *page);
 static inline void cancel_dirty_page(struct page *page)
 {
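[Editorial note, not part of the original posting.] To make the intended
calling convention concrete, here is a sketch of a driver-style
pin/use/release cycle after conversion to the helpers introduced above.
The function example_pin_and_release() is a hypothetical name; the choice
between the plain and _dirty_lock variants mirrors the
set_page_dirty()/set_page_dirty_lock() split that the helpers wrap.

/*
 * Illustrative only: pin user pages, let a device write to them, then
 * release them through the new helpers. When the device may have
 * written to the pages, the _dirty_lock variant marks them dirty
 * before releasing; it uses set_page_dirty_lock(), which takes the
 * page lock itself, for callers that do not already hold it.
 */
static int example_pin_and_release(unsigned long start, unsigned long nr,
				   struct page **pages, bool wrote)
{
	long pinned = get_user_pages(start, nr, FOLL_WRITE, pages, NULL);

	if (pinned < 0)
		return pinned;

	/* ... device DMA into pages[0 .. pinned-1] ... */

	if (wrote)
		put_user_pages_dirty_lock(pages, pinned);
	else
		put_user_pages(pages, pinned);
	return 0;
}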
From patchwork Mon Oct 8 21:16:23 2018
X-Patchwork-Id: 10631449
From: john.hubbard@gmail.com
X-Google-Original-From: jhubbard@nvidia.com
To: Matthew Wilcox, Michal Hocko, Christopher Lameter, Jason Gunthorpe,
    Dan Williams, Jan Kara
Cc: linux-mm@kvack.org, LKML, linux-rdma, linux-fsdevel@vger.kernel.org,
    John Hubbard, Doug Ledford, Mike Marciniszyn, Dennis Dalessandro,
    Christian Benvenuti
Subject: [PATCH v4 3/3] infiniband/mm: convert put_page() to put_user_page*()
Date: Mon, 8 Oct 2018 14:16:23 -0700
Message-Id: <20181008211623.30796-4-jhubbard@nvidia.com>
X-Mailer: git-send-email 2.19.0
In-Reply-To: <20181008211623.30796-1-jhubbard@nvidia.com>
References: <20181008211623.30796-1-jhubbard@nvidia.com>

From: John Hubbard

For code that retains pages via get_user_pages*(), release those pages
via the new put_user_page(), or put_user_pages*(), instead of put_page().

This prepares for eventually fixing the problem described in [1], and is
following a plan listed in [2], [3], [4].

[1] https://lwn.net/Articles/753027/ : "The Trouble with get_user_pages()"
[2] https://lkml.kernel.org/r/20180709080554.21931-1-jhubbard@nvidia.com
    Proposed steps for fixing get_user_pages() + DMA problems.
[3] https://lkml.kernel.org/r/20180710082100.mkdwngdv5kkrcz6n@quack2.suse.cz
    Bounce buffers (otherwise [2] is not really viable).
[4] https://lkml.kernel.org/r/20181003162115.GG24030@quack2.suse.cz
    Follow-up discussions.
CC: Doug Ledford
CC: Jason Gunthorpe
CC: Mike Marciniszyn
CC: Dennis Dalessandro
CC: Christian Benvenuti
CC: linux-rdma@vger.kernel.org
CC: linux-kernel@vger.kernel.org
CC: linux-mm@kvack.org
Reviewed-by: Jan Kara
Reviewed-by: Dennis Dalessandro
Acked-by: Jason Gunthorpe
Signed-off-by: John Hubbard
---
 drivers/infiniband/core/umem.c              |  7 ++++---
 drivers/infiniband/core/umem_odp.c          |  2 +-
 drivers/infiniband/hw/hfi1/user_pages.c     | 11 ++++-------
 drivers/infiniband/hw/mthca/mthca_memfree.c |  6 +++---
 drivers/infiniband/hw/qib/qib_user_pages.c  | 11 ++++-------
 drivers/infiniband/hw/qib/qib_user_sdma.c   |  8 ++++----
 drivers/infiniband/hw/usnic/usnic_uiom.c    |  7 ++++---
 7 files changed, 24 insertions(+), 28 deletions(-)

diff --git a/drivers/infiniband/core/umem.c b/drivers/infiniband/core/umem.c
index a41792dbae1f..7ab7a3a35eb4 100644
--- a/drivers/infiniband/core/umem.c
+++ b/drivers/infiniband/core/umem.c
@@ -58,9 +58,10 @@ static void __ib_umem_release(struct ib_device *dev, struct ib_umem *umem, int d
 	for_each_sg(umem->sg_head.sgl, sg, umem->npages, i) {
 
 		page = sg_page(sg);
-		if (!PageDirty(page) && umem->writable && dirty)
-			set_page_dirty_lock(page);
-		put_page(page);
+		if (umem->writable && dirty)
+			put_user_pages_dirty_lock(&page, 1);
+		else
+			put_user_page(page);
 	}
 
 	sg_free_table(&umem->sg_head);

diff --git a/drivers/infiniband/core/umem_odp.c b/drivers/infiniband/core/umem_odp.c
index 6ec748eccff7..6227b89cf05c 100644
--- a/drivers/infiniband/core/umem_odp.c
+++ b/drivers/infiniband/core/umem_odp.c
@@ -717,7 +717,7 @@ int ib_umem_odp_map_dma_pages(struct ib_umem *umem, u64 user_virt, u64 bcnt,
 				ret = -EFAULT;
 				break;
 			}
-			put_page(local_page_list[j]);
+			put_user_page(local_page_list[j]);
 			continue;
 		}

diff --git a/drivers/infiniband/hw/hfi1/user_pages.c b/drivers/infiniband/hw/hfi1/user_pages.c
index e341e6dcc388..99ccc0483711 100644
--- a/drivers/infiniband/hw/hfi1/user_pages.c
+++ b/drivers/infiniband/hw/hfi1/user_pages.c
@@ -121,13 +121,10 @@ int hfi1_acquire_user_pages(struct mm_struct *mm, unsigned long vaddr, size_t np
 void hfi1_release_user_pages(struct mm_struct *mm, struct page **p,
 			     size_t npages, bool dirty)
 {
-	size_t i;
-
-	for (i = 0; i < npages; i++) {
-		if (dirty)
-			set_page_dirty_lock(p[i]);
-		put_page(p[i]);
-	}
+	if (dirty)
+		put_user_pages_dirty_lock(p, npages);
+	else
+		put_user_pages(p, npages);
 
 	if (mm) { /* during close after signal, mm can be NULL */
 		down_write(&mm->mmap_sem);

diff --git a/drivers/infiniband/hw/mthca/mthca_memfree.c b/drivers/infiniband/hw/mthca/mthca_memfree.c
index cc9c0c8ccba3..b8b12effd009 100644
--- a/drivers/infiniband/hw/mthca/mthca_memfree.c
+++ b/drivers/infiniband/hw/mthca/mthca_memfree.c
@@ -481,7 +481,7 @@ int mthca_map_user_db(struct mthca_dev *dev, struct mthca_uar *uar,
 	ret = pci_map_sg(dev->pdev, &db_tab->page[i].mem, 1, PCI_DMA_TODEVICE);
 	if (ret < 0) {
-		put_page(pages[0]);
+		put_user_page(pages[0]);
 		goto out;
 	}
 
@@ -489,7 +489,7 @@ int mthca_map_user_db(struct mthca_dev *dev, struct mthca_uar *uar,
 			 mthca_uarc_virt(dev, uar, i));
 	if (ret) {
 		pci_unmap_sg(dev->pdev, &db_tab->page[i].mem, 1, PCI_DMA_TODEVICE);
-		put_page(sg_page(&db_tab->page[i].mem));
+		put_user_page(sg_page(&db_tab->page[i].mem));
 		goto out;
 	}
 
@@ -555,7 +555,7 @@ void mthca_cleanup_user_db_tab(struct mthca_dev *dev, struct mthca_uar *uar,
 		if (db_tab->page[i].uvirt) {
 			mthca_UNMAP_ICM(dev, mthca_uarc_virt(dev, uar, i), 1);
 			pci_unmap_sg(dev->pdev, &db_tab->page[i].mem, 1, PCI_DMA_TODEVICE);
-			put_page(sg_page(&db_tab->page[i].mem));
+			put_user_page(sg_page(&db_tab->page[i].mem));
 		}
 	}

diff --git a/drivers/infiniband/hw/qib/qib_user_pages.c b/drivers/infiniband/hw/qib/qib_user_pages.c
index 16543d5e80c3..1a5c64c8695f 100644
--- a/drivers/infiniband/hw/qib/qib_user_pages.c
+++ b/drivers/infiniband/hw/qib/qib_user_pages.c
@@ -40,13 +40,10 @@
 static void __qib_release_user_pages(struct page **p, size_t num_pages,
 				     int dirty)
 {
-	size_t i;
-
-	for (i = 0; i < num_pages; i++) {
-		if (dirty)
-			set_page_dirty_lock(p[i]);
-		put_page(p[i]);
-	}
+	if (dirty)
+		put_user_pages_dirty_lock(p, num_pages);
+	else
+		put_user_pages(p, num_pages);
 }
 
 /*

diff --git a/drivers/infiniband/hw/qib/qib_user_sdma.c b/drivers/infiniband/hw/qib/qib_user_sdma.c
index 926f3c8eba69..14f94d823907 100644
--- a/drivers/infiniband/hw/qib/qib_user_sdma.c
+++ b/drivers/infiniband/hw/qib/qib_user_sdma.c
@@ -266,7 +266,7 @@ static void qib_user_sdma_init_frag(struct qib_user_sdma_pkt *pkt,
 	pkt->addr[i].length = len;
 	pkt->addr[i].first_desc = first_desc;
 	pkt->addr[i].last_desc = last_desc;
-	pkt->addr[i].put_page = put_page;
+	pkt->addr[i].put_page = put_user_page;
 	pkt->addr[i].dma_mapped = dma_mapped;
 	pkt->addr[i].page = page;
 	pkt->addr[i].kvaddr = kvaddr;
@@ -321,7 +321,7 @@ static int qib_user_sdma_page_to_frags(const struct qib_devdata *dd,
 			 * the caller can ignore this page.
 			 */
 			if (put) {
-				put_page(page);
+				put_user_page(page);
 			} else {
 				/* coalesce case */
 				kunmap(page);
@@ -635,7 +635,7 @@ static void qib_user_sdma_free_pkt_frag(struct device *dev,
 			kunmap(pkt->addr[i].page);
 
 		if (pkt->addr[i].put_page)
-			put_page(pkt->addr[i].page);
+			put_user_page(pkt->addr[i].page);
 		else
 			__free_page(pkt->addr[i].page);
 	} else if (pkt->addr[i].kvaddr) {
@@ -710,7 +710,7 @@ static int qib_user_sdma_pin_pages(const struct qib_devdata *dd,
 	/* if error, return all pages not managed by pkt */
 free_pages:
 	while (i < j)
-		put_page(pages[i++]);
+		put_user_page(pages[i++]);
 
 done:
 	return ret;

diff --git a/drivers/infiniband/hw/usnic/usnic_uiom.c b/drivers/infiniband/hw/usnic/usnic_uiom.c
index 9dd39daa602b..9e3615fd05f7 100644
--- a/drivers/infiniband/hw/usnic/usnic_uiom.c
+++ b/drivers/infiniband/hw/usnic/usnic_uiom.c
@@ -89,9 +89,10 @@ static void usnic_uiom_put_pages(struct list_head *chunk_list, int dirty)
 		for_each_sg(chunk->page_list, sg, chunk->nents, i) {
 			page = sg_page(sg);
 			pa = sg_phys(sg);
-			if (!PageDirty(page) && dirty)
-				set_page_dirty_lock(page);
-			put_page(page);
+			if (dirty)
+				put_user_pages_dirty_lock(&page, 1);
+			else
+				put_user_page(page);
 			usnic_dbg("pa: %pa\n", &pa);
 		}
 		kfree(chunk);
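[Editorial note, not part of the original posting.] The conversion recipe
is the same across all seven files, and one detail is worth calling out:
in __ib_umem_release() and usnic_uiom_put_pages(), a single page is passed
by address, and the old !PageDirty() pre-check disappears from the caller
because it now lives inside put_user_pages_dirty_lock() itself. A minimal
sketch of that single-page pattern; example_release_one() is a made-up
name standing in for the driver release paths above.

/*
 * Sketch of the single-page conversion used above. The old
 * "if (!PageDirty(page)) set_page_dirty_lock(page);" check is not
 * lost: put_user_pages_dirty_lock() performs it internally.
 */
static void example_release_one(struct page *page, bool dirty)
{
	if (dirty)
		put_user_pages_dirty_lock(&page, 1);	/* mark dirty, then release */
	else
		put_user_page(page);			/* release only */
}

Beyond readability, routing every release through put_user_page() gives
the mm layer a single choke point at which the tracking of pinned pages
can later be changed, which is the stated goal of the plan in [1]-[4].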