From patchwork Fri Oct 12 06:00:09 2018
X-Patchwork-Submitter: john.hubbard@gmail.com
X-Patchwork-Id: 10637931
From: john.hubbard@gmail.com
X-Google-Original-From: jhubbard@nvidia.com
To: Matthew Wilcox, Michal Hocko, Christopher Lameter, Jason Gunthorpe,
	Dan Williams, Jan Kara
Cc: linux-mm@kvack.org, Andrew Morton, LKML, linux-rdma,
	linux-fsdevel@vger.kernel.org, John Hubbard
Subject: [PATCH 1/6] mm: get_user_pages: consolidate error handling
Date: Thu, 11 Oct 2018 23:00:09 -0700
Message-Id: <20181012060014.10242-2-jhubbard@nvidia.com>
In-Reply-To: <20181012060014.10242-1-jhubbard@nvidia.com>
References: <20181012060014.10242-1-jhubbard@nvidia.com>
X-Mailer: git-send-email 2.19.1

From: John Hubbard

An upcoming patch requires a way to operate on each page that any of the
get_user_pages_*() variants returns. In preparation for that, consolidate
the error handling for __get_user_pages(). This provides a single location
(the "out:" label) for operating on the collected set of pages that are
about to be returned.

As long as every use of the "ret" variable is being edited anyway, rename
"ret" --> "err", so that its name matches its true role. This also gets
rid of two shadowed variable declarations, as a tiny beneficial side
effect.

Reviewed-by: Jan Kara
Reviewed-by: Andrew Morton
Signed-off-by: John Hubbard
Reviewed-by: Balbir Singh
---
 mm/gup.c | 37 ++++++++++++++++++++++---------------
 1 file changed, 22 insertions(+), 15 deletions(-)

diff --git a/mm/gup.c b/mm/gup.c
index 1abc8b4afff6..05ee7c18e59a 100644
--- a/mm/gup.c
+++ b/mm/gup.c
@@ -660,6 +660,7 @@ static long __get_user_pages(struct task_struct *tsk, struct mm_struct *mm,
 		struct vm_area_struct **vmas, int *nonblocking)
 {
 	long i = 0;
+	int err = 0;
 	unsigned int page_mask;
 	struct vm_area_struct *vma = NULL;
 
@@ -685,18 +686,19 @@ static long __get_user_pages(struct task_struct *tsk, struct mm_struct *mm,
 		if (!vma || start >= vma->vm_end) {
 			vma = find_extend_vma(mm, start);
 			if (!vma && in_gate_area(mm, start)) {
-				int ret;
-				ret = get_gate_page(mm, start & PAGE_MASK,
+				err = get_gate_page(mm, start & PAGE_MASK,
 						gup_flags, &vma,
 						pages ? &pages[i] : NULL);
-				if (ret)
-					return i ? : ret;
+				if (err)
+					goto out;
 				page_mask = 0;
 				goto next_page;
 			}
 
-			if (!vma || check_vma_flags(vma, gup_flags))
-				return i ? : -EFAULT;
+			if (!vma || check_vma_flags(vma, gup_flags)) {
+				err = -EFAULT;
+				goto out;
+			}
 			if (is_vm_hugetlb_page(vma)) {
 				i = follow_hugetlb_page(mm, vma, pages, vmas,
 						&start, &nr_pages, i,
@@ -709,23 +711,25 @@ static long __get_user_pages(struct task_struct *tsk, struct mm_struct *mm,
 		 * If we have a pending SIGKILL, don't keep faulting pages and
 		 * potentially allocating memory.
 		 */
-		if (unlikely(fatal_signal_pending(current)))
-			return i ? i : -ERESTARTSYS;
+		if (unlikely(fatal_signal_pending(current))) {
+			err = -ERESTARTSYS;
+			goto out;
+		}
 		cond_resched();
 
 		page = follow_page_mask(vma, start, foll_flags, &page_mask);
 		if (!page) {
-			int ret;
-			ret = faultin_page(tsk, vma, start, &foll_flags,
+			err = faultin_page(tsk, vma, start, &foll_flags,
 					nonblocking);
-			switch (ret) {
+			switch (err) {
 			case 0:
 				goto retry;
 			case -EFAULT:
 			case -ENOMEM:
 			case -EHWPOISON:
-				return i ? i : ret;
+				goto out;
 			case -EBUSY:
-				return i;
+				err = 0;
+				goto out;
 			case -ENOENT:
 				goto next_page;
 			}
@@ -737,7 +741,8 @@ static long __get_user_pages(struct task_struct *tsk, struct mm_struct *mm,
 			 */
 			goto next_page;
 		} else if (IS_ERR(page)) {
-			return i ? i : PTR_ERR(page);
+			err = PTR_ERR(page);
+			goto out;
 		}
 		if (pages) {
 			pages[i] = page;
@@ -757,7 +762,9 @@ static long __get_user_pages(struct task_struct *tsk, struct mm_struct *mm,
 		start += page_increm * PAGE_SIZE;
 		nr_pages -= page_increm;
 	} while (nr_pages);
-	return i;
+
+out:
+	return i ? i : err;
 }
 
 static bool vma_permits_fault(struct vm_area_struct *vma,
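
A note for readers skimming the diff: the whole change boils down to the
"single exit label" idiom, where every failure path funnels through one
label and the function's "partial success wins over the error code"
policy lives in exactly one place. Below is a minimal standalone sketch
of that shape; it is illustrative only, and process_items(),
acquire_item(), and MAX_ITEMS are hypothetical stand-ins, not kernel
code.

/*
 * Illustrative sketch of the error-handling shape this patch gives
 * __get_user_pages(): every failure jumps to "out", and the single
 * "return i ? i : err;" encodes "report partial progress if any work
 * was done, otherwise report the error". Hypothetical names; not
 * kernel code.
 */
#include <errno.h>
#include <stdio.h>

#define MAX_ITEMS 8

static int acquire_item(long n)
{
	return (n == 3) ? -EFAULT : 0;	/* pretend item 3 faults */
}

static long process_items(void)
{
	long i = 0;
	int err = 0;

	while (i < MAX_ITEMS) {
		err = acquire_item(i);
		if (err)
			goto out;	/* one exit path for all failures */
		i++;
	}
out:
	return i ? i : err;	/* partial progress beats the error code */
}

int main(void)
{
	printf("processed: %ld\n", process_items());	/* prints 3 */
	return 0;
}

Here the fault on item 3 still yields a return value of 3, because three
items were already processed; only a failure on the very first item
propagates -EFAULT to the caller. That is exactly the behavior the new
"i ? i : err" (and the old "i ? : ret", GCC's two-operand conditional)
expressions preserve.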