From patchwork Mon Oct 21 15:17:10 2019
X-Patchwork-Submitter: zhong jiang
X-Patchwork-Id: 11202507
From: zhong jiang <zhongjiang@huawei.com>
Subject: [PATCH] mm/gup: allow CMA migration to propagate errors back to caller
Date: Mon, 21 Oct 2019 23:17:10 +0800
Message-ID: <1571671030-58029-1-git-send-email-zhongjiang@huawei.com>
X-Mailer: git-send-email 1.7.12.4

check_and_migrate_cma_pages() was recording the result of
__get_user_pages_locked() in an unsigned "nr_pages" variable. Because
__get_user_pages_locked() returns a signed value that can include
negative errno values, this had the effect of hiding errors.
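
For illustration, here is a minimal, self-contained userspace sketch of
that pitfall. The helper name pin_user_range() is made up for the
example and is not the kernel API; it only mimics the "positive count
or negative errno" return convention of __get_user_pages_locked():

#include <stdio.h>
#include <errno.h>

/*
 * Hypothetical stand-in for a gup-style helper: returns a positive
 * page count on success or a negative errno on failure.
 */
static long pin_user_range(int fail)
{
	return fail ? -ENOMEM : 16;
}

int main(void)
{
	unsigned long nr_pages = pin_user_range(1);	/* -ENOMEM wraps to a huge value */
	long ret = pin_user_range(1);			/* stays negative */

	if (nr_pages > 0)
		printf("unsigned: %lu looks like success, the error is hidden\n",
		       nr_pages);
	if (ret < 0)
		printf("signed: %ld correctly reports the failure\n", ret);

	return 0;
}

Both calls fail, but only the signed variable still looks like an error
afterwards; the unsigned copy compares as greater than zero and the
failure is silently treated as success.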
Change the check_and_migrate_cma_pages() implementation so that it uses
a signed variable instead, and propagates the result back to the caller
just as other gup internal functions do.

This was discovered with the help of unsigned_lesser_than_zero.cocci.

Suggested-by: John Hubbard
Signed-off-by: zhong jiang
Acked-by: Vlastimil Babka
Reviewed-by: John Hubbard
Reviewed-by: Ira Weiny
---
 mm/gup.c | 8 +++++---
 1 file changed, 5 insertions(+), 3 deletions(-)

diff --git a/mm/gup.c b/mm/gup.c
index 8f236a3..c2b3e11 100644
--- a/mm/gup.c
+++ b/mm/gup.c
@@ -1443,6 +1443,7 @@ static long check_and_migrate_cma_pages(struct task_struct *tsk,
 	bool drain_allow = true;
 	bool migrate_allow = true;
 	LIST_HEAD(cma_page_list);
+	long ret = nr_pages;
 
 check_again:
 	for (i = 0; i < nr_pages;) {
@@ -1504,17 +1505,18 @@ static long check_and_migrate_cma_pages(struct task_struct *tsk,
 		 * again migrating any new CMA pages which we failed to isolate
 		 * earlier.
 		 */
-		nr_pages = __get_user_pages_locked(tsk, mm, start, nr_pages,
+		ret = __get_user_pages_locked(tsk, mm, start, nr_pages,
 						   pages, vmas, NULL,
 						   gup_flags);
 
-		if ((nr_pages > 0) && migrate_allow) {
+		if ((ret > 0) && migrate_allow) {
+			nr_pages = ret;
 			drain_allow = true;
 			goto check_again;
 		}
 	}
 
-	return nr_pages;
+	return ret;
 }
 #else
 static long check_and_migrate_cma_pages(struct task_struct *tsk,
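
As a caller-side illustration of what "propagate errors back to the
caller" buys us, here is a small sketch; migrate_and_pin() is an
invented stand-in, not check_and_migrate_cma_pages() or the real gup
call chain:

#include <stdio.h>
#include <errno.h>

/*
 * Invented stand-in for the fixed helper: with the patch applied, a
 * failure from the inner __get_user_pages_locked()-style call comes
 * back as a negative errno instead of being masked by an unsigned
 * page count.
 */
static long migrate_and_pin(int fail)
{
	long ret = fail ? -ENOMEM : 32;

	return ret;
}

int main(void)
{
	long ret = migrate_and_pin(1);

	if (ret < 0)
		fprintf(stderr, "pinning failed: %ld\n", ret);
	else
		printf("pinned %ld pages\n", ret);

	return 0;
}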