From patchwork Tue Dec 15 03:13:13 2020
X-Patchwork-Submitter: Andrew Morton
X-Patchwork-Id: 11973939
Date: Mon, 14 Dec 2020 19:13:13 -0800
From: Andrew Morton
To: akpm@linux-foundation.org, jack@suse.cz, linux-mm@kvack.org,
 mgorman@suse.de, mhocko@suse.com, mm-commits@vger.kernel.org,
 shy828301@gmail.com, songliubraving@fb.com, torvalds@linux-foundation.org,
 willy@infradead.org, ziy@nvidia.com
Subject: [patch 169/200] mm: migrate: clean up migrate_prep{_local}
Message-ID: <20201215031313.9oRVfPBJ6%akpm@linux-foundation.org>
In-Reply-To: <20201214190237.a17b70ae14f129e2dca3d204@linux-foundation.org>

From: Yang Shi
Subject: mm: migrate: clean up migrate_prep{_local}

migrate_prep{_local}() never fails, so it is pointless to have a return
value and to check it at the call sites.

Link: https://lkml.kernel.org/r/20201113205359.556831-5-shy828301@gmail.com
Signed-off-by: Yang Shi
Reviewed-by: Zi Yan
Cc: Jan Kara
Cc: Matthew Wilcox
Cc: Mel Gorman
Cc: Michal Hocko
Cc: Song Liu
Signed-off-by: Andrew Morton
---

 include/linux/migrate.h |    4 ++--
 mm/mempolicy.c          |    8 ++------
 mm/migrate.c            |    8 ++------
 3 files changed, 6 insertions(+), 14 deletions(-)

--- a/include/linux/migrate.h~mm-migrate-clean-up-migrate_prep_local
+++ a/include/linux/migrate.h
@@ -45,8 +45,8 @@ extern struct page *alloc_migration_targ
 extern int isolate_movable_page(struct page *page, isolate_mode_t mode);
 extern void putback_movable_page(struct page *page);
 
-extern int migrate_prep(void);
-extern int migrate_prep_local(void);
+extern void migrate_prep(void);
+extern void migrate_prep_local(void);
 extern void migrate_page_states(struct page *newpage, struct page *page);
 extern void migrate_page_copy(struct page *newpage, struct page *page);
 extern int migrate_huge_page_move_mapping(struct address_space *mapping,
--- a/mm/mempolicy.c~mm-migrate-clean-up-migrate_prep_local
+++ a/mm/mempolicy.c
@@ -1114,9 +1114,7 @@ int do_migrate_pages(struct mm_struct *m
 	int err;
 	nodemask_t tmp;
 
-	err = migrate_prep();
-	if (err)
-		return err;
+	migrate_prep();
 
 	mmap_read_lock(mm);
 
@@ -1315,9 +1313,7 @@ static long do_mbind(unsigned long start
 
 	if (flags & (MPOL_MF_MOVE | MPOL_MF_MOVE_ALL)) {
 
-		err = migrate_prep();
-		if (err)
-			goto mpol_out;
+		migrate_prep();
 	}
 	{
 		NODEMASK_SCRATCH(scratch);
--- a/mm/migrate.c~mm-migrate-clean-up-migrate_prep_local
+++ a/mm/migrate.c
@@ -62,7 +62,7 @@
  * to be migrated using isolate_lru_page(). If scheduling work on other CPUs is
  * undesirable, use migrate_prep_local()
  */
-int migrate_prep(void)
+void migrate_prep(void)
 {
 	/*
 	 * Clear the LRU lists so pages can be isolated.
@@ -71,16 +71,12 @@ int migrate_prep(void)
 	 * pages that may be busy.
 	 */
 	lru_add_drain_all();
-
-	return 0;
 }
 
 /* Do the necessary work of migrate_prep but not if it involves other CPUs */
-int migrate_prep_local(void)
+void migrate_prep_local(void)
 {
 	lru_add_drain();
-
-	return 0;
 }
 
 int isolate_movable_page(struct page *page, isolate_mode_t mode)
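
A note for readers skimming the diff (not part of the patch itself): lru_add_drain_all()
and lru_add_drain() have no failure path, so migrate_prep() and migrate_prep_local()
could only ever return 0, and every "if (err)" check at the call sites was dead code.
Below is a minimal userspace sketch of the same cleanup pattern; it is illustrative
only, the prep_*/caller_* names are made up, and none of it is kernel code.

#include <stdio.h>

/* Before: the prep step pretends it can fail... */
static int prep_old(void)
{
	/* ...but nothing in the body can fail, so it always returns 0. */
	return 0;
}

/* After: the signature states what the body already guaranteed. */
static void prep_new(void)
{
}

static int caller_old(void)
{
	int err = prep_old();
	if (err)		/* dead branch: prep_old() never fails */
		return err;
	return 0;
}

static int caller_new(void)
{
	prep_new();		/* nothing to check */
	return 0;
}

int main(void)
{
	printf("old: %d, new: %d\n", caller_old(), caller_new());
	return 0;
}

Making the prep helpers void also lets the compiler flag any future caller that tries
to act on a status that does not exist.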