From patchwork Thu May 3 23:29:33 2018
X-Patchwork-Submitter: Mike Kravetz
X-Patchwork-Id: 10379453
From: Mike Kravetz <mike.kravetz@oracle.com>
To: linux-mm@kvack.org, linux-kernel@vger.kernel.org, linux-api@vger.kernel.org
Cc: Reinette Chatre, Michal Hocko, Christopher Lameter, Guy Shattah,
    Anshuman Khandual, Michal Nazarewicz, Vlastimil Babka, David Nellans,
    Laura Abbott, Pavel Machek, Dave Hansen, Andrew Morton, Mike Kravetz
Subject: [PATCH v2 2/4] mm: check for proper migrate type during isolation
Date: Thu, 3 May 2018 16:29:33 -0700
Message-Id: <20180503232935.22539-3-mike.kravetz@oracle.com>
In-Reply-To: <20180503232935.22539-1-mike.kravetz@oracle.com>
References: <20180503232935.22539-1-mike.kravetz@oracle.com>

The routine
start_isolate_page_range and alloc_contig_range have comments saying
that migratetype must be either MIGRATE_MOVABLE or MIGRATE_CMA.
However, this is not enforced.  What is important is that all
pageblocks in the range are of type migratetype.  This is because
blocks will be set back to migratetype on error.

Add a boolean argument enforce_migratetype to the routine
start_isolate_page_range.  If set, it will check that all pageblocks
in the range have the passed migratetype.  Return -EINVAL if a
pageblock of the wrong type is found in the range.

A boolean is used for enforce_migratetype as there are two primary
users.  Contiguous range allocation wants to enforce migrate type
checking, while memory offline (hotplug) is not concerned about
type checking.

Signed-off-by: Mike Kravetz <mike.kravetz@oracle.com>
---
 include/linux/page-isolation.h |  8 +++-----
 mm/memory_hotplug.c            |  2 +-
 mm/page_alloc.c                | 17 +++++++++--------
 mm/page_isolation.c            | 40 ++++++++++++++++++++++++++++++----------
 4 files changed, 43 insertions(+), 24 deletions(-)

diff --git a/include/linux/page-isolation.h b/include/linux/page-isolation.h
index 4ae347cbc36d..2ab7e5a399ac 100644
--- a/include/linux/page-isolation.h
+++ b/include/linux/page-isolation.h
@@ -38,8 +38,6 @@ int move_freepages_block(struct zone *zone, struct page *page,
 
 /*
  * Changes migrate type in [start_pfn, end_pfn) to be MIGRATE_ISOLATE.
- * If specified range includes migrate types other than MOVABLE or CMA,
- * this will fail with -EBUSY.
  *
  * For isolating all pages in the range finally, the caller have to
  * free all pages in the range.  test_page_isolated() can be used for
@@ -47,11 +45,11 @@ int move_freepages_block(struct zone *zone, struct page *page,
  */
 int
 start_isolate_page_range(unsigned long start_pfn, unsigned long end_pfn,
-			 unsigned migratetype, bool skip_hwpoisoned_pages);
+			 unsigned migratetype, bool skip_hwpoisoned_pages,
+			 bool enforce_migratetype);
 
 /*
- * Changes MIGRATE_ISOLATE to MIGRATE_MOVABLE.
- * target range is [start_pfn, end_pfn)
+ * Changes MIGRATE_ISOLATE to migratetype for range [start_pfn, end_pfn)
  */
 int
 undo_isolate_page_range(unsigned long start_pfn, unsigned long end_pfn,
diff --git a/mm/memory_hotplug.c b/mm/memory_hotplug.c
index f74826cdceea..ebc1c8c330e2 100644
--- a/mm/memory_hotplug.c
+++ b/mm/memory_hotplug.c
@@ -1601,7 +1601,7 @@ static int __ref __offline_pages(unsigned long start_pfn,
 
 	/* set above range as isolated */
 	ret = start_isolate_page_range(start_pfn, end_pfn,
-				       MIGRATE_MOVABLE, true);
+				       MIGRATE_MOVABLE, true, false);
 	if (ret)
 		return ret;
 
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 0fd5e8e2456e..cb1a5e0be6ee 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -7787,9 +7787,10 @@ static int __alloc_contig_migrate_range(struct compact_control *cc,
  * alloc_contig_range() -- tries to allocate given range of pages
  * @start:	start PFN to allocate
  * @end:	one-past-the-last PFN to allocate
- * @migratetype:	migratetype of the underlaying pageblocks (either
- *			#MIGRATE_MOVABLE or #MIGRATE_CMA).  All pageblocks
- *			in range must have the same migratetype and it must
+ * @migratetype:	migratetype of the underlaying pageblocks.  All
+ *			pageblocks in range must have the same migratetype.
+ *			migratetype is typically MIGRATE_MOVABLE or
+ *			MIGRATE_CMA, but this is not a requirement.
  *			be either of the two.
  * @gfp_mask:	GFP mask to use during compaction
  *
@@ -7840,15 +7841,15 @@ int alloc_contig_range(unsigned long start, unsigned long end,
 	 * allocator removing them from the buddy system.  This way
 	 * page allocator will never consider using them.
 	 *
-	 * This lets us mark the pageblocks back as
-	 * MIGRATE_CMA/MIGRATE_MOVABLE so that free pages in the
-	 * aligned range but not in the unaligned, original range are
-	 * put back to page allocator so that buddy can use them.
+	 * This lets us mark the pageblocks back as their original
+	 * migrate type so that free pages in the aligned range but
+	 * not in the unaligned, original range are put back to page
+	 * allocator so that buddy can use them.
 	 */
 	ret = start_isolate_page_range(pfn_max_align_down(start),
 				       pfn_max_align_up(end), migratetype,
-				       false);
+				       false, true);
 	if (ret)
 		return ret;
 
diff --git a/mm/page_isolation.c b/mm/page_isolation.c
index 43e085608846..472191cc1909 100644
--- a/mm/page_isolation.c
+++ b/mm/page_isolation.c
@@ -16,7 +16,8 @@
 #include <trace/events/page_isolation.h>
 
 static int set_migratetype_isolate(struct page *page, int migratetype,
-				bool skip_hwpoisoned_pages)
+				bool skip_hwpoisoned_pages,
+				bool enforce_migratetype)
 {
 	struct zone *zone;
 	unsigned long flags, pfn;
@@ -36,6 +37,17 @@ static int set_migratetype_isolate(struct page *page, int migratetype,
 	if (is_migrate_isolate_page(page))
 		goto out;
 
+	/*
+	 * If requested, check migration type of pageblock and make sure
+	 * it matches migratetype
+	 */
+	if (enforce_migratetype) {
+		if (get_pageblock_migratetype(page) != migratetype) {
+			ret = -EINVAL;
+			goto out;
+		}
+	}
+
 	pfn = page_to_pfn(page);
 	arg.start_pfn = pfn;
 	arg.nr_pages = pageblock_nr_pages;
@@ -167,14 +179,16 @@ __first_valid_page(unsigned long pfn, unsigned long nr_pages)
  * to be MIGRATE_ISOLATE.
  * @start_pfn:		The lower PFN of the range to be isolated.
  * @end_pfn:		The upper PFN of the range to be isolated.
- * @migratetype:	migrate type to set in error recovery.
+ * @migratetype:	migrate type of all blocks in range.
  *
 * Making page-allocation-type to be MIGRATE_ISOLATE means free pages in
 * the range will never be allocated. Any free pages and pages freed in the
 * future will not be allocated again.
 *
 * start_pfn/end_pfn must be aligned to pageblock_order.
- * Return 0 on success and -EBUSY if any part of range cannot be isolated.
+ * Return 0 on success or error returned by set_migratetype_isolate.  Typical
+ * errors are -EBUSY if any part of range cannot be isolated or -EINVAL if
+ * any page block is not of migratetype.
 *
 * There is no high level synchronization mechanism that prevents two threads
 * from trying to isolate overlapping ranges.  If this happens, one thread
@@ -185,11 +199,13 @@ __first_valid_page(unsigned long pfn, unsigned long nr_pages)
 * prevents two threads from simultaneously working on overlapping ranges.
 */
 int start_isolate_page_range(unsigned long start_pfn, unsigned long end_pfn,
-			     unsigned migratetype, bool skip_hwpoisoned_pages)
+			     unsigned migratetype, bool skip_hwpoisoned_pages,
+			     bool enforce_migratetype)
 {
 	unsigned long pfn;
 	unsigned long undo_pfn;
 	struct page *page;
+	int ret = 0;
 
 	BUG_ON(!IS_ALIGNED(start_pfn, pageblock_nr_pages));
 	BUG_ON(!IS_ALIGNED(end_pfn, pageblock_nr_pages));
@@ -198,13 +214,17 @@ int start_isolate_page_range(unsigned long start_pfn, unsigned long end_pfn,
 	     pfn < end_pfn;
 	     pfn += pageblock_nr_pages) {
 		page = __first_valid_page(pfn, pageblock_nr_pages);
-		if (page &&
-		    set_migratetype_isolate(page, migratetype, skip_hwpoisoned_pages)) {
-			undo_pfn = pfn;
-			goto undo;
+		if (page) {
+			ret = set_migratetype_isolate(page, migratetype,
+						      skip_hwpoisoned_pages,
+						      enforce_migratetype);
+			if (ret) {
+				undo_pfn = pfn;
+				goto undo;
+			}
 		}
 	}
-	return 0;
+	return ret;
 
 undo:
 	for (pfn = start_pfn;
 	     pfn < undo_pfn;
@@ -215,7 +235,7 @@ int start_isolate_page_range(unsigned long start_pfn, unsigned long end_pfn,
 		unset_migratetype_isolate(page, migratetype);
 	}
 
-	return -EBUSY;
+	return ret;
 }
 
 /*