From patchwork Mon Mar 14 07:18:03 2016
X-Patchwork-Submitter: Joonsoo Kim <iamjoonsoo.kim@lge.com>
X-Patchwork-Id: 8576821
Date: Mon, 14 Mar 2016 16:18:03 +0900
From: Joonsoo Kim <iamjoonsoo.kim@lge.com>
To: Vlastimil Babka
Cc: Laura Abbott, Arnd Bergmann, Catalin Marinas,
 "Leizhen (ThunderTown)", Will Deacon, linux-kernel@vger.kernel.org,
 qiuxishi, linux-mm@kvack.org, dingtinahong, Hanjun Guo, Sasha Levin,
 Andrew Morton, linux-arm-kernel@lists.infradead.org,
 chenjie6@huawei.com
Subject: Re: Suspicious error for CMA stress test
Message-ID: <20160314071803.GA28094@js1304-P5Q-DELUXE>
References: <56D93ABE.9070406@huawei.com>
 <20160307043442.GB24602@js1304-P5Q-DELUXE> <56DD38E7.3050107@huawei.com>
 <56DDCB86.4030709@redhat.com> <56DE30CB.7020207@huawei.com>
 <56DF7B28.9060108@huawei.com> <56E2FB5C.1040602@suse.cz>
 <20160314064925.GA27587@js1304-P5Q-DELUXE> <56E662E8.700@suse.cz>
In-Reply-To: <56E662E8.700@suse.cz>

On Mon, Mar 14, 2016 at 08:06:16AM +0100, Vlastimil Babka wrote:
> On 03/14/2016 07:49 AM, Joonsoo Kim wrote:
> >On Fri, Mar 11, 2016 at 06:07:40PM +0100, Vlastimil Babka wrote:
> >>On 03/11/2016 04:00 PM, Joonsoo Kim wrote:
> >>
> >>How about something like this? Just an idea, probably buggy
> >>(off-by-one etc.). Should keep the cost away from the relatively
> >>fewer iterations above pageblock_order.
> >
> >Hmm... I tested this and found that its code size is a little bit
> >larger than mine. I'm not sure why this happens exactly, but I guess
> >it is related to compiler optimization. In this case, I'm in favor of
> >my implementation because it looks like a good abstraction. It adds
> >one unlikely branch to the merge loop, but the compiler would
> >optimize it to check it once.
>
> I would be surprised if the compiler optimized that to check it once,
> as order increases with each loop iteration. But maybe it's smart
> enough to do something like I did by hand? Guess I'll check the
> disassembly.

Okay. I used the following slightly optimized version, and I need to
add 'max_order = min_t(unsigned int, MAX_ORDER, pageblock_order + 1)'
to yours. Please consider it, too.

Thanks.
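To make the loop-shape question above concrete before the patch itself,
here is a standalone userspace C sketch contrasting the two variants
being discussed: one merge loop with the isolation test inside (the
shape of the patch below), versus the loop split by hand so that only
the iterations at or above pageblock_order pay for the test. The
helpers can_merge() and isolation_compatible() are hypothetical
stand-ins for illustration only, not kernel APIs, and the real check
lives inside page_is_buddy() rather than inline in the loop:

#include <stdbool.h>
#include <stdio.h>

#define MAX_ORDER       11
#define PAGEBLOCK_ORDER 9

/* Hypothetical stand-ins so the sketch runs; the real predicates test
 * buddy state and pageblock migratetypes. */
static bool can_merge(unsigned int order) { (void)order; return true; }
static bool isolation_compatible(unsigned int order) { (void)order; return true; }

/* Shape A (as in the patch below): one loop, isolation test inside.
 * Since 'order' changes every iteration, the order >= PAGEBLOCK_ORDER
 * part of the test cannot simply be hoisted out of the loop. */
static unsigned int merge_single_loop(unsigned int order, unsigned int max_order)
{
        while (order < max_order - 1) {
                if (order >= PAGEBLOCK_ORDER && !isolation_compatible(order))
                        break;
                if (!can_merge(order))
                        break;
                order++;                /* merged one level up */
        }
        return order;
}

/* Shape B (the hand-split variant): the common low orders skip the
 * isolation test entirely; only the few levels at or above
 * PAGEBLOCK_ORDER pay for it. */
static unsigned int merge_split_loops(unsigned int order, unsigned int max_order)
{
        while (order < PAGEBLOCK_ORDER && order < max_order - 1) {
                if (!can_merge(order))
                        return order;
                order++;
        }
        while (order < max_order - 1) {
                if (!isolation_compatible(order) || !can_merge(order))
                        break;
                order++;
        }
        return order;
}

int main(void)
{
        printf("single loop: stopped at order %u\n",
               merge_single_loop(0, MAX_ORDER));
        printf("split loops: stopped at order %u\n",
               merge_split_loops(0, MAX_ORDER));
        return 0;
}

In shape B nothing in the first loop depends on the isolation state, so
there is no per-iteration branch in the common case; in shape A the
test stays inside the loop because order changes on every iteration,
which is exactly why hoisting it to "check it once" is doubtful.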
------------------------>8------------------------
From 36b8ffdaa0e7a8d33fd47a62a35a9e507e3e62e9 Mon Sep 17 00:00:00 2001
From: Joonsoo Kim <iamjoonsoo.kim@lge.com>
Date: Mon, 14 Mar 2016 15:20:07 +0900
Subject: [PATCH] mm: fix cma

Signed-off-by: Joonsoo Kim <iamjoonsoo.kim@lge.com>
---
 mm/page_alloc.c | 29 +++++++++++++++--------------
 1 file changed, 15 insertions(+), 14 deletions(-)

diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 0bb933a..f7baa4f 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -627,8 +627,8 @@ static inline void rmv_page_order(struct page *page)
  *
  * For recording page's order, we use page_private(page).
  */
-static inline int page_is_buddy(struct page *page, struct page *buddy,
-                                                       unsigned int order)
+static inline int page_is_buddy(struct zone *zone, struct page *page,
+                               struct page *buddy, unsigned int order, int mt)
 {
        if (!pfn_valid_within(page_to_pfn(buddy)))
                return 0;
@@ -651,6 +651,15 @@ static inline int page_is_buddy(struct page *page, struct page *buddy,
        if (page_zone_id(page) != page_zone_id(buddy))
                return 0;
 
+       if (unlikely(has_isolate_pageblock(zone) &&
+               order >= pageblock_order)) {
+               int buddy_mt = get_pageblock_migratetype(buddy);
+
+               if (mt != buddy_mt && (is_migrate_isolate(mt) ||
+                                       is_migrate_isolate(buddy_mt)))
+                       return 0;
+       }
+
        VM_BUG_ON_PAGE(page_count(buddy) != 0, buddy);
 
        return 1;
@@ -698,17 +707,8 @@ static inline void __free_one_page(struct page *page,
        VM_BUG_ON_PAGE(page->flags & PAGE_FLAGS_CHECK_AT_PREP, page);
 
        VM_BUG_ON(migratetype == -1);
-       if (is_migrate_isolate(migratetype)) {
-               /*
-                * We restrict max order of merging to prevent merge
-                * between freepages on isolate pageblock and normal
-                * pageblock. Without this, pageblock isolation
-                * could cause incorrect freepage accounting.
-                */
-               max_order = min_t(unsigned int, MAX_ORDER, pageblock_order + 1);
-       } else {
+       if (!is_migrate_isolate(migratetype))
                __mod_zone_freepage_state(zone, 1 << order, migratetype);
-       }
 
        page_idx = pfn & ((1 << max_order) - 1);
 
@@ -718,7 +718,7 @@ static inline void __free_one_page(struct page *page,
        while (order < max_order - 1) {
                buddy_idx = __find_buddy_index(page_idx, order);
                buddy = page + (buddy_idx - page_idx);
-               if (!page_is_buddy(page, buddy, order))
+               if (!page_is_buddy(zone, page, buddy, order, migratetype))
                        break;
                /*
                 * Our buddy is free or it is CONFIG_DEBUG_PAGEALLOC guard page,
@@ -752,7 +752,8 @@ static inline void __free_one_page(struct page *page,
                higher_page = page + (combined_idx - page_idx);
                buddy_idx = __find_buddy_index(combined_idx, order + 1);
                higher_buddy = higher_page + (buddy_idx - combined_idx);
-               if (page_is_buddy(higher_page, higher_buddy, order + 1)) {
+               if (page_is_buddy(zone, higher_page, higher_buddy,
+                                       order + 1, migratetype)) {
                        list_add_tail(&page->lru,
                                &zone->free_area[order].free_list[migratetype]);
                        goto out;
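For reference, the effect of the check added to page_is_buddy() can be
restated outside the diff: once merging reaches pageblock_order, the
page and its buddy may sit in different pageblocks with different
migratetypes, and the merge must be refused when exactly one side is
isolated, since that is the case that would corrupt freepage
accounting. Below is a minimal standalone restatement of that
predicate; the enum and helper are simplified stand-ins, not the
kernel's types:

#include <stdbool.h>
#include <stdio.h>

/* Simplified stand-ins for the kernel's migratetype machinery. */
enum migratetype { MIGRATE_MOVABLE, MIGRATE_CMA, MIGRATE_ISOLATE };

static bool is_isolate(enum migratetype mt)
{
        return mt == MIGRATE_ISOLATE;
}

/* Mirrors the test added to page_is_buddy(): refuse the merge when the
 * migratetypes differ and either side is isolated.  Differing
 * non-isolate types (e.g. MOVABLE vs. CMA) still merge under this
 * predicate. */
static bool buddy_merge_allowed(enum migratetype mt, enum migratetype buddy_mt)
{
        if (mt != buddy_mt && (is_isolate(mt) || is_isolate(buddy_mt)))
                return false;
        return true;
}

int main(void)
{
        printf("%d\n", buddy_merge_allowed(MIGRATE_MOVABLE, MIGRATE_CMA));     /* 1 */
        printf("%d\n", buddy_merge_allowed(MIGRATE_MOVABLE, MIGRATE_ISOLATE)); /* 0 */
        printf("%d\n", buddy_merge_allowed(MIGRATE_ISOLATE, MIGRATE_ISOLATE)); /* 1 */
        return 0;
}

Note that the patch only evaluates this when has_isolate_pageblock(zone)
is true and order >= pageblock_order, so zones with no isolated
pageblocks never pay for the extra migratetype lookups.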