From patchwork Tue Sep 8 19:55:32 2020
X-Patchwork-Submitter: Matthew Wilcox
X-Patchwork-Id: 11764213
From: "Matthew Wilcox (Oracle)" <willy@infradead.org>
To: linux-mm@kvack.org, Andrew Morton
Cc: "Kirill A. Shutemov", Huang Ying, Matthew Wilcox
Subject: [PATCH 05/11] mm/huge_memory: Fix split assumption of page size
Date: Tue, 8 Sep 2020 20:55:32 +0100
Message-Id: <20200908195539.25896-6-willy@infradead.org>
In-Reply-To: <20200908195539.25896-1-willy@infradead.org>
References: <20200908195539.25896-1-willy@infradead.org>

From: "Kirill A. Shutemov"

File THPs may now be of arbitrary size, and we can't rely on that size
after doing the split, so remember the number of pages before we start
the split.

Signed-off-by: Kirill A. Shutemov
Signed-off-by: Matthew Wilcox (Oracle)
Reviewed-by: SeongJae Park
---
 mm/huge_memory.c | 15 ++++++++-------
 1 file changed, 8 insertions(+), 7 deletions(-)

diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index a882d770a812..7bf837c32e3f 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -2302,13 +2302,13 @@ static void unmap_page(struct page *page)
 	VM_BUG_ON_PAGE(!unmap_success, page);
 }
 
-static void remap_page(struct page *page)
+static void remap_page(struct page *page, unsigned int nr)
 {
 	int i;
 
 	if (PageTransHuge(page)) {
 		remove_migration_ptes(page, page, true);
 	} else {
-		for (i = 0; i < HPAGE_PMD_NR; i++)
+		for (i = 0; i < nr; i++)
 			remove_migration_ptes(page + i, page + i, true);
 	}
 }
@@ -2383,6 +2383,7 @@ static void __split_huge_page(struct page *page, struct list_head *list,
 	struct lruvec *lruvec;
 	struct address_space *swap_cache = NULL;
 	unsigned long offset = 0;
+	unsigned int nr = thp_nr_pages(head);
 	int i;
 
 	lruvec = mem_cgroup_page_lruvec(head, pgdat);
@@ -2398,7 +2399,7 @@ static void __split_huge_page(struct page *page, struct list_head *list,
 		xa_lock(&swap_cache->i_pages);
 	}
 
-	for (i = HPAGE_PMD_NR - 1; i >= 1; i--) {
+	for (i = nr - 1; i >= 1; i--) {
 		__split_huge_page_tail(head, i, lruvec, list);
 		/* Some pages can be beyond i_size: drop them from page cache */
 		if (head[i].index >= end) {
@@ -2418,7 +2419,7 @@ static void __split_huge_page(struct page *page, struct list_head *list,
 	ClearPageCompound(head);
 
-	split_page_owner(head, HPAGE_PMD_NR);
+	split_page_owner(head, nr);
 
 	/* See comment in __split_huge_page_tail() */
 	if (PageAnon(head)) {
@@ -2437,9 +2438,9 @@ static void __split_huge_page(struct page *page, struct list_head *list,
 	spin_unlock_irqrestore(&pgdat->lru_lock, flags);
 
-	remap_page(head);
+	remap_page(head, nr);
 
-	for (i = 0; i < HPAGE_PMD_NR; i++) {
+	for (i = 0; i < nr; i++) {
 		struct page *subpage = head + i;
 		if (subpage == page)
 			continue;
@@ -2693,7 +2694,7 @@ int split_huge_page_to_list(struct page *page, struct list_head *list)
 fail:	if (mapping)
 		xa_unlock(&mapping->i_pages);
 	spin_unlock_irqrestore(&pgdata->lru_lock, flags);
-	remap_page(head);
+	remap_page(head, thp_nr_pages(head));
 
 	ret = -EBUSY;
 }