From patchwork Fri Oct 16 02:42:10 2020
X-Patchwork-Submitter: Andrew Morton
X-Patchwork-Id: 11840545
Date: Thu, 15 Oct 2020 19:42:10 -0700
From: Andrew Morton
To: akpm@linux-foundation.org, kirill.shutemov@linux.intel.com, linux-mm@kvack.org,
 mm-commits@vger.kernel.org, sjpark@amazon.de, torvalds@linux-foundation.org,
 willy@infradead.org, ying.huang@intel.com
Subject: [patch 021/156] mm/page_owner: change split_page_owner to take a count
Message-ID: <20201016024210.g2-PNM3OR%akpm@linux-foundation.org>
In-Reply-To: <20201015192732.f448da14e9854c7cb7299956@linux-foundation.org>

From: "Matthew Wilcox (Oracle)"
Subject: mm/page_owner: change split_page_owner to take a count

The implementation of split_page_owner() prefers a count rather than the
old order of the page.  When we support a variable size THP, we won't
have the order at this point, but we will have the number of pages.
So change the interface to what the caller and callee would prefer.

Link: https://lkml.kernel.org/r/20200908195539.25896-4-willy@infradead.org
Signed-off-by: Matthew Wilcox (Oracle)
Acked-by: Kirill A. Shutemov
Reviewed-by: SeongJae Park
Cc: Huang Ying
Signed-off-by: Andrew Morton
---

 include/linux/page_owner.h |    6 +++---
 mm/huge_memory.c           |    2 +-
 mm/page_owner.c            |    4 ++--
 3 files changed, 6 insertions(+), 6 deletions(-)

--- a/include/linux/page_owner.h~mm-page_owner-change-split_page_owner-to-take-a-count
+++ a/include/linux/page_owner.h
@@ -11,7 +11,7 @@ extern struct page_ext_operations page_o
 extern void __reset_page_owner(struct page *page, unsigned int order);
 extern void __set_page_owner(struct page *page,
 			unsigned int order, gfp_t gfp_mask);
-extern void __split_page_owner(struct page *page, unsigned int order);
+extern void __split_page_owner(struct page *page, unsigned int nr);
 extern void __copy_page_owner(struct page *oldpage, struct page *newpage);
 extern void __set_page_owner_migrate_reason(struct page *page, int reason);
 extern void __dump_page_owner(struct page *page);
@@ -31,10 +31,10 @@ static inline void set_page_owner(struct
 		__set_page_owner(page, order, gfp_mask);
 }
 
-static inline void split_page_owner(struct page *page, unsigned int order)
+static inline void split_page_owner(struct page *page, unsigned int nr)
 {
 	if (static_branch_unlikely(&page_owner_inited))
-		__split_page_owner(page, order);
+		__split_page_owner(page, nr);
 }
 static inline void copy_page_owner(struct page *oldpage, struct page *newpage)
 {
--- a/mm/huge_memory.c~mm-page_owner-change-split_page_owner-to-take-a-count
+++ a/mm/huge_memory.c
@@ -2454,7 +2454,7 @@ static void __split_huge_page(struct pag
 
 	ClearPageCompound(head);
 
-	split_page_owner(head, HPAGE_PMD_ORDER);
+	split_page_owner(head, HPAGE_PMD_NR);
 
 	/* See comment in __split_huge_page_tail() */
 	if (PageAnon(head)) {
--- a/mm/page_owner.c~mm-page_owner-change-split_page_owner-to-take-a-count
+++ a/mm/page_owner.c
@@ -204,7 +204,7 @@ void __set_page_owner_migrate_reason(str
 	page_owner->last_migrate_reason = reason;
 }
 
-void __split_page_owner(struct page *page, unsigned int order)
+void __split_page_owner(struct page *page, unsigned int nr)
 {
 	int i;
 	struct page_ext *page_ext = lookup_page_ext(page);
@@ -213,7 +213,7 @@ void __split_page_owner(struct page *pag
 	if (unlikely(!page_ext))
 		return;
 
-	for (i = 0; i < (1 << order); i++) {
+	for (i = 0; i < nr; i++) {
 		page_owner = get_page_owner(page_ext);
 		page_owner->order = 0;
 		page_ext = page_ext_next(page_ext);
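
[Editorial note, not part of the patch] The sketch below is a minimal,
userspace-only illustration of the interface change described in the
changelog: the old callee derived the page count from the order
(nr = 1 << order), while the new signature takes the count directly, which
is why the THP split path now passes HPAGE_PMD_NR instead of
HPAGE_PMD_ORDER.  The mock_split_page_owner_* names and the hard-coded
order of 9 (x86-64 with 4 KiB pages) are illustrative assumptions, not the
kernel API.

/*
 * Illustrative sketch only -- ordinary userspace C, not kernel code.
 * mock_split_page_owner_old/new are hypothetical stand-ins for the
 * before/after signatures shown in the diff above.
 */
#include <stdio.h>

/* Old shape: the callee expands the order into a page count itself. */
static void mock_split_page_owner_old(unsigned int order)
{
	unsigned int nr = 1u << order;	/* an order-N compound page has 2^N pages */

	printf("old: order=%u -> reset owner data on %u pages\n", order, nr);
}

/* New shape: the caller passes the page count directly. */
static void mock_split_page_owner_new(unsigned int nr)
{
	printf("new: reset owner data on %u pages\n", nr);
}

int main(void)
{
	/* Assumption: x86-64 with 4 KiB pages, where HPAGE_PMD_ORDER is 9. */
	unsigned int hpage_pmd_order = 9;
	unsigned int hpage_pmd_nr = 1u << hpage_pmd_order;	/* 512, i.e. HPAGE_PMD_NR */

	mock_split_page_owner_old(hpage_pmd_order);	/* pre-patch call shape */
	mock_split_page_owner_new(hpage_pmd_nr);	/* post-patch call shape */
	return 0;
}

The benefit, as the changelog notes, is that a future variable-sized THP
split knows how many pages it is producing even when there is no single
order available, and the callee no longer needs the 1 << order conversion.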