From patchwork Fri Aug 7 18:33:58 2020
X-Patchwork-Submitter: John Hubbard
X-Patchwork-Id: 11705995
From: John Hubbard
To: Andrew Morton
CC: LKML, "John Hubbard", "Kirill A. Shutemov"
Subject: [PATCH] mm, dump_page: rename head_mapcount() --> head_compound_mapcount()
Date: Fri, 7 Aug 2020 11:33:58 -0700
Message-ID: <20200807183358.105097-1-jhubbard@nvidia.com>
X-Mailer: git-send-email 2.28.0
In-Reply-To: <20200807164805.xm4ingj4crdiemol@box>
References: <20200807164805.xm4ingj4crdiemol@box>

And similarly, rename head_pincount() --> head_compound_pincount(). These
names are more accurate (or less misleading) than the original ones.

Cc: Qian Cai
Cc: Matthew Wilcox
Cc: Vlastimil Babka
Cc: Kirill A. Shutemov
Signed-off-by: John Hubbard
---

Hi,

This is a follow-up patch to v2 of "mm, dump_page: do not crash with bad
compound_mapcount()", which I see has already been sent as part of
Andrew's pull request. Otherwise I would have sent a v3.

Of course, if it's somehow not too late, then squashing this patch into
that one, effectively creating a v3 with the preferred names, would be a
nice touch.
thanks,
John Hubbard

 include/linux/mm.h | 8 ++++----
 mm/debug.c         | 6 +++---
 2 files changed, 7 insertions(+), 7 deletions(-)

diff --git a/include/linux/mm.h b/include/linux/mm.h
index 8ab941cf73f4..98d379d9806f 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -776,7 +776,7 @@ static inline void *kvcalloc(size_t n, size_t size, gfp_t flags)
 extern void kvfree(const void *addr);
 extern void kvfree_sensitive(const void *addr, size_t len);
 
-static inline int head_mapcount(struct page *head)
+static inline int head_compound_mapcount(struct page *head)
 {
 	return atomic_read(compound_mapcount_ptr(head)) + 1;
 }
@@ -790,7 +790,7 @@ static inline int compound_mapcount(struct page *page)
 {
 	VM_BUG_ON_PAGE(!PageCompound(page), page);
 	page = compound_head(page);
-	return head_mapcount(page);
+	return head_compound_mapcount(page);
 }
 
 /*
@@ -903,7 +903,7 @@ static inline bool hpage_pincount_available(struct page *page)
 	return PageCompound(page) && compound_order(page) > 1;
 }
 
-static inline int head_pincount(struct page *head)
+static inline int head_compound_pincount(struct page *head)
 {
 	return atomic_read(compound_pincount_ptr(head));
 }
@@ -912,7 +912,7 @@ static inline int compound_pincount(struct page *page)
 {
 	VM_BUG_ON_PAGE(!hpage_pincount_available(page), page);
 	page = compound_head(page);
-	return head_pincount(page);
+	return head_compound_pincount(page);
 }
 
 static inline void set_compound_order(struct page *page, unsigned int order)
diff --git a/mm/debug.c b/mm/debug.c
index 69b60637112b..a0c060abf1e7 100644
--- a/mm/debug.c
+++ b/mm/debug.c
@@ -102,12 +102,12 @@ void __dump_page(struct page *page, const char *reason)
 		if (hpage_pincount_available(page)) {
 			pr_warn("head:%p order:%u compound_mapcount:%d compound_pincount:%d\n",
 					head, compound_order(head),
-					head_mapcount(head),
-					head_pincount(head));
+					head_compound_mapcount(head),
+					head_compound_pincount(head));
 		} else {
 			pr_warn("head:%p order:%u compound_mapcount:%d\n",
 					head, compound_order(head),
-					head_mapcount(head));
+					head_compound_mapcount(head));
 		}
 	}
 	if (PageKsm(page))