From patchwork Mon Dec 20 20:59:41 2021
From: "Matthew Wilcox (Oracle)"
To: Linus Torvalds
Cc: "Matthew Wilcox (Oracle)", David Hildenbrand, Andrew Morton, linux-mm@kvack.org
Subject: [PATCH 1/3] mm: Remove last argument of reuse_swap_page()
Date: Mon, 20 Dec 2021 20:59:41 +0000
Message-Id: <20211220205943.456187-1-willy@infradead.org>

None of the callers care about the total_map_swapcount argument any more.
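The shape of the change is the removal of a dead out-parameter. Below is a
minimal stand-alone sketch, not kernel code: the struct, its fields and the
counting logic are placeholders invented for illustration; only the
before/after calling convention mirrors the patch.

```c
#include <stdbool.h>
#include <stdio.h>

/* Placeholder standing in for struct page; the real structure and the
 * map/swap accounting are far more involved. */
struct page {
        int mapcount;
        int swapcount;
};

/* Old shape: an out-parameter that every remaining caller passed as NULL. */
static bool reuse_swap_page_old(struct page *page, int *total_map_swapcount)
{
        int total = page->mapcount + page->swapcount;

        if (total_map_swapcount)
                *total_map_swapcount = total;   /* computed, never consumed */
        return total == 1;
}

/* New shape: the unused out-parameter is simply dropped. */
static bool reuse_swap_page_new(struct page *page)
{
        return page->mapcount + page->swapcount == 1;
}

int main(void)
{
        struct page page = { .mapcount = 1, .swapcount = 0 };

        printf("old: %d\n", reuse_swap_page_old(&page, NULL)); /* callers passed NULL */
        printf("new: %d\n", reuse_swap_page_new(&page));
        return 0;
}
```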
Signed-off-by: Matthew Wilcox (Oracle)
Reviewed-by: William Kucharski
---
 include/linux/swap.h | 6 +++---
 mm/huge_memory.c     | 2 +-
 mm/khugepaged.c      | 2 +-
 mm/memory.c          | 2 +-
 mm/swapfile.c        | 8 +-------
 5 files changed, 7 insertions(+), 13 deletions(-)

diff --git a/include/linux/swap.h b/include/linux/swap.h
index d1ea44b31f19..bdccbf1efa61 100644
--- a/include/linux/swap.h
+++ b/include/linux/swap.h
@@ -514,7 +514,7 @@ extern int __swp_swapcount(swp_entry_t entry);
 extern int swp_swapcount(swp_entry_t entry);
 extern struct swap_info_struct *page_swap_info(struct page *);
 extern struct swap_info_struct *swp_swap_info(swp_entry_t entry);
-extern bool reuse_swap_page(struct page *, int *);
+extern bool reuse_swap_page(struct page *);
 extern int try_to_free_swap(struct page *);
 struct backing_dev_info;
 extern int init_swap_address_space(unsigned int type, unsigned long nr_pages);
@@ -680,8 +680,8 @@ static inline int swp_swapcount(swp_entry_t entry)
         return 0;
 }
 
-#define reuse_swap_page(page, total_map_swapcount) \
-        (page_trans_huge_mapcount(page, total_map_swapcount) == 1)
+#define reuse_swap_page(page) \
+        (page_trans_huge_mapcount(page, NULL) == 1)
 
 static inline int try_to_free_swap(struct page *page)
 {
diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index e5483347291c..b61fbe95c856 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -1322,7 +1322,7 @@ vm_fault_t do_huge_pmd_wp_page(struct vm_fault *vmf)
          * We can only reuse the page if nobody else maps the huge page or it's
          * part.
          */
-        if (reuse_swap_page(page, NULL)) {
+        if (reuse_swap_page(page)) {
                 pmd_t entry;
                 entry = pmd_mkyoung(orig_pmd);
                 entry = maybe_pmd_mkwrite(pmd_mkdirty(entry), vma);
diff --git a/mm/khugepaged.c b/mm/khugepaged.c
index e99101162f1a..11794bdf513a 100644
--- a/mm/khugepaged.c
+++ b/mm/khugepaged.c
@@ -681,7 +681,7 @@ static int __collapse_huge_page_isolate(struct vm_area_struct *vma,
                         goto out;
                 }
                 if (!pte_write(pteval) && PageSwapCache(page) &&
-                                !reuse_swap_page(page, NULL)) {
+                                !reuse_swap_page(page)) {
                         /*
                          * Page is in the swap cache and cannot be re-used.
                          * It cannot be collapsed into a THP.
diff --git a/mm/memory.c b/mm/memory.c
index 8f1de811a1dc..dd85fd07cb24 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -3626,7 +3626,7 @@ vm_fault_t do_swap_page(struct vm_fault *vmf)
         inc_mm_counter_fast(vma->vm_mm, MM_ANONPAGES);
         dec_mm_counter_fast(vma->vm_mm, MM_SWAPENTS);
         pte = mk_pte(page, vma->vm_page_prot);
-        if ((vmf->flags & FAULT_FLAG_WRITE) && reuse_swap_page(page, NULL)) {
+        if ((vmf->flags & FAULT_FLAG_WRITE) && reuse_swap_page(page)) {
                 pte = maybe_mkwrite(pte_mkdirty(pte), vma);
                 vmf->flags &= ~FAULT_FLAG_WRITE;
                 ret |= VM_FAULT_WRITE;
diff --git a/mm/swapfile.c b/mm/swapfile.c
index e59e08ef46e1..a4f48189300a 100644
--- a/mm/swapfile.c
+++ b/mm/swapfile.c
@@ -1668,12 +1668,8 @@ static int page_trans_huge_map_swapcount(struct page *page, int *total_mapcount,
  * to it. And as a side-effect, free up its swap: because the old content
  * on disk will never be read, and seeking back there to write new content
  * later would only waste time away from clustering.
- *
- * NOTE: total_map_swapcount should not be relied upon by the caller if
- * reuse_swap_page() returns false, but it may be always overwritten
- * (see the other implementation for CONFIG_SWAP=n).
  */
-bool reuse_swap_page(struct page *page, int *total_map_swapcount)
+bool reuse_swap_page(struct page *page)
 {
         int count, total_mapcount, total_swapcount;
 
@@ -1682,8 +1678,6 @@ bool reuse_swap_page(struct page *page, int *total_map_swapcount)
                 return false;
         count = page_trans_huge_map_swapcount(page, &total_mapcount,
                                               &total_swapcount);
-        if (total_map_swapcount)
-                *total_map_swapcount = total_mapcount + total_swapcount;
         if (count == 1 && PageSwapCache(page) &&
             (likely(!PageTransCompound(page)) ||
              /* The remaining swap count will be freed soon */

From patchwork Mon Dec 20 20:59:42 2021
From: "Matthew Wilcox (Oracle)"
To: Linus Torvalds
Cc: "Matthew Wilcox (Oracle)", David Hildenbrand, Andrew Morton, linux-mm@kvack.org
Subject: [PATCH 2/3] mm: Remove the total_mapcount argument from page_trans_huge_map_swapcount()
Date: Mon, 20 Dec 2021 20:59:42 +0000
Message-Id: <20211220205943.456187-2-willy@infradead.org>
In-Reply-To: <20211220205943.456187-1-willy@infradead.org>
References: <20211220205943.456187-1-willy@infradead.org>

Now that reuse_swap_page() no longer reports the combined map/swap count
to its caller, it doesn't need to request the total mapcount from
page_trans_huge_map_swapcount().
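The pattern mirrors patch 1: the function computed two aggregates but only
one is still consumed, so the unused output and its bookkeeping go away. A
stand-alone sketch of before and after, assuming invented array inputs that
stand in for the per-subpage _mapcount and swap_count values (not the
kernel's data structures):

```c
#include <stdio.h>

#define NR_SUBPAGES 4

/* Old shape: also accumulate a total mapcount that nobody reads any more,
 * alongside the per-subpage maximum that callers actually use. */
static int map_swapcount_old(const int mapcounts[], const int swapcounts[],
                             int *total_mapcount, int *total_swapcount)
{
        int i, max = 0, tmap = 0, tswap = 0;

        for (i = 0; i < NR_SUBPAGES; i++) {
                int sum = mapcounts[i] + swapcounts[i];

                tmap += mapcounts[i];
                tswap += swapcounts[i];
                if (sum > max)
                        max = sum;
        }
        if (total_mapcount)
                *total_mapcount = tmap;         /* dead output */
        if (total_swapcount)
                *total_swapcount = tswap;
        return max;
}

/* New shape: only the output that is still consumed survives. */
static int map_swapcount_new(const int mapcounts[], const int swapcounts[],
                             int *total_swapcount)
{
        int i, max = 0, tswap = 0;

        for (i = 0; i < NR_SUBPAGES; i++) {
                int sum = mapcounts[i] + swapcounts[i];

                tswap += swapcounts[i];
                if (sum > max)
                        max = sum;
        }
        if (total_swapcount)
                *total_swapcount = tswap;
        return max;
}

int main(void)
{
        int maps[NR_SUBPAGES] = { 1, 1, 2, 1 };
        int swaps[NR_SUBPAGES] = { 0, 1, 0, 0 };
        int tswap;

        printf("old max: %d\n", map_swapcount_old(maps, swaps, NULL, &tswap));
        printf("new max: %d (total swapcount %d)\n",
               map_swapcount_new(maps, swaps, &tswap), tswap);
        return 0;
}
```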
Signed-off-by: Matthew Wilcox (Oracle)
---
 mm/swapfile.c | 32 ++++++++++++--------------------
 1 file changed, 12 insertions(+), 20 deletions(-)

diff --git a/mm/swapfile.c b/mm/swapfile.c
index a4f48189300a..cb1a04135804 100644
--- a/mm/swapfile.c
+++ b/mm/swapfile.c
@@ -1601,31 +1601,30 @@ static bool page_swapped(struct page *page)
         return false;
 }
 
-static int page_trans_huge_map_swapcount(struct page *page, int *total_mapcount,
+static int page_trans_huge_map_swapcount(struct page *page,
                                          int *total_swapcount)
 {
-        int i, map_swapcount, _total_mapcount, _total_swapcount;
+        int i, map_swapcount, _total_swapcount;
         unsigned long offset = 0;
         struct swap_info_struct *si;
         struct swap_cluster_info *ci = NULL;
         unsigned char *map = NULL;
-        int mapcount, swapcount = 0;
+        int swapcount = 0;
 
         /* hugetlbfs shouldn't call it */
         VM_BUG_ON_PAGE(PageHuge(page), page);
 
         if (!IS_ENABLED(CONFIG_THP_SWAP) || likely(!PageTransCompound(page))) {
-                mapcount = page_trans_huge_mapcount(page, total_mapcount);
                 if (PageSwapCache(page))
                         swapcount = page_swapcount(page);
                 if (total_swapcount)
                         *total_swapcount = swapcount;
-                return mapcount + swapcount;
+                return swapcount + page_trans_huge_mapcount(page, NULL);
         }
 
         page = compound_head(page);
 
-        _total_mapcount = _total_swapcount = map_swapcount = 0;
+        _total_swapcount = map_swapcount = 0;
         if (PageSwapCache(page)) {
                 swp_entry_t entry;
 
@@ -1639,8 +1638,7 @@ static int page_trans_huge_map_swapcount(struct page *page, int *total_mapcount,
         if (map)
                 ci = lock_cluster(si, offset);
         for (i = 0; i < HPAGE_PMD_NR; i++) {
-                mapcount = atomic_read(&page[i]._mapcount) + 1;
-                _total_mapcount += mapcount;
+                int mapcount = atomic_read(&page[i]._mapcount) + 1;
                 if (map) {
                         swapcount = swap_count(map[offset + i]);
                         _total_swapcount += swapcount;
@@ -1648,19 +1646,14 @@ static int page_trans_huge_map_swapcount(struct page *page, int *total_mapcount,
                 map_swapcount = max(map_swapcount, mapcount + swapcount);
         }
         unlock_cluster(ci);
-        if (PageDoubleMap(page)) {
+
+        if (PageDoubleMap(page))
                 map_swapcount -= 1;
-                _total_mapcount -= HPAGE_PMD_NR;
-        }
-        mapcount = compound_mapcount(page);
-        map_swapcount += mapcount;
-        _total_mapcount += mapcount;
-        if (total_mapcount)
-                *total_mapcount = _total_mapcount;
+
         if (total_swapcount)
                 *total_swapcount = _total_swapcount;
-        return map_swapcount;
+        return map_swapcount + compound_mapcount(page);
 }
 
 /*
@@ -1671,13 +1664,12 @@ static int page_trans_huge_map_swapcount(struct page *page, int *total_mapcount,
  */
 bool reuse_swap_page(struct page *page)
 {
-        int count, total_mapcount, total_swapcount;
+        int count, total_swapcount;
 
         VM_BUG_ON_PAGE(!PageLocked(page), page);
         if (unlikely(PageKsm(page)))
                 return false;
-        count = page_trans_huge_map_swapcount(page, &total_mapcount,
-                                              &total_swapcount);
+        count = page_trans_huge_map_swapcount(page, &total_swapcount);
         if (count == 1 && PageSwapCache(page) &&
             (likely(!PageTransCompound(page)) ||
              /* The remaining swap count will be freed soon */

From patchwork Mon Dec 20 20:59:43 2021
From: "Matthew Wilcox (Oracle)"
To: Linus Torvalds
Cc: "Matthew Wilcox (Oracle)", David Hildenbrand, Andrew Morton, linux-mm@kvack.org
Subject: [PATCH 3/3] mm: Remove the total_mapcount argument from page_trans_huge_mapcount()
Date: Mon, 20 Dec 2021 20:59:43 +0000
Message-Id: <20211220205943.456187-3-willy@infradead.org>
In-Reply-To: <20211220205943.456187-1-willy@infradead.org>
References: <20211220205943.456187-1-willy@infradead.org>

All callers pass NULL, so we can stop calculating the value we would store
in it.
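What survives the cleanup is the "maximum per-subpage mapcount, corrected
for double mapping, plus the compound mapcount" computation. A stand-alone
model of that arithmetic is below; the argument values are illustrative
only, and plain integers stand in for the atomic per-subpage _mapcounts,
PageDoubleMap() and compound_mapcount() that the real function reads under
the page lock:

```c
#include <stdio.h>

#define THP_NR_PAGES 4

/* Model of what page_trans_huge_mapcount() still computes: the maximum
 * per-subpage mapcount, minus one if the page is flagged double-mapped
 * (the per-subpage counts then carry one extra reference), plus the
 * compound (PMD-level) mapcount. */
static int thp_mapcount(const int subpage_mapcount[], int double_mapped,
                        int compound_mapcount)
{
        int i, ret = 0;

        for (i = 0; i < THP_NR_PAGES; i++) {
                if (subpage_mapcount[i] > ret)
                        ret = subpage_mapcount[i];
        }
        if (double_mapped)
                ret -= 1;
        return ret + compound_mapcount;
}

int main(void)
{
        int pmd_only[THP_NR_PAGES] = { 0, 0, 0, 0 };
        int pte_and_pmd[THP_NR_PAGES] = { 2, 1, 1, 1 };

        printf("PMD-mapped only: %d\n", thp_mapcount(pmd_only, 0, 1));
        printf("PTE+PMD mapped:  %d\n", thp_mapcount(pte_and_pmd, 1, 1));
        return 0;
}
```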
Signed-off-by: Matthew Wilcox (Oracle)
---
 include/linux/mm.h   | 10 +++-------
 include/linux/swap.h |  2 +-
 mm/huge_memory.c     | 30 ++++++++++--------------------
 mm/swapfile.c        |  2 +-
 4 files changed, 15 insertions(+), 29 deletions(-)

diff --git a/include/linux/mm.h b/include/linux/mm.h
index a7e4a9e7d807..286eb4155c80 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -840,19 +840,15 @@ static inline int page_mapcount(struct page *page)
 
 #ifdef CONFIG_TRANSPARENT_HUGEPAGE
 int total_mapcount(struct page *page);
-int page_trans_huge_mapcount(struct page *page, int *total_mapcount);
+int page_trans_huge_mapcount(struct page *page);
 #else
 static inline int total_mapcount(struct page *page)
 {
         return page_mapcount(page);
 }
-static inline int page_trans_huge_mapcount(struct page *page,
-                                           int *total_mapcount)
+static inline int page_trans_huge_mapcount(struct page *page)
 {
-        int mapcount = page_mapcount(page);
-        if (total_mapcount)
-                *total_mapcount = mapcount;
-        return mapcount;
+        return page_mapcount(page);
 }
 #endif
 
diff --git a/include/linux/swap.h b/include/linux/swap.h
index bdccbf1efa61..1d38d9475c4d 100644
--- a/include/linux/swap.h
+++ b/include/linux/swap.h
@@ -681,7 +681,7 @@ static inline int swp_swapcount(swp_entry_t entry)
 }
 
 #define reuse_swap_page(page) \
-        (page_trans_huge_mapcount(page, NULL) == 1)
+        (page_trans_huge_mapcount(page) == 1)
 
 static inline int try_to_free_swap(struct page *page)
 {
diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index b61fbe95c856..6ed86a8f6a5b 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -2542,38 +2542,28 @@ int total_mapcount(struct page *page)
  * need full accuracy to avoid breaking page pinning, because
  * page_trans_huge_mapcount() is slower than page_mapcount().
  */
-int page_trans_huge_mapcount(struct page *page, int *total_mapcount)
+int page_trans_huge_mapcount(struct page *page)
 {
-        int i, ret, _total_mapcount, mapcount;
+        int i, ret;
 
         /* hugetlbfs shouldn't call it */
         VM_BUG_ON_PAGE(PageHuge(page), page);
 
-        if (likely(!PageTransCompound(page))) {
-                mapcount = atomic_read(&page->_mapcount) + 1;
-                if (total_mapcount)
-                        *total_mapcount = mapcount;
-                return mapcount;
-        }
+        if (likely(!PageTransCompound(page)))
+                return atomic_read(&page->_mapcount) + 1;
 
         page = compound_head(page);
 
-        _total_mapcount = ret = 0;
+        ret = 0;
         for (i = 0; i < thp_nr_pages(page); i++) {
-                mapcount = atomic_read(&page[i]._mapcount) + 1;
+                int mapcount = atomic_read(&page[i]._mapcount) + 1;
                 ret = max(ret, mapcount);
-                _total_mapcount += mapcount;
         }
-        if (PageDoubleMap(page)) {
+
+        if (PageDoubleMap(page))
                 ret -= 1;
-                _total_mapcount -= thp_nr_pages(page);
-        }
-        mapcount = compound_mapcount(page);
-        ret += mapcount;
-        _total_mapcount += mapcount;
-        if (total_mapcount)
-                *total_mapcount = _total_mapcount;
-        return ret;
+
+        return ret + compound_mapcount(page);
 }
 
 /* Racy check whether the huge page can be split */
diff --git a/mm/swapfile.c b/mm/swapfile.c
index cb1a04135804..7d19c0facce2 100644
--- a/mm/swapfile.c
+++ b/mm/swapfile.c
@@ -1619,7 +1619,7 @@ static int page_trans_huge_map_swapcount(struct page *page,
                 swapcount = page_swapcount(page);
                 if (total_swapcount)
                         *total_swapcount = swapcount;
-                return swapcount + page_trans_huge_mapcount(page, NULL);
+                return swapcount + page_trans_huge_mapcount(page);
         }
 
         page = compound_head(page);