From patchwork Tue Jun 1 05:31:43 2021
X-Patchwork-Submitter: "Huang, Ying"
X-Patchwork-Id: 12290431
From: Huang Ying <ying.huang@intel.com>
To: Andrew Morton
Cc: linux-mm@kvack.org, linux-kernel@vger.kernel.org, Huang Ying,
    Johannes Weiner, Matthew Wilcox, Linus Torvalds, Peter Xu,
    Hugh Dickins, Mel Gorman, Rik van Riel, Andrea Arcangeli,
    Michal Hocko, Dave Hansen, Tim Chen
Subject: [PATCH] mm: free idle swap cache page after COW
Date: Tue, 1 Jun 2021 13:31:43 +0800
Message-Id: <20210601053143.1380078-1-ying.huang@intel.com>
X-Mailer: git-send-email 2.30.2

With commit 09854ba94c6a ("mm: do_wp_page() simplification"), after COW, the idle swap cache page (neither
the page nor the corresponding swap entry is mapped by any process) is left on the LRU list, possibly even on the active list or at the head of the inactive list. The page reclaimer may therefore spend considerable effort reclaiming these effectively unused pages.

To help page reclaim, with this patch we try to free the idle swap cache page after COW. To avoid introducing much overhead into the hot COW code path: a) there is almost zero overhead for the non-swap case, because PageSwapCache() is checked first; b) the page lock is acquired with trylock only.

To test the patch, we used the pmbench memory-accessing benchmark with a working set larger than the available memory, on a 2-socket Intel server with an NVMe SSD as the swap device. Test results show that the pmbench score increases by up to 23.8%, with the decreased swap cache size and swapin throughput.

Signed-off-by: "Huang, Ying" <ying.huang@intel.com>
Suggested-by: Johannes Weiner # use free_swap_cache()
Cc: Matthew Wilcox
Cc: Linus Torvalds
Cc: Peter Xu
Cc: Hugh Dickins
Cc: Mel Gorman
Cc: Rik van Riel
Cc: Andrea Arcangeli
Cc: Michal Hocko
Cc: Dave Hansen
Cc: Tim Chen
Acked-by: Johannes Weiner
---
 include/linux/swap.h | 5 +++++
 mm/memory.c          | 2 ++
 mm/swap_state.c      | 2 +-
 3 files changed, 8 insertions(+), 1 deletion(-)

diff --git a/include/linux/swap.h b/include/linux/swap.h
index 032485ee7597..bb4889369a22 100644
--- a/include/linux/swap.h
+++ b/include/linux/swap.h
@@ -451,6 +451,7 @@ extern void __delete_from_swap_cache(struct page *page,
 extern void delete_from_swap_cache(struct page *);
 extern void clear_shadow_from_swap_cache(int type, unsigned long begin,
 				unsigned long end);
+extern void free_swap_cache(struct page *);
 extern void free_page_and_swap_cache(struct page *);
 extern void free_pages_and_swap_cache(struct page **, int);
 extern struct page *lookup_swap_cache(swp_entry_t entry,
@@ -560,6 +561,10 @@ static inline struct address_space *swap_address_space(swp_entry_t entry)
 #define free_pages_and_swap_cache(pages, nr)	\
	release_pages((pages), (nr));
 
+static inline void free_swap_cache(struct page *page)
+{
+}
+
 static inline void show_swap_cache_info(void)
 {
 }
diff --git a/mm/memory.c b/mm/memory.c
index 2b7ffcbca175..d44425820240 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -3104,6 +3104,8 @@ static vm_fault_t wp_page_copy(struct vm_fault *vmf)
 			munlock_vma_page(old_page);
 			unlock_page(old_page);
 		}
+		if (page_copied)
+			free_swap_cache(old_page);
 		put_page(old_page);
 	}
 	return page_copied ? VM_FAULT_WRITE : 0;
diff --git a/mm/swap_state.c b/mm/swap_state.c
index b5a3dc8f47a1..95e391f46468 100644
--- a/mm/swap_state.c
+++ b/mm/swap_state.c
@@ -285,7 +285,7 @@ void clear_shadow_from_swap_cache(int type, unsigned long begin,
  * try_to_free_swap() _with_ the lock.
  * - Marcelo
  */
-static inline void free_swap_cache(struct page *page)
+void free_swap_cache(struct page *page)
 {
 	if (PageSwapCache(page) && !page_mapped(page) && trylock_page(page)) {
 		try_to_free_swap(page);