From patchwork Wed Jul 31 13:31:01 2024
X-Patchwork-Submitter: Zhaoyu Liu <liuzhaoyu.zackary@bytedance.com>
X-Patchwork-Id: 13748801
Content-Type: text/plain; charset="utf-8"
Date: Wed, 31 Jul 2024 21:31:01 +0800
From: Zhaoyu Liu <liuzhaoyu.zackary@bytedance.com>
To: akpm@linux-foundation.org, kasong@tencent.com, willy@infradead.org, cerasuolodomenico@gmail.com
Cc: linux-mm@kvack.org, linux-kernel@vger.kernel.org
Subject: [PATCH] mm: swap: allocate folio only first time in __read_swap_cache_async()
Message-ID: <20240731133101.GA2096752@bytedance>
MIME-Version: 1.0
Content-Disposition: inline
When reading a shared swap page, __read_swap_cache_async() checks via
filemap_get_folio() whether SWAP_HAS_CACHE has been marked. If the swap
cache entry is not ready yet, the function loops and retries, and each
retry previously re-allocated the folio. Keep the newly allocated folio
across loop iterations so the allocation is done only once.
Signed-off-by: Zhaoyu Liu <liuzhaoyu.zackary@bytedance.com>
---
 mm/swap_state.c | 58 ++++++++++++++++++++++++++-----------------------
 1 file changed, 31 insertions(+), 27 deletions(-)

diff --git a/mm/swap_state.c b/mm/swap_state.c
index a1726e49a5eb..65370d148175 100644
--- a/mm/swap_state.c
+++ b/mm/swap_state.c
@@ -435,6 +435,8 @@ struct folio *__read_swap_cache_async(swp_entry_t entry, gfp_t gfp_mask,
 {
 	struct swap_info_struct *si;
 	struct folio *folio;
+	struct folio *new_folio = NULL;
+	struct folio *result = NULL;
 	void *shadow = NULL;
 
 	*new_page_allocated = false;
@@ -463,16 +465,19 @@ struct folio *__read_swap_cache_async(swp_entry_t entry, gfp_t gfp_mask,
 		 * else swap_off will be aborted if we return NULL.
 		 */
 		if (!swap_swapcount(si, entry) && swap_slot_cache_enabled)
-			goto fail_put_swap;
+			goto put_and_return;
 
 		/*
-		 * Get a new folio to read into from swap. Allocate it now,
-		 * before marking swap_map SWAP_HAS_CACHE, when -EEXIST will
-		 * cause any racers to loop around until we add it to cache.
+		 * Get a new folio to read into from swap. Allocate it now if
+		 * new_folio not exist, before marking swap_map SWAP_HAS_CACHE,
+		 * when -EEXIST will cause any racers to loop around until we
+		 * add it to cache.
 		 */
-		folio = folio_alloc_mpol(gfp_mask, 0, mpol, ilx, numa_node_id());
-		if (!folio)
-			goto fail_put_swap;
+		if (!new_folio) {
+			new_folio = folio_alloc_mpol(gfp_mask, 0, mpol, ilx, numa_node_id());
+			if (!new_folio)
+				goto put_and_return;
+		}
 
 		/*
 		 * Swap entry may have been freed since our caller observed it.
@@ -480,10 +485,8 @@ struct folio *__read_swap_cache_async(swp_entry_t entry, gfp_t gfp_mask,
 		err = swapcache_prepare(entry);
 		if (!err)
 			break;
-
-		folio_put(folio);
-		if (err != -EEXIST)
-			goto fail_put_swap;
+		else if (err != -EEXIST)
+			goto put_and_return;
 
 		/*
 		 * Protect against a recursive call to __read_swap_cache_async()
@@ -494,7 +497,7 @@ struct folio *__read_swap_cache_async(swp_entry_t entry, gfp_t gfp_mask,
 		 * __read_swap_cache_async() in the writeback path.
 		 */
 		if (skip_if_exists)
-			goto fail_put_swap;
+			goto put_and_return;
 
 		/*
 		 * We might race against __delete_from_swap_cache(), and
@@ -509,36 +512,37 @@ struct folio *__read_swap_cache_async(swp_entry_t entry, gfp_t gfp_mask,
 	/*
 	 * The swap entry is ours to swap in. Prepare the new folio.
 	 */
+	__folio_set_locked(new_folio);
+	__folio_set_swapbacked(new_folio);
 
-	__folio_set_locked(folio);
-	__folio_set_swapbacked(folio);
-
-	if (mem_cgroup_swapin_charge_folio(folio, NULL, gfp_mask, entry))
+	if (mem_cgroup_swapin_charge_folio(new_folio, NULL, gfp_mask, entry))
 		goto fail_unlock;
 
 	/* May fail (-ENOMEM) if XArray node allocation failed. */
-	if (add_to_swap_cache(folio, entry, gfp_mask & GFP_RECLAIM_MASK, &shadow))
+	if (add_to_swap_cache(new_folio, entry, gfp_mask & GFP_RECLAIM_MASK, &shadow))
 		goto fail_unlock;
 
 	mem_cgroup_swapin_uncharge_swap(entry);
 
 	if (shadow)
-		workingset_refault(folio, shadow);
+		workingset_refault(new_folio, shadow);
 
-	/* Caller will initiate read into locked folio */
-	folio_add_lru(folio);
+	/* Caller will initiate read into locked new_folio */
+	folio_add_lru(new_folio);
 	*new_page_allocated = true;
+	folio = new_folio;
 got_folio:
-	put_swap_device(si);
-	return folio;
+	result = folio;
+	goto put_and_return;
 
 fail_unlock:
-	put_swap_folio(folio, entry);
-	folio_unlock(folio);
-	folio_put(folio);
-fail_put_swap:
+	put_swap_folio(new_folio, entry);
+	folio_unlock(new_folio);
+put_and_return:
 	put_swap_device(si);
-	return NULL;
+	if (!(*new_page_allocated) && new_folio)
+		folio_put(new_folio);
+	return result;
 }
 
 /*