From patchwork Mon May 22 07:09:02 2023
X-Patchwork-Submitter: "Huang, Ying"
X-Patchwork-Id: 13249899
From: Huang Ying
To: Andrew Morton
Cc: linux-mm@kvack.org, linux-kernel@vger.kernel.org, Huang Ying, David Hildenbrand, Hugh Dickins, Johannes Weiner, Matthew Wilcox, Michal Hocko, Minchan Kim, Tim Chen, Yang Shi, Yu Zhao
Subject: [PATCH -V2 2/5] swap, __read_swap_cache_async(): enlarge get/put_swap_device protection range
Date: Mon, 22 May 2023 15:09:02 +0800
Message-Id: <20230522070905.16773-3-ying.huang@intel.com>
X-Mailer: git-send-email 2.39.2
In-Reply-To: <20230522070905.16773-1-ying.huang@intel.com>
References: <20230522070905.16773-1-ying.huang@intel.com>
MIME-Version: 1.0
This makes the function a little easier to understand, because we don't
need to consider swapoff in the middle of it.  And it makes it possible
to remove the get/put_swap_device() calls in some functions called by
__read_swap_cache_async().

Signed-off-by: "Huang, Ying"
Cc: David Hildenbrand
Cc: Hugh Dickins
Cc: Johannes Weiner
Cc: Matthew Wilcox
Cc: Michal Hocko
Cc: Minchan Kim
Cc: Tim Chen
Cc: Yang Shi
Cc: Yu Zhao
---
 mm/swap_state.c | 33 ++++++++++++++++++++++-----------
 1 file changed, 22 insertions(+), 11 deletions(-)

diff --git a/mm/swap_state.c b/mm/swap_state.c
index b76a65ac28b3..a1028fe7214e 100644
--- a/mm/swap_state.c
+++ b/mm/swap_state.c
@@ -417,9 +417,13 @@ struct page *__read_swap_cache_async(swp_entry_t entry, gfp_t gfp_mask,
 {
 	struct swap_info_struct *si;
 	struct folio *folio;
+	struct page *page;
 	void *shadow = NULL;
 
 	*new_page_allocated = false;
+	si = get_swap_device(entry);
+	if (!si)
+		return NULL;
 
 	for (;;) {
 		int err;
@@ -428,14 +432,12 @@ struct page *__read_swap_cache_async(swp_entry_t entry, gfp_t gfp_mask,
 		 * called after swap_cache_get_folio() failed, re-calling
 		 * that would confuse statistics.
 		 */
-		si = get_swap_device(entry);
-		if (!si)
-			return NULL;
 		folio = filemap_get_folio(swap_address_space(entry),
 						swp_offset(entry));
-		put_swap_device(si);
-		if (!IS_ERR(folio))
-			return folio_file_page(folio, swp_offset(entry));
+		if (!IS_ERR(folio)) {
+			page = folio_file_page(folio, swp_offset(entry));
+			goto got_page;
+		}
 
 		/*
 		 * Just skip read ahead for unused swap slot.
@@ -445,8 +447,8 @@ struct page *__read_swap_cache_async(swp_entry_t entry, gfp_t gfp_mask,
 		 * as SWAP_HAS_CACHE.  That's done in later part of code or
 		 * else swap_off will be aborted if we return NULL.
 		 */
-		if (!__swp_swapcount(entry) && swap_slot_cache_enabled)
-			return NULL;
+		if (!swap_swapcount(si, entry) && swap_slot_cache_enabled)
+			goto fail;
 
 		/*
 		 * Get a new page to read into from swap.  Allocate it now,
@@ -455,7 +457,7 @@ struct page *__read_swap_cache_async(swp_entry_t entry, gfp_t gfp_mask,
 		 */
 		folio = vma_alloc_folio(gfp_mask, 0, vma, addr, false);
 		if (!folio)
-			return NULL;
+			goto fail;
 
 		/*
 		 * Swap entry may have been freed since our caller observed it.
@@ -466,7 +468,7 @@ struct page *__read_swap_cache_async(swp_entry_t entry, gfp_t gfp_mask,
 		folio_put(folio);
 		if (err != -EEXIST)
-			return NULL;
+			goto fail;
 
 		/*
 		 * We might race against __delete_from_swap_cache(), and
@@ -500,12 +502,17 @@ struct page *__read_swap_cache_async(swp_entry_t entry, gfp_t gfp_mask,
 	/* Caller will initiate read into locked folio */
 	folio_add_lru(folio);
 	*new_page_allocated = true;
-	return &folio->page;
+	page = &folio->page;
+got_page:
+	put_swap_device(si);
+	return page;
 
 fail_unlock:
 	put_swap_folio(folio, entry);
 	folio_unlock(folio);
 	folio_put(folio);
+fail:
+	put_swap_device(si);
 	return NULL;
 }
 
@@ -514,6 +521,10 @@ struct page *__read_swap_cache_async(swp_entry_t entry, gfp_t gfp_mask,
  * and reading the disk if it is not already cached.
  * A failure return means that either the page allocation failed or that
  * the swap entry is no longer in use.
+ *
+ * get/put_swap_device() aren't needed to call this function, because
+ * __read_swap_cache_async() calls them and swap_readpage() holds the
+ * swap cache folio lock.
  */
 struct page *read_swap_cache_async(swp_entry_t entry, gfp_t gfp_mask,
 			struct vm_area_struct *vma,