From patchwork Mon Aug 21 16:08:48 2023
X-Patchwork-Submitter: David Hildenbrand
X-Patchwork-Id: 13359618
From: David Hildenbrand <david@redhat.com>
To: linux-kernel@vger.kernel.org
Cc: linux-mm@kvack.org, linux-arm-kernel@lists.infradead.org,
 David Hildenbrand, Andrew Morton, Matthew Wilcox, Peter Xu,
 Catalin Marinas, Will Deacon, Hugh Dickins, Seth Jennings,
 Dan Streetman, Vitaly Wool
Subject: [PATCH mm-unstable v1 3/4] mm/swap: inline folio_set_swap_entry() and
 folio_swap_entry()
Date: Mon, 21 Aug 2023 18:08:48 +0200
Message-ID: <20230821160849.531668-4-david@redhat.com>
In-Reply-To: <20230821160849.531668-1-david@redhat.com>
References: <20230821160849.531668-1-david@redhat.com>
List-ID: <linux-mm.kvack.org>

Let's simply work on the folio directly and remove the helpers.
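The conversion is purely mechanical, since both helpers were one-line wrappers around the folio's swap field. A minimal user-space sketch of the before/after pattern (using hypothetical, simplified stand-ins for the kernel's swp_entry_t and struct folio, which really live in include/linux/swap.h and include/linux/mm_types.h):

```c
#include <assert.h>

/* Hypothetical user-space stand-ins for the kernel types, for
 * illustration only. */
typedef struct { unsigned long val; } swp_entry_t;
struct folio { swp_entry_t swap; };

/* The removed helpers were thin wrappers around the field: */
static inline swp_entry_t folio_swap_entry(struct folio *folio)
{
	return folio->swap;
}

static inline void folio_set_swap_entry(struct folio *folio, swp_entry_t entry)
{
	folio->swap = entry;
}

/* After this patch, callers access the field directly: */
static inline unsigned long swap_val_direct(struct folio *folio)
{
	return folio->swap.val;	/* was folio_swap_entry(folio).val */
}
```

Every call site below follows this shape: folio_swap_entry(folio) becomes folio->swap, and folio_set_swap_entry(folio, entry) becomes folio->swap = entry.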
Suggested-by: Matthew Wilcox
Signed-off-by: David Hildenbrand
Reviewed-by: Chris Li
---
 include/linux/swap.h | 12 +-----------
 mm/memory.c          |  2 +-
 mm/shmem.c           |  6 +++---
 mm/swap_state.c      |  7 +++----
 mm/swapfile.c        |  2 +-
 mm/util.c            |  2 +-
 mm/vmscan.c          |  2 +-
 mm/zswap.c           |  4 ++--
 8 files changed, 13 insertions(+), 24 deletions(-)

diff --git a/include/linux/swap.h b/include/linux/swap.h
index 82859a1944f5..603acf813873 100644
--- a/include/linux/swap.h
+++ b/include/linux/swap.h
@@ -333,25 +333,15 @@ struct swap_info_struct {
 	 */
 };
 
-static inline swp_entry_t folio_swap_entry(struct folio *folio)
-{
-	return folio->swap;
-}
-
 static inline swp_entry_t page_swap_entry(struct page *page)
 {
 	struct folio *folio = page_folio(page);
-	swp_entry_t entry = folio_swap_entry(folio);
+	swp_entry_t entry = folio->swap;
 
 	entry.val += page - &folio->page;
 	return entry;
 }
 
-static inline void folio_set_swap_entry(struct folio *folio, swp_entry_t entry)
-{
-	folio->swap = entry;
-}
-
 /* linux/mm/workingset.c */
 bool workingset_test_recent(void *shadow, bool file, bool *workingset);
 void workingset_age_nonresident(struct lruvec *lruvec, unsigned long nr_pages);
diff --git a/mm/memory.c b/mm/memory.c
index ff13242c1589..c51800dbfa9b 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -3831,7 +3831,7 @@ vm_fault_t do_swap_page(struct vm_fault *vmf)
 				folio_add_lru(folio);
 
 				/* To provide entry to swap_readpage() */
-				folio_set_swap_entry(folio, entry);
+				folio->swap = entry;
 				swap_readpage(page, true, NULL);
 				folio->private = NULL;
 			}
diff --git a/mm/shmem.c b/mm/shmem.c
index 7a0c1e19d9f8..fc1afe9dfcfe 100644
--- a/mm/shmem.c
+++ b/mm/shmem.c
@@ -1657,7 +1657,7 @@ static int shmem_replace_folio(struct folio **foliop, gfp_t gfp,
 	int error;
 
 	old = *foliop;
-	entry = folio_swap_entry(old);
+	entry = old->swap;
 	swap_index = swp_offset(entry);
 	swap_mapping = swap_address_space(entry);
 
@@ -1678,7 +1678,7 @@ static int shmem_replace_folio(struct folio **foliop, gfp_t gfp,
 	__folio_set_locked(new);
 	__folio_set_swapbacked(new);
 	folio_mark_uptodate(new);
-	folio_set_swap_entry(new, entry);
+	new->swap = entry;
 	folio_set_swapcache(new);
 
 	/*
@@ -1800,7 +1800,7 @@ static int shmem_swapin_folio(struct inode *inode, pgoff_t index,
 	/* We have to do this with folio locked to prevent races */
 	folio_lock(folio);
 	if (!folio_test_swapcache(folio) ||
-	    folio_swap_entry(folio).val != swap.val ||
+	    folio->swap.val != swap.val ||
 	    !shmem_confirm_swap(mapping, index, swap)) {
 		error = -EEXIST;
 		goto unlock;
diff --git a/mm/swap_state.c b/mm/swap_state.c
index 2f2417810052..b3b14bd0dd64 100644
--- a/mm/swap_state.c
+++ b/mm/swap_state.c
@@ -100,7 +100,7 @@ int add_to_swap_cache(struct folio *folio, swp_entry_t entry,
 
 	folio_ref_add(folio, nr);
 	folio_set_swapcache(folio);
-	folio_set_swap_entry(folio, entry);
+	folio->swap = entry;
 
 	do {
 		xas_lock_irq(&xas);
@@ -156,8 +156,7 @@ void __delete_from_swap_cache(struct folio *folio,
 		VM_BUG_ON_PAGE(entry != folio, entry);
 		xas_next(&xas);
 	}
-	entry.val = 0;
-	folio_set_swap_entry(folio, entry);
+	folio->swap.val = 0;
 	folio_clear_swapcache(folio);
 	address_space->nrpages -= nr;
 	__node_stat_mod_folio(folio, NR_FILE_PAGES, -nr);
@@ -233,7 +232,7 @@ bool add_to_swap(struct folio *folio)
  */
 void delete_from_swap_cache(struct folio *folio)
 {
-	swp_entry_t entry = folio_swap_entry(folio);
+	swp_entry_t entry = folio->swap;
 	struct address_space *address_space = swap_address_space(entry);
 
 	xa_lock_irq(&address_space->i_pages);
diff --git a/mm/swapfile.c b/mm/swapfile.c
index bd9d904671b9..e52f486834eb 100644
--- a/mm/swapfile.c
+++ b/mm/swapfile.c
@@ -1536,7 +1536,7 @@ static bool swap_page_trans_huge_swapped(struct swap_info_struct *si,
 
 static bool folio_swapped(struct folio *folio)
 {
-	swp_entry_t entry = folio_swap_entry(folio);
+	swp_entry_t entry = folio->swap;
 	struct swap_info_struct *si = _swap_info_get(entry);
 
 	if (!si)
diff --git a/mm/util.c b/mm/util.c
index cde229b05eb3..f31e2ca62cfa 100644
--- a/mm/util.c
+++ b/mm/util.c
@@ -764,7 +764,7 @@ struct address_space *folio_mapping(struct folio *folio)
 		return NULL;
 
 	if (unlikely(folio_test_swapcache(folio)))
-		return swap_address_space(folio_swap_entry(folio));
+		return swap_address_space(folio->swap);
 
 	mapping = folio->mapping;
 	if ((unsigned long)mapping & PAGE_MAPPING_FLAGS)
diff --git a/mm/vmscan.c b/mm/vmscan.c
index c7c149cb8d66..6f13394b112e 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -1423,7 +1423,7 @@ static int __remove_mapping(struct address_space *mapping, struct folio *folio,
 	}
 
 	if (folio_test_swapcache(folio)) {
-		swp_entry_t swap = folio_swap_entry(folio);
+		swp_entry_t swap = folio->swap;
 
 		if (reclaimed && !mapping_exiting(mapping))
 			shadow = workingset_eviction(folio, target_memcg);
diff --git a/mm/zswap.c b/mm/zswap.c
index 7300b98d4a03..412b1409a0d7 100644
--- a/mm/zswap.c
+++ b/mm/zswap.c
@@ -1190,7 +1190,7 @@ static void zswap_fill_page(void *ptr, unsigned long value)
 
 bool zswap_store(struct folio *folio)
 {
-	swp_entry_t swp = folio_swap_entry(folio);
+	swp_entry_t swp = folio->swap;
 	int type = swp_type(swp);
 	pgoff_t offset = swp_offset(swp);
 	struct page *page = &folio->page;
@@ -1370,7 +1370,7 @@ bool zswap_store(struct folio *folio)
 
 bool zswap_load(struct folio *folio)
 {
-	swp_entry_t swp = folio_swap_entry(folio);
+	swp_entry_t swp = folio->swap;
 	int type = swp_type(swp);
 	pgoff_t offset = swp_offset(swp);
 	struct page *page = &folio->page;