From patchwork Thu Jul 11 02:13:13 2024
X-Patchwork-Submitter: Yu Zhao
X-Patchwork-Id: 13729930
Date: Wed, 10 Jul 2024 20:13:13 -0600
In-Reply-To: <20240711021317.596178-1-yuzhao@google.com>
References: <20240711021317.596178-1-yuzhao@google.com>
X-Mailer: git-send-email 2.45.2.803.g4e1b14247a-goog
Message-ID: <20240711021317.596178-2-yuzhao@google.com>
Subject: [PATCH mm-unstable v1 1/5] mm/swap: reduce indentation level
From: Yu Zhao
To: Andrew Morton
Cc: linux-mm@kvack.org, linux-kernel@vger.kernel.org, Yu Zhao

Reduce indentation level by returning directly when there is no cleanup
needed, i.e.,

  if (condition) {              |       if (condition) {
          do_this();            |               do_this();
          return;               |               return;
  } else {                      |       }
          do_that();            |
  }                             |       do_that();

and

  if (condition) {              |       if (!condition)
          do_this();            |               return;
          do_that();            |
  }                             |       do_this();
  return;                       |       do_that();

Presumably the old style became repetitive as the result of copy and
paste.

Signed-off-by: Yu Zhao
---
 mm/swap.c | 209 ++++++++++++++++++++++++++++--------------------
 1 file changed, 109 insertions(+), 100 deletions(-)

diff --git a/mm/swap.c b/mm/swap.c
index 9caf6b017cf0..952e4aac6eb1 100644
--- a/mm/swap.c
+++ b/mm/swap.c
@@ -117,7 +117,9 @@ void __folio_put(struct folio *folio)
         if (unlikely(folio_is_zone_device(folio))) {
                 free_zone_device_folio(folio);
                 return;
-        } else if (folio_test_hugetlb(folio)) {
+        }
+
+        if (folio_test_hugetlb(folio)) {
                 free_huge_folio(folio);
                 return;
         }
@@ -228,17 +230,19 @@ static void folio_batch_add_and_move(struct folio_batch *fbatch,
         if (folio_batch_add(fbatch, folio) && !folio_test_large(folio) &&
             !lru_cache_disabled())
                 return;
+
         folio_batch_move_lru(fbatch, move_fn);
 }
 
 static void lru_move_tail_fn(struct lruvec *lruvec, struct folio *folio)
 {
-        if (!folio_test_unevictable(folio)) {
-                lruvec_del_folio(lruvec, folio);
-                folio_clear_active(folio);
-                lruvec_add_folio_tail(lruvec, folio);
-                __count_vm_events(PGROTATED, folio_nr_pages(folio));
-        }
+        if (folio_test_unevictable(folio))
+                return;
+
+        lruvec_del_folio(lruvec, folio);
+        folio_clear_active(folio);
+        lruvec_add_folio_tail(lruvec, folio);
+        __count_vm_events(PGROTATED, folio_nr_pages(folio));
 }
 
 /*
@@ -250,22 +254,23 @@ static void lru_move_tail_fn(struct lruvec *lruvec, struct folio *folio)
  */
 void folio_rotate_reclaimable(struct folio *folio)
 {
-        if (!folio_test_locked(folio) && !folio_test_dirty(folio) &&
-            !folio_test_unevictable(folio)) {
-                struct folio_batch *fbatch;
-                unsigned long flags;
+        struct folio_batch *fbatch;
+        unsigned long flags;
 
-                folio_get(folio);
-                if (!folio_test_clear_lru(folio)) {
-                        folio_put(folio);
-                        return;
-                }
+        if (folio_test_locked(folio) || folio_test_dirty(folio) ||
+            folio_test_unevictable(folio))
+                return;
 
-                local_lock_irqsave(&lru_rotate.lock, flags);
-                fbatch = this_cpu_ptr(&lru_rotate.fbatch);
-                folio_batch_add_and_move(fbatch, folio, lru_move_tail_fn);
-                local_unlock_irqrestore(&lru_rotate.lock, flags);
+        folio_get(folio);
+        if (!folio_test_clear_lru(folio)) {
+                folio_put(folio);
+                return;
         }
+
+        local_lock_irqsave(&lru_rotate.lock, flags);
+        fbatch = this_cpu_ptr(&lru_rotate.fbatch);
+        folio_batch_add_and_move(fbatch, folio, lru_move_tail_fn);
+        local_unlock_irqrestore(&lru_rotate.lock, flags);
 }
 
 void lru_note_cost(struct lruvec *lruvec, bool file,
@@ -328,18 +333,19 @@ void lru_note_cost_refault(struct folio *folio)
 
 static void folio_activate_fn(struct lruvec *lruvec, struct folio *folio)
 {
-        if (!folio_test_active(folio) && !folio_test_unevictable(folio)) {
-                long nr_pages = folio_nr_pages(folio);
-
-                lruvec_del_folio(lruvec, folio);
-                folio_set_active(folio);
-                lruvec_add_folio(lruvec, folio);
-                trace_mm_lru_activate(folio);
-
-                __count_vm_events(PGACTIVATE, nr_pages);
-                __count_memcg_events(lruvec_memcg(lruvec), PGACTIVATE,
-                                nr_pages);
-        }
+        long nr_pages = folio_nr_pages(folio);
+
+        if (folio_test_active(folio) || folio_test_unevictable(folio))
+                return;
+
+
+        lruvec_del_folio(lruvec, folio);
+        folio_set_active(folio);
+        lruvec_add_folio(lruvec, folio);
+        trace_mm_lru_activate(folio);
+
+        __count_vm_events(PGACTIVATE, nr_pages);
+        __count_memcg_events(lruvec_memcg(lruvec), PGACTIVATE, nr_pages);
 }
 
 #ifdef CONFIG_SMP
@@ -353,20 +359,21 @@ static void folio_activate_drain(int cpu)
 
 void folio_activate(struct folio *folio)
 {
-        if (!folio_test_active(folio) && !folio_test_unevictable(folio)) {
-                struct folio_batch *fbatch;
+        struct folio_batch *fbatch;
 
-                folio_get(folio);
-                if (!folio_test_clear_lru(folio)) {
-                        folio_put(folio);
-                        return;
-                }
+        if (folio_test_active(folio) || folio_test_unevictable(folio))
+                return;
 
-                local_lock(&cpu_fbatches.lock);
-                fbatch = this_cpu_ptr(&cpu_fbatches.activate);
-                folio_batch_add_and_move(fbatch, folio, folio_activate_fn);
-                local_unlock(&cpu_fbatches.lock);
+        folio_get(folio);
+        if (!folio_test_clear_lru(folio)) {
+                folio_put(folio);
+                return;
         }
+
+        local_lock(&cpu_fbatches.lock);
+        fbatch = this_cpu_ptr(&cpu_fbatches.activate);
+        folio_batch_add_and_move(fbatch, folio, folio_activate_fn);
+        local_unlock(&cpu_fbatches.lock);
 }
 
 #else
@@ -378,12 +385,13 @@ void folio_activate(struct folio *folio)
 {
         struct lruvec *lruvec;
 
-        if (folio_test_clear_lru(folio)) {
-                lruvec = folio_lruvec_lock_irq(folio);
-                folio_activate_fn(lruvec, folio);
-                unlock_page_lruvec_irq(lruvec);
-                folio_set_lru(folio);
-        }
+        if (!folio_test_clear_lru(folio))
+                return;
+
+        lruvec = folio_lruvec_lock_irq(folio);
+        folio_activate_fn(lruvec, folio);
+        unlock_page_lruvec_irq(lruvec);
+        folio_set_lru(folio);
 }
 #endif
 
@@ -610,41 +618,41 @@ static void lru_deactivate_file_fn(struct lruvec *lruvec, struct folio *folio)
 
 static void lru_deactivate_fn(struct lruvec *lruvec, struct folio *folio)
 {
-        if (!folio_test_unevictable(folio) && (folio_test_active(folio) || lru_gen_enabled())) {
-                long nr_pages = folio_nr_pages(folio);
+        long nr_pages = folio_nr_pages(folio);
 
-                lruvec_del_folio(lruvec, folio);
-                folio_clear_active(folio);
-                folio_clear_referenced(folio);
-                lruvec_add_folio(lruvec, folio);
+        if (folio_test_unevictable(folio) || !(folio_test_active(folio) || lru_gen_enabled()))
+                return;
 
-                __count_vm_events(PGDEACTIVATE, nr_pages);
-                __count_memcg_events(lruvec_memcg(lruvec), PGDEACTIVATE,
-                                nr_pages);
-        }
+        lruvec_del_folio(lruvec, folio);
+        folio_clear_active(folio);
+        folio_clear_referenced(folio);
+        lruvec_add_folio(lruvec, folio);
+
+        __count_vm_events(PGDEACTIVATE, nr_pages);
+        __count_memcg_events(lruvec_memcg(lruvec), PGDEACTIVATE, nr_pages);
 }
 
 static void lru_lazyfree_fn(struct lruvec *lruvec, struct folio *folio)
 {
-        if (folio_test_anon(folio) && folio_test_swapbacked(folio) &&
-            !folio_test_swapcache(folio) && !folio_test_unevictable(folio)) {
-                long nr_pages = folio_nr_pages(folio);
+        long nr_pages = folio_nr_pages(folio);
 
-                lruvec_del_folio(lruvec, folio);
-                folio_clear_active(folio);
-                folio_clear_referenced(folio);
-                /*
-                 * Lazyfree folios are clean anonymous folios. They have
-                 * the swapbacked flag cleared, to distinguish them from normal
-                 * anonymous folios
-                 */
-                folio_clear_swapbacked(folio);
-                lruvec_add_folio(lruvec, folio);
+        if (!folio_test_anon(folio) || !folio_test_swapbacked(folio) ||
+            folio_test_swapcache(folio) || folio_test_unevictable(folio))
+                return;
 
-                __count_vm_events(PGLAZYFREE, nr_pages);
-                __count_memcg_events(lruvec_memcg(lruvec), PGLAZYFREE,
-                                nr_pages);
-        }
+        lruvec_del_folio(lruvec, folio);
+        folio_clear_active(folio);
+        folio_clear_referenced(folio);
+        /*
+         * Lazyfree folios are clean anonymous folios. They have
+         * the swapbacked flag cleared, to distinguish them from normal
+         * anonymous folios
+         */
+        folio_clear_swapbacked(folio);
+        lruvec_add_folio(lruvec, folio);
+
+        __count_vm_events(PGLAZYFREE, nr_pages);
+        __count_memcg_events(lruvec_memcg(lruvec), PGLAZYFREE, nr_pages);
 }
 
 /*
@@ -726,21 +734,21 @@ void deactivate_file_folio(struct folio *folio)
  */
 void folio_deactivate(struct folio *folio)
 {
-        if (!folio_test_unevictable(folio) && (folio_test_active(folio) ||
-            lru_gen_enabled())) {
-                struct folio_batch *fbatch;
+        struct folio_batch *fbatch;
 
-                folio_get(folio);
-                if (!folio_test_clear_lru(folio)) {
-                        folio_put(folio);
-                        return;
-                }
+        if (folio_test_unevictable(folio) || !(folio_test_active(folio) || lru_gen_enabled()))
+                return;
 
-                local_lock(&cpu_fbatches.lock);
-                fbatch = this_cpu_ptr(&cpu_fbatches.lru_deactivate);
-                folio_batch_add_and_move(fbatch, folio, lru_deactivate_fn);
-                local_unlock(&cpu_fbatches.lock);
+        folio_get(folio);
+        if (!folio_test_clear_lru(folio)) {
+                folio_put(folio);
+                return;
         }
+
+        local_lock(&cpu_fbatches.lock);
+        fbatch = this_cpu_ptr(&cpu_fbatches.lru_deactivate);
+        folio_batch_add_and_move(fbatch, folio, lru_deactivate_fn);
+        local_unlock(&cpu_fbatches.lock);
 }
 
 /**
@@ -752,21 +760,22 @@ void folio_deactivate(struct folio *folio)
  */
 void folio_mark_lazyfree(struct folio *folio)
 {
-        if (folio_test_anon(folio) && folio_test_swapbacked(folio) &&
-            !folio_test_swapcache(folio) && !folio_test_unevictable(folio)) {
-                struct folio_batch *fbatch;
+        struct folio_batch *fbatch;
 
-                folio_get(folio);
-                if (!folio_test_clear_lru(folio)) {
-                        folio_put(folio);
-                        return;
-                }
+        if (!folio_test_anon(folio) || !folio_test_swapbacked(folio) ||
+            folio_test_swapcache(folio) || folio_test_unevictable(folio))
+                return;
 
-                local_lock(&cpu_fbatches.lock);
-                fbatch = this_cpu_ptr(&cpu_fbatches.lru_lazyfree);
-                folio_batch_add_and_move(fbatch, folio, lru_lazyfree_fn);
-                local_unlock(&cpu_fbatches.lock);
+        folio_get(folio);
+        if (!folio_test_clear_lru(folio)) {
+                folio_put(folio);
+                return;
         }
+
+        local_lock(&cpu_fbatches.lock);
+        fbatch = this_cpu_ptr(&cpu_fbatches.lru_lazyfree);
+        folio_batch_add_and_move(fbatch, folio, lru_lazyfree_fn);
+        local_unlock(&cpu_fbatches.lock);
 }
 
 void lru_add_drain(void)
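
For illustration only (this note is not part of the patch): a minimal
user-space C sketch of the same guard-clause transformation, with made-up
names, showing why the early return is only equivalent when the skipped
tail has no cleanup to perform:

  #include <stdbool.h>
  #include <stdio.h>

  /* Old style: the entire body nests under one condition. */
  static void process_old(bool ready, int value)
  {
          if (ready) {
                  printf("processing %d\n", value);
                  printf("done\n");
          }
  }

  /*
   * New style: bail out first, then keep the main path at one indentation
   * level. This is a pure refactor only because nothing has to be undone
   * on the early-return path; if a lock or reference were taken before
   * the check, it would have to be dropped before returning.
   */
  static void process_new(bool ready, int value)
  {
          if (!ready)
                  return;

          printf("processing %d\n", value);
          printf("done\n");
  }

  int main(void)
  {
          process_old(true, 1);
          process_new(true, 1);
          process_old(false, 2);  /* both are no-ops when !ready */
          process_new(false, 2);
          return 0;
  }

Both functions produce the same output for the same inputs; only the shape
of the control flow differs.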