From patchwork Wed Jan 18 00:18:26 2023
X-Patchwork-Submitter: "T.J. Alumbaugh"
X-Patchwork-Id: 13105288
Date: Wed, 18 Jan 2023 00:18:26 +0000
In-Reply-To: <20230118001827.1040870-1-talumbau@google.com>
Mime-Version: 1.0
References: <20230118001827.1040870-1-talumbau@google.com>
X-Mailer: git-send-email 2.39.0.314.g84b9a713c41-goog
Message-ID: <20230118001827.1040870-7-talumbau@google.com>
Subject: [PATCH mm-unstable v1 6/7] mm: multi-gen LRU: improve walk_pmd_range()
From: "T.J. Alumbaugh"
To: Andrew Morton
Cc: linux-mm@kvack.org, linux-kernel@vger.kernel.org, linux-mm@google.com, "T.J. Alumbaugh"
Alumbaugh" X-Rspam-User: X-Rspamd-Server: rspam02 X-Rspamd-Queue-Id: CAAC014000C X-Stat-Signature: z79o1txuq1hk5xzsktrdus19c94i9k3x X-HE-Tag: 1674001128-144874 X-HE-Meta: U2FsdGVkX199eowid3EFv0GeLshdR1U0XAJh8f62oJ9Y2YY+mPz4pjO35GvR+fLWHHQYN97lWMp9Ew0STKNPdCFTDP06bUREM6q0p9S+DmnxiyUvxNyGbS05mC0yjzOz1HT/S2QxYW7rx5AGIYXF4l+5Enfx0UJbWdD2wlMyvnXCKI4vr6xJHhZ+JbOmb6h76bDafrrG6z7/ivzPUOXkPOLL4J5FTS+3Mrs3dB6cMiZ+n1MLbLI7GlOAVp1HQKelsOeAz7kvRR6pTPzWyKzCvBQVlTDpKIaOjUev2lHX6lQ6ZvvnKTCYYQK2mhjbpeEd5KH+1n1burso3xaxv1LLD0/AGwxkeaFoFh61EQNTv2TX3W/6/A+2DGYuDQPguAJgXqKwskOZmEDviDMK27EgDf+f7KV95B9vSaWuTLmlJuSXROXimJkk4vYjApk6zIGjj//0XCVsSG9xeGNSQHSjZ/SPfin4hw+2HyggFcnsvVCBDgltdwcdFzslOjTOwwn3fa1dki38ZKOXSZ5HhgnSxm5RzhaGgJGA0X14XZ1vACQW736k0SOW52nDz3o1M0KN2KKBgiSLgDcJIN/oKwNedJso9R+t2poqZPoRYNcYt+6RJppvvRY0Ke27DjFelfMMrfIezHbSwQNUThlov+pQUsR4VsrWDFjh1ib6ZA8Zf7IHZnmfhJTUmQDiadPFq6LyWNgmX3S+AzCfMNmsBU3sDAeV6wTeMlNjzQ+RHlvtuF2uN7drhc371/U3MUqr5sNN/v4sxrd/LgKiVJ1C4CA9KEjZnr3wBYM69nio6goHvOqAFjX/QQ4cGrTDLgfj1N9hK2YjAq9G4JJnvPlqBjCfnLedRQo4/wxB7MuYLe+lTJnYaJ8H0+rTjFP7KEBqOxORf0Kc7BW82d0hFr0l7Zh1Tf4c5FEBc2hU8Cl2ctDgyZMGsjIHT9zZuafd0CcJpkKkFLeS7pQ+awpdxbCkUz0 7t6CD1bi w+b09jwEF2Wq7peU/dSHcaYV/ND5foTo6fIVhd9hdtL1uby7N7I3Tu4BgRm+nQ+HKoNxmnSNJoCABTdgNMwT+S1Zfv+tYo0tTzveh+gptSTinHg+lIOpWdallPCzLUjDVsTC3HITp+doZUBI1Yq0Z8Raz1F1pinqPgi805ppz+ATLl4vA26LSRiI0qEHMKGqDEsSc7bOxZM1brekkUuRvAkyFWhPLIVPymFyYFubxcjLUhCdpbikb3TmfCABj8R8NLDsq6dQvmb+BxtGtUb8FTMoAfR7GA0srMmkvlyYQQOaufkV9JJZz7glZV2bxKMf2WvV+i7p+1q6iUF3m8Y6AbaLu59tkIlchT6vCkRGvoSNOyjHMOOCYHq/k6on7xaO1DggJG454vU5PzFGpFXhe3u3nwJNAty1nggnrRzJjtOI6oxs= X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: Improve readability of walk_pmd_range() and walk_pmd_range_locked(). Signed-off-by: T.J. Alumbaugh --- mm/vmscan.c | 40 ++++++++++++++++++++-------------------- 1 file changed, 20 insertions(+), 20 deletions(-) diff --git a/mm/vmscan.c b/mm/vmscan.c index c2e6ad53447b..ff3b4aa3c31f 100644 --- a/mm/vmscan.c +++ b/mm/vmscan.c @@ -3999,8 +3999,8 @@ static bool walk_pte_range(pmd_t *pmd, unsigned long start, unsigned long end, } #if defined(CONFIG_TRANSPARENT_HUGEPAGE) || defined(CONFIG_ARCH_HAS_NONLEAF_PMD_YOUNG) -static void walk_pmd_range_locked(pud_t *pud, unsigned long next, struct vm_area_struct *vma, - struct mm_walk *args, unsigned long *bitmap, unsigned long *start) +static void walk_pmd_range_locked(pud_t *pud, unsigned long addr, struct vm_area_struct *vma, + struct mm_walk *args, unsigned long *bitmap, unsigned long *first) { int i; pmd_t *pmd; @@ -4013,18 +4013,19 @@ static void walk_pmd_range_locked(pud_t *pud, unsigned long next, struct vm_area VM_WARN_ON_ONCE(pud_leaf(*pud)); /* try to batch at most 1+MIN_LRU_BATCH+1 entries */ - if (*start == -1) { - *start = next; + if (*first == -1) { + *first = addr; + bitmap_zero(bitmap, MIN_LRU_BATCH); return; } - i = next == -1 ? 0 : pmd_index(next) - pmd_index(*start); + i = addr == -1 ? 0 : pmd_index(addr) - pmd_index(*first); if (i && i <= MIN_LRU_BATCH) { __set_bit(i - 1, bitmap); return; } - pmd = pmd_offset(pud, *start); + pmd = pmd_offset(pud, *first); ptl = pmd_lockptr(args->mm, pmd); if (!spin_trylock(ptl)) @@ -4035,15 +4036,16 @@ static void walk_pmd_range_locked(pud_t *pud, unsigned long next, struct vm_area do { unsigned long pfn; struct folio *folio; - unsigned long addr = i ? (*start & PMD_MASK) + i * PMD_SIZE : *start; + + /* don't round down the first address */ + addr = i ? 
(*first & PMD_MASK) + i * PMD_SIZE : *first; pfn = get_pmd_pfn(pmd[i], vma, addr); if (pfn == -1) goto next; if (!pmd_trans_huge(pmd[i])) { - if (arch_has_hw_nonleaf_pmd_young() && - get_cap(LRU_GEN_NONLEAF_YOUNG)) + if (arch_has_hw_nonleaf_pmd_young() && get_cap(LRU_GEN_NONLEAF_YOUNG)) pmdp_test_and_clear_young(vma, addr, pmd + i); goto next; } @@ -4072,12 +4074,11 @@ static void walk_pmd_range_locked(pud_t *pud, unsigned long next, struct vm_area arch_leave_lazy_mmu_mode(); spin_unlock(ptl); done: - *start = -1; - bitmap_zero(bitmap, MIN_LRU_BATCH); + *first = -1; } #else -static void walk_pmd_range_locked(pud_t *pud, unsigned long next, struct vm_area_struct *vma, - struct mm_walk *args, unsigned long *bitmap, unsigned long *start) +static void walk_pmd_range_locked(pud_t *pud, unsigned long addr, struct vm_area_struct *vma, + struct mm_walk *args, unsigned long *bitmap, unsigned long *first) { } #endif @@ -4090,9 +4091,9 @@ static void walk_pmd_range(pud_t *pud, unsigned long start, unsigned long end, unsigned long next; unsigned long addr; struct vm_area_struct *vma; - unsigned long pos = -1; + unsigned long bitmap[BITS_TO_LONGS(MIN_LRU_BATCH)]; + unsigned long first = -1; struct lru_gen_mm_walk *walk = args->private; - unsigned long bitmap[BITS_TO_LONGS(MIN_LRU_BATCH)] = {}; VM_WARN_ON_ONCE(pud_leaf(*pud)); @@ -4131,18 +4132,17 @@ static void walk_pmd_range(pud_t *pud, unsigned long start, unsigned long end, if (pfn < pgdat->node_start_pfn || pfn >= pgdat_end_pfn(pgdat)) continue; - walk_pmd_range_locked(pud, addr, vma, args, bitmap, &pos); + walk_pmd_range_locked(pud, addr, vma, args, bitmap, &first); continue; } #endif walk->mm_stats[MM_NONLEAF_TOTAL]++; - if (arch_has_hw_nonleaf_pmd_young() && - get_cap(LRU_GEN_NONLEAF_YOUNG)) { + if (arch_has_hw_nonleaf_pmd_young() && get_cap(LRU_GEN_NONLEAF_YOUNG)) { if (!pmd_young(val)) continue; - walk_pmd_range_locked(pud, addr, vma, args, bitmap, &pos); + walk_pmd_range_locked(pud, addr, vma, args, bitmap, &first); } if (!walk->force_scan && !test_bloom_filter(walk->lruvec, walk->max_seq, pmd + i)) @@ -4159,7 +4159,7 @@ static void walk_pmd_range(pud_t *pud, unsigned long start, unsigned long end, update_bloom_filter(walk->lruvec, walk->max_seq + 1, pmd + i); } - walk_pmd_range_locked(pud, -1, vma, args, bitmap, &pos); + walk_pmd_range_locked(pud, -1, vma, args, bitmap, &first); if (i < PTRS_PER_PMD && get_next_vma(PUD_MASK, PMD_SIZE, args, &start, &end)) goto restart;
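
The renamed parameters make the batching contract easier to follow: the first call records the start address and clears the bitmap, later calls only set a bit for nearby entries, and a call with addr == -1 (or an address too far away to batch) flushes the accumulated entries. Below is a minimal, self-contained userspace sketch of that idiom, not kernel code: the names batch_add(), process_entry() and ENTRY_SIZE are illustrative assumptions, and the locking and PMD specifics of walk_pmd_range_locked() are omitted.

/*
 * Standalone sketch of the bitmap-batching idiom, with illustrative names;
 * it loosely follows the control flow above but is not the kernel code.
 */
#include <stdio.h>
#include <string.h>

#define MIN_LRU_BATCH	64
#define ENTRY_SIZE	4096UL		/* stands in for PMD_SIZE */

static unsigned long batch_first = -1;	/* -1 means "no batch in progress" */
static unsigned char batch_bitmap[MIN_LRU_BATCH];

static void process_entry(unsigned long addr)
{
	printf("process 0x%lx\n", addr);
}

static void batch_add(unsigned long addr)
{
	long i;

	/* first call: record the start address and clear the bitmap */
	if (batch_first == -1UL) {
		batch_first = addr;
		memset(batch_bitmap, 0, sizeof(batch_bitmap));
		return;
	}

	/* later calls: entries close enough to the start are only marked */
	i = addr == -1UL ? 0 : (long)((addr - batch_first) / ENTRY_SIZE);
	if (i && i <= MIN_LRU_BATCH) {
		batch_bitmap[i - 1] = 1;
		return;
	}

	/* flush: handle the first entry, then every batched one */
	process_entry(batch_first);
	for (i = 0; i < MIN_LRU_BATCH; i++) {
		if (batch_bitmap[i])
			process_entry(batch_first + (i + 1) * ENTRY_SIZE);
	}
	batch_first = -1;
}

int main(void)
{
	batch_add(0x10000);			/* starts a batch */
	batch_add(0x10000 + ENTRY_SIZE);	/* batched */
	batch_add(0x10000 + 3 * ENTRY_SIZE);	/* batched */
	batch_add(-1);				/* flushes the batch */
	return 0;
}

In this toy version the flush processes the recorded start plus every marked entry and then resets the batch, which mirrors why the patch can move bitmap_zero() into the "first call" branch: the bitmap only needs to be cleared when a new batch begins.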