From patchwork Tue Mar  1 07:58:37 2022
X-Patchwork-Submitter: Xiaomeng Tong
X-Patchwork-Id: 12764277
From: Xiaomeng Tong <xiam0nd.tong@gmail.com>
To: torvalds@linux-foundation.org
Cc: arnd@arndb.de, jakobkoschel@gmail.com, linux-kernel@vger.kernel.org,
    gregkh@linuxfoundation.org, keescook@chromium.org, jannh@google.com,
    linux-kbuild@vger.kernel.org,
    linux-mm@kvack.org, netdev@vger.kernel.org,
    Xiaomeng Tong <xiam0nd.tong@gmail.com>
Subject: [PATCH 4/6] mm: remove iterator use outside the loop
Date: Tue, 1 Mar 2022 15:58:37 +0800
Message-Id: <20220301075839.4156-5-xiam0nd.tong@gmail.com>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <20220301075839.4156-1-xiam0nd.tong@gmail.com>
References: <20220301075839.4156-1-xiam0nd.tong@gmail.com>

Demonstrations for:
 - list_for_each_entry_inside
 - list_for_each_entry_reverse_inside
 - list_for_each_entry_safe_inside
 - list_for_each_entry_from_inside
 - list_for_each_entry_continue_reverse_inside

Signed-off-by: Xiaomeng Tong <xiam0nd.tong@gmail.com>
---
 mm/list_lru.c    | 10 ++++++----
 mm/slab_common.c |  7 ++-----
 mm/vmalloc.c     |  6 +++---
 3 files changed, 11 insertions(+), 12 deletions(-)

diff --git a/mm/list_lru.c b/mm/list_lru.c
index 0cd5e89ca..d8aab53a7 100644
--- a/mm/list_lru.c
+++ b/mm/list_lru.c
@@ -493,20 +493,22 @@ static void memcg_cancel_update_list_lru(struct list_lru *lru,
 int memcg_update_all_list_lrus(int new_size)
 {
 	int ret = 0;
-	struct list_lru *lru;
+	struct list_lru *ll = NULL;
 	int old_size = memcg_nr_cache_ids;
 
 	mutex_lock(&list_lrus_mutex);
-	list_for_each_entry(lru, &memcg_list_lrus, list) {
+	list_for_each_entry_inside(lru, struct list_lru, &memcg_list_lrus, list) {
 		ret = memcg_update_list_lru(lru, old_size, new_size);
-		if (ret)
+		if (ret) {
+			ll = lru;
 			goto fail;
+		}
 	}
 out:
 	mutex_unlock(&list_lrus_mutex);
 	return ret;
 fail:
-	list_for_each_entry_continue_reverse(lru, &memcg_list_lrus, list)
+	list_for_each_entry_continue_reverse_inside(lru, ll, &memcg_list_lrus, list)
 		memcg_cancel_update_list_lru(lru, old_size, new_size);
 	goto out;
 }
diff --git a/mm/slab_common.c b/mm/slab_common.c
index 23f2ab071..68a25d385 100644
--- a/mm/slab_common.c
+++ b/mm/slab_common.c
@@ -186,8 +186,6 @@ int slab_unmergeable(struct kmem_cache *s)
 struct kmem_cache *find_mergeable(unsigned int size, unsigned int align,
 		slab_flags_t flags, const char *name, void (*ctor)(void *))
 {
-	struct kmem_cache *s;
-
 	if (slab_nomerge)
 		return NULL;
 
@@ -202,7 +200,7 @@ struct kmem_cache *find_mergeable(unsigned int size, unsigned int align,
 	if (flags & SLAB_NEVER_MERGE)
 		return NULL;
 
-	list_for_each_entry_reverse(s, &slab_caches, list) {
+	list_for_each_entry_reverse_inside(s, struct kmem_cache, &slab_caches, list) {
 		if (slab_unmergeable(s))
 			continue;
 
@@ -419,7 +417,6 @@ EXPORT_SYMBOL(kmem_cache_create);
 static void slab_caches_to_rcu_destroy_workfn(struct work_struct *work)
 {
 	LIST_HEAD(to_destroy);
-	struct kmem_cache *s, *s2;
 
 	/*
 	 * On destruction, SLAB_TYPESAFE_BY_RCU kmem_caches are put on the
@@ -439,7 +436,7 @@ static void slab_caches_to_rcu_destroy_workfn(struct work_struct *work)
 
 	rcu_barrier();
 
-	list_for_each_entry_safe(s, s2, &to_destroy, list) {
+	list_for_each_entry_safe_inside(s, s2, struct kmem_cache, &to_destroy, list) {
 		debugfs_slab_release(s);
 		kfence_shutdown_cache(s);
 #ifdef SLAB_SUPPORTS_SYSFS
diff --git a/mm/vmalloc.c b/mm/vmalloc.c
index 4165304d3..65a9f1db7 100644
--- a/mm/vmalloc.c
+++ b/mm/vmalloc.c
@@ -3417,14 +3417,14 @@ long vread(char *buf, char *addr, unsigned long count)
 	if ((unsigned long)addr + count <= va->va_start)
 		goto finished;
 
-	list_for_each_entry_from(va, &vmap_area_list, list) {
+	list_for_each_entry_from_inside(iter, va, &vmap_area_list, list) {
 		if (!count)
 			break;
 
-		if (!va->vm)
+		if (!iter->vm)
 			continue;
 
-		vm = va->vm;
+		vm = iter->vm;
 		vaddr = (char *) vm->addr;
 		if (addr >= vaddr + get_vm_area_size(vm))
 			continue;
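
For readers without patch 1/6 of this series at hand: the *_inside variants
used above are introduced earlier in the series and are not part of mainline
include/linux/list.h. A minimal sketch of the idea, assuming a definition
along these lines (the series may differ in detail), is:

/*
 * Sketch only -- the real definition is in patch 1/6 of this series.
 * The iterator 'pos' is declared in the for-loop's own scope with an
 * explicit element type, built from the existing list_first_entry(),
 * list_entry_is_head() and list_next_entry() helpers in list.h.
 */
#define list_for_each_entry_inside(pos, type, head, member)		\
	for (type *pos = list_first_entry(head, type, member);		\
	     !list_entry_is_head(pos, head, member);			\
	     pos = list_next_entry(pos, member))

Because the iterator only exists inside the loop, any use of it after the
loop (the pattern this patch removes from list_lru.c, slab_common.c and
vmalloc.c) becomes a compile error instead of a pointer that may merely be
the list head cast to the element type.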