From patchwork Sat Apr 16 12:39:28 2022
X-Patchwork-Submitter: dthex5d
X-Patchwork-Id: 12815825
From: Donghyeok Kim <dthex5d@gmail.com>
To: Andrew Morton, Mike Kravetz, Christoph Lameter, Pekka Enberg,
	David Rientjes, Joonsoo Kim, Vlastimil Babka, Roman Gushchin
Cc: Ohhoon Kwon, JaeSang Yoo, Wonhyuk Yang, Jiyoup Kim, Donghyeok Kim,
	linux-mm@kvack.org,
	linux-kernel@vger.kernel.org
Subject: [PATCH] mm/mmzone: Introduce a new macro for_each_node_zonelist()
Date: Sat, 16 Apr 2022 21:39:28 +0900
Message-Id: <20220416123930.5956-1-dthex5d@gmail.com>
X-Mailer: git-send-email 2.17.1

There is some code using for_each_zone_zonelist() even when only
iterating over each node is needed. This commit introduces a new macro,
for_each_node_zonelist(), which iterates through the valid nodes in a
zonelist. By using this new macro, such code can be written in a much
simpler form. Also, slab/slub can now skip trying to allocate from a
node that was previously tried and failed.

Co-developed-by: Ohhoon Kwon
Signed-off-by: Ohhoon Kwon
Signed-off-by: Donghyeok Kim
---
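A quick usage sketch for reviewers (illustrative only, not part of the
patch; count_zonelist_nodes() and its parameters are made up for this
example). The macro visits each allowed node exactly once, leaving z
pointed at the first matching zoneref of the current node:

	/* Hypothetical helper: count the nodes reachable through a zonelist. */
	static int count_zonelist_nodes(int preferred_nid, gfp_t gfp_mask)
	{
		struct zonelist *zonelist = node_zonelist(preferred_nid, gfp_mask);
		enum zone_type highidx = gfp_zone(gfp_mask);
		struct zoneref *z;
		int nid, count = 0;

		/* Unlike for_each_zone_zonelist(), each node appears at most once. */
		for_each_node_zonelist(nid, z, zonelist, highidx)
			count++;

		return count;
	}
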
 include/linux/mmzone.h | 36 ++++++++++++++++++++++++++++++++++++
 mm/hugetlb.c           | 17 +++++++----------
 mm/mmzone.c            | 17 +++++++++++++++++
 mm/slab.c              |  7 ++-----
 mm/slub.c              |  8 ++++----
 mm/vmscan.c            | 16 ++++++----------
 6 files changed, 72 insertions(+), 29 deletions(-)

diff --git a/include/linux/mmzone.h b/include/linux/mmzone.h
index 9aaa04ac862f..cb2ddd0b4c95 100644
--- a/include/linux/mmzone.h
+++ b/include/linux/mmzone.h
@@ -1464,6 +1464,42 @@ static inline struct zoneref *first_zones_zonelist(struct zonelist *zonelist,
 #define for_each_zone_zonelist(zone, z, zlist, highidx) \
 	for_each_zone_zonelist_nodemask(zone, z, zlist, highidx, NULL)
 
+
+struct zoneref *next_node_zones_zonelist(struct zoneref *z,
+					 int prev_nid,
+					 enum zone_type highest_zoneidx,
+					 nodemask_t *nodes);
+
+/**
+ * for_each_node_zonelist_nodemask - helper macro to iterate over valid nodes in a zonelist which have at least one zone at or below a given zone index and within a nodemask
+ * @node: The current node in the iterator
+ * @z: First matched zoneref within the current node
+ * @zlist: The zonelist being iterated
+ * @highidx: The zone index of the highest zone to return
+ * @nodemask: Nodemask allowed by the allocator
+ *
+ * This iterator iterates through all nodes which have at least one zone at or below a given zone index and
+ * within a given nodemask.
+ */
+#define for_each_node_zonelist_nodemask(node, z, zlist, highidx, nodemask) \
+	for (z = first_zones_zonelist(zlist, highidx, nodemask), \
+		node = zonelist_zone(z) ? zonelist_node_idx(z) : NUMA_NO_NODE; \
+		zonelist_zone(z); \
+		z = next_node_zones_zonelist(++z, node, highidx, nodemask), \
+		node = zonelist_zone(z) ? zonelist_node_idx(z) : NUMA_NO_NODE)
+
+/**
+ * for_each_node_zonelist - helper macro to iterate over nodes in a zonelist which have at least one zone at or below a given zone index
+ * @node: The current node in the iterator
+ * @z: First matched zoneref within the current node
+ * @zlist: The zonelist being iterated
+ * @highidx: The zone index of the highest zone to return
+ *
+ * This iterator iterates through all nodes which have at least one zone at or below a given zone index.
+ */
+#define for_each_node_zonelist(node, z, zlist, highidx) \
+	for_each_node_zonelist_nodemask(node, z, zlist, highidx, NULL)
+
 /* Whether the 'nodes' are all movable nodes */
 static inline bool movable_only_nodes(nodemask_t *nodes)
 {
diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index daa4bdd6c26c..283f28f1aca8 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -1157,7 +1157,6 @@ static struct page *dequeue_huge_page_nodemask(struct hstate *h, gfp_t gfp_mask,
 {
 	unsigned int cpuset_mems_cookie;
 	struct zonelist *zonelist;
-	struct zone *zone;
 	struct zoneref *z;
 	int node = NUMA_NO_NODE;
 
@@ -1165,18 +1164,16 @@ static struct page *dequeue_huge_page_nodemask(struct hstate *h, gfp_t gfp_mask,
 retry_cpuset:
 	cpuset_mems_cookie = read_mems_allowed_begin();
-	for_each_zone_zonelist_nodemask(zone, z, zonelist, gfp_zone(gfp_mask), nmask) {
+
+	/*
+	 * No need to ask again on the same node. Pool is node rather than
+	 * zone aware.
+	 */
+	for_each_node_zonelist_nodemask(node, z, zonelist, gfp_zone(gfp_mask), nmask) {
 		struct page *page;
 
-		if (!cpuset_zone_allowed(zone, gfp_mask))
-			continue;
-		/*
-		 * no need to ask again on the same node. Pool is node rather than
-		 * zone aware
-		 */
-		if (zone_to_nid(zone) == node)
+		if (!cpuset_node_allowed(node, gfp_mask))
 			continue;
-		node = zone_to_nid(zone);
 
 		page = dequeue_huge_page_node_exact(h, node);
 		if (page)
diff --git a/mm/mmzone.c b/mm/mmzone.c
index 68e1511be12d..8b7d6286056e 100644
--- a/mm/mmzone.c
+++ b/mm/mmzone.c
@@ -72,6 +72,23 @@ struct zoneref *__next_zones_zonelist(struct zoneref *z,
 	return z;
 }
 
+/* Return the first zone of the next node at or below highest_zoneidx in the zonelist */
+struct zoneref *next_node_zones_zonelist(struct zoneref *z,
+					 int prev_nid,
+					 enum zone_type highest_zoneidx,
+					 nodemask_t *nodes)
+{
+	if (likely(nodes == NULL))
+		while (z->zone && (zonelist_node_idx(z) == prev_nid || zonelist_zone_idx(z) > highest_zoneidx))
+			z++;
+	else
+		while (z->zone && (zonelist_node_idx(z) == prev_nid || zonelist_zone_idx(z) > highest_zoneidx ||
+				!zref_in_nodemask(z, nodes)))
+			z++;
+
+	return z;
+}
+
 void lruvec_init(struct lruvec *lruvec)
 {
 	enum lru_list lru;
diff --git a/mm/slab.c b/mm/slab.c
index a301f266efd1..b374fb88f80e 100644
--- a/mm/slab.c
+++ b/mm/slab.c
@@ -3077,7 +3077,6 @@ static void *fallback_alloc(struct kmem_cache *cache, gfp_t flags)
 {
 	struct zonelist *zonelist;
 	struct zoneref *z;
-	struct zone *zone;
 	enum zone_type highest_zoneidx = gfp_zone(flags);
 	void *obj = NULL;
 	struct slab *slab;
@@ -3096,10 +3095,8 @@ static void *fallback_alloc(struct kmem_cache *cache, gfp_t flags)
 	 * Look through allowed nodes for objects available
 	 * from existing per node queues.
	 */
-	for_each_zone_zonelist(zone, z, zonelist, highest_zoneidx) {
-		nid = zone_to_nid(zone);
-
-		if (cpuset_zone_allowed(zone, flags) &&
+	for_each_node_zonelist(nid, z, zonelist, highest_zoneidx) {
+		if (cpuset_node_allowed(nid, flags) &&
 			get_node(cache, nid) &&
 			get_node(cache, nid)->free_objects) {
 			obj = ____cache_alloc_node(cache,
diff --git a/mm/slub.c b/mm/slub.c
index 6dc703488d30..3e8b4aa98b84 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -2192,7 +2192,7 @@ static void *get_any_partial(struct kmem_cache *s, gfp_t flags,
 #ifdef CONFIG_NUMA
 	struct zonelist *zonelist;
 	struct zoneref *z;
-	struct zone *zone;
+	int nid;
 	enum zone_type highest_zoneidx = gfp_zone(flags);
 	void *object;
 	unsigned int cpuset_mems_cookie;
@@ -2222,12 +2222,12 @@ static void *get_any_partial(struct kmem_cache *s, gfp_t flags,
 	do {
 		cpuset_mems_cookie = read_mems_allowed_begin();
 		zonelist = node_zonelist(mempolicy_slab_node(), flags);
-		for_each_zone_zonelist(zone, z, zonelist, highest_zoneidx) {
+		for_each_node_zonelist(nid, z, zonelist, highest_zoneidx) {
 			struct kmem_cache_node *n;
 
-			n = get_node(s, zone_to_nid(zone));
+			n = get_node(s, nid);
 
-			if (n && cpuset_zone_allowed(zone, flags) &&
+			if (n && cpuset_node_allowed(nid, flags) &&
 				n->nr_partial > s->min_partial) {
 				object = get_partial_node(s, n, ret_slab, flags);
 				if (object) {
diff --git a/mm/vmscan.c b/mm/vmscan.c
index d4a7d2bd276d..f25b71bf8f61 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -6176,9 +6176,9 @@ static unsigned long do_try_to_free_pages(struct zonelist *zonelist,
 					  struct scan_control *sc)
 {
 	int initial_priority = sc->priority;
-	pg_data_t *last_pgdat;
+	pg_data_t *pgdat;
 	struct zoneref *z;
-	struct zone *zone;
+	int nid;
 retry:
 	delayacct_freepages_start();
 
@@ -6206,19 +6206,15 @@ static unsigned long do_try_to_free_pages(struct zonelist *zonelist,
 	} while (--sc->priority >= 0);
 
-	last_pgdat = NULL;
-	for_each_zone_zonelist_nodemask(zone, z, zonelist, sc->reclaim_idx,
+	for_each_node_zonelist_nodemask(nid, z, zonelist, sc->reclaim_idx,
 					sc->nodemask) {
-		if (zone->zone_pgdat == last_pgdat)
-			continue;
-		last_pgdat = zone->zone_pgdat;
+		pgdat = NODE_DATA(nid);
 
-		snapshot_refaults(sc->target_mem_cgroup, zone->zone_pgdat);
+		snapshot_refaults(sc->target_mem_cgroup, pgdat);
 
 		if (cgroup_reclaim(sc)) {
 			struct lruvec *lruvec;
 
-			lruvec = mem_cgroup_lruvec(sc->target_mem_cgroup,
-						   zone->zone_pgdat);
+			lruvec = mem_cgroup_lruvec(sc->target_mem_cgroup, pgdat);
 			clear_bit(LRUVEC_CONGESTED, &lruvec->flags);
 		}
 	}