From patchwork Wed Sep 8 02:53:21 2021
X-Patchwork-Submitter: Andrew Morton
X-Patchwork-Id: 12479805
Date: Tue, 07 Sep 2021 19:53:21 -0700
From: Andrew Morton
To: akpm@linux-foundation.org, bigeasy@linutronix.de, brouer@redhat.com,
 cl@linux.com, efault@gmx.de, iamjoonsoo.kim@lge.com, jannh@google.com,
 linux-mm@kvack.org, mgorman@techsingularity.net, mm-commits@vger.kernel.org,
 penberg@kernel.org, quic_qiancai@quicinc.com, rientjes@google.com,
 tglx@linutronix.de, torvalds@linux-foundation.org, vbabka@suse.cz
Subject: [patch 008/147] mm, slub: return slab page from get_partial() and
 set c->page afterwards
Message-ID: <20210908025321.U6LqQVlTz%akpm@linux-foundation.org>
In-Reply-To: <20210907195226.14b1d22a07c085b22968b933@linux-foundation.org>
User-Agent: s-nail v14.8.16

From: Vlastimil Babka
Subject: mm, slub: return slab page from get_partial() and set c->page afterwards

The function get_partial() finds a suitable page on a partial list,
acquires and returns its freelist, and assigns the page pointer to
kmem_cache_cpu.  In a later patch we will need more control over the
kmem_cache_cpu.page assignment, so instead of passing a kmem_cache_cpu
pointer, pass a pointer to a pointer to a page that get_partial() can
fill in; the caller then assigns the kmem_cache_cpu.page pointer itself.
No functional change, as all of this still happens with IRQs disabled.

Link: https://lkml.kernel.org/r/20210904105003.11688-9-vbabka@suse.cz
Signed-off-by: Vlastimil Babka
Cc: Christoph Lameter
Cc: David Rientjes
Cc: Jann Horn
Cc: Jesper Dangaard Brouer
Cc: Joonsoo Kim
Cc: Mel Gorman
Cc: Mike Galbraith
Cc: Pekka Enberg
Cc: Qian Cai
Cc: Sebastian Andrzej Siewior
Cc: Thomas Gleixner
Signed-off-by: Andrew Morton
---

 mm/slub.c |   21 +++++++++++----------
 1 file changed, 11 insertions(+), 10 deletions(-)

--- a/mm/slub.c~mm-slub-return-slab-page-from-get_partial-and-set-c-page-afterwards
+++ a/mm/slub.c
@@ -2017,7 +2017,7 @@ static inline bool pfmemalloc_match(stru
  * Try to allocate a partial slab from a specific node.
  */
 static void *get_partial_node(struct kmem_cache *s, struct kmem_cache_node *n,
-				struct kmem_cache_cpu *c, gfp_t flags)
+				struct page **ret_page, gfp_t flags)
 {
 	struct page *page, *page2;
 	void *object = NULL;
@@ -2046,7 +2046,7 @@ static void *get_partial_node(struct kme
 
 		available += objects;
 		if (!object) {
-			c->page = page;
+			*ret_page = page;
 			stat(s, ALLOC_FROM_PARTIAL);
 			object = t;
 		} else {
@@ -2066,7 +2066,7 @@ static void *get_partial_node(struct kme
  * Get a page from somewhere. Search in increasing NUMA distances.
  */
 static void *get_any_partial(struct kmem_cache *s, gfp_t flags,
-				struct kmem_cache_cpu *c)
+				struct page **ret_page)
 {
 #ifdef CONFIG_NUMA
 	struct zonelist *zonelist;
@@ -2108,7 +2108,7 @@ static void *get_any_partial(struct kmem
 
 			if (n && cpuset_zone_allowed(zone, flags) &&
 					n->nr_partial > s->min_partial) {
-				object = get_partial_node(s, n, c, flags);
+				object = get_partial_node(s, n, ret_page, flags);
 				if (object) {
 					/*
 					 * Don't check read_mems_allowed_retry()
@@ -2130,7 +2130,7 @@ static void *get_any_partial(struct kmem
  * Get a partial page, lock it and return it.
  */
 static void *get_partial(struct kmem_cache *s, gfp_t flags, int node,
-				struct kmem_cache_cpu *c)
+				struct page **ret_page)
 {
 	void *object;
 	int searchnode = node;
@@ -2138,11 +2138,11 @@ static void *get_partial(struct kmem_cac
 	if (node == NUMA_NO_NODE)
 		searchnode = numa_mem_id();
 
-	object = get_partial_node(s, get_node(s, searchnode), c, flags);
+	object = get_partial_node(s, get_node(s, searchnode), ret_page, flags);
 	if (object || node != NUMA_NO_NODE)
 		return object;
 
-	return get_any_partial(s, flags, c);
+	return get_any_partial(s, flags, ret_page);
 }
 
 #ifdef CONFIG_PREEMPTION
@@ -2754,9 +2754,11 @@ new_slab:
 		goto redo;
 	}
 
-	freelist = get_partial(s, gfpflags, node, c);
-	if (freelist)
+	freelist = get_partial(s, gfpflags, node, &page);
+	if (freelist) {
+		c->page = page;
 		goto check_new_page;
+	}
 
 	page = new_slab(s, gfpflags, node);
 	if (unlikely(!page)) {
@@ -2780,7 +2782,6 @@ new_slab:
 	c->page = page;
 
 check_new_page:
-	page = c->page;
 	if (likely(!kmem_cache_debug(s) &&
 			pfmemalloc_match(page, gfpflags)))
 		goto load_freelist;
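
The change above is a standard "out-parameter" refactor: instead of the
helper writing into the per-CPU structure itself, it reports the chosen
page through a struct page **, and the caller decides when to perform
the kmem_cache_cpu.page assignment.  A minimal standalone sketch of
that pattern, using hypothetical names (cache_cpu, fetch_partial_*,
a_partial_page) rather than the kernel's identifiers, might look like
this:

#include <stdio.h>
#include <stddef.h>

struct page { int id; };
struct cache_cpu { struct page *page; };

static struct page a_partial_page = { .id = 42 };

/* Before: the helper assigns the per-CPU slab pointer itself. */
static void *fetch_partial_old(struct cache_cpu *c)
{
	c->page = &a_partial_page;	/* side effect hidden in the helper */
	return &a_partial_page;		/* stands in for the freelist */
}

/* After: the helper only reports which page it picked. */
static void *fetch_partial_new(struct page **ret_page)
{
	*ret_page = &a_partial_page;
	return &a_partial_page;
}

int main(void)
{
	struct cache_cpu c = { .page = NULL };
	struct page *page = NULL;
	void *freelist;

	/* Old convention: the assignment happens inside the helper. */
	freelist = fetch_partial_old(&c);
	c.page = NULL;			/* reset for the new-style demo */

	/* New convention: the caller controls when c->page is set. */
	freelist = fetch_partial_new(&page);
	if (freelist)
		c.page = page;

	printf("cpu slab is page %d\n", c.page ? c.page->id : -1);
	return 0;
}

As the changelog notes, both conventions behave identically as long as
IRQs stay disabled across the whole sequence; the point of the new one
is that a later patch in the series can move the c->page assignment to
a different place than the freelist acquisition.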