From patchwork Tue Oct 17 15:44:38 2023
X-Patchwork-Submitter: Chengming Zhou
X-Patchwork-Id: 13425539
From: chengming.zhou@linux.dev
To: cl@linux.com, penberg@kernel.org
Cc: rientjes@google.com, iamjoonsoo.kim@lge.com, akpm@linux-foundation.org,
 vbabka@suse.cz, roman.gushchin@linux.dev, 42.hyeyoo@gmail.com,
 linux-mm@kvack.org, linux-kernel@vger.kernel.org, chengming.zhou@linux.dev,
 Chengming Zhou
Subject: [RFC PATCH 4/5] slub: Don't freeze slabs for cpu partial
Date: Tue, 17 Oct 2023 15:44:38 +0000
Message-Id: <20231017154439.3036608-5-chengming.zhou@linux.dev>
In-Reply-To: <20231017154439.3036608-1-chengming.zhou@linux.dev>
References: <20231017154439.3036608-1-chengming.zhou@linux.dev>
MIME-Version: 1.0

From: Chengming Zhou

Now we freeze slabs when moving them out of the node partial list to the
cpu partial list. This approach needs two cmpxchg_double operations:

1. freeze the slab (acquire_slab()) under the node list_lock
2. get_freelist() when the slab is picked for use in ___slab_alloc()

Actually we don't need to freeze when moving slabs out of the node partial
list; we can delay the freeze until we take the slab's freelist in
___slab_alloc(), which saves one cmpxchg_double(). There are other good
points as well:

1. Moving slabs between the node partial list and the cpu partial list
   becomes simpler, since we don't need to freeze or unfreeze at all.

2. The node list_lock contention becomes lower, since we only need to
   freeze one slab under the node list_lock. (In fact, we can first move
   slabs out of the node partial list without freezing any slab at all,
   so the contention on the slab won't transfer to node list_lock
   contention.)
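To make the saving concrete, below is a toy user-space sketch (not the real
slub code: toy_slab, cmpxchg_state(), current_flow() and delayed_freeze_flow()
are made-up names, and a single atomic int stands in for the freelist/counters
word that cmpxchg_double() updates). It only shows where the atomic update
happens in the current flow versus the delayed-freeze flow:

/* toy_cmpxchg_count.c - illustration only, not kernel code */
#include <stdatomic.h>
#include <stdbool.h>
#include <stdio.h>

enum slab_state { SLAB_PARTIAL, SLAB_FROZEN, SLAB_IN_USE };

struct toy_slab {
	_Atomic int state;	/* stands in for the freelist/counters word */
};

/* one cmpxchg_double()-like update, collapsed to a single word */
static bool cmpxchg_state(struct toy_slab *slab, int old, int new)
{
	return atomic_compare_exchange_strong(&slab->state, &old, new);
}

/* Current flow: two atomic updates per slab taken off the node partial list. */
static int current_flow(struct toy_slab *slab)
{
	int atomics = 0;

	/* acquire_slab(): freeze while still holding n->list_lock */
	cmpxchg_state(slab, SLAB_PARTIAL, SLAB_FROZEN);
	atomics++;

	/* later, get_freelist() in ___slab_alloc() grabs the freelist */
	cmpxchg_state(slab, SLAB_FROZEN, SLAB_IN_USE);
	atomics++;

	return atomics;
}

/* Delayed freeze: moving to the cpu partial list is plain list surgery;
 * the only atomic update happens when ___slab_alloc() really needs the slab. */
static int delayed_freeze_flow(struct toy_slab *slab)
{
	int atomics = 0;

	/* remove_partial() + put_cpu_partial(): no per-slab atomics here */

	cmpxchg_state(slab, SLAB_PARTIAL, SLAB_IN_USE);
	atomics++;

	return atomics;
}

int main(void)
{
	struct toy_slab a = { SLAB_PARTIAL }, b = { SLAB_PARTIAL };

	printf("current flow: %d cmpxchg, delayed freeze: %d cmpxchg\n",
	       current_flow(&a), delayed_freeze_flow(&b));
	return 0;
}

In the current flow every slab moved onto the cpu partial list has already
paid one cmpxchg_double() under the node list_lock and pays a second one in
get_freelist(); with the delayed freeze only the slab actually taken by
___slab_alloc() pays a single one.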
We can achieve this because no concurrent path manipulates the partial slab
list except the __slab_free() path, which is serialized using the newly
introduced slab->flags.

Note this patch only changes the part that moves the partial slabs, to keep
the code review easy; the other parts will be fixed in the following patches.

Signed-off-by: Chengming Zhou
---
 mm/slub.c | 61 ++++++++++++++++---------------------------------------
 1 file changed, 17 insertions(+), 44 deletions(-)

diff --git a/mm/slub.c b/mm/slub.c
index 5a9711b35c74..044235bd8a45 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -2329,19 +2329,21 @@ static void *get_partial_node(struct kmem_cache *s, struct kmem_cache_node *n,
 			continue;
 		}
 
-		t = acquire_slab(s, n, slab, object == NULL);
-		if (!t)
-			break;
-
 		if (!object) {
-			*pc->slab = slab;
-			stat(s, ALLOC_FROM_PARTIAL);
-			object = t;
-		} else {
-			put_cpu_partial(s, slab, 0);
-			stat(s, CPU_PARTIAL_NODE);
-			partial_slabs++;
+			t = acquire_slab(s, n, slab, object == NULL);
+			if (t) {
+				*pc->slab = slab;
+				stat(s, ALLOC_FROM_PARTIAL);
+				object = t;
+				continue;
+			}
 		}
+
+		remove_partial(n, slab);
+		put_cpu_partial(s, slab, 0);
+		stat(s, CPU_PARTIAL_NODE);
+		partial_slabs++;
+
 #ifdef CONFIG_SLUB_CPU_PARTIAL
 		if (!kmem_cache_has_cpu_partial(s)
 			|| partial_slabs > s->cpu_partial_slabs / 2)
@@ -2612,9 +2614,6 @@ static void __unfreeze_partials(struct kmem_cache *s, struct slab *partial_slab)
 	unsigned long flags = 0;
 
 	while (partial_slab) {
-		struct slab new;
-		struct slab old;
-
 		slab = partial_slab;
 		partial_slab = slab->next;
 
@@ -2627,23 +2626,7 @@ static void __unfreeze_partials(struct kmem_cache *s, struct slab *partial_slab)
 			spin_lock_irqsave(&n->list_lock, flags);
 		}
 
-		do {
-
-			old.freelist = slab->freelist;
-			old.counters = slab->counters;
-			VM_BUG_ON(!old.frozen);
-
-			new.counters = old.counters;
-			new.freelist = old.freelist;
-
-			new.frozen = 0;
-
-		} while (!__slab_update_freelist(s, slab,
-				old.freelist, old.counters,
-				new.freelist, new.counters,
-				"unfreezing slab"));
-
-		if (unlikely(!new.inuse && n->nr_partial >= s->min_partial)) {
+		if (unlikely(!slab->inuse && n->nr_partial >= s->min_partial)) {
 			slab->next = slab_to_discard;
 			slab_to_discard = slab;
 		} else {
@@ -3640,18 +3623,8 @@ static void __slab_free(struct kmem_cache *s, struct slab *slab,
 		was_frozen = new.frozen;
 		new.inuse -= cnt;
 		if ((!new.inuse || !prior) && !was_frozen) {
-
-			if (kmem_cache_has_cpu_partial(s) && !prior) {
-
-				/*
-				 * Slab was on no list before and will be
-				 * partially empty
-				 * We can defer the list move and instead
-				 * freeze it.
-				 */
-				new.frozen = 1;
-
-			} else { /* Needs to be taken off a list */
+			/* Needs to be taken off a list */
+			if (!kmem_cache_has_cpu_partial(s) || prior) {
 
 				n = get_node(s, slab_nid(slab));
 				/*
@@ -3681,7 +3654,7 @@ static void __slab_free(struct kmem_cache *s, struct slab *slab,
 		 * activity can be necessary.
 		 */
 		stat(s, FREE_FROZEN);
-	} else if (new.frozen) {
+	} else if (kmem_cache_has_cpu_partial(s) && !prior) {
 		/*
 		 * If we just froze the slab then put it onto the
 		 * per cpu partial list.