From patchwork Fri Sep 15 10:59:23 2023
X-Patchwork-Submitter: Matteo Rizzo
X-Patchwork-Id: 13386897
Date: Fri, 15 Sep 2023 10:59:23 +0000
In-Reply-To: <20230915105933.495735-1-matteorizzo@google.com>
References: <20230915105933.495735-1-matteorizzo@google.com>
X-Mailer: git-send-email 2.42.0.459.ge4e396fd5e-goog
Message-ID: <20230915105933.495735-5-matteorizzo@google.com>
Subject: [RFC PATCH 04/14] mm: use virt_to_slab instead of folio_slab
From: Matteo Rizzo <matteorizzo@google.com>
To: cl@linux.com, penberg@kernel.org, rientjes@google.com,
    iamjoonsoo.kim@lge.com, akpm@linux-foundation.org, vbabka@suse.cz,
    roman.gushchin@linux.dev, 42.hyeyoo@gmail.com, keescook@chromium.org,
    linux-kernel@vger.kernel.org, linux-doc@vger.kernel.org,
    linux-mm@kvack.org, linux-hardening@vger.kernel.org,
    tglx@linutronix.de, mingo@redhat.com, bp@alien8.de,
    dave.hansen@linux.intel.com, x86@kernel.org, hpa@zytor.com,
    corbet@lwn.net, luto@kernel.org, peterz@infradead.org
Cc: jannh@google.com, matteorizzo@google.com, evn@google.com,
    poprdi@google.com, jordyzomer@google.com

From: Jann Horn <jannh@google.com>

This is a refactoring in preparation for the introduction of
SLAB_VIRTUAL, which does not implement folio_slab.

With SLAB_VIRTUAL there is no longer a 1:1 correspondence between slabs
and pages of physical memory used by the slab allocator, so there is no
way to look up the slab which corresponds to a specific page of
physical memory without iterating over all slabs or over the page
tables. Instead of doing that, we can look up the slab starting from
its virtual address, which is still cheap with SLAB_VIRTUAL both
enabled and disabled.

Signed-off-by: Jann Horn <jannh@google.com>
Co-developed-by: Matteo Rizzo <matteorizzo@google.com>
Signed-off-by: Matteo Rizzo <matteorizzo@google.com>
Reviewed-by: Kees Cook <keescook@chromium.org>
---
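(Note, not part of the patch: a minimal sketch of the lookup change,
for readers of this series. The helper names cache_of_object_old/new
are made up for illustration; is_slab_addr() is introduced earlier in
the series, and virt_to_slab() returns NULL for addresses not backed
by a slab, which the mm/slub.c hunk below relies on.)

static struct kmem_cache *cache_of_object_old(const void *object)
{
	/* Before: derive the slab from the folio backing the object. */
	struct folio *folio = virt_to_folio(object);

	if (!folio_test_slab(folio))
		return NULL; /* large kmalloc, not backed by a slab */
	return folio_slab(folio)->slab_cache;
}

static struct kmem_cache *cache_of_object_new(const void *object)
{
	/*
	 * After: derive the slab directly from the object's virtual
	 * address. This needs no physical-page-to-slab mapping and
	 * stays cheap with SLAB_VIRTUAL both enabled and disabled.
	 */
	if (!is_slab_addr(object))
		return NULL; /* large kmalloc, not backed by a slab */
	return virt_to_slab(object)->slab_cache;
}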
 mm/memcontrol.c  |  2 +-
 mm/slab_common.c | 12 +++++++-----
 mm/slub.c        | 14 ++++++--------
 3 files changed, 14 insertions(+), 14 deletions(-)

diff --git a/mm/memcontrol.c b/mm/memcontrol.c
index e8ca4bdcb03c..0ab9f5323db7 100644
--- a/mm/memcontrol.c
+++ b/mm/memcontrol.c
@@ -2936,7 +2936,7 @@ struct mem_cgroup *mem_cgroup_from_obj_folio(struct folio *folio, void *p)
 	struct slab *slab;
 	unsigned int off;
 
-	slab = folio_slab(folio);
+	slab = virt_to_slab(p);
 	objcgs = slab_objcgs(slab);
 	if (!objcgs)
 		return NULL;
diff --git a/mm/slab_common.c b/mm/slab_common.c
index 79102d24f099..42ceaf7e9f47 100644
--- a/mm/slab_common.c
+++ b/mm/slab_common.c
@@ -1062,13 +1062,13 @@ void kfree(const void *object)
 	if (unlikely(ZERO_OR_NULL_PTR(object)))
 		return;
 
-	folio = virt_to_folio(object);
 	if (unlikely(!is_slab_addr(object))) {
+		folio = virt_to_folio(object);
 		free_large_kmalloc(folio, (void *)object);
 		return;
 	}
 
-	slab = folio_slab(folio);
+	slab = virt_to_slab(object);
 	s = slab->slab_cache;
 	__kmem_cache_free(s, (void *)object, _RET_IP_);
 }
@@ -1089,12 +1089,13 @@ EXPORT_SYMBOL(kfree);
 size_t __ksize(const void *object)
 {
 	struct folio *folio;
+	struct kmem_cache *s;
 
 	if (unlikely(object == ZERO_SIZE_PTR))
 		return 0;
 
-	folio = virt_to_folio(object);
 	if (unlikely(!is_slab_addr(object))) {
+		folio = virt_to_folio(object);
 		if (WARN_ON(folio_size(folio) <= KMALLOC_MAX_CACHE_SIZE))
 			return 0;
 		if (WARN_ON(object != folio_address(folio)))
@@ -1102,11 +1103,12 @@ size_t __ksize(const void *object)
 		return folio_size(folio);
 	}
 
+	s = virt_to_slab(object)->slab_cache;
 #ifdef CONFIG_SLUB_DEBUG
-	skip_orig_size_check(folio_slab(folio)->slab_cache, object);
+	skip_orig_size_check(s, object);
 #endif
 
-	return slab_ksize(folio_slab(folio)->slab_cache);
+	return slab_ksize(s);
 }
 
 void *kmalloc_trace(struct kmem_cache *s, gfp_t gfpflags, size_t size)
diff --git a/mm/slub.c b/mm/slub.c
index df2529c03bd3..ad33d9e1601d 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -3848,25 +3848,23 @@ int build_detached_freelist(struct kmem_cache *s, size_t size,
 {
 	int lookahead = 3;
 	void *object;
-	struct folio *folio;
+	struct slab *slab;
 	size_t same;
 
 	object = p[--size];
-	folio = virt_to_folio(object);
+	slab = virt_to_slab(object);
 	if (!s) {
 		/* Handle kalloc'ed objects */
-		if (unlikely(!folio_test_slab(folio))) {
-			free_large_kmalloc(folio, object);
+		if (unlikely(slab == NULL)) {
+			free_large_kmalloc(virt_to_folio(object), object);
 			df->slab = NULL;
 			return size;
 		}
-		/* Derive kmem_cache from object */
-		df->slab = folio_slab(folio);
-		df->s = df->slab->slab_cache;
+		df->s = slab->slab_cache;
 	} else {
-		df->slab = folio_slab(folio);
 		df->s = cache_from_obj(s, object); /* Support for memcg */
 	}
+	df->slab = slab;
 
 	/* Start new detached freelist */
 	df->tail = object;
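
(Also not part of the patch: for readability, the head of
build_detached_freelist() as it reads with the mm/slub.c hunk applied.
df->slab is now assigned once after both branches instead of separately
in each, and the large-kmalloc case is detected by virt_to_slab()
returning NULL rather than by folio_test_slab():)

	object = p[--size];
	slab = virt_to_slab(object);
	if (!s) {
		/* Handle kalloc'ed objects */
		if (unlikely(slab == NULL)) {
			/* Not backed by a slab: free via its folio. */
			free_large_kmalloc(virt_to_folio(object), object);
			df->slab = NULL;
			return size;
		}
		df->s = slab->slab_cache;
	} else {
		df->s = cache_from_obj(s, object); /* Support for memcg */
	}
	df->slab = slab;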