From patchwork Mon Oct 4 13:45:49 2021
From: "Matthew Wilcox (Oracle)"
To: linux-mm@kvack.org
Cc: "Matthew Wilcox (Oracle)"
Subject: [PATCH 01/62] mm: Convert page_to_section() to pgflags_section()
Date: Mon, 4 Oct 2021 14:45:49 +0100
Message-Id: <20211004134650.4031813-2-willy@infradead.org>
In-Reply-To: <20211004134650.4031813-1-willy@infradead.org>
References: <20211004134650.4031813-1-willy@infradead.org>
Pass the page->flags to this function instead of the struct page.  This
is in preparation for splitting struct page into separate types.

Signed-off-by: Matthew Wilcox (Oracle)
---
 include/asm-generic/memory_model.h | 2 +-
 include/linux/mm.h                 | 4 ++--
 mm/sparse.c                        | 2 +-
 3 files changed, 4 insertions(+), 4 deletions(-)

diff --git a/include/asm-generic/memory_model.h b/include/asm-generic/memory_model.h
index a2c8ed60233a..ee9c285f5af2 100644
--- a/include/asm-generic/memory_model.h
+++ b/include/asm-generic/memory_model.h
@@ -32,7 +32,7 @@
  */
 #define __page_to_pfn(pg)					\
 ({	const struct page *__pg = (pg);				\
-	int __sec = page_to_section(__pg);			\
+	int __sec = pgflags_section(__pg->flags);		\
 	(unsigned long)(__pg - __section_mem_map_addr(__nr_to_section(__sec)));	\
 })

diff --git a/include/linux/mm.h b/include/linux/mm.h
index 73a52aba448f..db63653f403c 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -1553,9 +1553,9 @@ static inline void set_page_section(struct page *page, unsigned long section)
 	page->flags |= (section & SECTIONS_MASK) << SECTIONS_PGSHIFT;
 }

-static inline unsigned long page_to_section(const struct page *page)
+static inline unsigned long pgflags_section(unsigned long pgflags)
 {
-	return (page->flags >> SECTIONS_PGSHIFT) & SECTIONS_MASK;
+	return (pgflags >> SECTIONS_PGSHIFT) & SECTIONS_MASK;
 }
 #endif

diff --git a/mm/sparse.c b/mm/sparse.c
index 120bc8ea5293..4c59ed8c1d5a 100644
--- a/mm/sparse.c
+++ b/mm/sparse.c
@@ -45,7 +45,7 @@ static u16 section_to_node_table[NR_MEM_SECTIONS] __cacheline_aligned;

 int page_to_nid(const struct page *page)
 {
-	return section_to_node_table[page_to_section(page)];
+	return section_to_node_table[pgflags_section(page->flags)];
 }
 EXPORT_SYMBOL(page_to_nid);
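A rough, self-contained userspace sketch of what this conversion buys:
the section number lives in a bitfield of the flags word, so it can be
extracted from a bare unsigned long without dereferencing a struct page.
The constants and struct layout below are illustrative only, not the
kernel's real layout.

#include <assert.h>
#include <stdio.h>

#define SECTIONS_PGSHIFT	24
#define SECTIONS_MASK		0xffUL

struct page { unsigned long flags; };

/* New-style helper: operates on the flags word alone. */
static unsigned long pgflags_section(unsigned long pgflags)
{
	return (pgflags >> SECTIONS_PGSHIFT) & SECTIONS_MASK;
}

/* Old-style helper, kept as a thin wrapper for existing callers. */
static unsigned long page_to_section(const struct page *page)
{
	return pgflags_section(page->flags);
}

int main(void)
{
	struct page page = { .flags = 42UL << SECTIONS_PGSHIFT };

	assert(page_to_section(&page) == pgflags_section(page.flags));
	printf("section %lu\n", pgflags_section(page.flags));
	return 0;
}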
From patchwork Mon Oct 4 13:45:50 2021
From: "Matthew Wilcox (Oracle)"
To: linux-mm@kvack.org
Cc: "Matthew Wilcox (Oracle)"
Subject: [PATCH 02/62] mm: Add pgflags_nid()
Date: Mon, 4 Oct 2021 14:45:50 +0100
Message-Id: <20211004134650.4031813-3-willy@infradead.org>
In-Reply-To: <20211004134650.4031813-1-willy@infradead.org>
References: <20211004134650.4031813-1-willy@infradead.org>

Convert page_to_nid() into a wrapper around pgflags_nid().  This is in
preparation for splitting struct page into separate types.
Signed-off-by: Matthew Wilcox (Oracle)
---
 include/linux/mm.h | 12 ++++++++----
 mm/sparse.c        |  6 +++---
 2 files changed, 11 insertions(+), 7 deletions(-)

diff --git a/include/linux/mm.h b/include/linux/mm.h
index db63653f403c..adc7e843148b 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -1369,16 +1369,20 @@ static inline int page_zone_id(struct page *page)
 }

 #ifdef NODE_NOT_IN_PAGE_FLAGS
-extern int page_to_nid(const struct page *page);
+extern int pgflags_nid(unsigned long pgflags);
 #else
+static inline int pgflags_nid(unsigned long pgflags)
+{
+	return (pgflags >> NODES_PGSHIFT) & NODES_MASK;
+}
+#endif
+
 static inline int page_to_nid(const struct page *page)
 {
 	struct page *p = (struct page *)page;

-	return (PF_POISONED_CHECK(p)->flags >> NODES_PGSHIFT) & NODES_MASK;
+	return pgflags_nid(PF_POISONED_CHECK(p)->flags);
 }
-#endif
-
 #ifdef CONFIG_NUMA_BALANCING
 static inline int cpu_pid_to_cpupid(int cpu, int pid)
 {
diff --git a/mm/sparse.c b/mm/sparse.c
index 4c59ed8c1d5a..818bdb84be99 100644
--- a/mm/sparse.c
+++ b/mm/sparse.c
@@ -43,11 +43,11 @@ static u8 section_to_node_table[NR_MEM_SECTIONS] __cacheline_aligned;
 static u16 section_to_node_table[NR_MEM_SECTIONS] __cacheline_aligned;
 #endif

-int page_to_nid(const struct page *page)
+int pgflags_nid(unsigned long pgflags)
 {
-	return section_to_node_table[pgflags_section(page->flags)];
+	return section_to_node_table[pgflags_section(pgflags)];
 }
-EXPORT_SYMBOL(page_to_nid);
+EXPORT_SYMBOL(pgflags_nid);

 static void set_section_nid(unsigned long section_nr, int nid)
 {
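An illustrative userspace sketch (not kernel code) of why pgflags_nid()
has two implementations: when the node ID fits in page->flags it is a
simple shift-and-mask, otherwise it is looked up by section number the
way mm/sparse.c's section_to_node_table works.  All constants are made
up.

#include <stdio.h>

#define NODES_PGSHIFT		16
#define NODES_MASK		0xffUL
#define SECTIONS_PGSHIFT	24
#define SECTIONS_MASK		0xffUL

static unsigned char section_to_node_table[256];

static unsigned long pgflags_section(unsigned long pgflags)
{
	return (pgflags >> SECTIONS_PGSHIFT) & SECTIONS_MASK;
}

#ifdef NODE_NOT_IN_PAGE_FLAGS
/* Node does not fit in the flags word: derive it from the section. */
static int pgflags_nid(unsigned long pgflags)
{
	return section_to_node_table[pgflags_section(pgflags)];
}
#else
/* Node is encoded directly in the flags word. */
static int pgflags_nid(unsigned long pgflags)
{
	return (pgflags >> NODES_PGSHIFT) & NODES_MASK;
}
#endif

int main(void)
{
	unsigned long flags = (3UL << NODES_PGSHIFT) | (7UL << SECTIONS_PGSHIFT);

	section_to_node_table[7] = 3;
	printf("nid %d\n", pgflags_nid(flags));
	return 0;
}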
From patchwork Mon Oct 4 13:45:51 2021
From: "Matthew Wilcox (Oracle)"
To: linux-mm@kvack.org
Cc: "Matthew Wilcox (Oracle)"
Subject: [PATCH 03/62] mm: Split slab into its own type
Date: Mon, 4 Oct 2021 14:45:51 +0100
Message-Id: <20211004134650.4031813-4-willy@infradead.org>
In-Reply-To: <20211004134650.4031813-1-willy@infradead.org>
References: <20211004134650.4031813-1-willy@infradead.org>

Make struct slab independent of struct page.  It still uses the
underlying memory in struct page for storing slab-specific data, but
slab and slub can now be weaned off using struct page directly.  Some of
the wrapper functions (slab_address() and slab_order()) still need to
cast to struct page, but this is a significant disentanglement.
Signed-off-by: Matthew Wilcox (Oracle) --- include/linux/mm_types.h | 56 +++++++++++++++++++++++++++++ include/linux/page-flags.h | 29 +++++++++++++++ mm/slab.h | 73 ++++++++++++++++++++++++++++++++++++++ mm/slub.c | 8 ++--- 4 files changed, 162 insertions(+), 4 deletions(-) diff --git a/include/linux/mm_types.h b/include/linux/mm_types.h index 7f8ee09c711f..c2ea71aba84e 100644 --- a/include/linux/mm_types.h +++ b/include/linux/mm_types.h @@ -239,6 +239,62 @@ struct page { #endif } _struct_page_alignment; +/* Reuses the bits in struct page */ +struct slab { + unsigned long flags; + union { + struct list_head slab_list; + struct { /* Partial pages */ + struct slab *next; +#ifdef CONFIG_64BIT + int slabs; /* Nr of slabs left */ + int pobjects; /* Approximate count */ +#else + short int slabs; + short int pobjects; +#endif + }; + struct rcu_head rcu_head; + }; + struct kmem_cache *slab_cache; /* not slob */ + /* Double-word boundary */ + void *freelist; /* first free object */ + union { + void *s_mem; /* slab: first object */ + unsigned long counters; /* SLUB */ + struct { /* SLUB */ + unsigned inuse:16; + unsigned objects:15; + unsigned frozen:1; + }; + }; + + union { + unsigned int active; /* SLAB */ + int units; /* SLOB */ + }; + atomic_t _refcount; +#ifdef CONFIG_MEMCG + unsigned long memcg_data; +#endif +}; + +#define SLAB_MATCH(pg, sl) \ + static_assert(offsetof(struct page, pg) == offsetof(struct slab, sl)) +SLAB_MATCH(flags, flags); +SLAB_MATCH(compound_head, slab_list); /* Ensure bit 0 is clear */ +SLAB_MATCH(slab_list, slab_list); +SLAB_MATCH(rcu_head, rcu_head); +SLAB_MATCH(slab_cache, slab_cache); +SLAB_MATCH(s_mem, s_mem); +SLAB_MATCH(active, active); +SLAB_MATCH(_refcount, _refcount); +#ifdef CONFIG_MEMCG +SLAB_MATCH(memcg_data, memcg_data); +#endif +#undef SLAB_MATCH +static_assert(sizeof(struct slab) <= sizeof(struct page)); + static inline atomic_t *compound_mapcount_ptr(struct page *page) { return &page[1].compound_mapcount; diff --git a/include/linux/page-flags.h b/include/linux/page-flags.h index a558d67ee86f..57bdb1eb2f29 100644 --- a/include/linux/page-flags.h +++ b/include/linux/page-flags.h @@ -165,6 +165,8 @@ enum pageflags { /* Remapped by swiotlb-xen. */ PG_xen_remapped = PG_owner_priv_1, + /* SLAB / SLUB / SLOB */ + PG_pfmemalloc = PG_active, /* SLOB */ PG_slob_free = PG_private, @@ -193,6 +195,33 @@ static inline unsigned long _compound_head(const struct page *page) #define compound_head(page) ((typeof(page))_compound_head(page)) +/** + * page_slab - Converts from page to slab. + * @p: The page. + * + * This function cannot be called on a NULL pointer. It can be called + * on a non-slab page; the caller should check is_slab() to be sure + * that the slab really is a slab. + * + * Return: The slab which contains this page. + */ +#define page_slab(p) (_Generic((p), \ + const struct page *: (const struct slab *)_compound_head(p), \ + struct page *: (struct slab *)_compound_head(p))) + +/** + * slab_page - The first struct page allocated for a slab + * @slab: The slab. + * + * Slabs are allocated as one-or-more pages. It is occasionally necessary + * to convert back to a struct page in order to communicate with the rest + * of the mm. Please use this helper function instead of casting yourself, + * as the implementation may change in the future. 
+ */ +#define slab_page(s) (_Generic((s), \ + const struct slab *: (const struct page *)s, \ + struct slab *: (struct page *)s)) + static __always_inline int PageTail(struct page *page) { return READ_ONCE(page->compound_head) & 1; diff --git a/mm/slab.h b/mm/slab.h index 58c01a34e5b8..54b05f4d9eb5 100644 --- a/mm/slab.h +++ b/mm/slab.h @@ -5,6 +5,79 @@ * Internal slab definitions */ +/* + * Does this memory belong to a slab cache? Slub can return page allocator + * memory for certain size allocations. + */ +static inline bool slab_test_cache(const struct slab *slab) +{ + return test_bit(PG_slab, &slab->flags); +} + +static inline bool slab_test_multi_page(const struct slab *slab) +{ + return test_bit(PG_head, &slab->flags); +} + +/* + * If network-based swap is enabled, sl*b must keep track of whether pages + * were allocated from pfmemalloc reserves. + */ +static inline bool slab_test_pfmemalloc(const struct slab *slab) +{ + return test_bit(PG_pfmemalloc, &slab->flags); +} + +static inline void slab_set_pfmemalloc(struct slab *slab) +{ + set_bit(PG_pfmemalloc, &slab->flags); +} + +static inline void slab_clear_pfmemalloc(struct slab *slab) +{ + clear_bit(PG_pfmemalloc, &slab->flags); +} + +static inline void __slab_clear_pfmemalloc(struct slab *slab) +{ + __clear_bit(PG_pfmemalloc, &slab->flags); +} + +static inline void *slab_address(const struct slab *slab) +{ + return page_address(slab_page(slab)); +} + +static inline int slab_nid(const struct slab *slab) +{ + return pgflags_nid(slab->flags); +} + +static inline pg_data_t *slab_pgdat(const struct slab *slab) +{ + return NODE_DATA(slab_nid(slab)); +} + +static inline struct slab *virt_to_slab(const void *addr) +{ + struct page *page = virt_to_page(addr); + + return page_slab(page); +} + +static inline int slab_order(const struct slab *slab) +{ + if (!slab_test_multi_page(slab)) + return 0; + return ((struct page *)slab)[1].compound_order; +} + +static inline size_t slab_size(const struct slab *slab) +{ + return PAGE_SIZE << slab_order(slab); +} + + #ifdef CONFIG_SLOB /* * Common fields provided in kmem_cache by all slab allocators diff --git a/mm/slub.c b/mm/slub.c index 3d2025f7163b..7e429a31b326 100644 --- a/mm/slub.c +++ b/mm/slub.c @@ -3755,7 +3755,7 @@ static unsigned int slub_min_objects; * requested a higher minimum order then we start with that one instead of * the smallest order which will fit the object. */ -static inline unsigned int slab_order(unsigned int size, +static inline unsigned int calc_slab_order(unsigned int size, unsigned int min_objects, unsigned int max_order, unsigned int fract_leftover) { @@ -3819,7 +3819,7 @@ static inline int calculate_order(unsigned int size) fraction = 16; while (fraction >= 4) { - order = slab_order(size, min_objects, + order = calc_slab_order(size, min_objects, slub_max_order, fraction); if (order <= slub_max_order) return order; @@ -3832,14 +3832,14 @@ static inline int calculate_order(unsigned int size) * We were unable to place multiple objects in a slab. Now * lets see if we can place a single object there. */ - order = slab_order(size, 1, slub_max_order, 1); + order = calc_slab_order(size, 1, slub_max_order, 1); if (order <= slub_max_order) return order; /* * Doh this slab cannot be placed using slub_max_order. 
  */
-	order = slab_order(size, 1, MAX_ORDER, 1);
+	order = calc_slab_order(size, 1, MAX_ORDER, 1);
 	if (order < MAX_ORDER)
 		return order;
 	return -ENOSYS;
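A self-contained sketch of the SLAB_MATCH technique this patch
introduces: when one type (struct slab) reinterprets the memory of
another (struct page), compile-time asserts pin the overlapping fields
to the same offsets so refactoring either struct cannot silently break
the aliasing.  The structs below are simplified stand-ins, not the real
kernel layouts.

#include <assert.h>
#include <stddef.h>

struct page {
	unsigned long flags;
	void *compound_head;
	void *slab_cache;
	unsigned int _refcount;
};

struct slab {
	unsigned long flags;
	void *slab_list;
	void *slab_cache;
	unsigned int _refcount;
};

#define SLAB_MATCH(pg, sl) \
	static_assert(offsetof(struct page, pg) == offsetof(struct slab, sl), \
		      #pg " and " #sl " must overlap")

SLAB_MATCH(flags, flags);
SLAB_MATCH(compound_head, slab_list);	/* bit 0 of this word must stay clear */
SLAB_MATCH(slab_cache, slab_cache);
SLAB_MATCH(_refcount, _refcount);
#undef SLAB_MATCH
static_assert(sizeof(struct slab) <= sizeof(struct page),
	      "struct slab must fit in struct page");

int main(void) { return 0; }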
From patchwork Mon Oct 4 13:45:52 2021
From: "Matthew Wilcox (Oracle)"
To: linux-mm@kvack.org
Cc: "Matthew Wilcox (Oracle)"
Subject: [PATCH 04/62] mm: Add account_slab() and unaccount_slab()
Date: Mon, 4 Oct 2021 14:45:52 +0100
Message-Id: <20211004134650.4031813-5-willy@infradead.org>
In-Reply-To: <20211004134650.4031813-1-willy@infradead.org>
References: <20211004134650.4031813-1-willy@infradead.org>

These functions simply call their page equivalents for now.

Signed-off-by: Matthew Wilcox (Oracle)
---
 mm/slab.h | 12 ++++++++++++
 1 file changed, 12 insertions(+)

diff --git a/mm/slab.h b/mm/slab.h
index 54b05f4d9eb5..305cc8c7fed8 100644
--- a/mm/slab.h
+++ b/mm/slab.h
@@ -510,6 +510,18 @@ static __always_inline void unaccount_slab_page(struct page *page, int order,
 			    -(PAGE_SIZE << order));
 }

+static __always_inline void account_slab(struct slab *slab, int order,
+					 struct kmem_cache *s, gfp_t gfp)
+{
+	account_slab_page(slab_page(slab), order, s, gfp);
+}
+
+static __always_inline void unaccount_slab(struct slab *slab, int order,
+					   struct kmem_cache *s)
+{
+	unaccount_slab_page(slab_page(slab), order, s);
+}
+
 static inline struct kmem_cache *cache_from_obj(struct kmem_cache *s, void *x)
 {
 	struct kmem_cache *cachep;

From patchwork Mon Oct 4 13:45:53 2021
From: "Matthew Wilcox (Oracle)"
To: linux-mm@kvack.org
Cc: "Matthew Wilcox (Oracle)"
Subject: [PATCH 05/62] mm: Convert virt_to_cache() to use struct slab
Date: Mon, 4 Oct 2021 14:45:53 +0100
Message-Id: <20211004134650.4031813-6-willy@infradead.org>
In-Reply-To: <20211004134650.4031813-1-willy@infradead.org>
References: <20211004134650.4031813-1-willy@infradead.org>

This function is entirely self-contained, so can be converted from page
to slab.

Signed-off-by: Matthew Wilcox (Oracle)
---
 mm/slab.h | 8 ++++----
 1 file changed, 4 insertions(+), 4 deletions(-)

diff --git a/mm/slab.h b/mm/slab.h
index 305cc8c7fed8..3c691ef6b492 100644
--- a/mm/slab.h
+++ b/mm/slab.h
@@ -480,13 +480,13 @@ static inline void memcg_slab_free_hook(struct kmem_cache *s,

 static inline struct kmem_cache *virt_to_cache(const void *obj)
 {
-	struct page *page;
+	struct slab *slab;

-	page = virt_to_head_page(obj);
-	if (WARN_ONCE(!PageSlab(page), "%s: Object is not a Slab page!\n",
+	slab = virt_to_slab(obj);
+	if (WARN_ONCE(!SlabAllocation(slab), "%s: Object is not a Slab page!\n",
 					__func__))
 		return NULL;
-	return page->slab_cache;
+	return slab->slab_cache;
 }

 static __always_inline void account_slab_page(struct page *page, int order,
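A userspace sketch of the virt_to_cache() pattern: given a pointer to an
object, find the metadata of the block that contains it.  Here the
"slab" is a page-aligned block with a header, and masking the pointer
recovers the header; the kernel instead goes through the memmap via
virt_to_slab().  Names and sizes below are illustrative, not the kernel
algorithm.

#include <stdio.h>
#include <stdlib.h>
#include <stdint.h>

#define SLAB_BYTES	4096UL

struct kmem_cache { const char *name; size_t object_size; };

struct slab {			/* lives at the start of each block */
	struct kmem_cache *slab_cache;
};

static struct slab *virt_to_slab(const void *obj)
{
	return (struct slab *)((uintptr_t)obj & ~(SLAB_BYTES - 1));
}

static struct kmem_cache *virt_to_cache(const void *obj)
{
	return virt_to_slab(obj)->slab_cache;
}

int main(void)
{
	static struct kmem_cache cache = { "demo", 64 };
	void *block = aligned_alloc(SLAB_BYTES, SLAB_BYTES);
	struct slab *slab = block;
	void *obj = (char *)block + 3 * cache.object_size;

	slab->slab_cache = &cache;
	printf("object %p belongs to cache %s\n", obj, virt_to_cache(obj)->name);
	free(block);
	return 0;
}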
From patchwork Mon Oct 4 13:45:54 2021
From: "Matthew Wilcox (Oracle)"
To: linux-mm@kvack.org
Cc: "Matthew Wilcox (Oracle)"
Subject: [PATCH 06/62] mm: Convert __ksize() to struct slab
Date: Mon, 4 Oct 2021 14:45:54 +0100
Message-Id: <20211004134650.4031813-7-willy@infradead.org>
In-Reply-To: <20211004134650.4031813-1-willy@infradead.org>
References: <20211004134650.4031813-1-willy@infradead.org>

slub and slob both use struct page here; convert them to struct slab.
Signed-off-by: Matthew Wilcox (Oracle)
---
 mm/slab.h |  6 +++---
 mm/slob.c |  8 ++++----
 mm/slub.c | 12 ++++++------
 3 files changed, 13 insertions(+), 13 deletions(-)

diff --git a/mm/slab.h b/mm/slab.h
index 3c691ef6b492..ac89b656de67 100644
--- a/mm/slab.h
+++ b/mm/slab.h
@@ -14,7 +14,7 @@ static inline bool slab_test_cache(const struct slab *slab)
 	return test_bit(PG_slab, &slab->flags);
 }

-static inline bool slab_test_multi_page(const struct slab *slab)
+static inline bool slab_test_multipage(const struct slab *slab)
 {
 	return test_bit(PG_head, &slab->flags);
 }
@@ -67,7 +67,7 @@ static inline struct slab *virt_to_slab(const void *addr)

 static inline int slab_order(const struct slab *slab)
 {
-	if (!slab_test_multi_page(slab))
+	if (!slab_test_multipage(slab))
 		return 0;
 	return ((struct page *)slab)[1].compound_order;
 }
@@ -483,7 +483,7 @@ static inline struct kmem_cache *virt_to_cache(const void *obj)
 	struct slab *slab;

 	slab = virt_to_slab(obj);
-	if (WARN_ONCE(!SlabAllocation(slab), "%s: Object is not a Slab page!\n",
+	if (WARN_ONCE(!slab_test_cache(slab), "%s: Object is not a Slab page!\n",
 					__func__))
 		return NULL;
 	return slab->slab_cache;
diff --git a/mm/slob.c b/mm/slob.c
index 74d3f6e60666..90996e8f7337 100644
--- a/mm/slob.c
+++ b/mm/slob.c
@@ -570,7 +570,7 @@ EXPORT_SYMBOL(kfree);
 /* can't use ksize for kmem_cache_alloc memory, only kmalloc */
 size_t __ksize(const void *block)
 {
-	struct page *sp;
+	struct slab *sp;
 	int align;
 	unsigned int *m;

@@ -578,9 +578,9 @@ size_t __ksize(const void *block)
 	if (unlikely(block == ZERO_SIZE_PTR))
 		return 0;

-	sp = virt_to_page(block);
-	if (unlikely(!PageSlab(sp)))
-		return page_size(sp);
+	sp = virt_to_slab(block);
+	if (unlikely(!slab_test_cache(sp)))
+		return slab_size(sp);

 	align = max_t(size_t, ARCH_KMALLOC_MINALIGN, ARCH_SLAB_MINALIGN);
 	m = (unsigned int *)(block - align);
diff --git a/mm/slub.c b/mm/slub.c
index 7e429a31b326..2780342395dc 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -4509,19 +4509,19 @@ void __check_heap_object(const void *ptr, unsigned long n, struct page *page,

 size_t __ksize(const void *object)
 {
-	struct page *page;
+	struct slab *slab;

 	if (unlikely(object == ZERO_SIZE_PTR))
 		return 0;

-	page = virt_to_head_page(object);
+	slab = virt_to_slab(object);

-	if (unlikely(!PageSlab(page))) {
-		WARN_ON(!PageCompound(page));
-		return page_size(page);
+	if (unlikely(!slab_test_cache(slab))) {
+		WARN_ON(!slab_test_multipage(slab));
+		return slab_size(slab);
 	}

-	return slab_ksize(page->slab_cache);
+	return slab_ksize(slab->slab_cache);
 }
 EXPORT_SYMBOL(__ksize);
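A simplified stand-in sketch (plain C, not the kernel types or
signatures) of the dispatch __ksize() performs after this conversion: a
slab-backed object reports its cache's object size, while a large
allocation that fell through to the page allocator reports the whole
block.

#include <stdbool.h>
#include <stdio.h>

struct kmem_cache { size_t object_size; };

struct slab {
	bool is_slab;			/* kernel: PG_slab in slab->flags */
	size_t block_size;		/* kernel: PAGE_SIZE << slab_order() */
	struct kmem_cache *slab_cache;
};

static size_t object_size(const struct slab *slab)
{
	if (!slab->is_slab)		/* large kmalloc: page-allocator backed */
		return slab->block_size;
	return slab->slab_cache->object_size;
}

int main(void)
{
	struct kmem_cache kmalloc_64 = { 64 };
	struct slab small = { true, 4096, &kmalloc_64 };
	struct slab large = { false, 16384, NULL };

	printf("%zu %zu\n", object_size(&small), object_size(&large));
	return 0;
}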
From patchwork Mon Oct 4 13:45:55 2021
From: "Matthew Wilcox (Oracle)"
To: linux-mm@kvack.org
Cc: "Matthew Wilcox (Oracle)"
Subject: [PATCH 07/62] mm: Use struct slab in kmem_obj_info()
Date: Mon, 4 Oct 2021 14:45:55 +0100
Message-Id: <20211004134650.4031813-8-willy@infradead.org>
In-Reply-To: <20211004134650.4031813-1-willy@infradead.org>
References: <20211004134650.4031813-1-willy@infradead.org>

All three implementations of slab support kmem_obj_info() which reports
details of an object allocated from the slab allocator.  By using the
slab type instead of the page type, we make it obvious that this can
only be called for slabs.
Signed-off-by: Matthew Wilcox (Oracle) --- mm/slab.c | 12 ++++++------ mm/slab.h | 4 ++-- mm/slab_common.c | 8 ++++---- mm/slob.c | 4 ++-- mm/slub.c | 12 ++++++------ 5 files changed, 20 insertions(+), 20 deletions(-) diff --git a/mm/slab.c b/mm/slab.c index d0f725637663..4a6bdbdcf0db 100644 --- a/mm/slab.c +++ b/mm/slab.c @@ -3657,21 +3657,21 @@ EXPORT_SYMBOL(__kmalloc_node_track_caller); #endif /* CONFIG_NUMA */ #ifdef CONFIG_PRINTK -void kmem_obj_info(struct kmem_obj_info *kpp, void *object, struct page *page) +void kmem_obj_info(struct kmem_obj_info *kpp, void *object, struct slab *slab) { struct kmem_cache *cachep; unsigned int objnr; void *objp; kpp->kp_ptr = object; - kpp->kp_page = page; - cachep = page->slab_cache; + kpp->kp_slab = slab; + cachep = slab->slab_cache; kpp->kp_slab_cache = cachep; objp = object - obj_offset(cachep); kpp->kp_data_offset = obj_offset(cachep); - page = virt_to_head_page(objp); - objnr = obj_to_index(cachep, page, objp); - objp = index_to_obj(cachep, page, objnr); + slab = virt_to_slab(objp); + objnr = obj_to_index(cachep, slab_page(slab), objp); + objp = index_to_obj(cachep, slab_page(slab), objnr); kpp->kp_objp = objp; if (DEBUG && cachep->flags & SLAB_STORE_USER) kpp->kp_ret = *dbg_userword(cachep, objp); diff --git a/mm/slab.h b/mm/slab.h index ac89b656de67..29a0bf827a82 100644 --- a/mm/slab.h +++ b/mm/slab.h @@ -720,7 +720,7 @@ static inline void debugfs_slab_release(struct kmem_cache *s) { } #define KS_ADDRS_COUNT 16 struct kmem_obj_info { void *kp_ptr; - struct page *kp_page; + struct slab *kp_slab; void *kp_objp; unsigned long kp_data_offset; struct kmem_cache *kp_slab_cache; @@ -728,7 +728,7 @@ struct kmem_obj_info { void *kp_stack[KS_ADDRS_COUNT]; void *kp_free_stack[KS_ADDRS_COUNT]; }; -void kmem_obj_info(struct kmem_obj_info *kpp, void *object, struct page *page); +void kmem_obj_info(struct kmem_obj_info *kpp, void *object, struct slab *slab); #endif #endif /* MM_SLAB_H */ diff --git a/mm/slab_common.c b/mm/slab_common.c index ec2bb0beed75..c2605c77920b 100644 --- a/mm/slab_common.c +++ b/mm/slab_common.c @@ -587,18 +587,18 @@ void kmem_dump_obj(void *object) { char *cp = IS_ENABLED(CONFIG_MMU) ? 
"" : "/vmalloc"; int i; - struct page *page; + struct slab *slab; unsigned long ptroffset; struct kmem_obj_info kp = { }; if (WARN_ON_ONCE(!virt_addr_valid(object))) return; - page = virt_to_head_page(object); - if (WARN_ON_ONCE(!PageSlab(page))) { + slab = virt_to_slab(object); + if (WARN_ON_ONCE(!slab_test_cache(slab))) { pr_cont(" non-slab memory.\n"); return; } - kmem_obj_info(&kp, object, page); + kmem_obj_info(&kp, object, slab); if (kp.kp_slab_cache) pr_cont(" slab%s %s", cp, kp.kp_slab_cache->name); else diff --git a/mm/slob.c b/mm/slob.c index 90996e8f7337..8cede39054fc 100644 --- a/mm/slob.c +++ b/mm/slob.c @@ -462,10 +462,10 @@ static void slob_free(void *block, int size) } #ifdef CONFIG_PRINTK -void kmem_obj_info(struct kmem_obj_info *kpp, void *object, struct page *page) +void kmem_obj_info(struct kmem_obj_info *kpp, void *object, struct slab *slab) { kpp->kp_ptr = object; - kpp->kp_page = page; + kpp->kp_slab = slab; } #endif diff --git a/mm/slub.c b/mm/slub.c index 2780342395dc..517450561840 100644 --- a/mm/slub.c +++ b/mm/slub.c @@ -4290,31 +4290,31 @@ int __kmem_cache_shutdown(struct kmem_cache *s) } #ifdef CONFIG_PRINTK -void kmem_obj_info(struct kmem_obj_info *kpp, void *object, struct page *page) +void kmem_obj_info(struct kmem_obj_info *kpp, void *object, struct slab *slab) { void *base; int __maybe_unused i; unsigned int objnr; void *objp; void *objp0; - struct kmem_cache *s = page->slab_cache; + struct kmem_cache *s = slab->slab_cache; struct track __maybe_unused *trackp; kpp->kp_ptr = object; - kpp->kp_page = page; + kpp->kp_slab = slab; kpp->kp_slab_cache = s; - base = page_address(page); + base = slab_address(slab); objp0 = kasan_reset_tag(object); #ifdef CONFIG_SLUB_DEBUG objp = restore_red_left(s, objp0); #else objp = objp0; #endif - objnr = obj_to_index(s, page, objp); + objnr = obj_to_index(s, slab_page(slab), objp); kpp->kp_data_offset = (unsigned long)((char *)objp0 - (char *)objp); objp = base + s->size * objnr; kpp->kp_objp = objp; - if (WARN_ON_ONCE(objp < base || objp >= base + page->objects * s->size || (objp - base) % s->size) || + if (WARN_ON_ONCE(objp < base || objp >= base + slab->objects * s->size || (objp - base) % s->size) || !(s->flags & SLAB_STORE_USER)) return; #ifdef CONFIG_SLUB_DEBUG From patchwork Mon Oct 4 13:45:56 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Matthew Wilcox X-Patchwork-Id: 12533985 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 0CE60C433EF for ; Mon, 4 Oct 2021 13:58:48 +0000 (UTC) Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by mail.kernel.org (Postfix) with ESMTP id A7B036108F for ; Mon, 4 Oct 2021 13:58:47 +0000 (UTC) DMARC-Filter: OpenDMARC Filter v1.4.1 mail.kernel.org A7B036108F Authentication-Results: mail.kernel.org; dmarc=none (p=none dis=none) header.from=infradead.org Authentication-Results: mail.kernel.org; spf=pass smtp.mailfrom=kvack.org Received: by kanga.kvack.org (Postfix) id 27E1B94001D; Mon, 4 Oct 2021 09:58:47 -0400 (EDT) Received: by kanga.kvack.org (Postfix, from userid 40) id 22E0B94000B; Mon, 4 Oct 2021 09:58:47 -0400 (EDT) X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id 0F63494001D; Mon, 4 Oct 2021 09:58:47 -0400 (EDT) X-Delivered-To: linux-mm@kvack.org Received: from 
From patchwork Mon Oct 4 13:45:56 2021
From: "Matthew Wilcox (Oracle)"
To: linux-mm@kvack.org
Cc: "Matthew Wilcox (Oracle)"
Subject: [PATCH 08/62] mm: Convert check_heap_object() to use struct slab
Date: Mon, 4 Oct 2021 14:45:56 +0100
Message-Id: <20211004134650.4031813-9-willy@infradead.org>
In-Reply-To: <20211004134650.4031813-1-willy@infradead.org>
References: <20211004134650.4031813-1-willy@infradead.org>

Ensure that we're not seeing a tail page inside __check_heap_object() by
converting to a slab instead of a page.  Take the opportunity to mark
the slab as const since we're not modifying it.  Also move the
declaration of __check_heap_object() to mm/slab.h so it's not available
to the wider kernel.
Signed-off-by: Matthew Wilcox (Oracle) --- include/linux/slab.h | 8 -------- mm/slab.c | 14 +++++++------- mm/slab.h | 9 +++++++++ mm/slub.c | 10 +++++----- mm/usercopy.c | 13 +++++++------ 5 files changed, 28 insertions(+), 26 deletions(-) diff --git a/include/linux/slab.h b/include/linux/slab.h index 083f3ce550bc..830051d4af58 100644 --- a/include/linux/slab.h +++ b/include/linux/slab.h @@ -191,14 +191,6 @@ bool kmem_valid_obj(void *object); void kmem_dump_obj(void *object); #endif -#ifdef CONFIG_HAVE_HARDENED_USERCOPY_ALLOCATOR -void __check_heap_object(const void *ptr, unsigned long n, struct page *page, - bool to_user); -#else -static inline void __check_heap_object(const void *ptr, unsigned long n, - struct page *page, bool to_user) { } -#endif - /* * Some archs want to perform DMA into kmalloc caches and need a guaranteed * alignment larger than the alignment of a 64-bit integer. diff --git a/mm/slab.c b/mm/slab.c index 4a6bdbdcf0db..0d515fd697a0 100644 --- a/mm/slab.c +++ b/mm/slab.c @@ -372,8 +372,8 @@ static void **dbg_userword(struct kmem_cache *cachep, void *objp) static int slab_max_order = SLAB_MAX_ORDER_LO; static bool slab_max_order_set __initdata; -static inline void *index_to_obj(struct kmem_cache *cache, struct page *page, - unsigned int idx) +static inline void *index_to_obj(struct kmem_cache *cache, + const struct page *page, unsigned int idx) { return page->s_mem + cache->size * idx; } @@ -4181,8 +4181,8 @@ ssize_t slabinfo_write(struct file *file, const char __user *buffer, * Returns NULL if check passes, otherwise const char * to name of cache * to indicate an error. */ -void __check_heap_object(const void *ptr, unsigned long n, struct page *page, - bool to_user) +void __check_heap_object(const void *ptr, unsigned long n, + const struct slab *slab, bool to_user) { struct kmem_cache *cachep; unsigned int objnr; @@ -4191,15 +4191,15 @@ void __check_heap_object(const void *ptr, unsigned long n, struct page *page, ptr = kasan_reset_tag(ptr); /* Find and validate object. */ - cachep = page->slab_cache; - objnr = obj_to_index(cachep, page, (void *)ptr); + cachep = slab->slab_cache; + objnr = obj_to_index(cachep, slab_page(slab), (void *)ptr); BUG_ON(objnr >= cachep->num); /* Find offset within object. */ if (is_kfence_address(ptr)) offset = ptr - kfence_object_start(ptr); else - offset = ptr - index_to_obj(cachep, page, objnr) - obj_offset(cachep); + offset = ptr - index_to_obj(cachep, slab_page(slab), objnr) - obj_offset(cachep); /* Allow address range falling entirely within usercopy region. */ if (offset >= cachep->useroffset && diff --git a/mm/slab.h b/mm/slab.h index 29a0bf827a82..53fe3a746973 100644 --- a/mm/slab.h +++ b/mm/slab.h @@ -731,4 +731,13 @@ struct kmem_obj_info { void kmem_obj_info(struct kmem_obj_info *kpp, void *object, struct slab *slab); #endif +#ifdef CONFIG_HAVE_HARDENED_USERCOPY_ALLOCATOR +void __check_heap_object(const void *ptr, unsigned long n, + const struct slab *slab, bool to_user); +#else +static inline +void __check_heap_object(const void *ptr, unsigned long n, + const struct slab *slab, bool to_user) { } +#endif + #endif /* MM_SLAB_H */ diff --git a/mm/slub.c b/mm/slub.c index 517450561840..b34ca1ff3e1c 100644 --- a/mm/slub.c +++ b/mm/slub.c @@ -4452,8 +4452,8 @@ EXPORT_SYMBOL(__kmalloc_node); * Returns NULL if check passes, otherwise const char * to name of cache * to indicate an error. 
*/ -void __check_heap_object(const void *ptr, unsigned long n, struct page *page, - bool to_user) +void __check_heap_object(const void *ptr, unsigned long n, + const struct slab *slab, bool to_user) { struct kmem_cache *s; unsigned int offset; @@ -4463,10 +4463,10 @@ void __check_heap_object(const void *ptr, unsigned long n, struct page *page, ptr = kasan_reset_tag(ptr); /* Find object and usable object size. */ - s = page->slab_cache; + s = slab->slab_cache; /* Reject impossible pointers. */ - if (ptr < page_address(page)) + if (ptr < slab_address(slab)) usercopy_abort("SLUB object not in SLUB page?!", NULL, to_user, 0, n); @@ -4474,7 +4474,7 @@ void __check_heap_object(const void *ptr, unsigned long n, struct page *page, if (is_kfence) offset = ptr - kfence_object_start(ptr); else - offset = (ptr - page_address(page)) % s->size; + offset = (ptr - slab_address(slab)) % s->size; /* Adjust for redzone and reject if within the redzone. */ if (!is_kfence && kmem_cache_debug_flags(s, SLAB_RED_ZONE)) { diff --git a/mm/usercopy.c b/mm/usercopy.c index b3de3c4eefba..07e86a360d49 100644 --- a/mm/usercopy.c +++ b/mm/usercopy.c @@ -20,6 +20,7 @@ #include #include #include +#include "slab.h" /* * Checks if a given pointer and length is contained by the current @@ -223,7 +224,7 @@ static inline void check_page_span(const void *ptr, unsigned long n, static inline void check_heap_object(const void *ptr, unsigned long n, bool to_user) { - struct page *page; + struct slab *slab; if (!virt_addr_valid(ptr)) return; @@ -231,16 +232,16 @@ static inline void check_heap_object(const void *ptr, unsigned long n, /* * When CONFIG_HIGHMEM=y, kmap_to_page() will give either the * highmem page or fallback to virt_to_page(). The following - * is effectively a highmem-aware virt_to_head_page(). + * is effectively a highmem-aware virt_to_slab(). */ - page = compound_head(kmap_to_page((void *)ptr)); + slab = page_slab(kmap_to_page((void *)ptr)); - if (PageSlab(page)) { + if (slab_test_cache(slab)) { /* Check slab allocator for flags and size. */ - __check_heap_object(ptr, n, page, to_user); + __check_heap_object(ptr, n, slab, to_user); } else { /* Verify object does not incorrectly span multiple pages. 
*/ - check_page_span(ptr, n, page, to_user); + check_page_span(ptr, n, slab_page(slab), to_user); } }
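A simplified userspace sketch of the bounds check __check_heap_object()
performs for hardened usercopy: a copy of n bytes starting at ptr must
stay inside one object and inside that cache's whitelisted
(useroffset, usersize) window.  The numbers and struct fields below are
illustrative only.

#include <stdbool.h>
#include <stdio.h>

struct kmem_cache {
	unsigned long size;		/* object stride in the slab */
	unsigned long useroffset;	/* start of usercopy window */
	unsigned long usersize;		/* length of usercopy window */
};

static bool check_heap_object(const struct kmem_cache *s,
			      const char *slab_base, const char *ptr,
			      unsigned long n)
{
	unsigned long offset = (ptr - slab_base) % s->size;

	/* Allow only ranges falling entirely within the usercopy region. */
	return offset >= s->useroffset &&
	       n <= s->usersize - (offset - s->useroffset);
}

int main(void)
{
	struct kmem_cache cache = { .size = 128, .useroffset = 16, .usersize = 64 };
	char slab[4096];

	printf("%d\n", check_heap_object(&cache, slab, slab + 128 + 16, 64)); /* ok */
	printf("%d\n", check_heap_object(&cache, slab, slab + 128 + 70, 32)); /* reject */
	return 0;
}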
smtp.mailfrom=willy@infradead.org X-HE-Tag: 1633355978-512598 X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: Add some type safety by passing a struct slab instead of a struct page. Signed-off-by: Matthew Wilcox (Oracle) --- mm/slub.c | 18 +++++++++--------- 1 file changed, 9 insertions(+), 9 deletions(-) diff --git a/mm/slub.c b/mm/slub.c index b34ca1ff3e1c..f5aadbccdab4 100644 --- a/mm/slub.c +++ b/mm/slub.c @@ -5169,15 +5169,15 @@ static int add_location(struct loc_track *t, struct kmem_cache *s, } static void process_slab(struct loc_track *t, struct kmem_cache *s, - struct page *page, enum track_item alloc, + struct slab *slab, enum track_item alloc, unsigned long *obj_map) { - void *addr = page_address(page); + void *addr = slab_address(slab); void *p; - __fill_map(obj_map, s, page); + __fill_map(obj_map, s, slab_page(slab)); - for_each_object(p, s, addr, page->objects) + for_each_object(p, s, addr, slab->objects) if (!test_bit(__obj_to_index(s, addr, p), obj_map)) add_location(t, s, get_track(s, p, alloc)); } @@ -6124,16 +6124,16 @@ static int slab_debug_trace_open(struct inode *inode, struct file *filep) for_each_kmem_cache_node(s, node, n) { unsigned long flags; - struct page *page; + struct slab *slab; if (!atomic_long_read(&n->nr_slabs)) continue; spin_lock_irqsave(&n->list_lock, flags); - list_for_each_entry(page, &n->partial, slab_list) - process_slab(t, s, page, alloc, obj_map); - list_for_each_entry(page, &n->full, slab_list) - process_slab(t, s, page, alloc, obj_map); + list_for_each_entry(slab, &n->partial, slab_list) + process_slab(t, s, slab, alloc, obj_map); + list_for_each_entry(slab, &n->full, slab_list) + process_slab(t, s, slab, alloc, obj_map); spin_unlock_irqrestore(&n->list_lock, flags); } From patchwork Mon Oct 4 13:45:58 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Matthew Wilcox X-Patchwork-Id: 12533989 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 208D0C433F5 for ; Mon, 4 Oct 2021 14:00:58 +0000 (UTC) Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by mail.kernel.org (Postfix) with ESMTP id B647E61038 for ; Mon, 4 Oct 2021 14:00:57 +0000 (UTC) DMARC-Filter: OpenDMARC Filter v1.4.1 mail.kernel.org B647E61038 Authentication-Results: mail.kernel.org; dmarc=none (p=none dis=none) header.from=infradead.org Authentication-Results: mail.kernel.org; spf=pass smtp.mailfrom=kvack.org Received: by kanga.kvack.org (Postfix) id 560F694001F; Mon, 4 Oct 2021 10:00:57 -0400 (EDT) Received: by kanga.kvack.org (Postfix, from userid 40) id 5114094000B; Mon, 4 Oct 2021 10:00:57 -0400 (EDT) X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id 400A994001F; Mon, 4 Oct 2021 10:00:57 -0400 (EDT) X-Delivered-To: linux-mm@kvack.org Received: from forelay.hostedemail.com (smtprelay0123.hostedemail.com [216.40.44.123]) by kanga.kvack.org (Postfix) with ESMTP id 3249794000B for ; Mon, 4 Oct 2021 10:00:57 -0400 (EDT) Received: from smtpin29.hostedemail.com (10.5.19.251.rfc1918.com [10.5.19.251]) by forelay01.hostedemail.com (Postfix) with ESMTP id D886D182A8417 for ; Mon, 4 Oct 2021 14:00:56 +0000 (UTC) X-FDA: 78658916112.29.5CED938 Received: from 
casper.infradead.org (casper.infradead.org [90.155.50.34]) by imf13.hostedemail.com (Postfix) with ESMTP id 725C4103C0E0 for ; Mon, 4 Oct 2021 14:00:56 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=infradead.org; s=casper.20170209; h=Content-Transfer-Encoding:MIME-Version: References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Sender:Reply-To: Content-Type:Content-ID:Content-Description; bh=J9jukaufFgVvnjNL3vVX/GH1ni/Bj1sYb4ZFjw8v05U=; b=UyQiuv1uRG0LLswAvCL5YnehcH pDu109FFLHAFbZzHuml97tEpc/ga2XRYioYomeO9s04+aKsUXVzqN/vAoZzNY3UuBwGZSe1SL4hiH 6dM6+kgre/XY5wMYsgoEm8GxLyEAvultmzlRjMe8es1kvNQkqNXKvvAf+ip7tXbGHXHAh12boNsiJ r9neKVDLLiF+ljAiu5UMAByQOJshGZpWkouHDx/AHHm5cVwhj+GFsDpFpnYScotc8As0pF7yzY6Iz yNnpn0wa8DAQRpYXbNGGKOBKPqHaAAWnw5yOgbZdILtn58HYC100T2mi3k8PhGUR3epSA/GtjooJ1 l/mrMBhQ==; Received: from willy by casper.infradead.org with local (Exim 4.94.2 #2 (Red Hat Linux)) id 1mXOUo-00GwF8-0W; Mon, 04 Oct 2021 13:59:18 +0000 From: "Matthew Wilcox (Oracle)" To: linux-mm@kvack.org Cc: "Matthew Wilcox (Oracle)" Subject: [PATCH 10/62] mm/slub: Convert detached_freelist to use a struct slab Date: Mon, 4 Oct 2021 14:45:58 +0100 Message-Id: <20211004134650.4031813-11-willy@infradead.org> X-Mailer: git-send-email 2.31.1 In-Reply-To: <20211004134650.4031813-1-willy@infradead.org> References: <20211004134650.4031813-1-willy@infradead.org> MIME-Version: 1.0 X-Rspamd-Server: rspam02 X-Rspamd-Queue-Id: 725C4103C0E0 X-Stat-Signature: cmgs47bt6peycgmzrzay8wcad5qqzzix Authentication-Results: imf13.hostedemail.com; dkim=pass header.d=infradead.org header.s=casper.20170209 header.b=UyQiuv1u; spf=none (imf13.hostedemail.com: domain of willy@infradead.org has no SPF policy when checking 90.155.50.34) smtp.mailfrom=willy@infradead.org; dmarc=none X-HE-Tag: 1633356056-660439 X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: This gives us a little bit of extra typesafety as we know that nobody called virt_to_page() instead of virt_to_head_page(). Signed-off-by: Matthew Wilcox (Oracle) --- mm/slub.c | 28 ++++++++++++++-------------- 1 file changed, 14 insertions(+), 14 deletions(-) diff --git a/mm/slub.c b/mm/slub.c index f5aadbccdab4..050a0610b3ef 100644 --- a/mm/slub.c +++ b/mm/slub.c @@ -3502,7 +3502,7 @@ void kmem_cache_free(struct kmem_cache *s, void *x) EXPORT_SYMBOL(kmem_cache_free); struct detached_freelist { - struct page *page; + struct slab *slab; void *tail; void *freelist; int cnt; @@ -3522,8 +3522,8 @@ static inline void free_nonslab_page(struct page *page, void *object) /* * This function progressively scans the array with free objects (with * a limited look ahead) and extract objects belonging to the same - * page. It builds a detached freelist directly within the given - * page/objects. This can happen without any need for + * slab. It builds a detached freelist directly within the given + * slab/objects. This can happen without any need for * synchronization, because the objects are owned by running process. * The freelist is build up as a single linked list in the objects. 
* The idea is, that this detached freelist can then be bulk @@ -3538,10 +3538,10 @@ int build_detached_freelist(struct kmem_cache *s, size_t size, size_t first_skipped_index = 0; int lookahead = 3; void *object; - struct page *page; + struct slab *slab; /* Always re-init detached_freelist */ - df->page = NULL; + df->slab = NULL; do { object = p[--size]; @@ -3551,16 +3551,16 @@ int build_detached_freelist(struct kmem_cache *s, size_t size, if (!object) return 0; - page = virt_to_head_page(object); + slab = virt_to_slab(object); if (!s) { /* Handle kalloc'ed objects */ - if (unlikely(!PageSlab(page))) { - free_nonslab_page(page, object); + if (unlikely(!slab_test_cache(slab))) { + free_nonslab_page(slab_page(slab), object); p[size] = NULL; /* mark object processed */ return size; } /* Derive kmem_cache from object */ - df->s = page->slab_cache; + df->s = slab->slab_cache; } else { df->s = cache_from_obj(s, object); /* Support for memcg */ } @@ -3573,7 +3573,7 @@ int build_detached_freelist(struct kmem_cache *s, size_t size, } /* Start new detached freelist */ - df->page = page; + df->slab = slab; set_freepointer(df->s, object, NULL); df->tail = object; df->freelist = object; @@ -3585,8 +3585,8 @@ int build_detached_freelist(struct kmem_cache *s, size_t size, if (!object) continue; /* Skip processed objects */ - /* df->page is always set at this point */ - if (df->page == virt_to_head_page(object)) { + /* df->slab is always set at this point */ + if (df->slab == virt_to_slab(object)) { /* Opportunity build freelist */ set_freepointer(df->s, object, df->freelist); df->freelist = object; @@ -3618,10 +3618,10 @@ void kmem_cache_free_bulk(struct kmem_cache *s, size_t size, void **p) struct detached_freelist df; size = build_detached_freelist(s, size, p, &df); - if (!df.page) + if (!df.slab) continue; - slab_free(df.s, df.page, df.freelist, df.tail, df.cnt, _RET_IP_); + slab_free(df.s, slab_page(df.slab), df.freelist, df.tail, df.cnt, _RET_IP_); } while (likely(size)); } EXPORT_SYMBOL(kmem_cache_free_bulk); From patchwork Mon Oct 4 13:45:59 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Matthew Wilcox X-Patchwork-Id: 12533991 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 05A6CC433EF for ; Mon, 4 Oct 2021 14:02:02 +0000 (UTC) Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by mail.kernel.org (Postfix) with ESMTP id ABAC961131 for ; Mon, 4 Oct 2021 14:02:01 +0000 (UTC) DMARC-Filter: OpenDMARC Filter v1.4.1 mail.kernel.org ABAC961131 Authentication-Results: mail.kernel.org; dmarc=none (p=none dis=none) header.from=infradead.org Authentication-Results: mail.kernel.org; spf=pass smtp.mailfrom=kvack.org Received: by kanga.kvack.org (Postfix) id 47669940020; Mon, 4 Oct 2021 10:02:01 -0400 (EDT) Received: by kanga.kvack.org (Postfix, from userid 40) id 4256F94000B; Mon, 4 Oct 2021 10:02:01 -0400 (EDT) X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id 31566940020; Mon, 4 Oct 2021 10:02:01 -0400 (EDT) X-Delivered-To: linux-mm@kvack.org Received: from forelay.hostedemail.com (smtprelay0027.hostedemail.com [216.40.44.27]) by kanga.kvack.org (Postfix) with ESMTP id 228E094000B for ; Mon, 4 Oct 2021 10:02:01 -0400 (EDT) Received: from smtpin01.hostedemail.com (10.5.19.251.rfc1918.com 
[10.5.19.251]) by forelay01.hostedemail.com (Postfix) with ESMTP id D6DDE18018737 for ; Mon, 4 Oct 2021 14:02:00 +0000 (UTC) X-FDA: 78658918800.01.EDE885E Received: from casper.infradead.org (casper.infradead.org [90.155.50.34]) by imf04.hostedemail.com (Postfix) with ESMTP id 729A8500150C for ; Mon, 4 Oct 2021 14:02:00 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=infradead.org; s=casper.20170209; h=Content-Transfer-Encoding:MIME-Version: References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Sender:Reply-To: Content-Type:Content-ID:Content-Description; bh=aWJ46jxRDg6jG4PJeNanroj8cPTE6X3OYlOsmPUu3Ug=; b=KTh39MmCLt+rdR9EAKrTtDudjh OUjfd2zYieEqvaOAuPoYH+kXoeLngQeBxPNCFwLCSczICgcgtAbluFtruY7PunFlI+dZzYU59Dk7F DLB/JaYg63w4u8FaUWY7knV9Xu14iarRhPcYwtyAnGpKr/UsaD+x/nhEi8qjA+4tMrgDN2kKd9kB2 j+oJ3CQDDBjDY4aio3lo1RCgUpa9os/Jf5I5Id1nkb3Wydpl8OvUv/wYWS9kv+7lt7TYvaIV5U0R/ RMP+Ph2HOwBnRdEcE2F5C2/eJ+1TDPiomPxrkaWMq9Ya/jlGy074SfSxvy48vjc6WHroHLpGwXFSC 6lQlBDcA==; Received: from willy by casper.infradead.org with local (Exim 4.94.2 #2 (Red Hat Linux)) id 1mXOVi-00GwNj-FU; Mon, 04 Oct 2021 14:00:36 +0000 From: "Matthew Wilcox (Oracle)" To: linux-mm@kvack.org Cc: "Matthew Wilcox (Oracle)" Subject: [PATCH 11/62] mm/slub: Convert kfree() to use a struct slab Date: Mon, 4 Oct 2021 14:45:59 +0100 Message-Id: <20211004134650.4031813-12-willy@infradead.org> X-Mailer: git-send-email 2.31.1 In-Reply-To: <20211004134650.4031813-1-willy@infradead.org> References: <20211004134650.4031813-1-willy@infradead.org> MIME-Version: 1.0 X-Rspamd-Queue-Id: 729A8500150C X-Stat-Signature: bew5ksji44wq7ms81wi1q844b1heckwb Authentication-Results: imf04.hostedemail.com; dkim=pass header.d=infradead.org header.s=casper.20170209 header.b=KTh39MmC; spf=none (imf04.hostedemail.com: domain of willy@infradead.org has no SPF policy when checking 90.155.50.34) smtp.mailfrom=willy@infradead.org; dmarc=none X-Rspamd-Server: rspam06 X-HE-Tag: 1633356120-951864 X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: With kfree() using a struct slab, we can also convert slab_free() and do_slab_free() to use a slab instead of a page. Signed-off-by: Matthew Wilcox (Oracle) --- mm/slub.c | 30 +++++++++++++++--------------- 1 file changed, 15 insertions(+), 15 deletions(-) diff --git a/mm/slub.c b/mm/slub.c index 050a0610b3ef..15996ea165ac 100644 --- a/mm/slub.c +++ b/mm/slub.c @@ -3402,11 +3402,11 @@ static void __slab_free(struct kmem_cache *s, struct page *page, * with all sorts of special processing. * * Bulk free of a freelist with several objects (all pointing to the - * same page) possible by specifying head and tail ptr, plus objects + * same slab) possible by specifying head and tail ptr, plus objects * count (cnt). Bulk free indicated by tail pointer being set. */ static __always_inline void do_slab_free(struct kmem_cache *s, - struct page *page, void *head, void *tail, + struct slab *slab, void *head, void *tail, int cnt, unsigned long addr) { void *tail_obj = tail ? 
: head; @@ -3427,7 +3427,7 @@ static __always_inline void do_slab_free(struct kmem_cache *s, /* Same with comment on barrier() in slab_alloc_node() */ barrier(); - if (likely(page == c->page)) { + if (likely(slab_page(slab) == c->page)) { #ifndef CONFIG_PREEMPT_RT void **freelist = READ_ONCE(c->freelist); @@ -3453,7 +3453,7 @@ static __always_inline void do_slab_free(struct kmem_cache *s, local_lock(&s->cpu_slab->lock); c = this_cpu_ptr(s->cpu_slab); - if (unlikely(page != c->page)) { + if (unlikely(slab_page(slab) != c->page)) { local_unlock(&s->cpu_slab->lock); goto redo; } @@ -3468,11 +3468,11 @@ static __always_inline void do_slab_free(struct kmem_cache *s, #endif stat(s, FREE_FASTPATH); } else - __slab_free(s, page, head, tail_obj, cnt, addr); + __slab_free(s, slab_page(slab), head, tail_obj, cnt, addr); } -static __always_inline void slab_free(struct kmem_cache *s, struct page *page, +static __always_inline void slab_free(struct kmem_cache *s, struct slab *slab, void *head, void *tail, int cnt, unsigned long addr) { @@ -3481,13 +3481,13 @@ static __always_inline void slab_free(struct kmem_cache *s, struct page *page, * to remove objects, whose reuse must be delayed. */ if (slab_free_freelist_hook(s, &head, &tail)) - do_slab_free(s, page, head, tail, cnt, addr); + do_slab_free(s, slab, head, tail, cnt, addr); } #ifdef CONFIG_KASAN_GENERIC void ___cache_free(struct kmem_cache *cache, void *x, unsigned long addr) { - do_slab_free(cache, virt_to_head_page(x), x, NULL, 1, addr); + do_slab_free(cache, virt_to_slab(x), x, NULL, 1, addr); } #endif @@ -3496,7 +3496,7 @@ void kmem_cache_free(struct kmem_cache *s, void *x) s = cache_from_obj(s, x); if (!s) return; - slab_free(s, virt_to_head_page(x), x, NULL, 1, _RET_IP_); + slab_free(s, virt_to_slab(x), x, NULL, 1, _RET_IP_); trace_kmem_cache_free(_RET_IP_, x, s->name); } EXPORT_SYMBOL(kmem_cache_free); @@ -3621,7 +3621,7 @@ void kmem_cache_free_bulk(struct kmem_cache *s, size_t size, void **p) if (!df.slab) continue; - slab_free(df.s, slab_page(df.slab), df.freelist, df.tail, df.cnt, _RET_IP_); + slab_free(df.s, df.slab, df.freelist, df.tail, df.cnt, _RET_IP_); } while (likely(size)); } EXPORT_SYMBOL(kmem_cache_free_bulk); @@ -4527,7 +4527,7 @@ EXPORT_SYMBOL(__ksize); void kfree(const void *x) { - struct page *page; + struct slab *slab; void *object = (void *)x; trace_kfree(_RET_IP_, x); @@ -4535,12 +4535,12 @@ void kfree(const void *x) if (unlikely(ZERO_OR_NULL_PTR(x))) return; - page = virt_to_head_page(x); - if (unlikely(!PageSlab(page))) { - free_nonslab_page(page, object); + slab = virt_to_slab(x); + if (unlikely(!SlabAllocation(slab))) { + free_nonslab_page(slab_page(slab), object); return; } - slab_free(page->slab_cache, page, object, NULL, 1, _RET_IP_); + slab_free(slab->slab_cache, slab, object, NULL, 1, _RET_IP_); } EXPORT_SYMBOL(kfree); From patchwork Mon Oct 4 13:46:00 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Matthew Wilcox X-Patchwork-Id: 12533993 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id AA783C433F5 for ; Mon, 4 Oct 2021 14:03:44 +0000 (UTC) Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by mail.kernel.org (Postfix) with ESMTP id 492E861019 for ; Mon, 4 Oct 2021 14:03:44 +0000 (UTC) DMARC-Filter: OpenDMARC Filter v1.4.1 mail.kernel.org 492E861019 
Authentication-Results: mail.kernel.org; dmarc=none (p=none dis=none) header.from=infradead.org Authentication-Results: mail.kernel.org; spf=pass smtp.mailfrom=kvack.org Received: by kanga.kvack.org (Postfix) id D9BEB940021; Mon, 4 Oct 2021 10:03:43 -0400 (EDT) Received: by kanga.kvack.org (Postfix, from userid 40) id D249394000B; Mon, 4 Oct 2021 10:03:43 -0400 (EDT) X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id BC559940021; Mon, 4 Oct 2021 10:03:43 -0400 (EDT) X-Delivered-To: linux-mm@kvack.org Received: from forelay.hostedemail.com (smtprelay0179.hostedemail.com [216.40.44.179]) by kanga.kvack.org (Postfix) with ESMTP id A902694000B for ; Mon, 4 Oct 2021 10:03:43 -0400 (EDT) Received: from smtpin14.hostedemail.com (10.5.19.251.rfc1918.com [10.5.19.251]) by forelay03.hostedemail.com (Postfix) with ESMTP id 6C47C8249980 for ; Mon, 4 Oct 2021 14:03:43 +0000 (UTC) X-FDA: 78658923126.14.E1DFC9E Received: from casper.infradead.org (casper.infradead.org [90.155.50.34]) by imf03.hostedemail.com (Postfix) with ESMTP id 290F63000781 for ; Mon, 4 Oct 2021 14:03:43 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=infradead.org; s=casper.20170209; h=Content-Transfer-Encoding:MIME-Version: References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Sender:Reply-To: Content-Type:Content-ID:Content-Description; bh=BaJ+FyAZ98xC9dLGu/+Y1slNk42dYE38zPd2lQIOSiU=; b=Kg+ztRceQospwR4QOPICjWV2pC iGZ9kTIHn1HoY3BYs8MblV4ssoxm3nAy8fedwURnsPM2GWQB4RMTl42yp7dxv1kGQutGF5RXOyAJV aVtWqocZ78GrJE6GJKSQohrOBUGBA/O0by+edqhD3YTGTYb9k/AkXfrYgIa+0GD+sFfV4ZSSdUeo/ tFECa96OjSz4grud/DpraHgIVdGSWnKdMN8MCs59BNwPt0T1JXE/gwsbYqCXHjrKpZaeaZSaQ+fX0 xcJAdaUbI08+SbLEYXXDmqtR7GBIcF1tsRPLKbw8GdNJGyoFkpegoa+iukIxafDCXBQquXkc4qj1A o+/H8fFQ==; Received: from willy by casper.infradead.org with local (Exim 4.94.2 #2 (Red Hat Linux)) id 1mXOWa-00GwUp-FV; Mon, 04 Oct 2021 14:01:27 +0000 From: "Matthew Wilcox (Oracle)" To: linux-mm@kvack.org Cc: "Matthew Wilcox (Oracle)" Subject: [PATCH 12/62] mm/slub: Convert __slab_free() to take a struct slab Date: Mon, 4 Oct 2021 14:46:00 +0100 Message-Id: <20211004134650.4031813-13-willy@infradead.org> X-Mailer: git-send-email 2.31.1 In-Reply-To: <20211004134650.4031813-1-willy@infradead.org> References: <20211004134650.4031813-1-willy@infradead.org> MIME-Version: 1.0 X-Rspamd-Server: rspam01 X-Rspamd-Queue-Id: 290F63000781 X-Stat-Signature: dsmh5f8m9mtay63nbpfu4bzs5ohm1r5h Authentication-Results: imf03.hostedemail.com; dkim=pass header.d=infradead.org header.s=casper.20170209 header.b=Kg+ztRce; dmarc=none; spf=none (imf03.hostedemail.com: domain of willy@infradead.org has no SPF policy when checking 90.155.50.34) smtp.mailfrom=willy@infradead.org X-HE-Tag: 1633356223-790982 X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: Provide a little more typesafety and also convert free_debug_processing() to take a struct slab. 
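The kfree() hunk below tests slab_test_cache() on the result of virt_to_slab(); both helpers come from earlier in the series rather than from this patch. As a rough sketch of what they are assumed to look like (an approximation for readers of this archive, not the series' authoritative definitions):

	/* Sketch only: virt_to_head_page() already resolves tail pages to the head. */
	static inline struct slab *virt_to_slab(const void *addr)
	{
		return (struct slab *)virt_to_head_page(addr);
	}

	/* Sketch only: same information as PageSlab(), read through the slab type. */
	static inline bool slab_test_cache(const struct slab *slab)
	{
		return PageSlab((const struct page *)slab);
	}

The point of the conversion is that once a value has type struct slab *, it is known to have come through a head-page lookup like the one above, which is the extra type safety the earlier patches in this series mention.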
Signed-off-by: Matthew Wilcox (Oracle) --- mm/slub.c | 52 ++++++++++++++++++++++++++-------------------------- 1 file changed, 26 insertions(+), 26 deletions(-) diff --git a/mm/slub.c b/mm/slub.c index 15996ea165ac..0a566a03d424 100644 --- a/mm/slub.c +++ b/mm/slub.c @@ -1342,21 +1342,21 @@ static inline int free_consistency_checks(struct kmem_cache *s, /* Supports checking bulk free of a constructed freelist */ static noinline int free_debug_processing( - struct kmem_cache *s, struct page *page, + struct kmem_cache *s, struct slab *slab, void *head, void *tail, int bulk_cnt, unsigned long addr) { - struct kmem_cache_node *n = get_node(s, page_to_nid(page)); + struct kmem_cache_node *n = get_node(s, slab_nid(slab)); void *object = head; int cnt = 0; unsigned long flags, flags2; int ret = 0; spin_lock_irqsave(&n->list_lock, flags); - slab_lock(page, &flags2); + slab_lock(slab_page(slab), &flags2); if (s->flags & SLAB_CONSISTENCY_CHECKS) { - if (!check_slab(s, page)) + if (!check_slab(s, slab_page(slab))) goto out; } @@ -1364,13 +1364,13 @@ static noinline int free_debug_processing( cnt++; if (s->flags & SLAB_CONSISTENCY_CHECKS) { - if (!free_consistency_checks(s, page, object, addr)) + if (!free_consistency_checks(s, slab_page(slab), object, addr)) goto out; } if (s->flags & SLAB_STORE_USER) set_track(s, object, TRACK_FREE, addr); - trace(s, page, object, 0); + trace(s, slab_page(slab), object, 0); /* Freepointer not overwritten by init_object(), SLAB_POISON moved it */ init_object(s, object, SLUB_RED_INACTIVE); @@ -1383,10 +1383,10 @@ static noinline int free_debug_processing( out: if (cnt != bulk_cnt) - slab_err(s, page, "Bulk freelist count(%d) invalid(%d)\n", + slab_err(s, slab_page(slab), "Bulk freelist count(%d) invalid(%d)\n", bulk_cnt, cnt); - slab_unlock(page, &flags2); + slab_unlock(slab_page(slab), &flags2); spin_unlock_irqrestore(&n->list_lock, flags); if (!ret) slab_fix(s, "Object at 0x%p not freed", object); @@ -1609,7 +1609,7 @@ static inline int alloc_debug_processing(struct kmem_cache *s, struct page *page, void *object, unsigned long addr) { return 0; } static inline int free_debug_processing( - struct kmem_cache *s, struct page *page, + struct kmem_cache *s, struct slab *slab, void *head, void *tail, int bulk_cnt, unsigned long addr) { return 0; } @@ -3270,17 +3270,17 @@ EXPORT_SYMBOL(kmem_cache_alloc_node_trace); * have a longer lifetime than the cpu slabs in most processing loads. * * So we still attempt to reduce cache line usage. Just take the slab - * lock and free the item. If there is no additional partial page + * lock and free the item. If there is no additional partial slab * handling required then we can return immediately. 
*/ -static void __slab_free(struct kmem_cache *s, struct page *page, +static void __slab_free(struct kmem_cache *s, struct slab *slab, void *head, void *tail, int cnt, unsigned long addr) { void *prior; int was_frozen; - struct page new; + struct slab new; unsigned long counters; struct kmem_cache_node *n = NULL; unsigned long flags; @@ -3291,7 +3291,7 @@ static void __slab_free(struct kmem_cache *s, struct page *page, return; if (kmem_cache_debug(s) && - !free_debug_processing(s, page, head, tail, cnt, addr)) + !free_debug_processing(s, slab, head, tail, cnt, addr)) return; do { @@ -3299,8 +3299,8 @@ static void __slab_free(struct kmem_cache *s, struct page *page, spin_unlock_irqrestore(&n->list_lock, flags); n = NULL; } - prior = page->freelist; - counters = page->counters; + prior = slab->freelist; + counters = slab->counters; set_freepointer(s, tail, prior); new.counters = counters; was_frozen = new.frozen; @@ -3319,7 +3319,7 @@ static void __slab_free(struct kmem_cache *s, struct page *page, } else { /* Needs to be taken off a list */ - n = get_node(s, page_to_nid(page)); + n = get_node(s, slab_nid(slab)); /* * Speculatively acquire the list_lock. * If the cmpxchg does not succeed then we may @@ -3333,7 +3333,7 @@ static void __slab_free(struct kmem_cache *s, struct page *page, } } - } while (!cmpxchg_double_slab(s, page, + } while (!cmpxchg_double_slab(s, slab_page(slab), prior, counters, head, new.counters, "__slab_free")); @@ -3348,10 +3348,10 @@ static void __slab_free(struct kmem_cache *s, struct page *page, stat(s, FREE_FROZEN); } else if (new.frozen) { /* - * If we just froze the page then put it onto the + * If we just froze the slab then put it onto the * per cpu partial list. */ - put_cpu_partial(s, page, 1); + put_cpu_partial(s, slab_page(slab), 1); stat(s, CPU_PARTIAL_FREE); } @@ -3366,8 +3366,8 @@ static void __slab_free(struct kmem_cache *s, struct page *page, * then add it. */ if (!kmem_cache_has_cpu_partial(s) && unlikely(!prior)) { - remove_full(s, n, page); - add_partial(n, page, DEACTIVATE_TO_TAIL); + remove_full(s, n, slab_page(slab)); + add_partial(n, slab_page(slab), DEACTIVATE_TO_TAIL); stat(s, FREE_ADD_PARTIAL); } spin_unlock_irqrestore(&n->list_lock, flags); @@ -3378,16 +3378,16 @@ static void __slab_free(struct kmem_cache *s, struct page *page, /* * Slab on the partial list. 
*/ - remove_partial(n, page); + remove_partial(n, slab_page(slab)); stat(s, FREE_REMOVE_PARTIAL); } else { /* Slab must be on the full list */ - remove_full(s, n, page); + remove_full(s, n, slab_page(slab)); } spin_unlock_irqrestore(&n->list_lock, flags); stat(s, FREE_SLAB); - discard_slab(s, page); + discard_slab(s, slab_page(slab)); } /* @@ -3468,7 +3468,7 @@ static __always_inline void do_slab_free(struct kmem_cache *s, #endif stat(s, FREE_FASTPATH); } else - __slab_free(s, slab_page(slab), head, tail_obj, cnt, addr); + __slab_free(s, slab, head, tail_obj, cnt, addr); } @@ -4536,7 +4536,7 @@ void kfree(const void *x) return; slab = virt_to_slab(x); - if (unlikely(!SlabAllocation(slab))) { + if (unlikely(!slab_test_cache(slab))) { free_nonslab_page(slab_page(slab), object); return; } From patchwork Mon Oct 4 13:46:01 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Matthew Wilcox X-Patchwork-Id: 12533995 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 98F81C433EF for ; Mon, 4 Oct 2021 14:03:59 +0000 (UTC) Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by mail.kernel.org (Postfix) with ESMTP id 3E8CD61019 for ; Mon, 4 Oct 2021 14:03:59 +0000 (UTC) DMARC-Filter: OpenDMARC Filter v1.4.1 mail.kernel.org 3E8CD61019 Authentication-Results: mail.kernel.org; dmarc=none (p=none dis=none) header.from=infradead.org Authentication-Results: mail.kernel.org; spf=pass smtp.mailfrom=kvack.org Received: by kanga.kvack.org (Postfix) id D4E80940022; Mon, 4 Oct 2021 10:03:58 -0400 (EDT) Received: by kanga.kvack.org (Postfix, from userid 40) id CFD9E94000B; Mon, 4 Oct 2021 10:03:58 -0400 (EDT) X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id BC57F940022; Mon, 4 Oct 2021 10:03:58 -0400 (EDT) X-Delivered-To: linux-mm@kvack.org Received: from forelay.hostedemail.com (smtprelay0018.hostedemail.com [216.40.44.18]) by kanga.kvack.org (Postfix) with ESMTP id AE7CD94000B for ; Mon, 4 Oct 2021 10:03:58 -0400 (EDT) Received: from smtpin39.hostedemail.com (10.5.19.251.rfc1918.com [10.5.19.251]) by forelay05.hostedemail.com (Postfix) with ESMTP id 670D718116E0D for ; Mon, 4 Oct 2021 14:03:58 +0000 (UTC) X-FDA: 78658923756.39.6A7B184 Received: from casper.infradead.org (casper.infradead.org [90.155.50.34]) by imf28.hostedemail.com (Postfix) with ESMTP id A1CE49002F10 for ; Mon, 4 Oct 2021 14:03:48 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=infradead.org; s=casper.20170209; h=Content-Transfer-Encoding:MIME-Version: References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Sender:Reply-To: Content-Type:Content-ID:Content-Description; bh=HdJtNDMKWGqGg3a+1JjP6TXy0ykw9trtwmpAPPdvVBY=; b=JgHW0OHHfvpKi4hL7NJWSLvv64 U5HRsD66MhlPfKWivycu29cX0l5YEbGNOHG/St2GMlK3fhNp6EBRZHnocxK89PmjsBW5S/+fcjaLY L/8n0FmVCBTJWSNL7ZoOsTmtke/AALdNjVbWqVPNLJ9ddXB4Di9GbBJfFLe50d/+T12SlyDpNT87h kICwqafwDEf+ufJ5J4T0na3iDUynatSXcdSdo1In8n/TaWpi4Rw3JaCtHpcHv7ZVOJRN/xlk9YSv1 60IZJWAjHoLZuQ7c5SKXZLio9tt7pemrpDtuyb/kLkv/5eA2eE0NMlP7YfPqFr64OSUfv7YyQ3yDa h5Lg8a/w==; Received: from willy by casper.infradead.org with local (Exim 4.94.2 #2 (Red Hat Linux)) id 1mXOXw-00Gwcn-Ga; Mon, 04 Oct 2021 14:02:41 +0000 From: "Matthew Wilcox (Oracle)" To: linux-mm@kvack.org Cc: "Matthew Wilcox (Oracle)" Subject: [PATCH 
13/62] mm/slub: Convert new_slab() to return a struct slab Date: Mon, 4 Oct 2021 14:46:01 +0100 Message-Id: <20211004134650.4031813-14-willy@infradead.org> X-Mailer: git-send-email 2.31.1 In-Reply-To: <20211004134650.4031813-1-willy@infradead.org> References: <20211004134650.4031813-1-willy@infradead.org> MIME-Version: 1.0 Authentication-Results: imf28.hostedemail.com; dkim=pass header.d=infradead.org header.s=casper.20170209 header.b=JgHW0OHH; spf=none (imf28.hostedemail.com: domain of willy@infradead.org has no SPF policy when checking 90.155.50.34) smtp.mailfrom=willy@infradead.org; dmarc=none X-Rspamd-Server: rspam04 X-Rspamd-Queue-Id: A1CE49002F10 X-Stat-Signature: qi8hjaxmh4b6qrogeuho7abq3aemcj4c X-HE-Tag: 1633356228-534408 X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: We can cast directly from struct page to struct slab in alloc_slab_page() because the page pointer returned from the page allocator is guaranteed to be a head page. Signed-off-by: Matthew Wilcox (Oracle) --- mm/slub.c | 62 +++++++++++++++++++++++++++---------------------------- 1 file changed, 31 insertions(+), 31 deletions(-) diff --git a/mm/slub.c b/mm/slub.c index 0a566a03d424..555c46cbae1f 100644 --- a/mm/slub.c +++ b/mm/slub.c @@ -1753,8 +1753,8 @@ static void *setup_object(struct kmem_cache *s, struct page *page, /* * Slab allocation and freeing */ -static inline struct page *alloc_slab_page(struct kmem_cache *s, - gfp_t flags, int node, struct kmem_cache_order_objects oo) +static inline struct slab *alloc_slab(struct kmem_cache *s, gfp_t flags, + int node, struct kmem_cache_order_objects oo) { struct page *page; unsigned int order = oo_order(oo); @@ -1764,7 +1764,7 @@ static inline struct page *alloc_slab_page(struct kmem_cache *s, else page = __alloc_pages_node(node, flags, order); - return page; + return (struct slab *)page; } #ifdef CONFIG_SLAB_FREELIST_RANDOM @@ -1876,9 +1876,9 @@ static inline bool shuffle_freelist(struct kmem_cache *s, struct page *page) } #endif /* CONFIG_SLAB_FREELIST_RANDOM */ -static struct page *allocate_slab(struct kmem_cache *s, gfp_t flags, int node) +static struct slab *allocate_slab(struct kmem_cache *s, gfp_t flags, int node) { - struct page *page; + struct slab *slab; struct kmem_cache_order_objects oo = s->oo; gfp_t alloc_gfp; void *start, *p, *next; @@ -1897,63 +1897,63 @@ static struct page *allocate_slab(struct kmem_cache *s, gfp_t flags, int node) if ((alloc_gfp & __GFP_DIRECT_RECLAIM) && oo_order(oo) > oo_order(s->min)) alloc_gfp = (alloc_gfp | __GFP_NOMEMALLOC) & ~(__GFP_RECLAIM|__GFP_NOFAIL); - page = alloc_slab_page(s, alloc_gfp, node, oo); - if (unlikely(!page)) { + slab = alloc_slab(s, alloc_gfp, node, oo); + if (unlikely(!slab)) { oo = s->min; alloc_gfp = flags; /* * Allocation may have failed due to fragmentation. 
* Try a lower order alloc if possible */ - page = alloc_slab_page(s, alloc_gfp, node, oo); - if (unlikely(!page)) + slab = alloc_slab(s, alloc_gfp, node, oo); + if (unlikely(!slab)) goto out; stat(s, ORDER_FALLBACK); } - page->objects = oo_objects(oo); + slab->objects = oo_objects(oo); - account_slab_page(page, oo_order(oo), s, flags); + account_slab(slab, oo_order(oo), s, flags); - page->slab_cache = s; - __SetPageSlab(page); - if (page_is_pfmemalloc(page)) - SetPageSlabPfmemalloc(page); + slab->slab_cache = s; + __SetPageSlab(slab_page(slab)); + if (page_is_pfmemalloc(slab_page(slab))) + slab_set_pfmemalloc(slab); - kasan_poison_slab(page); + kasan_poison_slab(slab_page(slab)); - start = page_address(page); + start = slab_address(slab); - setup_page_debug(s, page, start); + setup_page_debug(s, slab_page(slab), start); - shuffle = shuffle_freelist(s, page); + shuffle = shuffle_freelist(s, slab_page(slab)); if (!shuffle) { start = fixup_red_left(s, start); - start = setup_object(s, page, start); - page->freelist = start; - for (idx = 0, p = start; idx < page->objects - 1; idx++) { + start = setup_object(s, slab_page(slab), start); + slab->freelist = start; + for (idx = 0, p = start; idx < slab->objects - 1; idx++) { next = p + s->size; - next = setup_object(s, page, next); + next = setup_object(s, slab_page(slab), next); set_freepointer(s, p, next); p = next; } set_freepointer(s, p, NULL); } - page->inuse = page->objects; - page->frozen = 1; + slab->inuse = slab->objects; + slab->frozen = 1; out: - if (!page) + if (!slab) return NULL; - inc_slabs_node(s, page_to_nid(page), page->objects); + inc_slabs_node(s, slab_nid(slab), slab->objects); - return page; + return slab; } -static struct page *new_slab(struct kmem_cache *s, gfp_t flags, int node) +static struct slab *new_slab(struct kmem_cache *s, gfp_t flags, int node) { if (unlikely(flags & GFP_SLAB_BUG_MASK)) flags = kmalloc_fix_flags(flags); @@ -2991,7 +2991,7 @@ static void *___slab_alloc(struct kmem_cache *s, gfp_t gfpflags, int node, goto check_new_page; slub_put_cpu_ptr(s->cpu_slab); - page = new_slab(s, gfpflags, node); + page = slab_page(new_slab(s, gfpflags, node)); c = slub_get_cpu_ptr(s->cpu_slab); if (unlikely(!page)) { @@ -3896,7 +3896,7 @@ static void early_kmem_cache_node_alloc(int node) BUG_ON(kmem_cache_node->size < sizeof(struct kmem_cache_node)); - page = new_slab(kmem_cache_node, GFP_NOWAIT, node); + page = slab_page(new_slab(kmem_cache_node, GFP_NOWAIT, node)); BUG_ON(!page); if (page_to_nid(page) != node) { From patchwork Mon Oct 4 13:46:02 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Matthew Wilcox X-Patchwork-Id: 12534013 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 3A4CBC433F5 for ; Mon, 4 Oct 2021 14:04:48 +0000 (UTC) Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by mail.kernel.org (Postfix) with ESMTP id D749D611C1 for ; Mon, 4 Oct 2021 14:04:47 +0000 (UTC) DMARC-Filter: OpenDMARC Filter v1.4.1 mail.kernel.org D749D611C1 Authentication-Results: mail.kernel.org; dmarc=none (p=none dis=none) header.from=infradead.org Authentication-Results: mail.kernel.org; spf=pass smtp.mailfrom=kvack.org Received: by kanga.kvack.org (Postfix) id 758D9940023; Mon, 4 Oct 2021 10:04:47 -0400 (EDT) Received: by kanga.kvack.org (Postfix, from userid 40) id 
70B3394000B; Mon, 4 Oct 2021 10:04:47 -0400 (EDT) X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id 5F87B940023; Mon, 4 Oct 2021 10:04:47 -0400 (EDT) X-Delivered-To: linux-mm@kvack.org Received: from forelay.hostedemail.com (smtprelay0070.hostedemail.com [216.40.44.70]) by kanga.kvack.org (Postfix) with ESMTP id 4F8BD94000B for ; Mon, 4 Oct 2021 10:04:47 -0400 (EDT) Received: from smtpin39.hostedemail.com (10.5.19.251.rfc1918.com [10.5.19.251]) by forelay03.hostedemail.com (Postfix) with ESMTP id 0D77282499A8 for ; Mon, 4 Oct 2021 14:04:47 +0000 (UTC) X-FDA: 78658925814.39.E812C72 Received: from casper.infradead.org (casper.infradead.org [90.155.50.34]) by imf19.hostedemail.com (Postfix) with ESMTP id B8809B00244D for ; Mon, 4 Oct 2021 14:04:46 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=infradead.org; s=casper.20170209; h=Content-Transfer-Encoding:MIME-Version: References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Sender:Reply-To: Content-Type:Content-ID:Content-Description; bh=4nWAHAlDFDgLThN8urngdmIGHoeLFUUlwM8UhgaXB94=; b=ewC5UKLuCe7K2g/g1axre9ZxwI GiXmX9Db4BOyrHEGJ13LkIfyuVN6dGncbFSgIkvIc2nsUWJ+KONzbKu5F0Q13r1frWC/abuJKkeV8 X/v/KzPdIuVjACwZl5FYiuQ/SnIIprvpeYBD7uZpGjO3ZDIuT7/s7xdGaa5mwuYrwc5VAZDozAjzn D+QHF4hStfpbgr43/L+xkj0aS2xsJytUK2b14KjXKTNz0qymHjXR+g27NkwAgsYSBP/Or3Hsg54OI b/9hkHVTWP5RlmdpIkbSu4LR2CUNx8h8xYkr5r8shCCem3iTzHAuH/WGEXRm6kZIteyljFpUwNFec MG2VlnBw==; Received: from willy by casper.infradead.org with local (Exim 4.94.2 #2 (Red Hat Linux)) id 1mXOYj-00Gwl4-O5; Mon, 04 Oct 2021 14:03:35 +0000 From: "Matthew Wilcox (Oracle)" To: linux-mm@kvack.org Cc: "Matthew Wilcox (Oracle)" Subject: [PATCH 14/62] mm/slub: Convert early_kmem_cache_node_alloc() to use struct slab Date: Mon, 4 Oct 2021 14:46:02 +0100 Message-Id: <20211004134650.4031813-15-willy@infradead.org> X-Mailer: git-send-email 2.31.1 In-Reply-To: <20211004134650.4031813-1-willy@infradead.org> References: <20211004134650.4031813-1-willy@infradead.org> MIME-Version: 1.0 X-Rspamd-Server: rspam01 X-Rspamd-Queue-Id: B8809B00244D X-Stat-Signature: 7r9r8mpfaof474akipwuuxdh5no5qqnp Authentication-Results: imf19.hostedemail.com; dkim=pass header.d=infradead.org header.s=casper.20170209 header.b=ewC5UKLu; dmarc=none; spf=none (imf19.hostedemail.com: domain of willy@infradead.org has no SPF policy when checking 90.155.50.34) smtp.mailfrom=willy@infradead.org X-HE-Tag: 1633356286-630968 X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: Add a little type safety. 
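The conversions in this part of the series lean on slab_page(), slab_nid() and slab_address(), which are introduced separately. A minimal sketch of how such wrappers can be written, assuming struct slab overlays struct page (the same assumption the previous patch relies on when casting the page allocator's return value):

	/* Sketch only: struct slab is assumed to share struct page's layout. */
	static inline struct page *slab_page(struct slab *slab)
	{
		return (struct page *)slab;
	}

	static inline void *slab_address(struct slab *slab)
	{
		return page_address(slab_page(slab));
	}

	static inline int slab_nid(struct slab *slab)
	{
		return page_to_nid(slab_page(slab));
	}

Being thin wrappers, they leave the generated code unchanged; only the compile-time checking improves.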
Signed-off-by: Matthew Wilcox (Oracle) --- mm/slub.c | 20 ++++++++++---------- 1 file changed, 10 insertions(+), 10 deletions(-) diff --git a/mm/slub.c b/mm/slub.c index 555c46cbae1f..41c4ccd67d95 100644 --- a/mm/slub.c +++ b/mm/slub.c @@ -3891,38 +3891,38 @@ static struct kmem_cache *kmem_cache_node; */ static void early_kmem_cache_node_alloc(int node) { - struct page *page; + struct slab *slab; struct kmem_cache_node *n; BUG_ON(kmem_cache_node->size < sizeof(struct kmem_cache_node)); - page = slab_page(new_slab(kmem_cache_node, GFP_NOWAIT, node)); + slab = new_slab(kmem_cache_node, GFP_NOWAIT, node); - BUG_ON(!page); - if (page_to_nid(page) != node) { + BUG_ON(!slab); + if (slab_nid(slab) != node) { pr_err("SLUB: Unable to allocate memory from node %d\n", node); pr_err("SLUB: Allocating a useless per node structure in order to be able to continue\n"); } - n = page->freelist; + n = slab->freelist; BUG_ON(!n); #ifdef CONFIG_SLUB_DEBUG init_object(kmem_cache_node, n, SLUB_RED_ACTIVE); init_tracking(kmem_cache_node, n); #endif n = kasan_slab_alloc(kmem_cache_node, n, GFP_KERNEL, false); - page->freelist = get_freepointer(kmem_cache_node, n); - page->inuse = 1; - page->frozen = 0; + slab->freelist = get_freepointer(kmem_cache_node, n); + slab->inuse = 1; + slab->frozen = 0; kmem_cache_node->node[node] = n; init_kmem_cache_node(n); - inc_slabs_node(kmem_cache_node, node, page->objects); + inc_slabs_node(kmem_cache_node, node, slab->objects); /* * No locks need to be taken here as it has just been * initialized and there is no concurrent access. */ - __add_partial(n, page, DEACTIVATE_TO_HEAD); + __add_partial(n, slab_page(slab), DEACTIVATE_TO_HEAD); } static void free_kmem_cache_nodes(struct kmem_cache *s) From patchwork Mon Oct 4 13:46:03 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Matthew Wilcox X-Patchwork-Id: 12534015 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id D4939C433F5 for ; Mon, 4 Oct 2021 14:05:45 +0000 (UTC) Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by mail.kernel.org (Postfix) with ESMTP id 6F19C61184 for ; Mon, 4 Oct 2021 14:05:45 +0000 (UTC) DMARC-Filter: OpenDMARC Filter v1.4.1 mail.kernel.org 6F19C61184 Authentication-Results: mail.kernel.org; dmarc=none (p=none dis=none) header.from=infradead.org Authentication-Results: mail.kernel.org; spf=pass smtp.mailfrom=kvack.org Received: by kanga.kvack.org (Postfix) id 0EF4C940025; Mon, 4 Oct 2021 10:05:45 -0400 (EDT) Received: by kanga.kvack.org (Postfix, from userid 40) id 09FD294000B; Mon, 4 Oct 2021 10:05:45 -0400 (EDT) X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id E5BFF940025; Mon, 4 Oct 2021 10:05:44 -0400 (EDT) X-Delivered-To: linux-mm@kvack.org Received: from forelay.hostedemail.com (smtprelay0191.hostedemail.com [216.40.44.191]) by kanga.kvack.org (Postfix) with ESMTP id D405C94000B for ; Mon, 4 Oct 2021 10:05:44 -0400 (EDT) Received: from smtpin20.hostedemail.com (10.5.19.251.rfc1918.com [10.5.19.251]) by forelay01.hostedemail.com (Postfix) with ESMTP id 907F1181C98E5 for ; Mon, 4 Oct 2021 14:05:44 +0000 (UTC) X-FDA: 78658928208.20.8FBE781 Received: from casper.infradead.org (casper.infradead.org [90.155.50.34]) by imf27.hostedemail.com (Postfix) with ESMTP id 0ABC37008654 for ; Mon, 4 Oct 
2021 14:05:43 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=infradead.org; s=casper.20170209; h=Content-Transfer-Encoding:MIME-Version: References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Sender:Reply-To: Content-Type:Content-ID:Content-Description; bh=Job6CpG8PoMKpEgpVLGmMbDGiQduhajJXwL+GlTmnwQ=; b=c/5YNPNgtsD2z4WC9i6ZLk1bA6 ce60ORrWmg+oVi2tZ4HiGZ+9YCdbSsvIUa2MPJLBDt7+LDofxyTqmQqy7ZaA76H/4fMaTJAS0j98m tbdpW1dqbNtdR/fDyngqa8JWBTdWfKy9CJ4vtRx97zo2C1MeUtaxg32SzLHZGYNuJ9eS6rYRl6Nuz omvmuEscoEJSofUFdRaKnUjPO5gk7PKgJzOVUI56/kEYrSroXY7Xe9M88PeHJGzTW5mCHKCqw9CWL DJQ5sQPJV7CZ8x91Wg3s0WLM58pvuHvIWf9boJMnYY0jlf0Nlhhi+511kG7VDKPQim2oeDhp6np9B Z0vgGoBA==; Received: from willy by casper.infradead.org with local (Exim 4.94.2 #2 (Red Hat Linux)) id 1mXOZR-00GwsX-5v; Mon, 04 Oct 2021 14:04:14 +0000 From: "Matthew Wilcox (Oracle)" To: linux-mm@kvack.org Cc: "Matthew Wilcox (Oracle)" Subject: [PATCH 15/62] mm/slub: Convert kmem_cache_cpu to struct slab Date: Mon, 4 Oct 2021 14:46:03 +0100 Message-Id: <20211004134650.4031813-16-willy@infradead.org> X-Mailer: git-send-email 2.31.1 In-Reply-To: <20211004134650.4031813-1-willy@infradead.org> References: <20211004134650.4031813-1-willy@infradead.org> MIME-Version: 1.0 X-Rspamd-Server: rspam03 X-Rspamd-Queue-Id: 0ABC37008654 X-Stat-Signature: xe6o47xtc1m9gcc4em5fxphzo931jbce Authentication-Results: imf27.hostedemail.com; dkim=pass header.d=infradead.org header.s=casper.20170209 header.b="c/5YNPNg"; dmarc=none; spf=none (imf27.hostedemail.com: domain of willy@infradead.org has no SPF policy when checking 90.155.50.34) smtp.mailfrom=willy@infradead.org X-HE-Tag: 1633356343-105933 X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: To avoid converting from page to slab, we have to convert all these functions at once. Adds a little type-safety. 
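The hunks below keep going through slub_percpu_partial() and its helpers while changing c->partial to a struct slab *, and the per-cpu partial list is chained through the slab->next field. For reference, the accessors are assumed to keep the shape they already have in slub_def.h, only with the slab type flowing through them (a sketch, not part of this patch):

	#define slub_percpu_partial(c)			((c)->partial)

	#define slub_set_percpu_partial(c, p)		\
	({						\
		slub_percpu_partial(c) = (p)->next;	\
	})

	#define slub_percpu_partial_read_once(c)	READ_ONCE(slub_percpu_partial(c))

With c->partial typed as struct slab *, the (p)->next assignment only typechecks if slab->next is a struct slab * as well, which is why put_cpu_partial() and the unfreeze paths have to be converted in the same patch.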
Signed-off-by: Matthew Wilcox (Oracle) --- include/linux/slub_def.h | 4 +- mm/slub.c | 208 +++++++++++++++++++-------------------- 2 files changed, 106 insertions(+), 106 deletions(-) diff --git a/include/linux/slub_def.h b/include/linux/slub_def.h index 85499f0586b0..3cc64e9f988c 100644 --- a/include/linux/slub_def.h +++ b/include/linux/slub_def.h @@ -48,9 +48,9 @@ enum stat_item { struct kmem_cache_cpu { void **freelist; /* Pointer to next available object */ unsigned long tid; /* Globally unique transaction id */ - struct page *page; /* The slab from which we are allocating */ + struct slab *slab; /* The slab from which we are allocating */ #ifdef CONFIG_SLUB_CPU_PARTIAL - struct page *partial; /* Partially allocated frozen slabs */ + struct slab *partial; /* Partially allocated frozen slabs */ #endif local_lock_t lock; /* Protects the fields above */ #ifdef CONFIG_SLUB_STATS diff --git a/mm/slub.c b/mm/slub.c index 41c4ccd67d95..d849b644d0ed 100644 --- a/mm/slub.c +++ b/mm/slub.c @@ -2084,9 +2084,9 @@ static inline void *acquire_slab(struct kmem_cache *s, } #ifdef CONFIG_SLUB_CPU_PARTIAL -static void put_cpu_partial(struct kmem_cache *s, struct page *page, int drain); +static void put_cpu_partial(struct kmem_cache *s, struct slab *slab, int drain); #else -static inline void put_cpu_partial(struct kmem_cache *s, struct page *page, +static inline void put_cpu_partial(struct kmem_cache *s, struct slab *slab, int drain) { } #endif static inline bool pfmemalloc_match(struct page *page, gfp_t gfpflags); @@ -2095,9 +2095,9 @@ static inline bool pfmemalloc_match(struct page *page, gfp_t gfpflags); * Try to allocate a partial slab from a specific node. */ static void *get_partial_node(struct kmem_cache *s, struct kmem_cache_node *n, - struct page **ret_page, gfp_t gfpflags) + struct slab **ret_slab, gfp_t gfpflags) { - struct page *page, *page2; + struct slab *slab, *slab2; void *object = NULL; unsigned int available = 0; unsigned long flags; @@ -2113,23 +2113,23 @@ static void *get_partial_node(struct kmem_cache *s, struct kmem_cache_node *n, return NULL; spin_lock_irqsave(&n->list_lock, flags); - list_for_each_entry_safe(page, page2, &n->partial, slab_list) { + list_for_each_entry_safe(slab, slab2, &n->partial, slab_list) { void *t; - if (!pfmemalloc_match(page, gfpflags)) + if (!pfmemalloc_match(slab_page(slab), gfpflags)) continue; - t = acquire_slab(s, n, page, object == NULL, &objects); + t = acquire_slab(s, n, slab_page(slab), object == NULL, &objects); if (!t) break; available += objects; if (!object) { - *ret_page = page; + *ret_slab = slab; stat(s, ALLOC_FROM_PARTIAL); object = t; } else { - put_cpu_partial(s, page, 0); + put_cpu_partial(s, slab, 0); stat(s, CPU_PARTIAL_NODE); } if (!kmem_cache_has_cpu_partial(s) @@ -2142,10 +2142,10 @@ static void *get_partial_node(struct kmem_cache *s, struct kmem_cache_node *n, } /* - * Get a page from somewhere. Search in increasing NUMA distances. + * Get a slab from somewhere. Search in increasing NUMA distances. 
*/ static void *get_any_partial(struct kmem_cache *s, gfp_t flags, - struct page **ret_page) + struct slab **ret_slab) { #ifdef CONFIG_NUMA struct zonelist *zonelist; @@ -2187,7 +2187,7 @@ static void *get_any_partial(struct kmem_cache *s, gfp_t flags, if (n && cpuset_zone_allowed(zone, flags) && n->nr_partial > s->min_partial) { - object = get_partial_node(s, n, ret_page, flags); + object = get_partial_node(s, n, ret_slab, flags); if (object) { /* * Don't check read_mems_allowed_retry() @@ -2206,10 +2206,10 @@ static void *get_any_partial(struct kmem_cache *s, gfp_t flags, } /* - * Get a partial page, lock it and return it. + * Get a partial slab, lock it and return it. */ static void *get_partial(struct kmem_cache *s, gfp_t flags, int node, - struct page **ret_page) + struct slab **ret_slab) { void *object; int searchnode = node; @@ -2217,11 +2217,11 @@ static void *get_partial(struct kmem_cache *s, gfp_t flags, int node, if (node == NUMA_NO_NODE) searchnode = numa_mem_id(); - object = get_partial_node(s, get_node(s, searchnode), ret_page, flags); + object = get_partial_node(s, get_node(s, searchnode), ret_slab, flags); if (object || node != NUMA_NO_NODE) return object; - return get_any_partial(s, flags, ret_page); + return get_any_partial(s, flags, ret_slab); } #ifdef CONFIG_PREEMPTION @@ -2506,7 +2506,7 @@ static void unfreeze_partials(struct kmem_cache *s) unsigned long flags; local_lock_irqsave(&s->cpu_slab->lock, flags); - partial_page = this_cpu_read(s->cpu_slab->partial); + partial_page = slab_page(this_cpu_read(s->cpu_slab->partial)); this_cpu_write(s->cpu_slab->partial, NULL); local_unlock_irqrestore(&s->cpu_slab->lock, flags); @@ -2519,7 +2519,7 @@ static void unfreeze_partials_cpu(struct kmem_cache *s, { struct page *partial_page; - partial_page = slub_percpu_partial(c); + partial_page = slab_page(slub_percpu_partial(c)); c->partial = NULL; if (partial_page) @@ -2527,52 +2527,52 @@ static void unfreeze_partials_cpu(struct kmem_cache *s, } /* - * Put a page that was just frozen (in __slab_free|get_partial_node) into a - * partial page slot if available. + * Put a slab that was just frozen (in __slab_free|get_partial_node) into a + * partial slab slot if available. * * If we did not find a slot then simply move all the partials to the * per node partial list. */ -static void put_cpu_partial(struct kmem_cache *s, struct page *page, int drain) +static void put_cpu_partial(struct kmem_cache *s, struct slab *slab, int drain) { - struct page *oldpage; - struct page *page_to_unfreeze = NULL; + struct slab *oldslab; + struct slab *slab_to_unfreeze = NULL; unsigned long flags; - int pages = 0; + int slabs = 0; int pobjects = 0; local_lock_irqsave(&s->cpu_slab->lock, flags); - oldpage = this_cpu_read(s->cpu_slab->partial); + oldslab = this_cpu_read(s->cpu_slab->partial); - if (oldpage) { - if (drain && oldpage->pobjects > slub_cpu_partial(s)) { + if (oldslab) { + if (drain && oldslab->pobjects > slub_cpu_partial(s)) { /* * Partial array is full. Move the existing set to the * per node partial list. Postpone the actual unfreezing * outside of the critical section. 
*/ - page_to_unfreeze = oldpage; - oldpage = NULL; + slab_to_unfreeze = oldslab; + oldslab = NULL; } else { - pobjects = oldpage->pobjects; - pages = oldpage->pages; + pobjects = oldslab->pobjects; + slabs = oldslab->slabs; } } - pages++; - pobjects += page->objects - page->inuse; + slabs++; + pobjects += slab->objects - slab->inuse; - page->pages = pages; - page->pobjects = pobjects; - page->next = oldpage; + slab->slabs = slabs; + slab->pobjects = pobjects; + slab->next = oldslab; - this_cpu_write(s->cpu_slab->partial, page); + this_cpu_write(s->cpu_slab->partial, slab); local_unlock_irqrestore(&s->cpu_slab->lock, flags); - if (page_to_unfreeze) { - __unfreeze_partials(s, page_to_unfreeze); + if (slab_to_unfreeze) { + __unfreeze_partials(s, slab_page(slab_to_unfreeze)); stat(s, CPU_PARTIAL_DRAIN); } } @@ -2593,10 +2593,10 @@ static inline void flush_slab(struct kmem_cache *s, struct kmem_cache_cpu *c) local_lock_irqsave(&s->cpu_slab->lock, flags); - page = c->page; + page = slab_page(c->slab); freelist = c->freelist; - c->page = NULL; + c->slab = NULL; c->freelist = NULL; c->tid = next_tid(c->tid); @@ -2612,9 +2612,9 @@ static inline void __flush_cpu_slab(struct kmem_cache *s, int cpu) { struct kmem_cache_cpu *c = per_cpu_ptr(s->cpu_slab, cpu); void *freelist = c->freelist; - struct page *page = c->page; + struct page *page = slab_page(c->slab); - c->page = NULL; + c->slab = NULL; c->freelist = NULL; c->tid = next_tid(c->tid); @@ -2648,7 +2648,7 @@ static void flush_cpu_slab(struct work_struct *w) s = sfw->s; c = this_cpu_ptr(s->cpu_slab); - if (c->page) + if (c->slab) flush_slab(s, c); unfreeze_partials(s); @@ -2658,7 +2658,7 @@ static bool has_cpu_slab(int cpu, struct kmem_cache *s) { struct kmem_cache_cpu *c = per_cpu_ptr(s->cpu_slab, cpu); - return c->page || slub_percpu_partial(c); + return c->slab || slub_percpu_partial(c); } static DEFINE_MUTEX(flush_lock); @@ -2872,15 +2872,15 @@ static void *___slab_alloc(struct kmem_cache *s, gfp_t gfpflags, int node, unsigned long addr, struct kmem_cache_cpu *c) { void *freelist; - struct page *page; + struct slab *slab; unsigned long flags; stat(s, ALLOC_SLOWPATH); -reread_page: +reread_slab: - page = READ_ONCE(c->page); - if (!page) { + slab = READ_ONCE(c->slab); + if (!slab) { /* * if the node is not online or has no normal memory, just * ignore the node constraint @@ -2892,7 +2892,7 @@ static void *___slab_alloc(struct kmem_cache *s, gfp_t gfpflags, int node, } redo: - if (unlikely(!node_match(page, node))) { + if (unlikely(!node_match(slab_page(slab), node))) { /* * same as above but node_match() being false already * implies node != NUMA_NO_NODE @@ -2907,27 +2907,27 @@ static void *___slab_alloc(struct kmem_cache *s, gfp_t gfpflags, int node, } /* - * By rights, we should be searching for a slab page that was - * PFMEMALLOC but right now, we are losing the pfmemalloc + * By rights, we should be searching for a slab that was + * PFMEMALLOC but right now, we lose the pfmemalloc * information when the page leaves the per-cpu allocator */ - if (unlikely(!pfmemalloc_match_unsafe(page, gfpflags))) + if (unlikely(!pfmemalloc_match_unsafe(slab_page(slab), gfpflags))) goto deactivate_slab; - /* must check again c->page in case we got preempted and it changed */ + /* must check again c->slab in case we got preempted and it changed */ local_lock_irqsave(&s->cpu_slab->lock, flags); - if (unlikely(page != c->page)) { + if (unlikely(slab != c->slab)) { local_unlock_irqrestore(&s->cpu_slab->lock, flags); - goto reread_page; + goto reread_slab; } 
freelist = c->freelist; if (freelist) goto load_freelist; - freelist = get_freelist(s, page); + freelist = get_freelist(s, slab_page(slab)); if (!freelist) { - c->page = NULL; + c->slab = NULL; local_unlock_irqrestore(&s->cpu_slab->lock, flags); stat(s, DEACTIVATE_BYPASS); goto new_slab; @@ -2941,10 +2941,10 @@ static void *___slab_alloc(struct kmem_cache *s, gfp_t gfpflags, int node, /* * freelist is pointing to the list of objects to be used. - * page is pointing to the page from which the objects are obtained. - * That page must be frozen for per cpu allocations to work. + * slab is pointing to the slab from which the objects are obtained. + * That slab must be frozen for per cpu allocations to work. */ - VM_BUG_ON(!c->page->frozen); + VM_BUG_ON(!c->slab->frozen); c->freelist = get_freepointer(s, freelist); c->tid = next_tid(c->tid); local_unlock_irqrestore(&s->cpu_slab->lock, flags); @@ -2953,23 +2953,23 @@ static void *___slab_alloc(struct kmem_cache *s, gfp_t gfpflags, int node, deactivate_slab: local_lock_irqsave(&s->cpu_slab->lock, flags); - if (page != c->page) { + if (slab != c->slab) { local_unlock_irqrestore(&s->cpu_slab->lock, flags); - goto reread_page; + goto reread_slab; } freelist = c->freelist; - c->page = NULL; + c->slab = NULL; c->freelist = NULL; local_unlock_irqrestore(&s->cpu_slab->lock, flags); - deactivate_slab(s, page, freelist); + deactivate_slab(s, slab_page(slab), freelist); new_slab: if (slub_percpu_partial(c)) { local_lock_irqsave(&s->cpu_slab->lock, flags); - if (unlikely(c->page)) { + if (unlikely(c->slab)) { local_unlock_irqrestore(&s->cpu_slab->lock, flags); - goto reread_page; + goto reread_slab; } if (unlikely(!slub_percpu_partial(c))) { local_unlock_irqrestore(&s->cpu_slab->lock, flags); @@ -2977,8 +2977,8 @@ static void *___slab_alloc(struct kmem_cache *s, gfp_t gfpflags, int node, goto new_objects; } - page = c->page = slub_percpu_partial(c); - slub_set_percpu_partial(c, page); + slab = c->slab = slub_percpu_partial(c); + slub_set_percpu_partial(c, slab); local_unlock_irqrestore(&s->cpu_slab->lock, flags); stat(s, CPU_PARTIAL_ALLOC); goto redo; @@ -2986,32 +2986,32 @@ static void *___slab_alloc(struct kmem_cache *s, gfp_t gfpflags, int node, new_objects: - freelist = get_partial(s, gfpflags, node, &page); + freelist = get_partial(s, gfpflags, node, &slab); if (freelist) - goto check_new_page; + goto check_new_slab; slub_put_cpu_ptr(s->cpu_slab); - page = slab_page(new_slab(s, gfpflags, node)); + slab = new_slab(s, gfpflags, node); c = slub_get_cpu_ptr(s->cpu_slab); - if (unlikely(!page)) { + if (unlikely(!slab)) { slab_out_of_memory(s, gfpflags, node); return NULL; } /* - * No other reference to the page yet so we can + * No other reference to the slab yet so we can * muck around with it freely without cmpxchg */ - freelist = page->freelist; - page->freelist = NULL; + freelist = slab->freelist; + slab->freelist = NULL; stat(s, ALLOC_SLAB); -check_new_page: +check_new_slab: if (kmem_cache_debug(s)) { - if (!alloc_debug_processing(s, page, freelist, addr)) { + if (!alloc_debug_processing(s, slab_page(slab), freelist, addr)) { /* Slab failed checks. Next slab needed */ goto new_slab; } else { @@ -3023,39 +3023,39 @@ static void *___slab_alloc(struct kmem_cache *s, gfp_t gfpflags, int node, } } - if (unlikely(!pfmemalloc_match(page, gfpflags))) + if (unlikely(!pfmemalloc_match(slab_page(slab), gfpflags))) /* * For !pfmemalloc_match() case we don't load freelist so that * we don't make further mismatched allocations easier. 
*/ goto return_single; -retry_load_page: +retry_load_slab: local_lock_irqsave(&s->cpu_slab->lock, flags); - if (unlikely(c->page)) { + if (unlikely(c->slab)) { void *flush_freelist = c->freelist; - struct page *flush_page = c->page; + struct slab *flush_slab = c->slab; - c->page = NULL; + c->slab = NULL; c->freelist = NULL; c->tid = next_tid(c->tid); local_unlock_irqrestore(&s->cpu_slab->lock, flags); - deactivate_slab(s, flush_page, flush_freelist); + deactivate_slab(s, slab_page(flush_slab), flush_freelist); stat(s, CPUSLAB_FLUSH); - goto retry_load_page; + goto retry_load_slab; } - c->page = page; + c->slab = slab; goto load_freelist; return_single: - deactivate_slab(s, page, get_freepointer(s, freelist)); + deactivate_slab(s, slab_page(slab), get_freepointer(s, freelist)); return freelist; } @@ -3159,7 +3159,7 @@ static __always_inline void *slab_alloc_node(struct kmem_cache *s, */ object = c->freelist; - page = c->page; + page = slab_page(c->slab); /* * We cannot use the lockless fastpath on PREEMPT_RT because if a * slowpath has taken the local_lock_irqsave(), it is not protected @@ -3351,7 +3351,7 @@ static void __slab_free(struct kmem_cache *s, struct slab *slab, * If we just froze the slab then put it onto the * per cpu partial list. */ - put_cpu_partial(s, slab_page(slab), 1); + put_cpu_partial(s, slab, 1); stat(s, CPU_PARTIAL_FREE); } @@ -3427,7 +3427,7 @@ static __always_inline void do_slab_free(struct kmem_cache *s, /* Same with comment on barrier() in slab_alloc_node() */ barrier(); - if (likely(slab_page(slab) == c->page)) { + if (likely(slab == c->slab)) { #ifndef CONFIG_PREEMPT_RT void **freelist = READ_ONCE(c->freelist); @@ -3453,7 +3453,7 @@ static __always_inline void do_slab_free(struct kmem_cache *s, local_lock(&s->cpu_slab->lock); c = this_cpu_ptr(s->cpu_slab); - if (unlikely(slab_page(slab) != c->page)) { + if (unlikely(slab != c->slab)) { local_unlock(&s->cpu_slab->lock); goto redo; } @@ -5221,7 +5221,7 @@ static ssize_t show_slab_objects(struct kmem_cache *s, int node; struct page *page; - page = READ_ONCE(c->page); + page = slab_page(READ_ONCE(c->slab)); if (!page) continue; @@ -5236,7 +5236,7 @@ static ssize_t show_slab_objects(struct kmem_cache *s, total += x; nodes[node] += x; - page = slub_percpu_partial_read_once(c); + page = slab_page(slub_percpu_partial_read_once(c)); if (page) { node = page_to_nid(page); if (flags & SO_TOTAL) @@ -5441,31 +5441,31 @@ SLAB_ATTR_RO(objects_partial); static ssize_t slabs_cpu_partial_show(struct kmem_cache *s, char *buf) { int objects = 0; - int pages = 0; + int slabs = 0; int cpu; int len = 0; for_each_online_cpu(cpu) { - struct page *page; + struct slab *slab; - page = slub_percpu_partial(per_cpu_ptr(s->cpu_slab, cpu)); + slab = slub_percpu_partial(per_cpu_ptr(s->cpu_slab, cpu)); - if (page) { - pages += page->pages; - objects += page->pobjects; + if (slab) { + slabs += slab->slabs; + objects += slab->pobjects; } } - len += sysfs_emit_at(buf, len, "%d(%d)", objects, pages); + len += sysfs_emit_at(buf, len, "%d(%d)", objects, slabs); #ifdef CONFIG_SMP for_each_online_cpu(cpu) { - struct page *page; + struct slab *slab; - page = slub_percpu_partial(per_cpu_ptr(s->cpu_slab, cpu)); - if (page) + slab = slub_percpu_partial(per_cpu_ptr(s->cpu_slab, cpu)); + if (slab) len += sysfs_emit_at(buf, len, " C%d=%d(%d)", - cpu, page->pobjects, page->pages); + cpu, slab->pobjects, slab->slabs); } #endif len += sysfs_emit_at(buf, len, "\n"); From patchwork Mon Oct 4 13:46:04 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 
From: "Matthew Wilcox (Oracle)"
To: linux-mm@kvack.org
Cc: "Matthew Wilcox (Oracle)"
Subject: [PATCH 16/62] mm/slub: Convert show_slab_objects() to struct slab
Date: Mon, 4 Oct 2021 14:46:04 +0100
Message-Id: <20211004134650.4031813-17-willy@infradead.org>
In-Reply-To: <20211004134650.4031813-1-willy@infradead.org>
References: <20211004134650.4031813-1-willy@infradead.org>
List-ID: Adds a little bit of type safety. Signed-off-by: Matthew Wilcox (Oracle) --- mm/slub.c | 20 ++++++++++---------- 1 file changed, 10 insertions(+), 10 deletions(-) diff --git a/mm/slub.c b/mm/slub.c index d849b644d0ed..fdf3dbd4665f 100644 --- a/mm/slub.c +++ b/mm/slub.c @@ -5219,32 +5219,32 @@ static ssize_t show_slab_objects(struct kmem_cache *s, struct kmem_cache_cpu *c = per_cpu_ptr(s->cpu_slab, cpu); int node; - struct page *page; + struct slab *slab; - page = slab_page(READ_ONCE(c->slab)); - if (!page) + slab = READ_ONCE(c->slab); + if (!slab) continue; - node = page_to_nid(page); + node = slab_nid(slab); if (flags & SO_TOTAL) - x = page->objects; + x = slab->objects; else if (flags & SO_OBJECTS) - x = page->inuse; + x = slab->inuse; else x = 1; total += x; nodes[node] += x; - page = slab_page(slub_percpu_partial_read_once(c)); - if (page) { - node = page_to_nid(page); + slab = slub_percpu_partial_read_once(c); + if (slab) { + node = slab_nid(slab); if (flags & SO_TOTAL) WARN_ON_ONCE(1); else if (flags & SO_OBJECTS) WARN_ON_ONCE(1); else - x = page->pages; + x = slab->slabs; total += x; nodes[node] += x; } From patchwork Mon Oct 4 13:46:05 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Matthew Wilcox X-Patchwork-Id: 12534019 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id AF33EC433F5 for ; Mon, 4 Oct 2021 14:08:01 +0000 (UTC) Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by mail.kernel.org (Postfix) with ESMTP id 58E5C61251 for ; Mon, 4 Oct 2021 14:08:01 +0000 (UTC) DMARC-Filter: OpenDMARC Filter v1.4.1 mail.kernel.org 58E5C61251 Authentication-Results: mail.kernel.org; dmarc=none (p=none dis=none) header.from=infradead.org Authentication-Results: mail.kernel.org; spf=pass smtp.mailfrom=kvack.org Received: by kanga.kvack.org (Postfix) id 02958940028; Mon, 4 Oct 2021 10:08:01 -0400 (EDT) Received: by kanga.kvack.org (Postfix, from userid 40) id F1B2794000B; Mon, 4 Oct 2021 10:08:00 -0400 (EDT) X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id E3216940028; Mon, 4 Oct 2021 10:08:00 -0400 (EDT) X-Delivered-To: linux-mm@kvack.org Received: from forelay.hostedemail.com (smtprelay0014.hostedemail.com [216.40.44.14]) by kanga.kvack.org (Postfix) with ESMTP id D40C594000B for ; Mon, 4 Oct 2021 10:08:00 -0400 (EDT) Received: from smtpin19.hostedemail.com (10.5.19.251.rfc1918.com [10.5.19.251]) by forelay01.hostedemail.com (Postfix) with ESMTP id 8D9791827E8F5 for ; Mon, 4 Oct 2021 14:08:00 +0000 (UTC) X-FDA: 78658933920.19.5629CE8 Received: from casper.infradead.org (casper.infradead.org [90.155.50.34]) by imf04.hostedemail.com (Postfix) with ESMTP id 44071500151C for ; Mon, 4 Oct 2021 14:08:00 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=infradead.org; s=casper.20170209; h=Content-Transfer-Encoding:MIME-Version: References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Sender:Reply-To: Content-Type:Content-ID:Content-Description; bh=XgmwUKIS46s2UOAO4nXRxiUqvAtUpV6qEuc4BH2qxP4=; b=I8jikpASYwPXoFk1GB45fmboAU IX0xB1OgTZaTfOQg/lngO5zduiu3JKCvbL3dUt/3im/Mz6LlycYpbn4eQf4quiGBAuogUc3XKE+v6 oOFNYNp4bFxi16PQpMXo5VBbp4KnCI9i73im0NoynA+UpaljMfIC1z9jMLvPsasozptLpEN8XTB2i 
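[Editorial note] The show_slab_objects() hunk above leans on slab_nid() and the slab->slabs counter instead of page_to_nid() and page->pages. The helper definitions are introduced earlier in the series and are not quoted in this excerpt; a minimal sketch of the shape such accessors take (an assumption for illustration, not the series' literal code) is:

	/*
	 * Sketch only: struct slab is assumed to be a typed view of the same
	 * memory that backs struct page, so the accessors stay trivial while
	 * the compiler now rejects accidental struct page usage.
	 */
	static inline struct page *slab_page(struct slab *slab)
	{
		return (struct page *)slab;	/* assumption: same underlying memory */
	}

	static inline int slab_nid(const struct slab *slab)
	{
		return page_to_nid(slab_page((struct slab *)slab));
	}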
Lni6eBUe+cdTVuMexs2b7+4bZUCCemd/DbfWgtv69j6y+a2MaLPMkZ3u/ca277riMio74F6x0jlVz aKCvRX0gTQYWYTgd4h/mISP7rBtIolwjA7dOVRvUPRjXLfcN9yNqoNf1vXEiqIvvevJ1IBrSB36BL 9o+lbeDQ==; Received: from willy by casper.infradead.org with local (Exim 4.94.2 #2 (Red Hat Linux)) id 1mXObI-00Gx5T-MU; Mon, 04 Oct 2021 14:06:17 +0000 From: "Matthew Wilcox (Oracle)" To: linux-mm@kvack.org Cc: "Matthew Wilcox (Oracle)" Subject: [PATCH 17/62] mm/slub: Convert validate_slab() to take a struct slab Date: Mon, 4 Oct 2021 14:46:05 +0100 Message-Id: <20211004134650.4031813-18-willy@infradead.org> X-Mailer: git-send-email 2.31.1 In-Reply-To: <20211004134650.4031813-1-willy@infradead.org> References: <20211004134650.4031813-1-willy@infradead.org> MIME-Version: 1.0 Authentication-Results: imf04.hostedemail.com; dkim=pass header.d=infradead.org header.s=casper.20170209 header.b=I8jikpAS; spf=none (imf04.hostedemail.com: domain of willy@infradead.org has no SPF policy when checking 90.155.50.34) smtp.mailfrom=willy@infradead.org; dmarc=none X-Rspamd-Server: rspam04 X-Rspamd-Queue-Id: 44071500151C X-Stat-Signature: bkkt4o8ds7mps9y867st7r76r9ejuj6y X-HE-Tag: 1633356480-430684 X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: Also convert validate_slab_node to use a struct slab. Adds a little typesafety. Signed-off-by: Matthew Wilcox (Oracle) --- mm/slub.c | 26 +++++++++++++------------- 1 file changed, 13 insertions(+), 13 deletions(-) diff --git a/mm/slub.c b/mm/slub.c index fdf3dbd4665f..5e10a9cc6939 100644 --- a/mm/slub.c +++ b/mm/slub.c @@ -4956,42 +4956,42 @@ static int count_total(struct page *page) #endif #ifdef CONFIG_SLUB_DEBUG -static void validate_slab(struct kmem_cache *s, struct page *page, +static void validate_slab(struct kmem_cache *s, struct slab *slab, unsigned long *obj_map) { void *p; - void *addr = page_address(page); + void *addr = slab_address(slab); unsigned long flags; - slab_lock(page, &flags); + slab_lock(slab_page(slab), &flags); - if (!check_slab(s, page) || !on_freelist(s, page, NULL)) + if (!check_slab(s, slab_page(slab)) || !on_freelist(s, slab_page(slab), NULL)) goto unlock; /* Now we know that a valid freelist exists */ - __fill_map(obj_map, s, page); - for_each_object(p, s, addr, page->objects) { + __fill_map(obj_map, s, slab_page(slab)); + for_each_object(p, s, addr, slab->objects) { u8 val = test_bit(__obj_to_index(s, addr, p), obj_map) ? 
SLUB_RED_INACTIVE : SLUB_RED_ACTIVE; - if (!check_object(s, page, p, val)) + if (!check_object(s, slab_page(slab), p, val)) break; } unlock: - slab_unlock(page, &flags); + slab_unlock(slab_page(slab), &flags); } static int validate_slab_node(struct kmem_cache *s, struct kmem_cache_node *n, unsigned long *obj_map) { unsigned long count = 0; - struct page *page; + struct slab *slab; unsigned long flags; spin_lock_irqsave(&n->list_lock, flags); - list_for_each_entry(page, &n->partial, slab_list) { - validate_slab(s, page, obj_map); + list_for_each_entry(slab, &n->partial, slab_list) { + validate_slab(s, slab, obj_map); count++; } if (count != n->nr_partial) { @@ -5003,8 +5003,8 @@ static int validate_slab_node(struct kmem_cache *s, if (!(s->flags & SLAB_STORE_USER)) goto out; - list_for_each_entry(page, &n->full, slab_list) { - validate_slab(s, page, obj_map); + list_for_each_entry(slab, &n->full, slab_list) { + validate_slab(s, slab, obj_map); count++; } if (count != atomic_long_read(&n->nr_slabs)) { From patchwork Mon Oct 4 13:46:06 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Matthew Wilcox X-Patchwork-Id: 12534041 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 67ABBC433EF for ; Mon, 4 Oct 2021 14:10:07 +0000 (UTC) Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by mail.kernel.org (Postfix) with ESMTP id 0681161216 for ; Mon, 4 Oct 2021 14:10:07 +0000 (UTC) DMARC-Filter: OpenDMARC Filter v1.4.1 mail.kernel.org 0681161216 Authentication-Results: mail.kernel.org; dmarc=none (p=none dis=none) header.from=infradead.org Authentication-Results: mail.kernel.org; spf=pass smtp.mailfrom=kvack.org Received: by kanga.kvack.org (Postfix) id 8DA9F940029; Mon, 4 Oct 2021 10:10:06 -0400 (EDT) Received: by kanga.kvack.org (Postfix, from userid 40) id 88D3394000B; Mon, 4 Oct 2021 10:10:06 -0400 (EDT) X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id 7A08D940029; Mon, 4 Oct 2021 10:10:06 -0400 (EDT) X-Delivered-To: linux-mm@kvack.org Received: from forelay.hostedemail.com (smtprelay0151.hostedemail.com [216.40.44.151]) by kanga.kvack.org (Postfix) with ESMTP id 6C6B494000B for ; Mon, 4 Oct 2021 10:10:06 -0400 (EDT) Received: from smtpin35.hostedemail.com (10.5.19.251.rfc1918.com [10.5.19.251]) by forelay05.hostedemail.com (Postfix) with ESMTP id 194AB18280027 for ; Mon, 4 Oct 2021 14:10:06 +0000 (UTC) X-FDA: 78658939212.35.9784913 Received: from casper.infradead.org (casper.infradead.org [90.155.50.34]) by imf21.hostedemail.com (Postfix) with ESMTP id B0EDDD038252 for ; Mon, 4 Oct 2021 14:10:05 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=infradead.org; s=casper.20170209; h=Content-Transfer-Encoding:MIME-Version: References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Sender:Reply-To: Content-Type:Content-ID:Content-Description; bh=yQh4jCE7s08sk7cRWN15Crj+RcQAcSB4bHSQ+00lqjk=; b=P5uoZLigJMTdtvqXhfxjbyCDBR XbWGIBvOeO+WPfMxwSvQUOvIM9PujIn3SZXVMRlk94KEW2iJjoIbDP1i+nIW3j5AKSay02JUHCvaJ luvET0Miuqz4LXEJUREz8PSU+U0dIBRhBa95Cw2U9mKFkKsbVvVx30sfcN6mQ/bxIE1/PNpwsXjDo FSjMa+Sts7lJaKsXulnXTzj/qLBP7b1mGbk1Zor4Cct52d4m4o2qR2psa+BU20uOJttE7mF6GOvLD k895p9Epk2x3xdPU9T0kane7QtDTrbDg4La/h6TKVSymHoqxOj9Kt244Tqxu9gHb6KwHK5k001RrO BbYrHOVw==; Received: from willy by 
casper.infradead.org with local (Exim 4.94.2 #2 (Red Hat Linux)) id 1mXOcK-00GxHz-8a; Mon, 04 Oct 2021 14:07:32 +0000 From: "Matthew Wilcox (Oracle)" To: linux-mm@kvack.org Cc: "Matthew Wilcox (Oracle)" Subject: [PATCH 18/62] mm/slub: Convert count_partial() to struct slab Date: Mon, 4 Oct 2021 14:46:06 +0100 Message-Id: <20211004134650.4031813-19-willy@infradead.org> X-Mailer: git-send-email 2.31.1 In-Reply-To: <20211004134650.4031813-1-willy@infradead.org> References: <20211004134650.4031813-1-willy@infradead.org> MIME-Version: 1.0 Authentication-Results: imf21.hostedemail.com; dkim=pass header.d=infradead.org header.s=casper.20170209 header.b=P5uoZLig; spf=none (imf21.hostedemail.com: domain of willy@infradead.org has no SPF policy when checking 90.155.50.34) smtp.mailfrom=willy@infradead.org; dmarc=none X-Rspamd-Server: rspam04 X-Rspamd-Queue-Id: B0EDDD038252 X-Stat-Signature: arciiz9mokoedofjp6c1s9d7346gnmh8 X-HE-Tag: 1633356605-100547 X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: Convert all its helper functions at the same time. Adds a little typesafety. Signed-off-by: Matthew Wilcox (Oracle) --- mm/slub.c | 20 ++++++++++---------- 1 file changed, 10 insertions(+), 10 deletions(-) diff --git a/mm/slub.c b/mm/slub.c index 5e10a9cc6939..fc1a7f7832c0 100644 --- a/mm/slub.c +++ b/mm/slub.c @@ -2730,9 +2730,9 @@ static inline int node_match(struct page *page, int node) } #ifdef CONFIG_SLUB_DEBUG -static int count_free(struct page *page) +static int count_free(struct slab *slab) { - return page->objects - page->inuse; + return slab->objects - slab->inuse; } static inline unsigned long node_nr_objs(struct kmem_cache_node *n) @@ -2743,15 +2743,15 @@ static inline unsigned long node_nr_objs(struct kmem_cache_node *n) #if defined(CONFIG_SLUB_DEBUG) || defined(CONFIG_SYSFS) static unsigned long count_partial(struct kmem_cache_node *n, - int (*get_count)(struct page *)) + int (*get_count)(struct slab *)) { unsigned long flags; unsigned long x = 0; - struct page *page; + struct slab *slab; spin_lock_irqsave(&n->list_lock, flags); - list_for_each_entry(page, &n->partial, slab_list) - x += get_count(page); + list_for_each_entry(slab, &n->partial, slab_list) + x += get_count(slab); spin_unlock_irqrestore(&n->list_lock, flags); return x; } @@ -4944,14 +4944,14 @@ EXPORT_SYMBOL(__kmalloc_node_track_caller); #endif #ifdef CONFIG_SYSFS -static int count_inuse(struct page *page) +static int count_inuse(struct slab *slab) { - return page->inuse; + return slab->inuse; } -static int count_total(struct page *page) +static int count_total(struct slab *slab) { - return page->objects; + return slab->objects; } #endif From patchwork Mon Oct 4 13:46:07 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Matthew Wilcox X-Patchwork-Id: 12534043 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 460E4C433F5 for ; Mon, 4 Oct 2021 14:11:56 +0000 (UTC) Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by mail.kernel.org (Postfix) with ESMTP id E7D446023F for ; Mon, 4 Oct 2021 14:11:55 +0000 (UTC) DMARC-Filter: OpenDMARC Filter v1.4.1 mail.kernel.org E7D446023F Authentication-Results: mail.kernel.org; dmarc=none (p=none dis=none) 
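[Editorial note] With count_free(), count_inuse() and count_total() above now taking a struct slab, callers of count_partial() change only in the callback type. A hypothetical call site, shown purely to illustrate how the converted helpers compose (the real callers live elsewhere in mm/slub.c and are not part of this hunk):

	/* Illustration only: count the free objects on one node's partial list. */
	static unsigned long nr_partial_free(struct kmem_cache *s, int node)
	{
		struct kmem_cache_node *n = get_node(s, node);

		/* count_free() matches the new int (*get_count)(struct slab *) type */
		return n ? count_partial(n, count_free) : 0;
	}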
From: "Matthew Wilcox (Oracle)"
To: linux-mm@kvack.org
Cc: "Matthew Wilcox (Oracle)"
Subject: [PATCH 19/62] mm/slub: Convert bootstrap() to struct slab
Date: Mon, 4 Oct 2021 14:46:07 +0100
Message-Id: <20211004134650.4031813-20-willy@infradead.org>
In-Reply-To: <20211004134650.4031813-1-willy@infradead.org>
References: <20211004134650.4031813-1-willy@infradead.org>

Adds a little type safety.
Signed-off-by: Matthew Wilcox (Oracle) --- mm/slub.c | 10 +++++----- 1 file changed, 5 insertions(+), 5 deletions(-) diff --git a/mm/slub.c b/mm/slub.c index fc1a7f7832c0..f760accb0feb 100644 --- a/mm/slub.c +++ b/mm/slub.c @@ -4768,14 +4768,14 @@ static struct kmem_cache * __init bootstrap(struct kmem_cache *static_cache) */ __flush_cpu_slab(s, smp_processor_id()); for_each_kmem_cache_node(s, node, n) { - struct page *p; + struct slab *slab; - list_for_each_entry(p, &n->partial, slab_list) - p->slab_cache = s; + list_for_each_entry(slab, &n->partial, slab_list) + slab->slab_cache = s; #ifdef CONFIG_SLUB_DEBUG - list_for_each_entry(p, &n->full, slab_list) - p->slab_cache = s; + list_for_each_entry(slab, &n->full, slab_list) + slab->slab_cache = s; #endif } list_add(&s->list, &slab_caches); From patchwork Mon Oct 4 13:46:08 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Matthew Wilcox X-Patchwork-Id: 12534045 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 1B3A3C433EF for ; Mon, 4 Oct 2021 14:13:07 +0000 (UTC) Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by mail.kernel.org (Postfix) with ESMTP id C5C5C61216 for ; Mon, 4 Oct 2021 14:13:06 +0000 (UTC) DMARC-Filter: OpenDMARC Filter v1.4.1 mail.kernel.org C5C5C61216 Authentication-Results: mail.kernel.org; dmarc=none (p=none dis=none) header.from=infradead.org Authentication-Results: mail.kernel.org; spf=pass smtp.mailfrom=kvack.org Received: by kanga.kvack.org (Postfix) id 5A2D794002B; Mon, 4 Oct 2021 10:13:06 -0400 (EDT) Received: by kanga.kvack.org (Postfix, from userid 40) id 5526C94000B; Mon, 4 Oct 2021 10:13:06 -0400 (EDT) X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id 41B0394002B; Mon, 4 Oct 2021 10:13:06 -0400 (EDT) X-Delivered-To: linux-mm@kvack.org Received: from forelay.hostedemail.com (smtprelay0075.hostedemail.com [216.40.44.75]) by kanga.kvack.org (Postfix) with ESMTP id 32F9D94000B for ; Mon, 4 Oct 2021 10:13:06 -0400 (EDT) Received: from smtpin27.hostedemail.com (10.5.19.251.rfc1918.com [10.5.19.251]) by forelay04.hostedemail.com (Postfix) with ESMTP id EFA5F2BC02 for ; Mon, 4 Oct 2021 14:13:05 +0000 (UTC) X-FDA: 78658946730.27.91C73AC Received: from casper.infradead.org (casper.infradead.org [90.155.50.34]) by imf02.hostedemail.com (Postfix) with ESMTP id A2BEA70021F8 for ; Mon, 4 Oct 2021 14:13:05 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=infradead.org; s=casper.20170209; h=Content-Transfer-Encoding:MIME-Version: References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Sender:Reply-To: Content-Type:Content-ID:Content-Description; bh=TAxo61k4BjUZoubEyus1/GNA+hXfkFB7KfhsvaZqgpY=; b=BBQ6oBM9hxylEt6ce95DvoJS/o fQv7vwACFUtoMsF9p9lsUWTsABIIMaWyVzcHJG1vIxgTr8c+rCHYcBxIiVbr+vCHjqVFz/vu1r9WK tVg3NH+m7k7C8SupqxCdDvy6eXbcXLsrCJh1DLHA0nRrz60/MbTBgWs4uxmUXhHntzzVb+Q/upOLd I+SnSVQ+b+j454Jll1sDN5nQ/2UTvS7fBi+vBTioRHIB6U4Mf/r6hIVyeUug9pCJBfpVd8wCMpyTR sX4ITjtlVi3W4HrUSZNUM7kO3reJXXlNV8mjXnhvUDEKyZQKXeGMHkQahtD3x1Qjc8Cd2XC5mnGlz s7+a9NMw==; Received: from willy by casper.infradead.org with local (Exim 4.94.2 #2 (Red Hat Linux)) id 1mXOfl-00GxiN-9t; Mon, 04 Oct 2021 14:10:49 +0000 From: "Matthew Wilcox (Oracle)" To: linux-mm@kvack.org Cc: "Matthew Wilcox (Oracle)" Subject: [PATCH 
20/62] mm/slub: Convert __kmem_cache_do_shrink() to struct slab Date: Mon, 4 Oct 2021 14:46:08 +0100 Message-Id: <20211004134650.4031813-21-willy@infradead.org> X-Mailer: git-send-email 2.31.1 In-Reply-To: <20211004134650.4031813-1-willy@infradead.org> References: <20211004134650.4031813-1-willy@infradead.org> MIME-Version: 1.0 X-Rspamd-Server: rspam02 X-Rspamd-Queue-Id: A2BEA70021F8 X-Stat-Signature: 6rydgjjeuxuwhzo6t3jyx65x5tcak3qe Authentication-Results: imf02.hostedemail.com; dkim=pass header.d=infradead.org header.s=casper.20170209 header.b=BBQ6oBM9; spf=none (imf02.hostedemail.com: domain of willy@infradead.org has no SPF policy when checking 90.155.50.34) smtp.mailfrom=willy@infradead.org; dmarc=none X-HE-Tag: 1633356785-590481 X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: Adds a little type safety. Signed-off-by: Matthew Wilcox (Oracle) --- mm/slub.c | 21 ++++++++++----------- 1 file changed, 10 insertions(+), 11 deletions(-) diff --git a/mm/slub.c b/mm/slub.c index f760accb0feb..ea7f8d9716e0 100644 --- a/mm/slub.c +++ b/mm/slub.c @@ -4560,8 +4560,7 @@ static int __kmem_cache_do_shrink(struct kmem_cache *s) int node; int i; struct kmem_cache_node *n; - struct page *page; - struct page *t; + struct slab *slab, *t; struct list_head discard; struct list_head promote[SHRINK_PROMOTE_MAX]; unsigned long flags; @@ -4578,22 +4577,22 @@ static int __kmem_cache_do_shrink(struct kmem_cache *s) * Build lists of slabs to discard or promote. * * Note that concurrent frees may occur while we hold the - * list_lock. page->inuse here is the upper limit. + * list_lock. slab->inuse here is the upper limit. */ - list_for_each_entry_safe(page, t, &n->partial, slab_list) { - int free = page->objects - page->inuse; + list_for_each_entry_safe(slab, t, &n->partial, slab_list) { + int free = slab->objects - slab->inuse; - /* Do not reread page->inuse */ + /* Do not reread slab->inuse */ barrier(); /* We do not keep full slabs on the list */ BUG_ON(free <= 0); - if (free == page->objects) { - list_move(&page->slab_list, &discard); + if (free == slab->objects) { + list_move(&slab->slab_list, &discard); n->nr_partial--; } else if (free <= SHRINK_PROMOTE_MAX) - list_move(&page->slab_list, promote + free - 1); + list_move(&slab->slab_list, promote + free - 1); } /* @@ -4606,8 +4605,8 @@ static int __kmem_cache_do_shrink(struct kmem_cache *s) spin_unlock_irqrestore(&n->list_lock, flags); /* Release empty slabs */ - list_for_each_entry_safe(page, t, &discard, slab_list) - discard_slab(s, page); + list_for_each_entry_safe(slab, t, &discard, slab_list) + discard_slab(s, slab_page(slab)); if (slabs_node(s, node)) ret = 1; From patchwork Mon Oct 4 13:46:09 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Matthew Wilcox X-Patchwork-Id: 12534079 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 39E1FC433F5 for ; Mon, 4 Oct 2021 14:14:32 +0000 (UTC) Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by mail.kernel.org (Postfix) with ESMTP id DD4A761251 for ; Mon, 4 Oct 2021 14:14:31 +0000 (UTC) DMARC-Filter: OpenDMARC Filter v1.4.1 mail.kernel.org DD4A761251 Authentication-Results: mail.kernel.org; dmarc=none (p=none dis=none) 
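[Editorial note] For reference, the classification rule in the __kmem_cache_do_shrink() hunk above, restated on its own: a slab whose objects are all free is queued for discard, and a slab with at most SHRINK_PROMOTE_MAX free objects is promoted into the bucket for its free count, so nearly-full slabs end up at the head of the partial list. A condensed sketch of that decision (same logic as the diff, wrapped in a helper only for illustration):

	static void shrink_classify(struct kmem_cache_node *n, struct slab *slab,
				    struct list_head *discard,
				    struct list_head promote[SHRINK_PROMOTE_MAX])
	{
		int free = slab->objects - slab->inuse;

		if (free == slab->objects) {
			/* completely free: release with discard_slab() after unlock */
			list_move(&slab->slab_list, discard);
			n->nr_partial--;
		} else if (free <= SHRINK_PROMOTE_MAX) {
			/* bucket 0 holds slabs with exactly one free object */
			list_move(&slab->slab_list, promote + free - 1);
		}
	}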
From: "Matthew Wilcox (Oracle)"
To: linux-mm@kvack.org
Cc: "Matthew Wilcox (Oracle)"
Subject: [PATCH 21/62] mm/slub: Convert free_partial() to use struct slab
Date: Mon, 4 Oct 2021 14:46:09 +0100
Message-Id: <20211004134650.4031813-22-willy@infradead.org>
In-Reply-To: <20211004134650.4031813-1-willy@infradead.org>
References: <20211004134650.4031813-1-willy@infradead.org>

Add a little type safety.
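[Editorial note] The free_partial() hunk below follows the same shape as __kmem_cache_do_shrink() above: slabs to be freed are collected on a local discard list while n->list_lock is held, and discard_slab() only runs after the lock is dropped, presumably to keep the lock hold time short since freeing a slab goes back to the page allocator. The skeleton of that pattern, with the SLUB-specific details elided (a sketch restating the diff, not new behaviour):

	LIST_HEAD(discard);

	spin_lock_irq(&n->list_lock);
	/* ... move unused slabs from n->partial onto the local discard list ... */
	spin_unlock_irq(&n->list_lock);

	/* the heavyweight frees happen outside the list lock */
	list_for_each_entry_safe(slab, h, &discard, slab_list)
		discard_slab(s, slab_page(slab));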
Signed-off-by: Matthew Wilcox (Oracle) --- mm/slub.c | 16 ++++++++-------- 1 file changed, 8 insertions(+), 8 deletions(-) diff --git a/mm/slub.c b/mm/slub.c index ea7f8d9716e0..875f3f6c1ae6 100644 --- a/mm/slub.c +++ b/mm/slub.c @@ -4241,23 +4241,23 @@ static void list_slab_objects(struct kmem_cache *s, struct page *page, static void free_partial(struct kmem_cache *s, struct kmem_cache_node *n) { LIST_HEAD(discard); - struct page *page, *h; + struct slab *slab, *h; BUG_ON(irqs_disabled()); spin_lock_irq(&n->list_lock); - list_for_each_entry_safe(page, h, &n->partial, slab_list) { - if (!page->inuse) { - remove_partial(n, page); - list_add(&page->slab_list, &discard); + list_for_each_entry_safe(slab, h, &n->partial, slab_list) { + if (!slab->inuse) { + remove_partial(n, slab_page(slab)); + list_add(&slab->slab_list, &discard); } else { - list_slab_objects(s, page, + list_slab_objects(s, slab_page(slab), "Objects remaining in %s on __kmem_cache_shutdown()"); } } spin_unlock_irq(&n->list_lock); - list_for_each_entry_safe(page, h, &discard, slab_list) - discard_slab(s, page); + list_for_each_entry_safe(slab, h, &discard, slab_list) + discard_slab(s, slab_page(slab)); } bool __kmem_cache_empty(struct kmem_cache *s) From patchwork Mon Oct 4 13:46:10 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Matthew Wilcox X-Patchwork-Id: 12534081 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 1A787C433EF for ; Mon, 4 Oct 2021 14:15:41 +0000 (UTC) Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by mail.kernel.org (Postfix) with ESMTP id AF97861251 for ; Mon, 4 Oct 2021 14:15:40 +0000 (UTC) DMARC-Filter: OpenDMARC Filter v1.4.1 mail.kernel.org AF97861251 Authentication-Results: mail.kernel.org; dmarc=none (p=none dis=none) header.from=infradead.org Authentication-Results: mail.kernel.org; spf=pass smtp.mailfrom=kvack.org Received: by kanga.kvack.org (Postfix) id 4DBF294002D; Mon, 4 Oct 2021 10:15:40 -0400 (EDT) Received: by kanga.kvack.org (Postfix, from userid 40) id 48AE794000B; Mon, 4 Oct 2021 10:15:40 -0400 (EDT) X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id 3A1A794002D; Mon, 4 Oct 2021 10:15:40 -0400 (EDT) X-Delivered-To: linux-mm@kvack.org Received: from forelay.hostedemail.com (smtprelay0250.hostedemail.com [216.40.44.250]) by kanga.kvack.org (Postfix) with ESMTP id 2A7A094000B for ; Mon, 4 Oct 2021 10:15:40 -0400 (EDT) Received: from smtpin03.hostedemail.com (10.5.19.251.rfc1918.com [10.5.19.251]) by forelay04.hostedemail.com (Postfix) with ESMTP id E1CA02D248 for ; Mon, 4 Oct 2021 14:15:39 +0000 (UTC) X-FDA: 78658953198.03.D86F544 Received: from casper.infradead.org (casper.infradead.org [90.155.50.34]) by imf16.hostedemail.com (Postfix) with ESMTP id A52ACF000AEA for ; Mon, 4 Oct 2021 14:15:39 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=infradead.org; s=casper.20170209; h=Content-Transfer-Encoding:MIME-Version: References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Sender:Reply-To: Content-Type:Content-ID:Content-Description; bh=0CBV26gMpDyHnBtt1L8euXGcM6/GY9CP3dzT+jXyX+g=; b=nNvsFvAZJV6Y7sAp2RIJrI+2RE T4q85VYLWMx0zVNBMQRJMOxe5qzy8GA9//8MURtO5dCXE16S7BlNWWjZgCgrMMff7Fx/U/6IWdtG2 
/wEzScaIVCS/rakj7MyBRcTZI+yGuLS2hJ6SOtHGqrPcD9xjQAGjfVar01YjfoxijuH78n+ucIxQX 4oDpd/yIXWOdqTUOfco8yWzygyylz9OycgA+uZjPD5F8x8EXoKXrm8ZH94LYDz+uAQ4eY6fmSKUOw H9m8JEuVXug/x0o6ppmtQ4+DWNTbdZ86Ib2mX12oIC5m7+ctJ5q/PLJtN3JR9KQOy/6+OJMYyFBeP ttob04Gw==; Received: from willy by casper.infradead.org with local (Exim 4.94.2 #2 (Red Hat Linux)) id 1mXOiG-00GxzZ-G6; Mon, 04 Oct 2021 14:13:51 +0000 From: "Matthew Wilcox (Oracle)" To: linux-mm@kvack.org Cc: "Matthew Wilcox (Oracle)" Subject: [PATCH 22/62] mm/slub: Convert list_slab_objects() to take a struct slab Date: Mon, 4 Oct 2021 14:46:10 +0100 Message-Id: <20211004134650.4031813-23-willy@infradead.org> X-Mailer: git-send-email 2.31.1 In-Reply-To: <20211004134650.4031813-1-willy@infradead.org> References: <20211004134650.4031813-1-willy@infradead.org> MIME-Version: 1.0 X-Rspamd-Server: rspam02 X-Rspamd-Queue-Id: A52ACF000AEA X-Stat-Signature: fppg9rebigrcgcptru9xrihsmnmwcomp Authentication-Results: imf16.hostedemail.com; dkim=pass header.d=infradead.org header.s=casper.20170209 header.b=nNvsFvAZ; spf=none (imf16.hostedemail.com: domain of willy@infradead.org has no SPF policy when checking 90.155.50.34) smtp.mailfrom=willy@infradead.org; dmarc=none X-HE-Tag: 1633356939-556579 X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: Convert the one caller to pass a slab instead. Adds a little type safety. Signed-off-by: Matthew Wilcox (Oracle) --- mm/slub.c | 16 ++++++++-------- 1 file changed, 8 insertions(+), 8 deletions(-) diff --git a/mm/slub.c b/mm/slub.c index 875f3f6c1ae6..29703bba0a7f 100644 --- a/mm/slub.c +++ b/mm/slub.c @@ -4208,20 +4208,20 @@ static int kmem_cache_open(struct kmem_cache *s, slab_flags_t flags) return -EINVAL; } -static void list_slab_objects(struct kmem_cache *s, struct page *page, +static void list_slab_objects(struct kmem_cache *s, struct slab *slab, const char *text) { #ifdef CONFIG_SLUB_DEBUG - void *addr = page_address(page); + void *addr = slab_address(slab); unsigned long flags; unsigned long *map; void *p; - slab_err(s, page, text, s->name); - slab_lock(page, &flags); + slab_err(s, slab_page(slab), text, s->name); + slab_lock(slab_page(slab), &flags); - map = get_map(s, page); - for_each_object(p, s, addr, page->objects) { + map = get_map(s, slab_page(slab)); + for_each_object(p, s, addr, slab->objects) { if (!test_bit(__obj_to_index(s, addr, p), map)) { pr_err("Object 0x%p @offset=%tu\n", p, p - addr); @@ -4229,7 +4229,7 @@ static void list_slab_objects(struct kmem_cache *s, struct page *page, } } put_map(map); - slab_unlock(page, &flags); + slab_unlock(slab_page(slab), &flags); #endif } @@ -4250,7 +4250,7 @@ static void free_partial(struct kmem_cache *s, struct kmem_cache_node *n) remove_partial(n, slab_page(slab)); list_add(&slab->slab_list, &discard); } else { - list_slab_objects(s, slab_page(slab), + list_slab_objects(s, slab, "Objects remaining in %s on __kmem_cache_shutdown()"); } } From patchwork Mon Oct 4 13:46:11 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Matthew Wilcox X-Patchwork-Id: 12534083 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 20356C433EF for ; Mon, 4 Oct 2021 14:18:18 +0000 (UTC) Received: from kanga.kvack.org (kanga.kvack.org 
From: "Matthew Wilcox (Oracle)"
To: linux-mm@kvack.org
Cc: "Matthew Wilcox (Oracle)"
Subject: [PATCH 23/62] mm/slub: Convert slab_alloc_node() to use a struct slab
Date: Mon, 4 Oct 2021 14:46:11 +0100
Message-Id: <20211004134650.4031813-24-willy@infradead.org>
In-Reply-To: <20211004134650.4031813-1-willy@infradead.org>
References: <20211004134650.4031813-1-willy@infradead.org>

Adds a little type safety.
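[Editorial note] The hunk below only retypes the fastpath's local variable from struct page to struct slab; the lockless publish itself is untouched. For context, the step that makes the tid ordering comment matter is the paired update a few lines further down in slab_alloc_node(), quoted here from the surrounding function as it stands at this series' base (not changed by this patch): freelist and tid are swapped together, so an allocation that raced with a cpu slab change sees a stale tid and retries.

	if (unlikely(!this_cpu_cmpxchg_double(
			s->cpu_slab->freelist, s->cpu_slab->tid,
			object, tid,
			next_object, next_tid(tid)))) {
		note_cmpxchg_failure("slab_alloc", s, tid);
		goto redo;
	}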
Signed-off-by: Matthew Wilcox (Oracle) --- mm/slub.c | 10 +++++----- 1 file changed, 5 insertions(+), 5 deletions(-) diff --git a/mm/slub.c b/mm/slub.c index 29703bba0a7f..fd04aa96602c 100644 --- a/mm/slub.c +++ b/mm/slub.c @@ -3112,7 +3112,7 @@ static __always_inline void *slab_alloc_node(struct kmem_cache *s, { void *object; struct kmem_cache_cpu *c; - struct page *page; + struct slab *slab; unsigned long tid; struct obj_cgroup *objcg = NULL; bool init = false; @@ -3144,9 +3144,9 @@ static __always_inline void *slab_alloc_node(struct kmem_cache *s, /* * Irqless object alloc/free algorithm used here depends on sequence * of fetching cpu_slab's data. tid should be fetched before anything - * on c to guarantee that object and page associated with previous tid + * on c to guarantee that object and slab associated with previous tid * won't be used with current tid. If we fetch tid first, object and - * page could be one associated with next tid and our alloc/free + * slab could be one associated with next tid and our alloc/free * request will be failed. In this case, we will retry. So, no problem. */ barrier(); @@ -3159,7 +3159,7 @@ static __always_inline void *slab_alloc_node(struct kmem_cache *s, */ object = c->freelist; - page = slab_page(c->slab); + slab = c->slab; /* * We cannot use the lockless fastpath on PREEMPT_RT because if a * slowpath has taken the local_lock_irqsave(), it is not protected @@ -3168,7 +3168,7 @@ static __always_inline void *slab_alloc_node(struct kmem_cache *s, * there is a suitable cpu freelist. */ if (IS_ENABLED(CONFIG_PREEMPT_RT) || - unlikely(!object || !page || !node_match(page, node))) { + unlikely(!object || !slab || !node_match(slab_page(slab), node))) { object = __slab_alloc(s, gfpflags, node, addr, c); } else { void *next_object = get_freepointer_safe(s, object); From patchwork Mon Oct 4 13:46:12 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Matthew Wilcox X-Patchwork-Id: 12534103 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 41758C433EF for ; Mon, 4 Oct 2021 14:20:09 +0000 (UTC) Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by mail.kernel.org (Postfix) with ESMTP id CFFD26121F for ; Mon, 4 Oct 2021 14:20:08 +0000 (UTC) DMARC-Filter: OpenDMARC Filter v1.4.1 mail.kernel.org CFFD26121F Authentication-Results: mail.kernel.org; dmarc=none (p=none dis=none) header.from=infradead.org Authentication-Results: mail.kernel.org; spf=pass smtp.mailfrom=kvack.org Received: by kanga.kvack.org (Postfix) id 6AC1794002F; Mon, 4 Oct 2021 10:20:08 -0400 (EDT) Received: by kanga.kvack.org (Postfix, from userid 40) id 65AD194000B; Mon, 4 Oct 2021 10:20:08 -0400 (EDT) X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id 5499694002F; Mon, 4 Oct 2021 10:20:08 -0400 (EDT) X-Delivered-To: linux-mm@kvack.org Received: from forelay.hostedemail.com (smtprelay0184.hostedemail.com [216.40.44.184]) by kanga.kvack.org (Postfix) with ESMTP id 42C3394000B for ; Mon, 4 Oct 2021 10:20:08 -0400 (EDT) Received: from smtpin29.hostedemail.com (10.5.19.251.rfc1918.com [10.5.19.251]) by forelay04.hostedemail.com (Postfix) with ESMTP id ED2CB2D23B for ; Mon, 4 Oct 2021 14:20:07 +0000 (UTC) X-FDA: 78658964454.29.DEBD8AD Received: from casper.infradead.org (casper.infradead.org 
[90.155.50.34]) by imf21.hostedemail.com (Postfix) with ESMTP id A499DD0389F7 for ; Mon, 4 Oct 2021 14:20:07 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=infradead.org; s=casper.20170209; h=Content-Transfer-Encoding:MIME-Version: References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Sender:Reply-To: Content-Type:Content-ID:Content-Description; bh=krFF0Z/a4Ew65k3tldk9lxA1Avqxu292lbtRxFqo2wM=; b=ZykqIvvFXC97I6bOUuQwmdbXVr 6AD7yUmZipaGvzvx9d9eEjgfEvBkHV3OZEYTddvpJTiNg7fMLr4s4M0VfL4FiGhvG23CZ4ISqZ0hc dtkjCRt3/ZVu4XI0HCdNCjyQmqyyZ1ek1G/rnPyll9x7j3Yb7YSdVAJLcQ+Ihn6Y4bEQ5jkh+cHzJ W0/a+p66HIzNJTpPT04JsxJ2DkUhWZ9Gjs9Q9hJ2nv5Ihjv+zTsriMJBw0lgLobLZbwURkOMdi1lO VfoVi0eYmd1QzngcD/yRW7z964H7jv3vDvxZfw2i0TAVpcQDRZf3tIlb+8bzZYvhOv/OC4Ry5T+8x Q7KhO0fQ==; Received: from willy by casper.infradead.org with local (Exim 4.94.2 #2 (Red Hat Linux)) id 1mXOla-00GyFW-Iq; Mon, 04 Oct 2021 14:17:49 +0000 From: "Matthew Wilcox (Oracle)" To: linux-mm@kvack.org Cc: "Matthew Wilcox (Oracle)" Subject: [PATCH 24/62] mm/slub: Convert get_freelist() to take a struct slab Date: Mon, 4 Oct 2021 14:46:12 +0100 Message-Id: <20211004134650.4031813-25-willy@infradead.org> X-Mailer: git-send-email 2.31.1 In-Reply-To: <20211004134650.4031813-1-willy@infradead.org> References: <20211004134650.4031813-1-willy@infradead.org> MIME-Version: 1.0 X-Rspamd-Server: rspam01 X-Rspamd-Queue-Id: A499DD0389F7 X-Stat-Signature: 6a76d97soz7unc1q5eitbs3ykt4jwzkp Authentication-Results: imf21.hostedemail.com; dkim=pass header.d=infradead.org header.s=casper.20170209 header.b=ZykqIvvF; dmarc=none; spf=none (imf21.hostedemail.com: domain of willy@infradead.org has no SPF policy when checking 90.155.50.34) smtp.mailfrom=willy@infradead.org X-HE-Tag: 1633357207-448626 X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: Adds a little bit of type safety. Convert the one caller. Signed-off-by: Matthew Wilcox (Oracle) --- mm/slub.c | 22 +++++++++++----------- 1 file changed, 11 insertions(+), 11 deletions(-) diff --git a/mm/slub.c b/mm/slub.c index fd04aa96602c..827196f0aee5 100644 --- a/mm/slub.c +++ b/mm/slub.c @@ -2816,32 +2816,32 @@ static inline bool pfmemalloc_match_unsafe(struct page *page, gfp_t gfpflags) } /* - * Check the page->freelist of a page and either transfer the freelist to the - * per cpu freelist or deactivate the page. + * Check the freelist of a slab and either transfer the freelist to the + * per cpu freelist or deactivate the slab * - * The page is still frozen if the return value is not NULL. + * The slab is still frozen if the return value is not NULL. * - * If this function returns NULL then the page has been unfrozen. + * If this function returns NULL then the slab has been unfrozen. 
*/ -static inline void *get_freelist(struct kmem_cache *s, struct page *page) +static inline void *get_freelist(struct kmem_cache *s, struct slab *slab) { - struct page new; + struct slab new; unsigned long counters; void *freelist; lockdep_assert_held(this_cpu_ptr(&s->cpu_slab->lock)); do { - freelist = page->freelist; - counters = page->counters; + freelist = slab->freelist; + counters = slab->counters; new.counters = counters; VM_BUG_ON(!new.frozen); - new.inuse = page->objects; + new.inuse = slab->objects; new.frozen = freelist != NULL; - } while (!__cmpxchg_double_slab(s, page, + } while (!__cmpxchg_double_slab(s, slab_page(slab), freelist, counters, NULL, new.counters, "get_freelist")); @@ -2924,7 +2924,7 @@ static void *___slab_alloc(struct kmem_cache *s, gfp_t gfpflags, int node, if (freelist) goto load_freelist; - freelist = get_freelist(s, slab_page(slab)); + freelist = get_freelist(s, slab); if (!freelist) { c->slab = NULL; From patchwork Mon Oct 4 13:46:13 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Matthew Wilcox X-Patchwork-Id: 12534105 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id B0F10C433EF for ; Mon, 4 Oct 2021 14:21:36 +0000 (UTC) Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by mail.kernel.org (Postfix) with ESMTP id 56A1761244 for ; Mon, 4 Oct 2021 14:21:36 +0000 (UTC) DMARC-Filter: OpenDMARC Filter v1.4.1 mail.kernel.org 56A1761244 Authentication-Results: mail.kernel.org; dmarc=none (p=none dis=none) header.from=infradead.org Authentication-Results: mail.kernel.org; spf=pass smtp.mailfrom=kvack.org Received: by kanga.kvack.org (Postfix) id EE7B5940030; Mon, 4 Oct 2021 10:21:35 -0400 (EDT) Received: by kanga.kvack.org (Postfix, from userid 40) id E969994000B; Mon, 4 Oct 2021 10:21:35 -0400 (EDT) X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id D5ECF940030; Mon, 4 Oct 2021 10:21:35 -0400 (EDT) X-Delivered-To: linux-mm@kvack.org Received: from forelay.hostedemail.com (smtprelay0030.hostedemail.com [216.40.44.30]) by kanga.kvack.org (Postfix) with ESMTP id C4DCC94000B for ; Mon, 4 Oct 2021 10:21:35 -0400 (EDT) Received: from smtpin36.hostedemail.com (10.5.19.251.rfc1918.com [10.5.19.251]) by forelay04.hostedemail.com (Postfix) with ESMTP id 871EB2D232 for ; Mon, 4 Oct 2021 14:21:35 +0000 (UTC) X-FDA: 78658968150.36.736F0FF Received: from casper.infradead.org (casper.infradead.org [90.155.50.34]) by imf02.hostedemail.com (Postfix) with ESMTP id 45DF770021E9 for ; Mon, 4 Oct 2021 14:21:35 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=infradead.org; s=casper.20170209; h=Content-Transfer-Encoding:MIME-Version: References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Sender:Reply-To: Content-Type:Content-ID:Content-Description; bh=VSY1hMRN80I3ORRgiFdBbz5KuY7aO7CvwRaBTtxUTMw=; b=iSjuJsWjZwm/GbAfAVLUd8vqX1 99S6MRTSn1eLDDR1YZjvyqWicIlBIvdKUYbpAQyEeObTPQjOOwmlNktszQi7QzrHsyvY1OPDQWv6L tu+twNH+1Ff9MwZj8/RGbT/4n1vq1Zj202yebSxOgE1sA0ogIz5PjD5fVReILIZ2zX4xguiym0Xak j3bUQz7TXlajx9NCEGqMnbjI1j/xDJj0Mo8mO2U8ZTVBWTdV9I9BsfTG4fA4AwDgQLXZqAkM30TyA G1gsmSNIwTjVJjcPPSsHYE6ctbxqh9B7WBppnQa9Quqd+buGfrwsnTonJGoCG1v2ijPNc4QMRNtYv 8CP2pAmQ==; Received: from willy by casper.infradead.org with local (Exim 4.94.2 #2 (Red Hat Linux)) id 
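[Editorial note] The contract spelled out in the comment above is what the caller relies on: a non-NULL return means the slab is still frozen and its freelist has been handed to the cpu slab, while NULL means get_freelist() already unfroze it. The call site in ___slab_alloc(), as it reads after this patch (quoted from an earlier hunk in this excerpt):

	freelist = get_freelist(s, slab);

	if (!freelist) {
		c->slab = NULL;
		local_unlock_irqrestore(&s->cpu_slab->lock, flags);
		stat(s, DEACTIVATE_BYPASS);
		goto new_slab;
	}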
1mXOnl-00Gydh-Ha; Mon, 04 Oct 2021 14:19:20 +0000 From: "Matthew Wilcox (Oracle)" To: linux-mm@kvack.org Cc: "Matthew Wilcox (Oracle)" Subject: [PATCH 25/62] mm/slub: Convert node_match() to take a struct slab Date: Mon, 4 Oct 2021 14:46:13 +0100 Message-Id: <20211004134650.4031813-26-willy@infradead.org> X-Mailer: git-send-email 2.31.1 In-Reply-To: <20211004134650.4031813-1-willy@infradead.org> References: <20211004134650.4031813-1-willy@infradead.org> MIME-Version: 1.0 X-Rspamd-Server: rspam03 X-Rspamd-Queue-Id: 45DF770021E9 X-Stat-Signature: egm3iddbd36szofm8q8q7zat8ec6gb1h Authentication-Results: imf02.hostedemail.com; dkim=pass header.d=infradead.org header.s=casper.20170209 header.b=iSjuJsWj; dmarc=none; spf=none (imf02.hostedemail.com: domain of willy@infradead.org has no SPF policy when checking 90.155.50.34) smtp.mailfrom=willy@infradead.org X-HE-Tag: 1633357295-792321 X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: Removes a few calls to slab_page() Signed-off-by: Matthew Wilcox (Oracle) --- mm/slub.c | 8 ++++---- 1 file changed, 4 insertions(+), 4 deletions(-) diff --git a/mm/slub.c b/mm/slub.c index 827196f0aee5..e6c363d8de22 100644 --- a/mm/slub.c +++ b/mm/slub.c @@ -2720,10 +2720,10 @@ static int slub_cpu_dead(unsigned int cpu) * Check if the objects in a per cpu structure fit numa * locality expectations. */ -static inline int node_match(struct page *page, int node) +static inline int node_match(struct slab *slab, int node) { #ifdef CONFIG_NUMA - if (node != NUMA_NO_NODE && page_to_nid(page) != node) + if (node != NUMA_NO_NODE && slab_nid(slab) != node) return 0; #endif return 1; @@ -2892,7 +2892,7 @@ static void *___slab_alloc(struct kmem_cache *s, gfp_t gfpflags, int node, } redo: - if (unlikely(!node_match(slab_page(slab), node))) { + if (unlikely(!node_match(slab, node))) { /* * same as above but node_match() being false already * implies node != NUMA_NO_NODE @@ -3168,7 +3168,7 @@ static __always_inline void *slab_alloc_node(struct kmem_cache *s, * there is a suitable cpu freelist. 
*/ if (IS_ENABLED(CONFIG_PREEMPT_RT) || - unlikely(!object || !slab || !node_match(slab_page(slab), node))) { + unlikely(!object || !slab || !node_match(slab, node))) { object = __slab_alloc(s, gfpflags, node, addr, c); } else { void *next_object = get_freepointer_safe(s, object); From patchwork Mon Oct 4 13:46:14 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Matthew Wilcox X-Patchwork-Id: 12534107 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 379CAC433EF for ; Mon, 4 Oct 2021 14:22:32 +0000 (UTC) Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by mail.kernel.org (Postfix) with ESMTP id C8ABC61244 for ; Mon, 4 Oct 2021 14:22:31 +0000 (UTC) DMARC-Filter: OpenDMARC Filter v1.4.1 mail.kernel.org C8ABC61244 Authentication-Results: mail.kernel.org; dmarc=none (p=none dis=none) header.from=infradead.org Authentication-Results: mail.kernel.org; spf=pass smtp.mailfrom=kvack.org Received: by kanga.kvack.org (Postfix) id 633AF940031; Mon, 4 Oct 2021 10:22:31 -0400 (EDT) Received: by kanga.kvack.org (Postfix, from userid 40) id 5E2E394000B; Mon, 4 Oct 2021 10:22:31 -0400 (EDT) X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id 4D1B0940031; Mon, 4 Oct 2021 10:22:31 -0400 (EDT) X-Delivered-To: linux-mm@kvack.org Received: from forelay.hostedemail.com (smtprelay0026.hostedemail.com [216.40.44.26]) by kanga.kvack.org (Postfix) with ESMTP id 3E24394000B for ; Mon, 4 Oct 2021 10:22:31 -0400 (EDT) Received: from smtpin09.hostedemail.com (10.5.19.251.rfc1918.com [10.5.19.251]) by forelay03.hostedemail.com (Postfix) with ESMTP id F238F8249980 for ; Mon, 4 Oct 2021 14:22:30 +0000 (UTC) X-FDA: 78658970460.09.B4485D8 Received: from casper.infradead.org (casper.infradead.org [90.155.50.34]) by imf21.hostedemail.com (Postfix) with ESMTP id 8677CD03827D for ; Mon, 4 Oct 2021 14:22:30 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=infradead.org; s=casper.20170209; h=Content-Transfer-Encoding:MIME-Version: References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Sender:Reply-To: Content-Type:Content-ID:Content-Description; bh=wxV7xNK7tFKJ7rxuq2yiCvycho7KN/DrxkwaR/lLO2o=; b=DtbdD9Qkubcf7zcOkCZzmvfo5r fEy85NOHfSTGOuNyiFO3SOdMGrPM4qZZilMtNxJvvz1fTV2/WKalVNui3f47SnjiX7AkgycoLc8S7 27qw8wrCB7XlmjfIFmfXG3VBWXqnYzTssg1ECWzQbyWB/FwNpwPvlIQ6EO/m5t8SY+VXg7YwSpTLM O4PKoNOIFw3Gh/bSU8R43xNxoicgupmJ2qnG1wYMtUhkrngS3BAJakebGBmmJhOka6BcIJLngfoet UjcLOYU+MoiiItX6xOhMGbWJwstDvpZjIehLYN9o/RKCfrIwSKk5NAkQyuHZ/LT9YJXyqiKTx/BP8 YDyvOAZg==; Received: from willy by casper.infradead.org with local (Exim 4.94.2 #2 (Red Hat Linux)) id 1mXOpQ-00GynN-Gy; Mon, 04 Oct 2021 14:20:46 +0000 From: "Matthew Wilcox (Oracle)" To: linux-mm@kvack.org Cc: "Matthew Wilcox (Oracle)" Subject: [PATCH 26/62] mm/slub: Convert slab flushing to struct slab Date: Mon, 4 Oct 2021 14:46:14 +0100 Message-Id: <20211004134650.4031813-27-willy@infradead.org> X-Mailer: git-send-email 2.31.1 In-Reply-To: <20211004134650.4031813-1-willy@infradead.org> References: <20211004134650.4031813-1-willy@infradead.org> MIME-Version: 1.0 Authentication-Results: imf21.hostedemail.com; dkim=pass header.d=infradead.org header.s=casper.20170209 header.b=DtbdD9Qk; spf=none (imf21.hostedemail.com: domain of willy@infradead.org has no SPF 
policy when checking 90.155.50.34) smtp.mailfrom=willy@infradead.org; dmarc=none X-Rspamd-Server: rspam04 X-Rspamd-Queue-Id: 8677CD03827D X-Stat-Signature: ewtmxtfojnk17m9bid3daafemdh6bto8 X-HE-Tag: 1633357350-422016 X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: Moves a few calls to slab_page() around. Gets us a step closer to allowing deactivate_slab() to take a slab instead of a page. Signed-off-by: Matthew Wilcox (Oracle) --- mm/slub.c | 14 +++++++------- 1 file changed, 7 insertions(+), 7 deletions(-) diff --git a/mm/slub.c b/mm/slub.c index e6c363d8de22..f33a196fe64f 100644 --- a/mm/slub.c +++ b/mm/slub.c @@ -2588,12 +2588,12 @@ static inline void unfreeze_partials_cpu(struct kmem_cache *s, static inline void flush_slab(struct kmem_cache *s, struct kmem_cache_cpu *c) { unsigned long flags; - struct page *page; + struct slab *slab; void *freelist; local_lock_irqsave(&s->cpu_slab->lock, flags); - page = slab_page(c->slab); + slab = c->slab; freelist = c->freelist; c->slab = NULL; @@ -2602,8 +2602,8 @@ static inline void flush_slab(struct kmem_cache *s, struct kmem_cache_cpu *c) local_unlock_irqrestore(&s->cpu_slab->lock, flags); - if (page) { - deactivate_slab(s, page, freelist); + if (slab) { + deactivate_slab(s, slab_page(slab), freelist); stat(s, CPUSLAB_FLUSH); } } @@ -2612,14 +2612,14 @@ static inline void __flush_cpu_slab(struct kmem_cache *s, int cpu) { struct kmem_cache_cpu *c = per_cpu_ptr(s->cpu_slab, cpu); void *freelist = c->freelist; - struct page *page = slab_page(c->slab); + struct slab *slab = c->slab; c->slab = NULL; c->freelist = NULL; c->tid = next_tid(c->tid); - if (page) { - deactivate_slab(s, page, freelist); + if (slab) { + deactivate_slab(s, slab_page(slab), freelist); stat(s, CPUSLAB_FLUSH); } From patchwork Mon Oct 4 13:46:15 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Matthew Wilcox X-Patchwork-Id: 12534109 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 937BEC433F5 for ; Mon, 4 Oct 2021 14:23:18 +0000 (UTC) Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by mail.kernel.org (Postfix) with ESMTP id 29D876121F for ; Mon, 4 Oct 2021 14:23:18 +0000 (UTC) DMARC-Filter: OpenDMARC Filter v1.4.1 mail.kernel.org 29D876121F Authentication-Results: mail.kernel.org; dmarc=none (p=none dis=none) header.from=infradead.org Authentication-Results: mail.kernel.org; spf=pass smtp.mailfrom=kvack.org Received: by kanga.kvack.org (Postfix) id AEE02940033; Mon, 4 Oct 2021 10:23:17 -0400 (EDT) Received: by kanga.kvack.org (Postfix, from userid 40) id A9D5D94000B; Mon, 4 Oct 2021 10:23:17 -0400 (EDT) X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id 98C47940033; Mon, 4 Oct 2021 10:23:17 -0400 (EDT) X-Delivered-To: linux-mm@kvack.org Received: from forelay.hostedemail.com (smtprelay0104.hostedemail.com [216.40.44.104]) by kanga.kvack.org (Postfix) with ESMTP id 896C594000B for ; Mon, 4 Oct 2021 10:23:17 -0400 (EDT) Received: from smtpin01.hostedemail.com (10.5.19.251.rfc1918.com [10.5.19.251]) by forelay02.hostedemail.com (Postfix) with ESMTP id 4A57230141 for ; Mon, 4 Oct 2021 14:23:17 +0000 (UTC) X-FDA: 78658972434.01.4AF8A7C Received: from 
casper.infradead.org (casper.infradead.org [90.155.50.34]) by imf28.hostedemail.com (Postfix) with ESMTP id EEDC39001B4C for ; Mon, 4 Oct 2021 14:23:16 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=infradead.org; s=casper.20170209; h=Content-Transfer-Encoding:MIME-Version: References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Sender:Reply-To: Content-Type:Content-ID:Content-Description; bh=zbvCDz8Kn1Ai0oWuet7gu1HqiHu8Tje3nC93LHIjlN8=; b=I/2ojAGrfYa7MsYRp38HW7b/LX t4X1YpGDhA5NsyXqWPEq4Xc8mtXk/54PVJtufGIreDjIznmKqrzC7Zdpki0O8UD5TqjwFeMV9XSgl XnrH4Rn9MBF9iXlVxrI+N5N2c5oGgY+VtxObADZLENn2eC9EVcipHWjPPQ2l+7OgMKpl2LarkDG9p UkG8/EQOSGnvmhLiBelIiBfFRXChXwyOnibhALHFvqc4iedZ8OjO7DfJdS7S6teuJ1tvDhONpYZAK sfGE7uox0Kc9gm/In0XIh9wfNVZqm3teNzLfLSJ0bMeHG+HUGTYksUksSP2ARg2JNY3ExBSIDvvQJ ljjs7EAA==; Received: from willy by casper.infradead.org with local (Exim 4.94.2 #2 (Red Hat Linux)) id 1mXOqH-00Gyr6-8U; Mon, 04 Oct 2021 14:21:46 +0000 From: "Matthew Wilcox (Oracle)" To: linux-mm@kvack.org Cc: "Matthew Wilcox (Oracle)" Subject: [PATCH 27/62] mm/slub: Convert __unfreeze_partials to take a struct slab Date: Mon, 4 Oct 2021 14:46:15 +0100 Message-Id: <20211004134650.4031813-28-willy@infradead.org> X-Mailer: git-send-email 2.31.1 In-Reply-To: <20211004134650.4031813-1-willy@infradead.org> References: <20211004134650.4031813-1-willy@infradead.org> MIME-Version: 1.0 X-Rspamd-Server: rspam05 X-Rspamd-Queue-Id: EEDC39001B4C X-Stat-Signature: ijtnw9it9msmbmyk38snpx3ok4y4qkxc Authentication-Results: imf28.hostedemail.com; dkim=pass header.d=infradead.org header.s=casper.20170209 header.b="I/2ojAGr"; dmarc=none; spf=none (imf28.hostedemail.com: domain of willy@infradead.org has no SPF policy when checking 90.155.50.34) smtp.mailfrom=willy@infradead.org X-HE-Tag: 1633357396-603705 X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: Improves type safety while removing a few calls to slab_page(). 
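For readers following the conversion, the shape of the new code can be seen in a small stand-alone sketch before the diff. This is not kernel code: struct slab below is a stand-in carrying only the fields the loop needs, the NUMA node is stored directly instead of being derived through slab_nid(), and the locking, cmpxchg transitions, statistics and the n->nr_partial >= s->min_partial throttle are all omitted. What it shows is that once the per-cpu partial list is threaded through struct slab, the walker advances via slab->next and never touches a struct page.

#include <stddef.h>
#include <stdio.h>

struct slab {                   /* stand-in, not the kernel definition */
    struct slab *next;          /* next slab on the per-cpu partial list */
    unsigned int inuse;         /* objects still allocated from this slab */
    int nid;                    /* node id, in place of slab_nid(slab) */
};

static void unfreeze_partials_sketch(struct slab *partial_slab)
{
    struct slab *unusable = NULL;

    while (partial_slab) {
        struct slab *slab = partial_slab;

        partial_slab = slab->next;
        if (!slab->inuse) {
            /* empty slab: queue it for discarding */
            slab->next = unusable;
            unusable = slab;
        } else {
            printf("return slab to node %d partial list\n", slab->nid);
        }
    }

    while (unusable) {
        struct slab *slab = unusable;

        unusable = unusable->next;
        printf("discard empty slab from node %d\n", slab->nid);
    }
}

int main(void)
{
    struct slab empty = { .next = NULL, .inuse = 0, .nid = 1 };
    struct slab busy = { .next = &empty, .inuse = 3, .nid = 0 };

    unfreeze_partials_sketch(&busy);
    return 0;
}

The real __unfreeze_partials() additionally takes the node's list_lock and re-reads counters under __cmpxchg_double_slab(), which is exactly the part that still goes through slab_page() until later patches in the series convert it.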
Signed-off-by: Matthew Wilcox (Oracle) --- mm/slub.c | 54 +++++++++++++++++++++++++++--------------------------- 1 file changed, 27 insertions(+), 27 deletions(-) diff --git a/mm/slub.c b/mm/slub.c index f33a196fe64f..e6fd0619d1f2 100644 --- a/mm/slub.c +++ b/mm/slub.c @@ -2437,20 +2437,20 @@ static void deactivate_slab(struct kmem_cache *s, struct page *page, } #ifdef CONFIG_SLUB_CPU_PARTIAL -static void __unfreeze_partials(struct kmem_cache *s, struct page *partial_page) +static void __unfreeze_partials(struct kmem_cache *s, struct slab *partial_slab) { struct kmem_cache_node *n = NULL, *n2 = NULL; - struct page *page, *discard_page = NULL; + struct slab *slab, *unusable = NULL; unsigned long flags = 0; - while (partial_page) { - struct page new; - struct page old; + while (partial_slab) { + struct slab new; + struct slab old; - page = partial_page; - partial_page = page->next; + slab = partial_slab; + partial_slab = slab->next; - n2 = get_node(s, page_to_nid(page)); + n2 = get_node(s, slab_nid(slab)); if (n != n2) { if (n) spin_unlock_irqrestore(&n->list_lock, flags); @@ -2461,8 +2461,8 @@ static void __unfreeze_partials(struct kmem_cache *s, struct page *partial_page) do { - old.freelist = page->freelist; - old.counters = page->counters; + old.freelist = slab->freelist; + old.counters = slab->counters; VM_BUG_ON(!old.frozen); new.counters = old.counters; @@ -2470,16 +2470,16 @@ static void __unfreeze_partials(struct kmem_cache *s, struct page *partial_page) new.frozen = 0; - } while (!__cmpxchg_double_slab(s, page, + } while (!__cmpxchg_double_slab(s, slab_page(slab), old.freelist, old.counters, new.freelist, new.counters, "unfreezing slab")); if (unlikely(!new.inuse && n->nr_partial >= s->min_partial)) { - page->next = discard_page; - discard_page = page; + slab->next = unusable; + unusable = slab; } else { - add_partial(n, page, DEACTIVATE_TO_TAIL); + add_partial(n, slab_page(slab), DEACTIVATE_TO_TAIL); stat(s, FREE_ADD_PARTIAL); } } @@ -2487,12 +2487,12 @@ static void __unfreeze_partials(struct kmem_cache *s, struct page *partial_page) if (n) spin_unlock_irqrestore(&n->list_lock, flags); - while (discard_page) { - page = discard_page; - discard_page = discard_page->next; + while (unusable) { + slab = unusable; + unusable = unusable->next; stat(s, DEACTIVATE_EMPTY); - discard_slab(s, page); + discard_slab(s, slab_page(slab)); stat(s, FREE_SLAB); } } @@ -2502,28 +2502,28 @@ static void __unfreeze_partials(struct kmem_cache *s, struct page *partial_page) */ static void unfreeze_partials(struct kmem_cache *s) { - struct page *partial_page; + struct slab *partial_slab; unsigned long flags; local_lock_irqsave(&s->cpu_slab->lock, flags); - partial_page = slab_page(this_cpu_read(s->cpu_slab->partial)); + partial_slab = this_cpu_read(s->cpu_slab->partial); this_cpu_write(s->cpu_slab->partial, NULL); local_unlock_irqrestore(&s->cpu_slab->lock, flags); - if (partial_page) - __unfreeze_partials(s, partial_page); + if (partial_slab) + __unfreeze_partials(s, partial_slab); } static void unfreeze_partials_cpu(struct kmem_cache *s, struct kmem_cache_cpu *c) { - struct page *partial_page; + struct slab *partial_slab; - partial_page = slab_page(slub_percpu_partial(c)); + partial_slab = slub_percpu_partial(c); c->partial = NULL; - if (partial_page) - __unfreeze_partials(s, partial_page); + if (partial_slab) + __unfreeze_partials(s, partial_slab); } /* @@ -2572,7 +2572,7 @@ static void put_cpu_partial(struct kmem_cache *s, struct slab *slab, int drain) local_unlock_irqrestore(&s->cpu_slab->lock, 
flags); if (slab_to_unfreeze) { - __unfreeze_partials(s, slab_page(slab_to_unfreeze)); + __unfreeze_partials(s, slab_to_unfreeze); stat(s, CPU_PARTIAL_DRAIN); } } From patchwork Mon Oct 4 13:46:16 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Matthew Wilcox X-Patchwork-Id: 12534125 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 88A10C433F5 for ; Mon, 4 Oct 2021 14:24:42 +0000 (UTC) Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by mail.kernel.org (Postfix) with ESMTP id 3425B6121F for ; Mon, 4 Oct 2021 14:24:42 +0000 (UTC) DMARC-Filter: OpenDMARC Filter v1.4.1 mail.kernel.org 3425B6121F Authentication-Results: mail.kernel.org; dmarc=none (p=none dis=none) header.from=infradead.org Authentication-Results: mail.kernel.org; spf=pass smtp.mailfrom=kvack.org Received: by kanga.kvack.org (Postfix) id CC02F940034; Mon, 4 Oct 2021 10:24:41 -0400 (EDT) Received: by kanga.kvack.org (Postfix, from userid 40) id C700F94000B; Mon, 4 Oct 2021 10:24:41 -0400 (EDT) X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id B5F22940034; Mon, 4 Oct 2021 10:24:41 -0400 (EDT) X-Delivered-To: linux-mm@kvack.org Received: from forelay.hostedemail.com (smtprelay0171.hostedemail.com [216.40.44.171]) by kanga.kvack.org (Postfix) with ESMTP id A782094000B for ; Mon, 4 Oct 2021 10:24:41 -0400 (EDT) Received: from smtpin23.hostedemail.com (10.5.19.251.rfc1918.com [10.5.19.251]) by forelay03.hostedemail.com (Postfix) with ESMTP id 6225C82499A8 for ; Mon, 4 Oct 2021 14:24:41 +0000 (UTC) X-FDA: 78658975962.23.DAF7AEA Received: from casper.infradead.org (casper.infradead.org [90.155.50.34]) by imf06.hostedemail.com (Postfix) with ESMTP id 1B048801C350 for ; Mon, 4 Oct 2021 14:24:40 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=infradead.org; s=casper.20170209; h=Content-Transfer-Encoding:MIME-Version: References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Sender:Reply-To: Content-Type:Content-ID:Content-Description; bh=NkcJY9C1P+XdPROdq74kteMBcBunECOoflm0ZOH2fCA=; b=r8pPAYzhLX8jqQlWQdztcLUVXw 4FfePFa2mfmYR/S5H5msiObPlnnNLRfhZHiyO/jlnNMIni+SFeOHqWut+i+0QpazGa6s3c1WM4pq2 nIK+fEYMbGq8AI5aZsCw8yL6ibOtYMOIpSVBNB5SyStG04HZ5QlR/5+jyXoyDOYOscwoUQ9QhxTes G1/UgS6wVsmsIjXWbKJbVMWXjNFeBqfD1NVY/0WlqSN/jph159xHwy6fDgamjMdSbddyJAnoKZASH fnAyE/IKaLglMMVprR7qQF50fSFwHPHdncu7GMQTxKbI2pa7CkTlzQaXB6mXHLZLGxh1R/CcqIcBG O4LoMK/A==; Received: from willy by casper.infradead.org with local (Exim 4.94.2 #2 (Red Hat Linux)) id 1mXOrT-00Gyz2-MX; Mon, 04 Oct 2021 14:22:54 +0000 From: "Matthew Wilcox (Oracle)" To: linux-mm@kvack.org Cc: "Matthew Wilcox (Oracle)" Subject: [PATCH 28/62] mm/slub: Convert deactivate_slab() to take a struct slab Date: Mon, 4 Oct 2021 14:46:16 +0100 Message-Id: <20211004134650.4031813-29-willy@infradead.org> X-Mailer: git-send-email 2.31.1 In-Reply-To: <20211004134650.4031813-1-willy@infradead.org> References: <20211004134650.4031813-1-willy@infradead.org> MIME-Version: 1.0 X-Rspamd-Queue-Id: 1B048801C350 X-Stat-Signature: wtt7a9bbxmua7erfbzwempigj8onuact Authentication-Results: imf06.hostedemail.com; dkim=pass header.d=infradead.org header.s=casper.20170209 header.b=r8pPAYzh; spf=none (imf06.hostedemail.com: domain of willy@infradead.org has no SPF policy when checking 
90.155.50.34) smtp.mailfrom=willy@infradead.org; dmarc=none X-Rspamd-Server: rspam06 X-HE-Tag: 1633357480-869008 X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: Improves type safety and removes calls to slab_page(). Signed-off-by: Matthew Wilcox (Oracle) --- mm/slub.c | 54 +++++++++++++++++++++++++++--------------------------- 1 file changed, 27 insertions(+), 27 deletions(-) diff --git a/mm/slub.c b/mm/slub.c index e6fd0619d1f2..5330d0b02f13 100644 --- a/mm/slub.c +++ b/mm/slub.c @@ -2298,25 +2298,25 @@ static void init_kmem_cache_cpus(struct kmem_cache *s) } /* - * Finishes removing the cpu slab. Merges cpu's freelist with page's freelist, + * Finishes removing the cpu slab. Merges cpu's freelist with slab's freelist, * unfreezes the slabs and puts it on the proper list. * Assumes the slab has been already safely taken away from kmem_cache_cpu * by the caller. */ -static void deactivate_slab(struct kmem_cache *s, struct page *page, +static void deactivate_slab(struct kmem_cache *s, struct slab *slab, void *freelist) { enum slab_modes { M_NONE, M_PARTIAL, M_FULL, M_FREE }; - struct kmem_cache_node *n = get_node(s, page_to_nid(page)); + struct kmem_cache_node *n = get_node(s, slab_nid(slab)); int lock = 0, free_delta = 0; enum slab_modes l = M_NONE, m = M_NONE; void *nextfree, *freelist_iter, *freelist_tail; int tail = DEACTIVATE_TO_HEAD; unsigned long flags = 0; - struct page new; - struct page old; + struct slab new; + struct slab old; - if (page->freelist) { + if (slab->freelist) { stat(s, DEACTIVATE_REMOTE_FREES); tail = DEACTIVATE_TO_TAIL; } @@ -2335,7 +2335,7 @@ static void deactivate_slab(struct kmem_cache *s, struct page *page, * 'freelist_iter' is already corrupted. So isolate all objects * starting at 'freelist_iter' by skipping them. */ - if (freelist_corrupted(s, page, &freelist_iter, nextfree)) + if (freelist_corrupted(s, slab_page(slab), &freelist_iter, nextfree)) break; freelist_tail = freelist_iter; @@ -2345,25 +2345,25 @@ static void deactivate_slab(struct kmem_cache *s, struct page *page, } /* - * Stage two: Unfreeze the page while splicing the per-cpu - * freelist to the head of page's freelist. + * Stage two: Unfreeze the slab while splicing the per-cpu + * freelist to the head of slab's freelist. * - * Ensure that the page is unfrozen while the list presence + * Ensure that the slab is unfrozen while the list presence * reflects the actual number of objects during unfreeze. * * We setup the list membership and then perform a cmpxchg - * with the count. If there is a mismatch then the page - * is not unfrozen but the page is on the wrong list. + * with the count. If there is a mismatch then the slab + * is not unfrozen but the slab is on the wrong list. * * Then we restart the process which may have to remove - * the page from the list that we just put it on again + * the slab from the list that we just put it on again * because the number of objects in the slab may have * changed. 
*/ redo: - old.freelist = READ_ONCE(page->freelist); - old.counters = READ_ONCE(page->counters); + old.freelist = READ_ONCE(slab->freelist); + old.counters = READ_ONCE(slab->counters); VM_BUG_ON(!old.frozen); /* Determine target state of the slab */ @@ -2385,7 +2385,7 @@ static void deactivate_slab(struct kmem_cache *s, struct page *page, lock = 1; /* * Taking the spinlock removes the possibility - * that acquire_slab() will see a slab page that + * that acquire_slab() will see a slab that * is frozen */ spin_lock_irqsave(&n->list_lock, flags); @@ -2405,18 +2405,18 @@ static void deactivate_slab(struct kmem_cache *s, struct page *page, if (l != m) { if (l == M_PARTIAL) - remove_partial(n, page); + remove_partial(n, slab_page(slab)); else if (l == M_FULL) - remove_full(s, n, page); + remove_full(s, n, slab_page(slab)); if (m == M_PARTIAL) - add_partial(n, page, tail); + add_partial(n, slab_page(slab), tail); else if (m == M_FULL) - add_full(s, n, page); + add_full(s, n, slab_page(slab)); } l = m; - if (!cmpxchg_double_slab(s, page, + if (!cmpxchg_double_slab(s, slab_page(slab), old.freelist, old.counters, new.freelist, new.counters, "unfreezing slab")) @@ -2431,7 +2431,7 @@ static void deactivate_slab(struct kmem_cache *s, struct page *page, stat(s, DEACTIVATE_FULL); else if (m == M_FREE) { stat(s, DEACTIVATE_EMPTY); - discard_slab(s, page); + discard_slab(s, slab_page(slab)); stat(s, FREE_SLAB); } } @@ -2603,7 +2603,7 @@ static inline void flush_slab(struct kmem_cache *s, struct kmem_cache_cpu *c) local_unlock_irqrestore(&s->cpu_slab->lock, flags); if (slab) { - deactivate_slab(s, slab_page(slab), freelist); + deactivate_slab(s, slab, freelist); stat(s, CPUSLAB_FLUSH); } } @@ -2619,7 +2619,7 @@ static inline void __flush_cpu_slab(struct kmem_cache *s, int cpu) c->tid = next_tid(c->tid); if (slab) { - deactivate_slab(s, slab_page(slab), freelist); + deactivate_slab(s, slab, freelist); stat(s, CPUSLAB_FLUSH); } @@ -2961,7 +2961,7 @@ static void *___slab_alloc(struct kmem_cache *s, gfp_t gfpflags, int node, c->slab = NULL; c->freelist = NULL; local_unlock_irqrestore(&s->cpu_slab->lock, flags); - deactivate_slab(s, slab_page(slab), freelist); + deactivate_slab(s, slab, freelist); new_slab: @@ -3043,7 +3043,7 @@ static void *___slab_alloc(struct kmem_cache *s, gfp_t gfpflags, int node, local_unlock_irqrestore(&s->cpu_slab->lock, flags); - deactivate_slab(s, slab_page(flush_slab), flush_freelist); + deactivate_slab(s, flush_slab, flush_freelist); stat(s, CPUSLAB_FLUSH); @@ -3055,7 +3055,7 @@ static void *___slab_alloc(struct kmem_cache *s, gfp_t gfpflags, int node, return_single: - deactivate_slab(s, slab_page(slab), get_freepointer(s, freelist)); + deactivate_slab(s, slab, get_freepointer(s, freelist)); return freelist; } From patchwork Mon Oct 4 13:46:17 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Matthew Wilcox X-Patchwork-Id: 12534127 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id F0DB7C433F5 for ; Mon, 4 Oct 2021 14:26:21 +0000 (UTC) Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by mail.kernel.org (Postfix) with ESMTP id 7A26261251 for ; Mon, 4 Oct 2021 14:26:21 +0000 (UTC) DMARC-Filter: OpenDMARC Filter v1.4.1 mail.kernel.org 7A26261251 Authentication-Results: mail.kernel.org; dmarc=none (p=none dis=none) 
header.from=infradead.org Authentication-Results: mail.kernel.org; spf=pass smtp.mailfrom=kvack.org Received: by kanga.kvack.org (Postfix) id 173E3940035; Mon, 4 Oct 2021 10:26:21 -0400 (EDT) Received: by kanga.kvack.org (Postfix, from userid 40) id 126A094000B; Mon, 4 Oct 2021 10:26:21 -0400 (EDT) X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id 039C5940035; Mon, 4 Oct 2021 10:26:20 -0400 (EDT) X-Delivered-To: linux-mm@kvack.org Received: from forelay.hostedemail.com (smtprelay0041.hostedemail.com [216.40.44.41]) by kanga.kvack.org (Postfix) with ESMTP id E790B94000B for ; Mon, 4 Oct 2021 10:26:20 -0400 (EDT) Received: from smtpin18.hostedemail.com (10.5.19.251.rfc1918.com [10.5.19.251]) by forelay03.hostedemail.com (Postfix) with ESMTP id A79CD8249980 for ; Mon, 4 Oct 2021 14:26:20 +0000 (UTC) X-FDA: 78658980120.18.43074D4 Received: from casper.infradead.org (casper.infradead.org [90.155.50.34]) by imf20.hostedemail.com (Postfix) with ESMTP id 5DE00D001BB9 for ; Mon, 4 Oct 2021 14:26:20 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=infradead.org; s=casper.20170209; h=Content-Transfer-Encoding:MIME-Version: References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Sender:Reply-To: Content-Type:Content-ID:Content-Description; bh=Eak+V8QsxEznXLKuUCzr08pMoWS7JY5BsWW/Agcv3Hk=; b=f/almd+bWxaTlMsZ+lKeCDxAQi qiHeDkSVPJto6Mo78qP0rwAVBQrbExJmAEt8FqWZMOakhYSyGlTQfqj7ZNdSdAtXm6UrpDNUdGIfb 2ydyefsmTIs05QuIkaHf4yhS0bNX4tiTcRuDs2m9QxTgHZ1GAl4fw/FpEPc85ZN/9/gqbPhPeMLJL 5MADSDbk1pT6e4HS2ienlLzcdV6nyNY+wRdyZX3F1G2pAfzjMoyyvSR5pXU6jat2AWcTdQBvLIOMb w6Of8ALrCRaEZiKd4y7pj4tLktAeF3bFVeMogq8Mb/ZldLoQWt4U6ybdNaAMO5Uu+Pa3KGxZo1eGG z0ENDUfA==; Received: from willy by casper.infradead.org with local (Exim 4.94.2 #2 (Red Hat Linux)) id 1mXOsJ-00Gz6O-0A; Mon, 04 Oct 2021 14:23:55 +0000 From: "Matthew Wilcox (Oracle)" To: linux-mm@kvack.org Cc: "Matthew Wilcox (Oracle)" Subject: [PATCH 29/62] mm/slub: Convert acquire_slab() to take a struct slab Date: Mon, 4 Oct 2021 14:46:17 +0100 Message-Id: <20211004134650.4031813-30-willy@infradead.org> X-Mailer: git-send-email 2.31.1 In-Reply-To: <20211004134650.4031813-1-willy@infradead.org> References: <20211004134650.4031813-1-willy@infradead.org> MIME-Version: 1.0 X-Rspamd-Server: rspam01 X-Rspamd-Queue-Id: 5DE00D001BB9 X-Stat-Signature: qkmkxk64eyq1z97irk8hkits9swy6ktg Authentication-Results: imf20.hostedemail.com; dkim=pass header.d=infradead.org header.s=casper.20170209 header.b="f/almd+b"; dmarc=none; spf=none (imf20.hostedemail.com: domain of willy@infradead.org has no SPF policy when checking 90.155.50.34) smtp.mailfrom=willy@infradead.org X-HE-Tag: 1633357580-175254 X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: Improves type safety. Signed-off-by: Matthew Wilcox (Oracle) --- mm/slub.c | 16 ++++++++-------- 1 file changed, 8 insertions(+), 8 deletions(-) diff --git a/mm/slub.c b/mm/slub.c index 5330d0b02f13..3468f2b2fe3a 100644 --- a/mm/slub.c +++ b/mm/slub.c @@ -2044,12 +2044,12 @@ static inline void remove_partial(struct kmem_cache_node *n, * Returns a list of objects or NULL if it fails.
*/ static inline void *acquire_slab(struct kmem_cache *s, - struct kmem_cache_node *n, struct page *page, + struct kmem_cache_node *n, struct slab *slab, int mode, int *objects) { void *freelist; unsigned long counters; - struct page new; + struct slab new; lockdep_assert_held(&n->list_lock); @@ -2058,12 +2058,12 @@ static inline void *acquire_slab(struct kmem_cache *s, * The old freelist is the list of objects for the * per cpu allocation list. */ - freelist = page->freelist; - counters = page->counters; + freelist = slab->freelist; + counters = slab->counters; new.counters = counters; *objects = new.objects - new.inuse; if (mode) { - new.inuse = page->objects; + new.inuse = slab->objects; new.freelist = NULL; } else { new.freelist = freelist; @@ -2072,13 +2072,13 @@ static inline void *acquire_slab(struct kmem_cache *s, VM_BUG_ON(new.frozen); new.frozen = 1; - if (!__cmpxchg_double_slab(s, page, + if (!__cmpxchg_double_slab(s, slab_page(slab), freelist, counters, new.freelist, new.counters, "acquire_slab")) return NULL; - remove_partial(n, page); + remove_partial(n, slab_page(slab)); WARN_ON(!freelist); return freelist; } @@ -2119,7 +2119,7 @@ static void *get_partial_node(struct kmem_cache *s, struct kmem_cache_node *n, if (!pfmemalloc_match(slab_page(slab), gfpflags)) continue; - t = acquire_slab(s, n, slab_page(slab), object == NULL, &objects); + t = acquire_slab(s, n, slab, object == NULL, &objects); if (!t) break; From patchwork Mon Oct 4 13:46:18 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Matthew Wilcox X-Patchwork-Id: 12534129 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 9F492C433F5 for ; Mon, 4 Oct 2021 14:28:03 +0000 (UTC) Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by mail.kernel.org (Postfix) with ESMTP id 3537961251 for ; Mon, 4 Oct 2021 14:28:03 +0000 (UTC) DMARC-Filter: OpenDMARC Filter v1.4.1 mail.kernel.org 3537961251 Authentication-Results: mail.kernel.org; dmarc=none (p=none dis=none) header.from=infradead.org Authentication-Results: mail.kernel.org; spf=pass smtp.mailfrom=kvack.org Received: by kanga.kvack.org (Postfix) id BD28B940036; Mon, 4 Oct 2021 10:28:02 -0400 (EDT) Received: by kanga.kvack.org (Postfix, from userid 40) id B82A594000B; Mon, 4 Oct 2021 10:28:02 -0400 (EDT) X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id A49D4940036; Mon, 4 Oct 2021 10:28:02 -0400 (EDT) X-Delivered-To: linux-mm@kvack.org Received: from forelay.hostedemail.com (smtprelay0253.hostedemail.com [216.40.44.253]) by kanga.kvack.org (Postfix) with ESMTP id 90D1994000B for ; Mon, 4 Oct 2021 10:28:02 -0400 (EDT) Received: from smtpin31.hostedemail.com (10.5.19.251.rfc1918.com [10.5.19.251]) by forelay01.hostedemail.com (Postfix) with ESMTP id 453811828EE05 for ; Mon, 4 Oct 2021 14:28:02 +0000 (UTC) X-FDA: 78658984404.31.B5D5494 Received: from casper.infradead.org (casper.infradead.org [90.155.50.34]) by imf01.hostedemail.com (Postfix) with ESMTP id E75F250714DC for ; Mon, 4 Oct 2021 14:28:01 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=infradead.org; s=casper.20170209; h=Content-Transfer-Encoding:MIME-Version: References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Sender:Reply-To: 
Content-Type:Content-ID:Content-Description; bh=psKfLaxwvqSXN5SoFl0tcEZxdCcqcbuJyQ75i+0JEok=; b=es7X7I4rmYrS8GvWwJmk5uKaWi WKqKegcc7uaJk2g13vGhs2iRV7priobht/rctFrwOaOTl5D1oMjTL7hmHsstNQuDQoqvnjH8Lu/Bn dwi5ej62JwCZNm5X2IKtovXiOEj59l0CZ5M2KXq64dgKd02Y0Gm9H79rKa8ZjZkaDhxzYEcyNXq4b cR+wT2f/Pv1B3j9SxaSQuiyXPJukU/2qAQGqWpA09ZrkFQ6WRpLa/v7Rqn5u4COnHC8SF63YHuT2b I3aFndtWCPsguOvLbbc8AFghUTKnV5E9H/vtGO6tBeVoC0P2xzTDMhs9g4+/MDfUp6/SlPUu0CCUb tMm5ioRg==; Received: from willy by casper.infradead.org with local (Exim 4.94.2 #2 (Red Hat Linux)) id 1mXOtx-00GzFE-3S; Mon, 04 Oct 2021 14:25:33 +0000 From: "Matthew Wilcox (Oracle)" To: linux-mm@kvack.org Cc: "Matthew Wilcox (Oracle)" Subject: [PATCH 30/62] mm/slub: Convert partial slab management to struct slab Date: Mon, 4 Oct 2021 14:46:18 +0100 Message-Id: <20211004134650.4031813-31-willy@infradead.org> X-Mailer: git-send-email 2.31.1 In-Reply-To: <20211004134650.4031813-1-willy@infradead.org> References: <20211004134650.4031813-1-willy@infradead.org> MIME-Version: 1.0 X-Rspamd-Server: rspam05 X-Rspamd-Queue-Id: E75F250714DC X-Stat-Signature: km6ymca9dtee4zanra8mrefwcudrmdd4 Authentication-Results: imf01.hostedemail.com; dkim=pass header.d=infradead.org header.s=casper.20170209 header.b=es7X7I4r; dmarc=none; spf=none (imf01.hostedemail.com: domain of willy@infradead.org has no SPF policy when checking 90.155.50.34) smtp.mailfrom=willy@infradead.org X-HE-Tag: 1633357681-748904 X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: Convert __add_partial(), add_partial() and remove_partial(). Improves type safety and removes calls to slab_page(). Signed-off-by: Matthew Wilcox (Oracle) --- mm/slub.c | 32 ++++++++++++++++---------------- 1 file changed, 16 insertions(+), 16 deletions(-) diff --git a/mm/slub.c b/mm/slub.c index 3468f2b2fe3a..e3c8893f9bd5 100644 --- a/mm/slub.c +++ b/mm/slub.c @@ -2013,27 +2013,27 @@ static void discard_slab(struct kmem_cache *s, struct page *page) * Management of partially allocated slabs. 
*/ static inline void -__add_partial(struct kmem_cache_node *n, struct page *page, int tail) +__add_partial(struct kmem_cache_node *n, struct slab *slab, int tail) { n->nr_partial++; if (tail == DEACTIVATE_TO_TAIL) - list_add_tail(&page->slab_list, &n->partial); + list_add_tail(&slab->slab_list, &n->partial); else - list_add(&page->slab_list, &n->partial); + list_add(&slab->slab_list, &n->partial); } static inline void add_partial(struct kmem_cache_node *n, - struct page *page, int tail) + struct slab *slab, int tail) { lockdep_assert_held(&n->list_lock); - __add_partial(n, page, tail); + __add_partial(n, slab, tail); } static inline void remove_partial(struct kmem_cache_node *n, - struct page *page) + struct slab *slab) { lockdep_assert_held(&n->list_lock); - list_del(&page->slab_list); + list_del(&slab->slab_list); n->nr_partial--; } @@ -2078,7 +2078,7 @@ static inline void *acquire_slab(struct kmem_cache *s, "acquire_slab")) return NULL; - remove_partial(n, slab_page(slab)); + remove_partial(n, slab); WARN_ON(!freelist); return freelist; } @@ -2405,12 +2405,12 @@ static void deactivate_slab(struct kmem_cache *s, struct slab *slab, if (l != m) { if (l == M_PARTIAL) - remove_partial(n, slab_page(slab)); + remove_partial(n, slab); else if (l == M_FULL) remove_full(s, n, slab_page(slab)); if (m == M_PARTIAL) - add_partial(n, slab_page(slab), tail); + add_partial(n, slab, tail); else if (m == M_FULL) add_full(s, n, slab_page(slab)); } @@ -2479,7 +2479,7 @@ static void __unfreeze_partials(struct kmem_cache *s, struct slab *partial_slab) slab->next = unusable; unusable = slab; } else { - add_partial(n, slab_page(slab), DEACTIVATE_TO_TAIL); + add_partial(n, slab, DEACTIVATE_TO_TAIL); stat(s, FREE_ADD_PARTIAL); } } @@ -3367,7 +3367,7 @@ static void __slab_free(struct kmem_cache *s, struct slab *slab, */ if (!kmem_cache_has_cpu_partial(s) && unlikely(!prior)) { remove_full(s, n, slab_page(slab)); - add_partial(n, slab_page(slab), DEACTIVATE_TO_TAIL); + add_partial(n, slab, DEACTIVATE_TO_TAIL); stat(s, FREE_ADD_PARTIAL); } spin_unlock_irqrestore(&n->list_lock, flags); @@ -3378,7 +3378,7 @@ static void __slab_free(struct kmem_cache *s, struct slab *slab, /* * Slab on the partial list. */ - remove_partial(n, slab_page(slab)); + remove_partial(n, slab); stat(s, FREE_REMOVE_PARTIAL); } else { /* Slab must be on the full list */ @@ -3922,7 +3922,7 @@ static void early_kmem_cache_node_alloc(int node) * No locks need to be taken here as it has just been * initialized and there is no concurrent access. */ - __add_partial(n, slab_page(slab), DEACTIVATE_TO_HEAD); + __add_partial(n, slab, DEACTIVATE_TO_HEAD); } static void free_kmem_cache_nodes(struct kmem_cache *s) @@ -4180,7 +4180,7 @@ static int kmem_cache_open(struct kmem_cache *s, slab_flags_t flags) #endif /* - * The larger the object size is, the more pages we want on the partial + * The larger the object size is, the more slabs we want on the partial * list to avoid pounding the page allocator excessively. 
*/ set_min_partial(s, ilog2(s->size) / 2); @@ -4247,7 +4247,7 @@ static void free_partial(struct kmem_cache *s, struct kmem_cache_node *n) spin_lock_irq(&n->list_lock); list_for_each_entry_safe(slab, h, &n->partial, slab_list) { if (!slab->inuse) { - remove_partial(n, slab_page(slab)); + remove_partial(n, slab); list_add(&slab->slab_list, &discard); } else { list_slab_objects(s, slab, From patchwork Mon Oct 4 13:46:19 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Matthew Wilcox X-Patchwork-Id: 12534135 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 19A7AC433F5 for ; Mon, 4 Oct 2021 14:28:52 +0000 (UTC) Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by mail.kernel.org (Postfix) with ESMTP id 4100761251 for ; Mon, 4 Oct 2021 14:28:51 +0000 (UTC) DMARC-Filter: OpenDMARC Filter v1.4.1 mail.kernel.org 4100761251 Authentication-Results: mail.kernel.org; dmarc=none (p=none dis=none) header.from=infradead.org Authentication-Results: mail.kernel.org; spf=pass smtp.mailfrom=kvack.org Received: by kanga.kvack.org (Postfix) id DA7BF940039; Mon, 4 Oct 2021 10:28:50 -0400 (EDT) Received: by kanga.kvack.org (Postfix, from userid 40) id D55BC94000B; Mon, 4 Oct 2021 10:28:50 -0400 (EDT) X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id C455B940039; Mon, 4 Oct 2021 10:28:50 -0400 (EDT) X-Delivered-To: linux-mm@kvack.org Received: from forelay.hostedemail.com (smtprelay0254.hostedemail.com [216.40.44.254]) by kanga.kvack.org (Postfix) with ESMTP id B0DA594000B for ; Mon, 4 Oct 2021 10:28:50 -0400 (EDT) Received: from smtpin08.hostedemail.com (10.5.19.251.rfc1918.com [10.5.19.251]) by forelay02.hostedemail.com (Postfix) with ESMTP id 772873016F for ; Mon, 4 Oct 2021 14:28:50 +0000 (UTC) X-FDA: 78658986420.08.2381821 Received: from casper.infradead.org (casper.infradead.org [90.155.50.34]) by imf16.hostedemail.com (Postfix) with ESMTP id EF243F0013B2 for ; Mon, 4 Oct 2021 14:28:49 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=infradead.org; s=casper.20170209; h=Content-Transfer-Encoding:MIME-Version: References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Sender:Reply-To: Content-Type:Content-ID:Content-Description; bh=y5HMCCBu/sfusCANeFIy7iYhPlqD1LZaeiw9o0MqC/I=; b=pe6nSmf1H4EWbKp81rAPeyddVU 2Fcj3Cn6NF+fojW9D/X20jDXifB+/93UOkoccYKaMtxch54wnWEISPuDjY7goYvyNnZ1AEA3D02JX T83SUFeh3aEGcvuY2sw84z0IkqREWWUd74T+L4WzchTGRTWtWDSkxAq0T7Evl3XWfmQFTjp9cx7XE E01aO+mLLQPTVb0OO/LvACJuERFqXcHpueZrzA3D3vN6kxuxL6grbVhVDS3xNm87+17aUb6Tmj1Hd PGfdWfk0+VEbm7t16qGm4XZPULh7+oAjo1shb1i6p1hNTV2IIuoaS1pYlSpU7Vpjz+srn0Wc0iNjf hz4m4J9A==; Received: from willy by casper.infradead.org with local (Exim 4.94.2 #2 (Red Hat Linux)) id 1mXOvK-00GzNs-Ug; Mon, 04 Oct 2021 14:27:00 +0000 From: "Matthew Wilcox (Oracle)" To: linux-mm@kvack.org Cc: "Matthew Wilcox (Oracle)" Subject: [PATCH 31/62] mm/slub: Convert slab freeing to struct slab Date: Mon, 4 Oct 2021 14:46:19 +0100 Message-Id: <20211004134650.4031813-32-willy@infradead.org> X-Mailer: git-send-email 2.31.1 In-Reply-To: <20211004134650.4031813-1-willy@infradead.org> References: <20211004134650.4031813-1-willy@infradead.org> MIME-Version: 1.0 X-Rspamd-Server: rspam02 X-Rspamd-Queue-Id: EF243F0013B2 X-Stat-Signature: 
k3qmhntwsdrnhifqtkrujyqnb975dhfm Authentication-Results: imf16.hostedemail.com; dkim=pass header.d=infradead.org header.s=casper.20170209 header.b=pe6nSmf1; spf=none (imf16.hostedemail.com: domain of willy@infradead.org has no SPF policy when checking 90.155.50.34) smtp.mailfrom=willy@infradead.org; dmarc=none X-HE-Tag: 1633357729-859267 X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: Improve type safety by passing a slab pointer through discard_slab() to free_slab() and __free_slab(). Signed-off-by: Matthew Wilcox (Oracle) --- mm/slub.c | 45 ++++++++++++++++++++++----------------------- 1 file changed, 22 insertions(+), 23 deletions(-) diff --git a/mm/slub.c b/mm/slub.c index e3c8893f9bd5..75a411d6b76e 100644 --- a/mm/slub.c +++ b/mm/slub.c @@ -1964,49 +1964,48 @@ static struct slab *new_slab(struct kmem_cache *s, gfp_t flags, int node) flags & (GFP_RECLAIM_MASK | GFP_CONSTRAINT_MASK), node); } -static void __free_slab(struct kmem_cache *s, struct page *page) +static void __free_slab(struct kmem_cache *s, struct slab *slab) { - int order = compound_order(page); + struct page *page = slab_page(slab); + int order = slab_order(slab); int pages = 1 << order; if (kmem_cache_debug_flags(s, SLAB_CONSISTENCY_CHECKS)) { void *p; - slab_pad_check(s, page); - for_each_object(p, s, page_address(page), - page->objects) - check_object(s, page, p, SLUB_RED_INACTIVE); + slab_pad_check(s, slab_page(slab)); + for_each_object(p, s, slab_address(slab), slab->objects) + check_object(s, slab_page(slab), p, SLUB_RED_INACTIVE); } - __ClearPageSlabPfmemalloc(page); + __slab_clear_pfmemalloc(slab); __ClearPageSlab(page); - /* In union with page->mapping where page allocator expects NULL */ - page->slab_cache = NULL; + page->mapping = NULL; if (current->reclaim_state) current->reclaim_state->reclaimed_slab += pages; - unaccount_slab_page(page, order, s); + unaccount_slab(slab, order, s); __free_pages(page, order); } static void rcu_free_slab(struct rcu_head *h) { - struct page *page = container_of(h, struct page, rcu_head); + struct slab *slab = container_of(h, struct slab, rcu_head); - __free_slab(page->slab_cache, page); + __free_slab(slab->slab_cache, slab); } -static void free_slab(struct kmem_cache *s, struct page *page) +static void free_slab(struct kmem_cache *s, struct slab *slab) { if (unlikely(s->flags & SLAB_TYPESAFE_BY_RCU)) { - call_rcu(&page->rcu_head, rcu_free_slab); + call_rcu(&slab->rcu_head, rcu_free_slab); } else - __free_slab(s, page); + __free_slab(s, slab); } -static void discard_slab(struct kmem_cache *s, struct page *page) +static void discard_slab(struct kmem_cache *s, struct slab *slab) { - dec_slabs_node(s, page_to_nid(page), page->objects); - free_slab(s, page); + dec_slabs_node(s, slab_nid(slab), slab->objects); + free_slab(s, slab); } /* @@ -2431,7 +2430,7 @@ static void deactivate_slab(struct kmem_cache *s, struct slab *slab, stat(s, DEACTIVATE_FULL); else if (m == M_FREE) { stat(s, DEACTIVATE_EMPTY); - discard_slab(s, slab_page(slab)); + discard_slab(s, slab); stat(s, FREE_SLAB); } } @@ -2492,7 +2491,7 @@ static void __unfreeze_partials(struct kmem_cache *s, struct slab *partial_slab) unusable = unusable->next; stat(s, DEACTIVATE_EMPTY); - discard_slab(s, slab_page(slab)); + discard_slab(s, slab); stat(s, FREE_SLAB); } } @@ -3387,7 +3386,7 @@ static void __slab_free(struct kmem_cache *s, struct slab *slab, spin_unlock_irqrestore(&n->list_lock, flags); stat(s, FREE_SLAB); 
- discard_slab(s, slab_page(slab)); + discard_slab(s, slab); } /* @@ -4257,7 +4256,7 @@ static void free_partial(struct kmem_cache *s, struct kmem_cache_node *n) spin_unlock_irq(&n->list_lock); list_for_each_entry_safe(slab, h, &discard, slab_list) - discard_slab(s, slab_page(slab)); + discard_slab(s, slab); } bool __kmem_cache_empty(struct kmem_cache *s) @@ -4606,7 +4605,7 @@ static int __kmem_cache_do_shrink(struct kmem_cache *s) /* Release empty slabs */ list_for_each_entry_safe(slab, t, &discard, slab_list) - discard_slab(s, slab_page(slab)); + discard_slab(s, slab); if (slabs_node(s, node)) ret = 1; From patchwork Mon Oct 4 13:46:20 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Matthew Wilcox X-Patchwork-Id: 12534151 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 4565DC433F5 for ; Mon, 4 Oct 2021 14:29:41 +0000 (UTC) Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by mail.kernel.org (Postfix) with ESMTP id EB4A86136F for ; Mon, 4 Oct 2021 14:29:40 +0000 (UTC) DMARC-Filter: OpenDMARC Filter v1.4.1 mail.kernel.org EB4A86136F Authentication-Results: mail.kernel.org; dmarc=none (p=none dis=none) header.from=infradead.org Authentication-Results: mail.kernel.org; spf=pass smtp.mailfrom=kvack.org Received: by kanga.kvack.org (Postfix) id 7161194003A; Mon, 4 Oct 2021 10:29:40 -0400 (EDT) Received: by kanga.kvack.org (Postfix, from userid 40) id 6C54294000B; Mon, 4 Oct 2021 10:29:40 -0400 (EDT) X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id 58E0294003A; Mon, 4 Oct 2021 10:29:40 -0400 (EDT) X-Delivered-To: linux-mm@kvack.org Received: from forelay.hostedemail.com (smtprelay0081.hostedemail.com [216.40.44.81]) by kanga.kvack.org (Postfix) with ESMTP id 48B2394000B for ; Mon, 4 Oct 2021 10:29:40 -0400 (EDT) Received: from smtpin18.hostedemail.com (10.5.19.251.rfc1918.com [10.5.19.251]) by forelay02.hostedemail.com (Postfix) with ESMTP id DD65330C6B for ; Mon, 4 Oct 2021 14:29:39 +0000 (UTC) X-FDA: 78658988478.18.4D2A5A7 Received: from casper.infradead.org (casper.infradead.org [90.155.50.34]) by imf09.hostedemail.com (Postfix) with ESMTP id 7D00D3002E77 for ; Mon, 4 Oct 2021 14:29:39 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=infradead.org; s=casper.20170209; h=Content-Transfer-Encoding:MIME-Version: References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Sender:Reply-To: Content-Type:Content-ID:Content-Description; bh=F9wA1P4YWA/sankqM98VLszVm6D3QJEu/8dMbbYQLIM=; b=peQ52vO25QITIbT6MZT1iZyz8U r8PEmeCzX/pv2HfwVWpfXIsgxlt4LJoBJLpdcO0Px3EtJ4TOW1i0s+O6uqSQCOclOKqQ7A8DH0VuX 0dKwjkEsM/jM0YfWzD5xPwLuNdXhz8wTWnpR3/eU4a+D1beRTNVyjXyQ1hhSxOdBc4CBrnaEtpybd V54/K53Ey3qXx37IXc7POoXQn8xJeHXrIhRHcoJH2kT4BWBC/8L5F7dbKl84wKFllMihUh6PDm7PR OyB4TG6hakJdsXf8kqYfHvV24ZZZZ0kDmxLHQlqBCnPnlwAnCf6GAVmN7uldshz/bq0LH4QZHMh6A YE/gk4CQ==; Received: from willy by casper.infradead.org with local (Exim 4.94.2 #2 (Red Hat Linux)) id 1mXOwg-00GzjG-TD; Mon, 04 Oct 2021 14:28:19 +0000 From: "Matthew Wilcox (Oracle)" To: linux-mm@kvack.org Cc: "Matthew Wilcox (Oracle)" Subject: [PATCH 32/62] mm/slub: Convert shuffle_freelist to struct slab Date: Mon, 4 Oct 2021 14:46:20 +0100 Message-Id: <20211004134650.4031813-33-willy@infradead.org> X-Mailer: git-send-email 2.31.1 
In-Reply-To: <20211004134650.4031813-1-willy@infradead.org> References: <20211004134650.4031813-1-willy@infradead.org> MIME-Version: 1.0 X-Rspamd-Queue-Id: 7D00D3002E77 X-Stat-Signature: 1nao7jsr73881nnyqdknna3398idq4ku Authentication-Results: imf09.hostedemail.com; dkim=pass header.d=infradead.org header.s=casper.20170209 header.b=peQ52vO2; spf=none (imf09.hostedemail.com: domain of willy@infradead.org has no SPF policy when checking 90.155.50.34) smtp.mailfrom=willy@infradead.org; dmarc=none X-Rspamd-Server: rspam06 X-HE-Tag: 1633357779-773309 X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: Improve type safety. Signed-off-by: Matthew Wilcox (Oracle) --- mm/slub.c | 26 +++++++++++++------------- 1 file changed, 13 insertions(+), 13 deletions(-) diff --git a/mm/slub.c b/mm/slub.c index 75a411d6b76e..9a67dda37951 100644 --- a/mm/slub.c +++ b/mm/slub.c @@ -1831,32 +1831,32 @@ static void *next_freelist_entry(struct kmem_cache *s, struct page *page, } /* Shuffle the single linked freelist based on a random pre-computed sequence */ -static bool shuffle_freelist(struct kmem_cache *s, struct page *page) +static bool shuffle_freelist(struct kmem_cache *s, struct slab *slab) { void *start; void *cur; void *next; - unsigned long idx, pos, page_limit, freelist_count; + unsigned long idx, pos, slab_limit, freelist_count; - if (page->objects < 2 || !s->random_seq) + if (slab->objects < 2 || !s->random_seq) return false; freelist_count = oo_objects(s->oo); pos = get_random_int() % freelist_count; - page_limit = page->objects * s->size; - start = fixup_red_left(s, page_address(page)); + slab_limit = slab->objects * s->size; + start = fixup_red_left(s, slab_address(slab)); /* First entry is used as the base of the freelist */ - cur = next_freelist_entry(s, page, &pos, start, page_limit, + cur = next_freelist_entry(s, slab_page(slab), &pos, start, slab_limit, freelist_count); - cur = setup_object(s, page, cur); - page->freelist = cur; + cur = setup_object(s, slab_page(slab), cur); + slab->freelist = cur; - for (idx = 1; idx < page->objects; idx++) { - next = next_freelist_entry(s, page, &pos, start, page_limit, + for (idx = 1; idx < slab->objects; idx++) { + next = next_freelist_entry(s, slab_page(slab), &pos, start, slab_limit, freelist_count); - next = setup_object(s, page, next); + next = setup_object(s, slab_page(slab), next); set_freepointer(s, cur, next); cur = next; } @@ -1870,7 +1870,7 @@ static inline int init_cache_random_seq(struct kmem_cache *s) return 0; } static inline void init_freelist_randomization(void) { } -static inline bool shuffle_freelist(struct kmem_cache *s, struct page *page) +static inline bool shuffle_freelist(struct kmem_cache *s, struct slab *slab) { return false; } @@ -1926,7 +1926,7 @@ static struct slab *allocate_slab(struct kmem_cache *s, gfp_t flags, int node) setup_page_debug(s, slab_page(slab), start); - shuffle = shuffle_freelist(s, slab_page(slab)); + shuffle = shuffle_freelist(s, slab); if (!shuffle) { start = fixup_red_left(s, start); From patchwork Mon Oct 4 13:46:21 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Matthew Wilcox X-Patchwork-Id: 12534153 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 
7D2CFC433F5 for ; Mon, 4 Oct 2021 14:30:31 +0000 (UTC) Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by mail.kernel.org (Postfix) with ESMTP id 1B32561372 for ; Mon, 4 Oct 2021 14:30:31 +0000 (UTC) DMARC-Filter: OpenDMARC Filter v1.4.1 mail.kernel.org 1B32561372 Authentication-Results: mail.kernel.org; dmarc=none (p=none dis=none) header.from=infradead.org Authentication-Results: mail.kernel.org; spf=pass smtp.mailfrom=kvack.org Received: by kanga.kvack.org (Postfix) id 9D58B94003C; Mon, 4 Oct 2021 10:30:30 -0400 (EDT) Received: by kanga.kvack.org (Postfix, from userid 40) id 984AA94000B; Mon, 4 Oct 2021 10:30:30 -0400 (EDT) X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id 873F294003C; Mon, 4 Oct 2021 10:30:30 -0400 (EDT) X-Delivered-To: linux-mm@kvack.org Received: from forelay.hostedemail.com (smtprelay0011.hostedemail.com [216.40.44.11]) by kanga.kvack.org (Postfix) with ESMTP id 7418D94000B for ; Mon, 4 Oct 2021 10:30:30 -0400 (EDT) Received: from smtpin04.hostedemail.com (10.5.19.251.rfc1918.com [10.5.19.251]) by forelay01.hostedemail.com (Postfix) with ESMTP id 13D33181AEF1E for ; Mon, 4 Oct 2021 14:30:30 +0000 (UTC) X-FDA: 78658990620.04.B11EB3A Received: from casper.infradead.org (casper.infradead.org [90.155.50.34]) by imf21.hostedemail.com (Postfix) with ESMTP id 9DE8BD03885E for ; Mon, 4 Oct 2021 14:30:29 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=infradead.org; s=casper.20170209; h=Content-Transfer-Encoding:MIME-Version: References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Sender:Reply-To: Content-Type:Content-ID:Content-Description; bh=MTPfjwjEO8iLMn79GIpKe/tWLPxEwdidd1nR5RiMsi8=; b=oJxWKqqDWk86a+jbf4XNTizOXc Y4pBpaBZOBj0VXFekkWhNPIOS0yBXeaN/zXs90FHm2Uc0iDEG8UO/VngFfKS2WHaARI1Dr1VNdhAW xR8K/j2COaAdOX1f3zq8yJs9OWOEgoL6zG1UPgXdMbKJaOUorSraKqnJabzcB5rHDrjib5VtfE/r4 aKbpeI/J6Fp+ibCd6WBcoGRvP7rxmPh1Dv1+CT6H1J+YsvO9azE0L+5G4o3CGehvZnW3wvX41Mj8K eNmh5UyhNCKw9V6dehtwPQe3otSHBJIAGxfP+2QpBQTIj3HeH9tPh70gz1xNpkOWPqZcGzzQFRVAK AfVh0qNw==; Received: from willy by casper.infradead.org with local (Exim 4.94.2 #2 (Red Hat Linux)) id 1mXOxh-00GzsF-Np; Mon, 04 Oct 2021 14:29:19 +0000 From: "Matthew Wilcox (Oracle)" To: linux-mm@kvack.org Cc: "Matthew Wilcox (Oracle)" Subject: [PATCH 33/62] mm/slub: Remove struct page argument to next_freelist_entry() Date: Mon, 4 Oct 2021 14:46:21 +0100 Message-Id: <20211004134650.4031813-34-willy@infradead.org> X-Mailer: git-send-email 2.31.1 In-Reply-To: <20211004134650.4031813-1-willy@infradead.org> References: <20211004134650.4031813-1-willy@infradead.org> MIME-Version: 1.0 X-Rspamd-Server: rspam02 X-Rspamd-Queue-Id: 9DE8BD03885E X-Stat-Signature: hfq4sf351ot4yb3b891axwdr7fwtociq Authentication-Results: imf21.hostedemail.com; dkim=pass header.d=infradead.org header.s=casper.20170209 header.b=oJxWKqqD; spf=none (imf21.hostedemail.com: domain of willy@infradead.org has no SPF policy when checking 90.155.50.34) smtp.mailfrom=willy@infradead.org; dmarc=none X-HE-Tag: 1633357829-789438 X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: This argument was unused. Fix up some comments and rename a parameter. 
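The loop being touched here is short enough to restate outside the kernel. The sketch below is a user-space rendering of the same logic with the struct kmem_cache dependency replaced by an explicit random_seq array; as in the kernel, the sequence entries are pre-scaled byte offsets, so they are compared against slab_limit (objects * size) and added to the slab's start address. Only the function name carries a _sketch suffix to make clear it is not the in-tree helper.

#include <stddef.h>

/*
 * User-space sketch of next_freelist_entry(): idx walks a precomputed,
 * pre-scaled random sequence, wrapping at freelist_count and skipping
 * offsets that fall outside the usable range of a possibly undersized slab.
 */
void *next_freelist_entry_sketch(const unsigned int *random_seq,
                                 unsigned long *pos, void *start,
                                 unsigned long slab_limit,
                                 unsigned long freelist_count)
{
    unsigned int idx;

    do {
        idx = random_seq[*pos];
        *pos += 1;
        if (*pos >= freelist_count)
            *pos = 0;
    } while (idx >= slab_limit);

    return (char *)start + idx;
}

Called repeatedly from shuffle_freelist() with the same pos cursor, this yields each in-range offset once, so every object lands on the freelist exactly once in the randomized order.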
Signed-off-by: Matthew Wilcox (Oracle) --- mm/slub.c | 15 +++++++-------- 1 file changed, 7 insertions(+), 8 deletions(-) diff --git a/mm/slub.c b/mm/slub.c index 9a67dda37951..14a423250611 100644 --- a/mm/slub.c +++ b/mm/slub.c @@ -1809,23 +1809,23 @@ static void __init init_freelist_randomization(void) } /* Get the next entry on the pre-computed freelist randomized */ -static void *next_freelist_entry(struct kmem_cache *s, struct page *page, +static void *next_freelist_entry(struct kmem_cache *s, unsigned long *pos, void *start, - unsigned long page_limit, + unsigned long slab_limit, unsigned long freelist_count) { unsigned int idx; /* - * If the target page allocation failed, the number of objects on the - * page might be smaller than the usual size defined by the cache. + * If the target slab allocation failed, the number of objects in the + * slab might be smaller than the usual size defined by the cache. */ do { idx = s->random_seq[*pos]; *pos += 1; if (*pos >= freelist_count) *pos = 0; - } while (unlikely(idx >= page_limit)); + } while (unlikely(idx >= slab_limit)); return (char *)start + idx; } @@ -1848,13 +1848,12 @@ static bool shuffle_freelist(struct kmem_cache *s, struct slab *slab) start = fixup_red_left(s, slab_address(slab)); /* First entry is used as the base of the freelist */ - cur = next_freelist_entry(s, slab_page(slab), &pos, start, slab_limit, - freelist_count); + cur = next_freelist_entry(s, &pos, start, slab_limit, freelist_count); cur = setup_object(s, slab_page(slab), cur); slab->freelist = cur; for (idx = 1; idx < slab->objects; idx++) { - next = next_freelist_entry(s, slab_page(slab), &pos, start, slab_limit, + next = next_freelist_entry(s, &pos, start, slab_limit, freelist_count); next = setup_object(s, slab_page(slab), next); set_freepointer(s, cur, next); From patchwork Mon Oct 4 13:46:22 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Matthew Wilcox X-Patchwork-Id: 12534155 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id E1AA9C433F5 for ; Mon, 4 Oct 2021 14:32:38 +0000 (UTC) Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by mail.kernel.org (Postfix) with ESMTP id 96CE361002 for ; Mon, 4 Oct 2021 14:32:38 +0000 (UTC) DMARC-Filter: OpenDMARC Filter v1.4.1 mail.kernel.org 96CE361002 Authentication-Results: mail.kernel.org; dmarc=none (p=none dis=none) header.from=infradead.org Authentication-Results: mail.kernel.org; spf=pass smtp.mailfrom=kvack.org Received: by kanga.kvack.org (Postfix) id 3427594003D; Mon, 4 Oct 2021 10:32:38 -0400 (EDT) Received: by kanga.kvack.org (Postfix, from userid 40) id 2F1B394000B; Mon, 4 Oct 2021 10:32:38 -0400 (EDT) X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id 1934594003D; Mon, 4 Oct 2021 10:32:38 -0400 (EDT) X-Delivered-To: linux-mm@kvack.org Received: from forelay.hostedemail.com (smtprelay0188.hostedemail.com [216.40.44.188]) by kanga.kvack.org (Postfix) with ESMTP id 0AB6B94000B for ; Mon, 4 Oct 2021 10:32:38 -0400 (EDT) Received: from smtpin31.hostedemail.com (10.5.19.251.rfc1918.com [10.5.19.251]) by forelay03.hostedemail.com (Postfix) with ESMTP id BBE0582499A8 for ; Mon, 4 Oct 2021 14:32:37 +0000 (UTC) X-FDA: 78658995954.31.4AAEDB5 Received: from casper.infradead.org (casper.infradead.org 
[90.155.50.34]) by imf01.hostedemail.com (Postfix) with ESMTP id 7DE3B5070E49 for ; Mon, 4 Oct 2021 14:32:37 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=infradead.org; s=casper.20170209; h=Content-Transfer-Encoding:MIME-Version: References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Sender:Reply-To: Content-Type:Content-ID:Content-Description; bh=gYbWY4AiHY4gtCFDUVHX4fZwYoYyhYSqm8lU9ZCB0Ks=; b=crfmohcyGZbR5w64GcciBI/gY5 wUjQ57BA91CcSNRLztzWURlgU2esJS2TPTGAzIFgsA1lp11pErBNKUbw4wxMO4Sv/zeFbzM7O/kqB m4M/M0DK4ZnqIZIo1GvZd8AoumoJDZ60rxuxCtjxK2sXLya1ADO1nzTemXFG+sC3YZ7FCUAqx0c5j mo8XuaMKUGRkjqdcvo1JFtWSf/wE/kKw6abLtYWnVxUqhKM7SMezDk06zMOEW+6YmG9vAUEswuW6j xRsz8LGfrKTU12dUfeGFnDfAy29hBzNn3xsWnt9okHP9mHQUH0ja+yAQVX0KCBZ6VSgTAZb7Ihzwv zP/0EuEA==; Received: from willy by casper.infradead.org with local (Exim 4.94.2 #2 (Red Hat Linux)) id 1mXOyc-00GzyZ-2Y; Mon, 04 Oct 2021 14:30:13 +0000 From: "Matthew Wilcox (Oracle)" To: linux-mm@kvack.org Cc: "Matthew Wilcox (Oracle)" Subject: [PATCH 34/62] mm/slub: Remove struct page argument from setup_object() Date: Mon, 4 Oct 2021 14:46:22 +0100 Message-Id: <20211004134650.4031813-35-willy@infradead.org> X-Mailer: git-send-email 2.31.1 In-Reply-To: <20211004134650.4031813-1-willy@infradead.org> References: <20211004134650.4031813-1-willy@infradead.org> MIME-Version: 1.0 X-Rspamd-Server: rspam02 X-Rspamd-Queue-Id: 7DE3B5070E49 X-Stat-Signature: shbjxhuam9a5n8xgogb8btw89o8shray Authentication-Results: imf01.hostedemail.com; dkim=pass header.d=infradead.org header.s=casper.20170209 header.b=crfmohcy; spf=none (imf01.hostedemail.com: domain of willy@infradead.org has no SPF policy when checking 90.155.50.34) smtp.mailfrom=willy@infradead.org; dmarc=none X-HE-Tag: 1633357957-672588 X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: Neither setup_object() nor setup_object_debug() used their struct page argument, so delete it instead of converting to struct slab. 
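A rough, compilable illustration of why the argument could simply go: object initialisation and the freelist threading in allocate_slab() only ever need the object addresses. Everything below is a toy model, not the kernel API: toy_cache, the fixed free-pointer offset and the _sketch helpers are stand-ins for struct kmem_cache, s->offset and the real setup_object()/set_freepointer(), and debug poisoning, KASAN and constructors are left out.

#include <assert.h>
#include <stddef.h>

struct toy_cache {
    size_t size;    /* distance between consecutive objects */
    size_t offset;  /* where the free pointer lives inside an object */
};

static void set_freepointer_sketch(const struct toy_cache *s, void *object,
                                   void *next)
{
    *(void **)((char *)object + s->offset) = next;
}

static void *setup_object_sketch(const struct toy_cache *s, void *object)
{
    /* the real helper poisons/initialises the object; note that it
     * needs nothing beyond the object pointer itself */
    (void)s;
    return object;
}

static void *build_freelist_sketch(const struct toy_cache *s, char *start,
                                   unsigned int objects)
{
    char *p = setup_object_sketch(s, start);
    unsigned int idx;

    for (idx = 0; idx < objects - 1; idx++) {
        char *next = setup_object_sketch(s, p + s->size);

        set_freepointer_sketch(s, p, next);
        p = next;
    }
    set_freepointer_sketch(s, p, NULL);  /* last object terminates the chain */
    return start;
}

int main(void)
{
    char slab_memory[4 * 32];
    const struct toy_cache s = { .size = 32, .offset = 0 };
    void *head = build_freelist_sketch(&s, slab_memory, 4);

    assert(head == slab_memory);
    return 0;
}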
Signed-off-by: Matthew Wilcox (Oracle) --- mm/slub.c | 19 ++++++++----------- 1 file changed, 8 insertions(+), 11 deletions(-) diff --git a/mm/slub.c b/mm/slub.c index 14a423250611..16ce9aeccdc8 100644 --- a/mm/slub.c +++ b/mm/slub.c @@ -1240,8 +1240,7 @@ static inline void dec_slabs_node(struct kmem_cache *s, int node, int objects) } /* Object debug checks for alloc/free paths */ -static void setup_object_debug(struct kmem_cache *s, struct page *page, - void *object) +static void setup_object_debug(struct kmem_cache *s, void *object) { if (!kmem_cache_debug_flags(s, SLAB_STORE_USER|SLAB_RED_ZONE|__OBJECT_POISON)) return; @@ -1600,8 +1599,7 @@ slab_flags_t kmem_cache_flags(unsigned int object_size, return flags | slub_debug_local; } #else /* !CONFIG_SLUB_DEBUG */ -static inline void setup_object_debug(struct kmem_cache *s, - struct page *page, void *object) {} +static inline void setup_object_debug(struct kmem_cache *s, void *object) {} static inline void setup_page_debug(struct kmem_cache *s, struct page *page, void *addr) {} @@ -1737,10 +1735,9 @@ static inline bool slab_free_freelist_hook(struct kmem_cache *s, return *head != NULL; } -static void *setup_object(struct kmem_cache *s, struct page *page, - void *object) +static void *setup_object(struct kmem_cache *s, void *object) { - setup_object_debug(s, page, object); + setup_object_debug(s, object); object = kasan_init_slab_obj(s, object); if (unlikely(s->ctor)) { kasan_unpoison_object_data(s, object); @@ -1849,13 +1846,13 @@ static bool shuffle_freelist(struct kmem_cache *s, struct slab *slab) /* First entry is used as the base of the freelist */ cur = next_freelist_entry(s, &pos, start, slab_limit, freelist_count); - cur = setup_object(s, slab_page(slab), cur); + cur = setup_object(s, cur); slab->freelist = cur; for (idx = 1; idx < slab->objects; idx++) { next = next_freelist_entry(s, &pos, start, slab_limit, freelist_count); - next = setup_object(s, slab_page(slab), next); + next = setup_object(s, next); set_freepointer(s, cur, next); cur = next; } @@ -1929,11 +1926,11 @@ static struct slab *allocate_slab(struct kmem_cache *s, gfp_t flags, int node) if (!shuffle) { start = fixup_red_left(s, start); - start = setup_object(s, slab_page(slab), start); + start = setup_object(s, start); slab->freelist = start; for (idx = 0, p = start; idx < slab->objects - 1; idx++) { next = p + s->size; - next = setup_object(s, slab_page(slab), next); + next = setup_object(s, next); set_freepointer(s, p, next); p = next; } From patchwork Mon Oct 4 13:46:23 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Matthew Wilcox X-Patchwork-Id: 12534161 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 0E66EC433EF for ; Mon, 4 Oct 2021 14:35:19 +0000 (UTC) Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by mail.kernel.org (Postfix) with ESMTP id 96EF261360 for ; Mon, 4 Oct 2021 14:35:18 +0000 (UTC) DMARC-Filter: OpenDMARC Filter v1.4.1 mail.kernel.org 96EF261360 Authentication-Results: mail.kernel.org; dmarc=none (p=none dis=none) header.from=infradead.org Authentication-Results: mail.kernel.org; spf=pass smtp.mailfrom=kvack.org Received: by kanga.kvack.org (Postfix) id 0F45094003F; Mon, 4 Oct 2021 10:35:18 -0400 (EDT) Received: by kanga.kvack.org (Postfix, from userid 40) id 07D4E94000B; Mon, 4 Oct 
2021 10:35:18 -0400 (EDT) X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id E60BB94003F; Mon, 4 Oct 2021 10:35:17 -0400 (EDT) X-Delivered-To: linux-mm@kvack.org Received: from forelay.hostedemail.com (smtprelay0173.hostedemail.com [216.40.44.173]) by kanga.kvack.org (Postfix) with ESMTP id CF6DB94000B for ; Mon, 4 Oct 2021 10:35:17 -0400 (EDT) Received: from smtpin06.hostedemail.com (10.5.19.251.rfc1918.com [10.5.19.251]) by forelay03.hostedemail.com (Postfix) with ESMTP id 60FA38249980 for ; Mon, 4 Oct 2021 14:35:17 +0000 (UTC) X-FDA: 78659002674.06.7826736 Received: from casper.infradead.org (casper.infradead.org [90.155.50.34]) by imf21.hostedemail.com (Postfix) with ESMTP id 19619D0389E9 for ; Mon, 4 Oct 2021 14:35:16 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=infradead.org; s=casper.20170209; h=Content-Transfer-Encoding:MIME-Version: References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Sender:Reply-To: Content-Type:Content-ID:Content-Description; bh=IdyZS3/GUVLZnLZb2TccyhOnfKIZ83B7fFj/ZA0FCDM=; b=rNxAjFvPyY9LIJH1sP7F4qUDzX 8mfilbhpzokaOW2MLVxneeuf3yIn2jNnZf0YBWYEabQanGLVTA7C+TTqZHxf67DwvUywwZtJls0om wdEBhZGiOqo5RmuiZRZN8S9q1/UTuKMoxWRgS/q3nhBTMMzwB+9DNrEwJI281qlHbaSk9r4zEf1Yh pBxn+9OpR/iAd6VYQCmPJ5eS83q22ZHk3afUuAAk2nfheDpTac/kcTU8oHgrvJycnhyweuOfwUUx/ 2OyWkdLk5sBCqj2n3n5pGez6Ai2XlmY/bRP71LEC6Ikb4dNNcOV7bOEVcYBsvaoLxLDY/SE2eNNZB jkkLb0lw==; Received: from willy by casper.infradead.org with local (Exim 4.94.2 #2 (Red Hat Linux)) id 1mXOzg-00H04u-Q4; Mon, 04 Oct 2021 14:31:44 +0000 From: "Matthew Wilcox (Oracle)" To: linux-mm@kvack.org Cc: "Matthew Wilcox (Oracle)" Subject: [PATCH 35/62] mm/slub: Convert freelist_corrupted() to struct slab Date: Mon, 4 Oct 2021 14:46:23 +0100 Message-Id: <20211004134650.4031813-36-willy@infradead.org> X-Mailer: git-send-email 2.31.1 In-Reply-To: <20211004134650.4031813-1-willy@infradead.org> References: <20211004134650.4031813-1-willy@infradead.org> MIME-Version: 1.0 Authentication-Results: imf21.hostedemail.com; dkim=pass header.d=infradead.org header.s=casper.20170209 header.b=rNxAjFvP; spf=none (imf21.hostedemail.com: domain of willy@infradead.org has no SPF policy when checking 90.155.50.34) smtp.mailfrom=willy@infradead.org; dmarc=none X-Rspamd-Server: rspam04 X-Rspamd-Queue-Id: 19619D0389E9 X-Stat-Signature: 3wn1q377wz8rejxqngtt37w19sc856ki X-HE-Tag: 1633358116-721403 X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: Move slab_page() call down a level. Signed-off-by: Matthew Wilcox (Oracle) --- mm/slub.c | 10 +++++----- 1 file changed, 5 insertions(+), 5 deletions(-) diff --git a/mm/slub.c b/mm/slub.c index 16ce9aeccdc8..6d81e54e61df 100644 --- a/mm/slub.c +++ b/mm/slub.c @@ -799,12 +799,12 @@ static void slab_fix(struct kmem_cache *s, char *fmt, ...) 
va_end(args); } -static bool freelist_corrupted(struct kmem_cache *s, struct page *page, +static bool freelist_corrupted(struct kmem_cache *s, struct slab *slab, void **freelist, void *nextfree) { if ((s->flags & SLAB_CONSISTENCY_CHECKS) && - !check_valid_pointer(s, page, nextfree) && freelist) { - object_err(s, page, *freelist, "Freechain corrupt"); + !check_valid_pointer(s, slab_page(slab), nextfree) && freelist) { + object_err(s, slab_page(slab), *freelist, "Freechain corrupt"); *freelist = NULL; slab_fix(s, "Isolate corrupted freechain"); return true; @@ -1637,7 +1637,7 @@ static inline void inc_slabs_node(struct kmem_cache *s, int node, static inline void dec_slabs_node(struct kmem_cache *s, int node, int objects) {} -static bool freelist_corrupted(struct kmem_cache *s, struct page *page, +static bool freelist_corrupted(struct kmem_cache *s, struct slab *slab, void **freelist, void *nextfree) { return false; @@ -2330,7 +2330,7 @@ static void deactivate_slab(struct kmem_cache *s, struct slab *slab, * 'freelist_iter' is already corrupted. So isolate all objects * starting at 'freelist_iter' by skipping them. */ - if (freelist_corrupted(s, slab_page(slab), &freelist_iter, nextfree)) + if (freelist_corrupted(s, slab, &freelist_iter, nextfree)) break; freelist_tail = freelist_iter; From patchwork Mon Oct 4 13:46:24 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Matthew Wilcox X-Patchwork-Id: 12534163 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id ED7CBC433EF for ; Mon, 4 Oct 2021 14:36:41 +0000 (UTC) Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by mail.kernel.org (Postfix) with ESMTP id 3138361163 for ; Mon, 4 Oct 2021 14:36:40 +0000 (UTC) DMARC-Filter: OpenDMARC Filter v1.4.1 mail.kernel.org 3138361163 Authentication-Results: mail.kernel.org; dmarc=none (p=none dis=none) header.from=infradead.org Authentication-Results: mail.kernel.org; spf=pass smtp.mailfrom=kvack.org Received: by kanga.kvack.org (Postfix) id C8BBE940040; Mon, 4 Oct 2021 10:36:39 -0400 (EDT) Received: by kanga.kvack.org (Postfix, from userid 40) id C3BA194000B; Mon, 4 Oct 2021 10:36:39 -0400 (EDT) X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id ADC6F940040; Mon, 4 Oct 2021 10:36:39 -0400 (EDT) X-Delivered-To: linux-mm@kvack.org Received: from forelay.hostedemail.com (smtprelay0174.hostedemail.com [216.40.44.174]) by kanga.kvack.org (Postfix) with ESMTP id 9F54E94000B for ; Mon, 4 Oct 2021 10:36:39 -0400 (EDT) Received: from smtpin15.hostedemail.com (10.5.19.251.rfc1918.com [10.5.19.251]) by forelay01.hostedemail.com (Postfix) with ESMTP id 5D3A8182B0485 for ; Mon, 4 Oct 2021 14:36:39 +0000 (UTC) X-FDA: 78659006118.15.7FC897C Received: from casper.infradead.org (casper.infradead.org [90.155.50.34]) by imf21.hostedemail.com (Postfix) with ESMTP id 0DB78D037991 for ; Mon, 4 Oct 2021 14:36:38 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=infradead.org; s=casper.20170209; h=Content-Transfer-Encoding:MIME-Version: References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Sender:Reply-To: Content-Type:Content-ID:Content-Description; bh=EpAmadTHgh/gl19p+6/qMZi8dpvy7AwV+VtuwRVjv04=; b=emcNFiV41DTZYMOzYXkYlZEI4K 
oCTg6HAEgG+OGF8UL7yqFdQvEGZOjYzfOl6Wc9NKPiJjD9WEyukHV0CwX/QDOBT9XgNzJ7NRRd5N6 oTYFllRwCH4GtfX4vgG5DmELzPHNch89RDwPSY0dd6iaccV9O546NnpD+hGi/u8S4aT4d2hF1vwNr wx+XSRLqnsvbuaCqhFgN2+X0mcTbiIsmIU66btuaoou8dL+0v5CLSa8m2O2U46b1QHMX4ViisCY0N 6ZNpudcaWVk5FzpaN5VlJZHDSeeQFj7EBrXAoyPKN7C2U6VzdQwt4416vIuYnWfcf6zePkmDGI4aj rZmVTn8w==; Received: from willy by casper.infradead.org with local (Exim 4.94.2 #2 (Red Hat Linux)) id 1mXP1h-00H0Da-86; Mon, 04 Oct 2021 14:34:04 +0000 From: "Matthew Wilcox (Oracle)" To: linux-mm@kvack.org Cc: "Matthew Wilcox (Oracle)" Subject: [PATCH 36/62] mm/slub: Convert full slab management to struct slab Date: Mon, 4 Oct 2021 14:46:24 +0100 Message-Id: <20211004134650.4031813-37-willy@infradead.org> X-Mailer: git-send-email 2.31.1 In-Reply-To: <20211004134650.4031813-1-willy@infradead.org> References: <20211004134650.4031813-1-willy@infradead.org> MIME-Version: 1.0 X-Rspamd-Server: rspam01 X-Rspamd-Queue-Id: 0DB78D037991 X-Stat-Signature: g16893xg3s9wa5abu9wdwffc466fsyq1 Authentication-Results: imf21.hostedemail.com; dkim=pass header.d=infradead.org header.s=casper.20170209 header.b=emcNFiV4; dmarc=none; spf=none (imf21.hostedemail.com: domain of willy@infradead.org has no SPF policy when checking 90.155.50.34) smtp.mailfrom=willy@infradead.org X-HE-Tag: 1633358198-624677 X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: Pass struct slab to add_full() and remove_full(). Improves type safety. Signed-off-by: Matthew Wilcox (Oracle) --- mm/slub.c | 20 ++++++++++---------- 1 file changed, 10 insertions(+), 10 deletions(-) diff --git a/mm/slub.c b/mm/slub.c index 6d81e54e61df..32a1bd4c8a88 100644 --- a/mm/slub.c +++ b/mm/slub.c @@ -1185,22 +1185,22 @@ static void trace(struct kmem_cache *s, struct page *page, void *object, * Tracking of fully allocated slabs for debugging purposes. 
*/ static void add_full(struct kmem_cache *s, - struct kmem_cache_node *n, struct page *page) + struct kmem_cache_node *n, struct slab *slab) { if (!(s->flags & SLAB_STORE_USER)) return; lockdep_assert_held(&n->list_lock); - list_add(&page->slab_list, &n->full); + list_add(&slab->slab_list, &n->full); } -static void remove_full(struct kmem_cache *s, struct kmem_cache_node *n, struct page *page) +static void remove_full(struct kmem_cache *s, struct kmem_cache_node *n, struct slab *slab) { if (!(s->flags & SLAB_STORE_USER)) return; lockdep_assert_held(&n->list_lock); - list_del(&page->slab_list); + list_del(&slab->slab_list); } /* Tracking of the number of slabs for debugging purposes */ @@ -1616,9 +1616,9 @@ static inline int slab_pad_check(struct kmem_cache *s, struct page *page) static inline int check_object(struct kmem_cache *s, struct page *page, void *object, u8 val) { return 1; } static inline void add_full(struct kmem_cache *s, struct kmem_cache_node *n, - struct page *page) {} + struct slab *slab) {} static inline void remove_full(struct kmem_cache *s, struct kmem_cache_node *n, - struct page *page) {} + struct slab *slab) {} slab_flags_t kmem_cache_flags(unsigned int object_size, slab_flags_t flags, const char *name) { @@ -2402,12 +2402,12 @@ static void deactivate_slab(struct kmem_cache *s, struct slab *slab, if (l == M_PARTIAL) remove_partial(n, slab); else if (l == M_FULL) - remove_full(s, n, slab_page(slab)); + remove_full(s, n, slab); if (m == M_PARTIAL) add_partial(n, slab, tail); else if (m == M_FULL) - add_full(s, n, slab_page(slab)); + add_full(s, n, slab); } l = m; @@ -3361,7 +3361,7 @@ static void __slab_free(struct kmem_cache *s, struct slab *slab, * then add it. */ if (!kmem_cache_has_cpu_partial(s) && unlikely(!prior)) { - remove_full(s, n, slab_page(slab)); + remove_full(s, n, slab); add_partial(n, slab, DEACTIVATE_TO_TAIL); stat(s, FREE_ADD_PARTIAL); } @@ -3377,7 +3377,7 @@ static void __slab_free(struct kmem_cache *s, struct slab *slab, stat(s, FREE_REMOVE_PARTIAL); } else { /* Slab must be on the full list */ - remove_full(s, n, slab_page(slab)); + remove_full(s, n, slab); } spin_unlock_irqrestore(&n->list_lock, flags); From patchwork Mon Oct 4 13:46:25 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Matthew Wilcox X-Patchwork-Id: 12534165 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id C19ECC433EF for ; Mon, 4 Oct 2021 14:38:38 +0000 (UTC) Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by mail.kernel.org (Postfix) with ESMTP id 6A1876136F for ; Mon, 4 Oct 2021 14:38:38 +0000 (UTC) DMARC-Filter: OpenDMARC Filter v1.4.1 mail.kernel.org 6A1876136F Authentication-Results: mail.kernel.org; dmarc=none (p=none dis=none) header.from=infradead.org Authentication-Results: mail.kernel.org; spf=pass smtp.mailfrom=kvack.org Received: by kanga.kvack.org (Postfix) id 07B88940041; Mon, 4 Oct 2021 10:38:38 -0400 (EDT) Received: by kanga.kvack.org (Postfix, from userid 40) id 02B3494000B; Mon, 4 Oct 2021 10:38:37 -0400 (EDT) X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id E5C12940041; Mon, 4 Oct 2021 10:38:37 -0400 (EDT) X-Delivered-To: linux-mm@kvack.org Received: from forelay.hostedemail.com (smtprelay0206.hostedemail.com [216.40.44.206]) by 
kanga.kvack.org (Postfix) with ESMTP id D393A94000B for ; Mon, 4 Oct 2021 10:38:37 -0400 (EDT) Received: from smtpin01.hostedemail.com (10.5.19.251.rfc1918.com [10.5.19.251]) by forelay02.hostedemail.com (Postfix) with ESMTP id 89ED92BFAA for ; Mon, 4 Oct 2021 14:38:37 +0000 (UTC) X-FDA: 78659011074.01.AD88FD0 Received: from casper.infradead.org (casper.infradead.org [90.155.50.34]) by imf24.hostedemail.com (Postfix) with ESMTP id 48A9CB00154E for ; Mon, 4 Oct 2021 14:38:37 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=infradead.org; s=casper.20170209; h=Content-Transfer-Encoding:MIME-Version: References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Sender:Reply-To: Content-Type:Content-ID:Content-Description; bh=4W8Ur02zWVOUVXpmjYLLTbiz1bBmz3cyKywKDzFgU4k=; b=hznbWmGbdHGzSd0p2WWxH3lFeG yCy+/1JDdkhqA5U8XCoyAtvI4iMari+vmu0me0h8wsbdL/ut36eK4MZEFKUr6kBcj/frUFqO6viKa +EXLLeIejAihz2mmRzAg4mvPmaUGg3YCvtlKUfpop7K7brXnebeLH9/B5lj3ugkIoMSKrHwOno5ze Ie24LDV+RamQIFo3Ev3h+0PGnURN7TaX3zy9BjQ7dHZ9SDn2qFwOXu9nK9Oc3FUre1VVNaf+VXaIC BkWKwWGRhGRZjzUKVNEEDfbF6jO2PTx7bynrKXfIuWtlSpJFoQDtjPK7N/6TFep8+djsI6L3Vin4E mz2mGuig==; Received: from willy by casper.infradead.org with local (Exim 4.94.2 #2 (Red Hat Linux)) id 1mXP3q-00H0OP-V7; Mon, 04 Oct 2021 14:35:54 +0000 From: "Matthew Wilcox (Oracle)" To: linux-mm@kvack.org Cc: "Matthew Wilcox (Oracle)" Subject: [PATCH 37/62] mm/slub: Convert free_consistency_checks() to take a struct slab Date: Mon, 4 Oct 2021 14:46:25 +0100 Message-Id: <20211004134650.4031813-38-willy@infradead.org> X-Mailer: git-send-email 2.31.1 In-Reply-To: <20211004134650.4031813-1-willy@infradead.org> References: <20211004134650.4031813-1-willy@infradead.org> MIME-Version: 1.0 X-Rspamd-Server: rspam03 X-Rspamd-Queue-Id: 48A9CB00154E X-Stat-Signature: oyqkcx89mzu3pw411gm6i6oz5jur974g Authentication-Results: imf24.hostedemail.com; dkim=pass header.d=infradead.org header.s=casper.20170209 header.b=hznbWmGb; dmarc=none; spf=none (imf24.hostedemail.com: domain of willy@infradead.org has no SPF policy when checking 90.155.50.34) smtp.mailfrom=willy@infradead.org X-HE-Tag: 1633358317-848992 X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: Provides a little more type safety, but mostly this is just pushing slab_page() calls down. 
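A minimal standalone sketch (not part of the patch) of the conversion pattern this and the surrounding patches follow: a function gains a struct slab parameter for type safety, and any helper that still wants a struct page is reached through slab_page() at the call site. The stand-in types below are simplified and assume struct slab overlays struct page, which is what lets the conversion be a cast.

/* Simplified stand-ins, not the real kernel definitions. */
struct page { unsigned long flags; };
struct slab { unsigned long flags; };	/* assumed to share struct page's layout */

/*
 * Boundary conversion: callers keep a struct slab for type safety and
 * only produce a struct page where an unconverted helper still wants one.
 */
static inline struct page *slab_page(struct slab *slab)
{
	return (struct page *)slab;
}

/* Hypothetical helper that has not been converted yet. */
static int legacy_check(struct page *page)
{
	return page->flags != 0;
}

/* Converted caller: struct slab in the signature, slab_page() pushed down. */
static int converted_check(struct slab *slab)
{
	return legacy_check(slab_page(slab));
}

Pushing slab_page() downwards one level at a time, rather than converting everything at once, means each helper can be switched over in a later patch without touching its callers again.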
Signed-off-by: Matthew Wilcox (Oracle) --- mm/slub.c | 26 +++++++++++++------------- 1 file changed, 13 insertions(+), 13 deletions(-) diff --git a/mm/slub.c b/mm/slub.c index 32a1bd4c8a88..a8ea2779edf4 100644 --- a/mm/slub.c +++ b/mm/slub.c @@ -1308,32 +1308,32 @@ static noinline int alloc_debug_processing(struct kmem_cache *s, } static inline int free_consistency_checks(struct kmem_cache *s, - struct page *page, void *object, unsigned long addr) + struct slab *slab, void *object, unsigned long addr) { - if (!check_valid_pointer(s, page, object)) { - slab_err(s, page, "Invalid object pointer 0x%p", object); + if (!check_valid_pointer(s, slab_page(slab), object)) { + slab_err(s, slab_page(slab), "Invalid object pointer 0x%p", object); return 0; } - if (on_freelist(s, page, object)) { - object_err(s, page, object, "Object already free"); + if (on_freelist(s, slab_page(slab), object)) { + object_err(s, slab_page(slab), object, "Object already free"); return 0; } - if (!check_object(s, page, object, SLUB_RED_ACTIVE)) + if (!check_object(s, slab_page(slab), object, SLUB_RED_ACTIVE)) return 0; - if (unlikely(s != page->slab_cache)) { - if (!PageSlab(page)) { - slab_err(s, page, "Attempt to free object(0x%p) outside of slab", + if (unlikely(s != slab->slab_cache)) { + if (!slab_test_cache(slab)) { + slab_err(s, slab_page(slab), "Attempt to free object(0x%p) outside of slab", object); - } else if (!page->slab_cache) { + } else if (!slab->slab_cache) { pr_err("SLUB : no slab for object 0x%p.\n", object); dump_stack(); } else - object_err(s, page, object, - "page slab pointer corrupt."); + object_err(s, slab_page(slab), object, + "slab pointer corrupt."); return 0; } return 1; @@ -1363,7 +1363,7 @@ static noinline int free_debug_processing( cnt++; if (s->flags & SLAB_CONSISTENCY_CHECKS) { - if (!free_consistency_checks(s, slab_page(slab), object, addr)) + if (!free_consistency_checks(s, slab, object, addr)) goto out; } From patchwork Mon Oct 4 13:46:26 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Matthew Wilcox X-Patchwork-Id: 12534167 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 27DC6C433EF for ; Mon, 4 Oct 2021 14:39:16 +0000 (UTC) Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by mail.kernel.org (Postfix) with ESMTP id 985EA61372 for ; Mon, 4 Oct 2021 14:39:15 +0000 (UTC) DMARC-Filter: OpenDMARC Filter v1.4.1 mail.kernel.org 985EA61372 Authentication-Results: mail.kernel.org; dmarc=none (p=none dis=none) header.from=infradead.org Authentication-Results: mail.kernel.org; spf=pass smtp.mailfrom=kvack.org Received: by kanga.kvack.org (Postfix) id 36183940042; Mon, 4 Oct 2021 10:39:15 -0400 (EDT) Received: by kanga.kvack.org (Postfix, from userid 40) id 311B094000B; Mon, 4 Oct 2021 10:39:15 -0400 (EDT) X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id 2003E940042; Mon, 4 Oct 2021 10:39:15 -0400 (EDT) X-Delivered-To: linux-mm@kvack.org Received: from forelay.hostedemail.com (smtprelay0010.hostedemail.com [216.40.44.10]) by kanga.kvack.org (Postfix) with ESMTP id 0FEB794000B for ; Mon, 4 Oct 2021 10:39:15 -0400 (EDT) Received: from smtpin14.hostedemail.com (10.5.19.251.rfc1918.com [10.5.19.251]) by forelay05.hostedemail.com (Postfix) with ESMTP id C5038181A88E4 for ; Mon, 4 
Oct 2021 14:39:14 +0000 (UTC) X-FDA: 78659012628.14.80EBBA4 Received: from casper.infradead.org (casper.infradead.org [90.155.50.34]) by imf16.hostedemail.com (Postfix) with ESMTP id 8C321F000EE0 for ; Mon, 4 Oct 2021 14:39:14 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=infradead.org; s=casper.20170209; h=Content-Transfer-Encoding:MIME-Version: References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Sender:Reply-To: Content-Type:Content-ID:Content-Description; bh=6WzDPlePuewKHlznd/kzbUBpVCH7kMdexvOTiAufxvY=; b=TGSSofliDqLY9q7M2DubIVtXTm ZkuaCWyG9WBWLXiIglcH+/Sf0L/Tz9YU83pK4lsTW68seennb1jE1rvtytSRCHa9ymxOcn3HvzFIj OFSkDyQdleJ4EWkHunW+3iTnKIE3QQ2AcvkqHTEXsnBKIyiinXyUojS2CeF1AHF7By6JRdBPU5d3h Nxj4rqQysE0AoAl8n3qBFckeJ95pJhRGJA0BdemZHj99sK92hOnroDMTAO0YZl/8iJwjIz/6Fbpkx zsuQza5vwINoj8TgiXnZO1LcOCDLLebX7ylha8pA8i97h3im7jhqUeAMTaHELNkV0H4sfpHL7iwnQ J1mkTfnA==; Received: from willy by casper.infradead.org with local (Exim 4.94.2 #2 (Red Hat Linux)) id 1mXP5Z-00H0XM-0r; Mon, 04 Oct 2021 14:37:19 +0000 From: "Matthew Wilcox (Oracle)" To: linux-mm@kvack.org Cc: "Matthew Wilcox (Oracle)" Subject: [PATCH 38/62] mm/slub: Convert alloc_debug_processing() to struct slab Date: Mon, 4 Oct 2021 14:46:26 +0100 Message-Id: <20211004134650.4031813-39-willy@infradead.org> X-Mailer: git-send-email 2.31.1 In-Reply-To: <20211004134650.4031813-1-willy@infradead.org> References: <20211004134650.4031813-1-willy@infradead.org> MIME-Version: 1.0 X-Rspamd-Server: rspam02 X-Rspamd-Queue-Id: 8C321F000EE0 X-Stat-Signature: 6hdqx5siwb5ahna67x13qnc7xos5udeh Authentication-Results: imf16.hostedemail.com; dkim=pass header.d=infradead.org header.s=casper.20170209 header.b=TGSSofli; spf=none (imf16.hostedemail.com: domain of willy@infradead.org has no SPF policy when checking 90.155.50.34) smtp.mailfrom=willy@infradead.org; dmarc=none X-HE-Tag: 1633358354-939061 X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: Push the slab conversion all the way down to alloc_consistency_checks(), but actually use the fact that it's a slab in alloc_debug_processing(). 
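The visible payoff of taking a struct slab here is in the "bad:" path: the bookkeeping fields (objects, inuse, freelist) are written through the slab pointer directly instead of through struct page. A hypothetical standalone rendering of that recovery step, with a simplified stand-in struct:

#include <stddef.h>

/* Simplified stand-in; the real struct slab carries more state. */
struct slab {
	void *freelist;		/* first free object, NULL when exhausted */
	unsigned int inuse;	/* objects currently handed out */
	unsigned int objects;	/* total objects in this slab */
};

/*
 * Recovery step from the "bad:" path as a standalone helper: pretend
 * every object is allocated so the corrupted freelist is never walked
 * again.  This leaks the remaining objects but keeps the cache usable.
 */
static void slab_mark_all_used(struct slab *slab)
{
	slab->inuse = slab->objects;
	slab->freelist = NULL;
}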
Signed-off-by: Matthew Wilcox (Oracle) --- mm/slub.c | 28 ++++++++++++++-------------- 1 file changed, 14 insertions(+), 14 deletions(-) diff --git a/mm/slub.c b/mm/slub.c index a8ea2779edf4..eb4286886c3e 100644 --- a/mm/slub.c +++ b/mm/slub.c @@ -1261,48 +1261,48 @@ void setup_page_debug(struct kmem_cache *s, struct page *page, void *addr) } static inline int alloc_consistency_checks(struct kmem_cache *s, - struct page *page, void *object) + struct slab *slab, void *object) { - if (!check_slab(s, page)) + if (!check_slab(s, slab_page(slab))) return 0; - if (!check_valid_pointer(s, page, object)) { - object_err(s, page, object, "Freelist Pointer check fails"); + if (!check_valid_pointer(s, slab_page(slab), object)) { + object_err(s, slab_page(slab), object, "Freelist Pointer check fails"); return 0; } - if (!check_object(s, page, object, SLUB_RED_INACTIVE)) + if (!check_object(s, slab_page(slab), object, SLUB_RED_INACTIVE)) return 0; return 1; } static noinline int alloc_debug_processing(struct kmem_cache *s, - struct page *page, + struct slab *slab, void *object, unsigned long addr) { if (s->flags & SLAB_CONSISTENCY_CHECKS) { - if (!alloc_consistency_checks(s, page, object)) + if (!alloc_consistency_checks(s, slab, object)) goto bad; } /* Success perform special debug activities for allocs */ if (s->flags & SLAB_STORE_USER) set_track(s, object, TRACK_ALLOC, addr); - trace(s, page, object, 1); + trace(s, slab_page(slab), object, 1); init_object(s, object, SLUB_RED_ACTIVE); return 1; bad: - if (PageSlab(page)) { + if (slab_test_cache(slab)) { /* - * If this is a slab page then lets do the best we can + * If this is a slab then lets do the best we can * to avoid issues in the future. Marking all objects * as used avoids touching the remaining objects. */ slab_fix(s, "Marking all objects used"); - page->inuse = page->objects; - page->freelist = NULL; + slab->inuse = slab->objects; + slab->freelist = NULL; } return 0; } @@ -1604,7 +1604,7 @@ static inline void setup_page_debug(struct kmem_cache *s, struct page *page, void *addr) {} static inline int alloc_debug_processing(struct kmem_cache *s, - struct page *page, void *object, unsigned long addr) { return 0; } + struct slab *slab, void *object, unsigned long addr) { return 0; } static inline int free_debug_processing( struct kmem_cache *s, struct slab *slab, @@ -3006,7 +3006,7 @@ static void *___slab_alloc(struct kmem_cache *s, gfp_t gfpflags, int node, check_new_slab: if (kmem_cache_debug(s)) { - if (!alloc_debug_processing(s, slab_page(slab), freelist, addr)) { + if (!alloc_debug_processing(s, slab, freelist, addr)) { /* Slab failed checks. 
Next slab needed */ goto new_slab; } else { From patchwork Mon Oct 4 13:46:27 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Matthew Wilcox X-Patchwork-Id: 12534177 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 5469EC433FE for ; Mon, 4 Oct 2021 14:39:57 +0000 (UTC) Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by mail.kernel.org (Postfix) with ESMTP id 05AD16136F for ; Mon, 4 Oct 2021 14:39:56 +0000 (UTC) DMARC-Filter: OpenDMARC Filter v1.4.1 mail.kernel.org 05AD16136F Authentication-Results: mail.kernel.org; dmarc=none (p=none dis=none) header.from=infradead.org Authentication-Results: mail.kernel.org; spf=pass smtp.mailfrom=kvack.org Received: by kanga.kvack.org (Postfix) id A6538940043; Mon, 4 Oct 2021 10:39:56 -0400 (EDT) Received: by kanga.kvack.org (Postfix, from userid 40) id A14AA94000B; Mon, 4 Oct 2021 10:39:56 -0400 (EDT) X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id 9040B940043; Mon, 4 Oct 2021 10:39:56 -0400 (EDT) X-Delivered-To: linux-mm@kvack.org Received: from forelay.hostedemail.com (smtprelay0042.hostedemail.com [216.40.44.42]) by kanga.kvack.org (Postfix) with ESMTP id 8130C94000B for ; Mon, 4 Oct 2021 10:39:56 -0400 (EDT) Received: from smtpin12.hostedemail.com (10.5.19.251.rfc1918.com [10.5.19.251]) by forelay04.hostedemail.com (Postfix) with ESMTP id 3C5152DD7B for ; Mon, 4 Oct 2021 14:39:56 +0000 (UTC) X-FDA: 78659014392.12.1081BF1 Received: from casper.infradead.org (casper.infradead.org [90.155.50.34]) by imf25.hostedemail.com (Postfix) with ESMTP id CB051B000D09 for ; Mon, 4 Oct 2021 14:39:55 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=infradead.org; s=casper.20170209; h=Content-Transfer-Encoding:MIME-Version: References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Sender:Reply-To: Content-Type:Content-ID:Content-Description; bh=WHVh1IrKqLvOW+uQWq6NeqETwN3jHdcfu7o2NC34Whg=; b=DW3bYiJGTBSxs/9vQ59wIc9DQO 7dhbXB696EMImuelHjJM+uGrGl3xyaVOheXH/EyVGAn49jZjcZgeZwKKSxNTSL7myumEGdinqtHJC ZA4t6iGX/+BQfGn0hWu4yqobfnJIgMCGDUL2QzZjNa9shQICShPMsDG91Zm0aCdg9wsI11ahn+jy9 R5kh0JxTmgd/qKOxwPuZ738NaE10gM11GazP1is82GbM6ARmhe1pU451yftU8TX1i8TnijfYzZLUo D6JSjdWnm1HxShNncdHPLy8nnTsfVLiGxjwFzD2CfvqpbOdBoeCHmUlx4908qhx9rJ4B8M63466Fm p7XvcceA==; Received: from willy by casper.infradead.org with local (Exim 4.94.2 #2 (Red Hat Linux)) id 1mXP6h-00H0bN-8n; Mon, 04 Oct 2021 14:38:54 +0000 From: "Matthew Wilcox (Oracle)" To: linux-mm@kvack.org Cc: "Matthew Wilcox (Oracle)" Subject: [PATCH 39/62] mm/slub: Convert check_object() to struct slab Date: Mon, 4 Oct 2021 14:46:27 +0100 Message-Id: <20211004134650.4031813-40-willy@infradead.org> X-Mailer: git-send-email 2.31.1 In-Reply-To: <20211004134650.4031813-1-willy@infradead.org> References: <20211004134650.4031813-1-willy@infradead.org> MIME-Version: 1.0 X-Rspamd-Server: rspam02 X-Rspamd-Queue-Id: CB051B000D09 X-Stat-Signature: 5k8qs333fme8cdgmon84bg3bguitjw3e Authentication-Results: imf25.hostedemail.com; dkim=pass header.d=infradead.org header.s=casper.20170209 header.b=DW3bYiJG; spf=none (imf25.hostedemail.com: domain of willy@infradead.org has no SPF policy when checking 90.155.50.34) smtp.mailfrom=willy@infradead.org; dmarc=none X-HE-Tag: 1633358395-684430 X-Bogosity: Ham, 
tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: Also convert check_bytes_and_report() and check_pad_bytes(). This is almost exclusively pushing slab_page() calls down. Signed-off-by: Matthew Wilcox (Oracle) --- mm/slub.c | 38 +++++++++++++++++++------------------- 1 file changed, 19 insertions(+), 19 deletions(-) diff --git a/mm/slub.c b/mm/slub.c index eb4286886c3e..fd11ca47bce8 100644 --- a/mm/slub.c +++ b/mm/slub.c @@ -904,13 +904,13 @@ static void restore_bytes(struct kmem_cache *s, char *message, u8 data, memset(from, data, to - from); } -static int check_bytes_and_report(struct kmem_cache *s, struct page *page, +static int check_bytes_and_report(struct kmem_cache *s, struct slab *slab, u8 *object, char *what, u8 *start, unsigned int value, unsigned int bytes) { u8 *fault; u8 *end; - u8 *addr = page_address(page); + u8 *addr = slab_address(slab); metadata_access_enable(); fault = memchr_inv(kasan_reset_tag(start), value, bytes); @@ -929,7 +929,7 @@ static int check_bytes_and_report(struct kmem_cache *s, struct page *page, pr_err("0x%p-0x%p @offset=%tu. First byte 0x%x instead of 0x%x\n", fault, end - 1, fault - addr, fault[0], value); - print_trailer(s, page, object); + print_trailer(s, slab_page(slab), object); add_taint(TAINT_BAD_PAGE, LOCKDEP_NOW_UNRELIABLE); skip_bug_print: @@ -975,7 +975,7 @@ static int check_bytes_and_report(struct kmem_cache *s, struct page *page, * may be used with merged slabcaches. */ -static int check_pad_bytes(struct kmem_cache *s, struct page *page, u8 *p) +static int check_pad_bytes(struct kmem_cache *s, struct slab *slab, u8 *p) { unsigned long off = get_info_end(s); /* The end of info */ @@ -988,7 +988,7 @@ static int check_pad_bytes(struct kmem_cache *s, struct page *page, u8 *p) if (size_from_object(s) == off) return 1; - return check_bytes_and_report(s, page, p, "Object padding", + return check_bytes_and_report(s, slab, p, "Object padding", p + off, POISON_INUSE, size_from_object(s) - off); } @@ -1029,23 +1029,23 @@ static int slab_pad_check(struct kmem_cache *s, struct page *page) return 0; } -static int check_object(struct kmem_cache *s, struct page *page, +static int check_object(struct kmem_cache *s, struct slab *slab, void *object, u8 val) { u8 *p = object; u8 *endobject = object + s->object_size; if (s->flags & SLAB_RED_ZONE) { - if (!check_bytes_and_report(s, page, object, "Left Redzone", + if (!check_bytes_and_report(s, slab, object, "Left Redzone", object - s->red_left_pad, val, s->red_left_pad)) return 0; - if (!check_bytes_and_report(s, page, object, "Right Redzone", + if (!check_bytes_and_report(s, slab, object, "Right Redzone", endobject, val, s->inuse - s->object_size)) return 0; } else { if ((s->flags & SLAB_POISON) && s->object_size < s->inuse) { - check_bytes_and_report(s, page, p, "Alignment padding", + check_bytes_and_report(s, slab, p, "Alignment padding", endobject, POISON_INUSE, s->inuse - s->object_size); } @@ -1053,15 +1053,15 @@ static int check_object(struct kmem_cache *s, struct page *page, if (s->flags & SLAB_POISON) { if (val != SLUB_RED_ACTIVE && (s->flags & __OBJECT_POISON) && - (!check_bytes_and_report(s, page, p, "Poison", p, + (!check_bytes_and_report(s, slab, p, "Poison", p, POISON_FREE, s->object_size - 1) || - !check_bytes_and_report(s, page, p, "End Poison", + !check_bytes_and_report(s, slab, p, "End Poison", p + s->object_size - 1, POISON_END, 1))) return 0; /* * check_pad_bytes cleans up on its own. 
*/ - check_pad_bytes(s, page, p); + check_pad_bytes(s, slab, p); } if (!freeptr_outside_object(s) && val == SLUB_RED_ACTIVE) @@ -1072,8 +1072,8 @@ static int check_object(struct kmem_cache *s, struct page *page, return 1; /* Check free pointer validity */ - if (!check_valid_pointer(s, page, get_freepointer(s, p))) { - object_err(s, page, p, "Freepointer corrupt"); + if (!check_valid_pointer(s, slab_page(slab), get_freepointer(s, p))) { + object_err(s, slab_page(slab), p, "Freepointer corrupt"); /* * No choice but to zap it and thus lose the remainder * of the free objects in this slab. May cause @@ -1271,7 +1271,7 @@ static inline int alloc_consistency_checks(struct kmem_cache *s, return 0; } - if (!check_object(s, slab_page(slab), object, SLUB_RED_INACTIVE)) + if (!check_object(s, slab, object, SLUB_RED_INACTIVE)) return 0; return 1; @@ -1320,7 +1320,7 @@ static inline int free_consistency_checks(struct kmem_cache *s, return 0; } - if (!check_object(s, slab_page(slab), object, SLUB_RED_ACTIVE)) + if (!check_object(s, slab, object, SLUB_RED_ACTIVE)) return 0; if (unlikely(s != slab->slab_cache)) { @@ -1613,7 +1613,7 @@ static inline int free_debug_processing( static inline int slab_pad_check(struct kmem_cache *s, struct page *page) { return 1; } -static inline int check_object(struct kmem_cache *s, struct page *page, +static inline int check_object(struct kmem_cache *s, struct slab *slab, void *object, u8 val) { return 1; } static inline void add_full(struct kmem_cache *s, struct kmem_cache_node *n, struct slab *slab) {} @@ -1971,7 +1971,7 @@ static void __free_slab(struct kmem_cache *s, struct slab *slab) slab_pad_check(s, slab_page(slab)); for_each_object(p, s, slab_address(slab), slab->objects) - check_object(s, slab_page(slab), p, SLUB_RED_INACTIVE); + check_object(s, slab, p, SLUB_RED_INACTIVE); } __slab_clear_pfmemalloc(slab); @@ -4968,7 +4968,7 @@ static void validate_slab(struct kmem_cache *s, struct slab *slab, u8 val = test_bit(__obj_to_index(s, addr, p), obj_map) ? 
SLUB_RED_INACTIVE : SLUB_RED_ACTIVE; - if (!check_object(s, slab_page(slab), p, val)) + if (!check_object(s, slab, p, val)) break; } unlock: From patchwork Mon Oct 4 13:46:28 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Matthew Wilcox X-Patchwork-Id: 12534179 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id AE2EFC433F5 for ; Mon, 4 Oct 2021 14:40:38 +0000 (UTC) Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by mail.kernel.org (Postfix) with ESMTP id 4159061373 for ; Mon, 4 Oct 2021 14:40:38 +0000 (UTC) DMARC-Filter: OpenDMARC Filter v1.4.1 mail.kernel.org 4159061373 Authentication-Results: mail.kernel.org; dmarc=none (p=none dis=none) header.from=infradead.org Authentication-Results: mail.kernel.org; spf=pass smtp.mailfrom=kvack.org Received: by kanga.kvack.org (Postfix) id D22D9940044; Mon, 4 Oct 2021 10:40:37 -0400 (EDT) Received: by kanga.kvack.org (Postfix, from userid 40) id CD2DC94000B; Mon, 4 Oct 2021 10:40:37 -0400 (EDT) X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id BC246940044; Mon, 4 Oct 2021 10:40:37 -0400 (EDT) X-Delivered-To: linux-mm@kvack.org Received: from forelay.hostedemail.com (smtprelay0251.hostedemail.com [216.40.44.251]) by kanga.kvack.org (Postfix) with ESMTP id ADEEF94000B for ; Mon, 4 Oct 2021 10:40:37 -0400 (EDT) Received: from smtpin20.hostedemail.com (10.5.19.251.rfc1918.com [10.5.19.251]) by forelay05.hostedemail.com (Postfix) with ESMTP id 6261A182A6CB5 for ; Mon, 4 Oct 2021 14:40:37 +0000 (UTC) X-FDA: 78659016114.20.C719800 Received: from casper.infradead.org (casper.infradead.org [90.155.50.34]) by imf27.hostedemail.com (Postfix) with ESMTP id 11E66700864A for ; Mon, 4 Oct 2021 14:40:36 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=infradead.org; s=casper.20170209; h=Content-Transfer-Encoding:MIME-Version: References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Sender:Reply-To: Content-Type:Content-ID:Content-Description; bh=mqDGptlt0kDY2UoM0GUV9bKnRVRX2WNfLX46D22kcB0=; b=USTEUPsDBbmob6TJb6XGCOuYcr H8jmiqFtS4INYqtJpojIZWr8fayv8hMmv+7csW4XewuCHwiG6zNkdVJ74Y+In7hUnWself3bFpf9O SC7eNMZ0LAuWrWf4Xd2yvyaRKPAPyCQ8WG7hN+2BVyQLOAw0+S4SS56hwnmasDd4BOQ4OIP8hOd7X BMU5Q5TjKfZjyMP+t91N65oZA+pTZJoH4GS+IgcmAdgCei9MNVYNNb835MsDCY2hkr3HqbiVn8MqZ r7aLfr0BjtkA0a2zv4JPRqmTN2bsUuV9X+SmVasiOA3VQ2O1+wxe0w0V3+RydvaWAEwcrGubAB+zD TKpKL6/g==; Received: from willy by casper.infradead.org with local (Exim 4.94.2 #2 (Red Hat Linux)) id 1mXP7n-00H0rV-SD; Mon, 04 Oct 2021 14:39:37 +0000 From: "Matthew Wilcox (Oracle)" To: linux-mm@kvack.org Cc: "Matthew Wilcox (Oracle)" Subject: [PATCH 40/62] mm/slub: Convert on_freelist() to struct slab Date: Mon, 4 Oct 2021 14:46:28 +0100 Message-Id: <20211004134650.4031813-41-willy@infradead.org> X-Mailer: git-send-email 2.31.1 In-Reply-To: <20211004134650.4031813-1-willy@infradead.org> References: <20211004134650.4031813-1-willy@infradead.org> MIME-Version: 1.0 X-Rspamd-Server: rspam02 X-Rspamd-Queue-Id: 11E66700864A X-Stat-Signature: yxbtn7u9q1p7sx5zzr59zh9m3daycsz1 Authentication-Results: imf27.hostedemail.com; dkim=pass header.d=infradead.org header.s=casper.20170209 header.b=USTEUPsD; spf=none (imf27.hostedemail.com: domain of willy@infradead.org has no SPF policy when checking 
90.155.50.34) smtp.mailfrom=willy@infradead.org; dmarc=none X-HE-Tag: 1633358436-175158 X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: Improves type safety as well as pushing down calls to slab_page(). Signed-off-by: Matthew Wilcox (Oracle) --- mm/slub.c | 40 ++++++++++++++++++++-------------------- 1 file changed, 20 insertions(+), 20 deletions(-) diff --git a/mm/slub.c b/mm/slub.c index fd11ca47bce8..10db0ce7fe2a 100644 --- a/mm/slub.c +++ b/mm/slub.c @@ -1111,29 +1111,29 @@ static int check_slab(struct kmem_cache *s, struct page *page) } /* - * Determine if a certain object on a page is on the freelist. Must hold the + * Determine if a certain object in a slab is on the freelist. Must hold the * slab lock to guarantee that the chains are in a consistent state. */ -static int on_freelist(struct kmem_cache *s, struct page *page, void *search) +static int on_freelist(struct kmem_cache *s, struct slab *slab, void *search) { int nr = 0; void *fp; void *object = NULL; int max_objects; - fp = page->freelist; - while (fp && nr <= page->objects) { + fp = slab->freelist; + while (fp && nr <= slab->objects) { if (fp == search) return 1; - if (!check_valid_pointer(s, page, fp)) { + if (!check_valid_pointer(s, slab_page(slab), fp)) { if (object) { - object_err(s, page, object, + object_err(s, slab_page(slab), object, "Freechain corrupt"); set_freepointer(s, object, NULL); } else { - slab_err(s, page, "Freepointer corrupt"); - page->freelist = NULL; - page->inuse = page->objects; + slab_err(s, slab_page(slab), "Freepointer corrupt"); + slab->freelist = NULL; + slab->inuse = slab->objects; slab_fix(s, "Freelist cleared"); return 0; } @@ -1144,20 +1144,20 @@ static int on_freelist(struct kmem_cache *s, struct page *page, void *search) nr++; } - max_objects = order_objects(compound_order(page), s->size); + max_objects = order_objects(slab_order(slab), s->size); if (max_objects > MAX_OBJS_PER_PAGE) max_objects = MAX_OBJS_PER_PAGE; - if (page->objects != max_objects) { - slab_err(s, page, "Wrong number of objects. Found %d but should be %d", - page->objects, max_objects); - page->objects = max_objects; + if (slab->objects != max_objects) { + slab_err(s, slab_page(slab), "Wrong number of objects. Found %d but should be %d", + slab->objects, max_objects); + slab->objects = max_objects; slab_fix(s, "Number of objects adjusted"); } - if (page->inuse != page->objects - nr) { - slab_err(s, page, "Wrong object count. Counter is %d but counted were %d", - page->inuse, page->objects - nr); - page->inuse = page->objects - nr; + if (slab->inuse != slab->objects - nr) { + slab_err(s, slab_page(slab), "Wrong object count. 
Counter is %d but counted were %d", + slab->inuse, slab->objects - nr); + slab->inuse = slab->objects - nr; slab_fix(s, "Object count adjusted"); } return search == NULL; @@ -1315,7 +1315,7 @@ static inline int free_consistency_checks(struct kmem_cache *s, return 0; } - if (on_freelist(s, slab_page(slab), object)) { + if (on_freelist(s, slab, object)) { object_err(s, slab_page(slab), object, "Object already free"); return 0; } @@ -4959,7 +4959,7 @@ static void validate_slab(struct kmem_cache *s, struct slab *slab, slab_lock(slab_page(slab), &flags); - if (!check_slab(s, slab_page(slab)) || !on_freelist(s, slab_page(slab), NULL)) + if (!check_slab(s, slab_page(slab)) || !on_freelist(s, slab, NULL)) goto unlock; /* Now we know that a valid freelist exists */ From patchwork Mon Oct 4 13:46:29 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Matthew Wilcox X-Patchwork-Id: 12534181 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id DC595C433FE for ; Mon, 4 Oct 2021 14:41:41 +0000 (UTC) Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by mail.kernel.org (Postfix) with ESMTP id 90BB66115B for ; Mon, 4 Oct 2021 14:41:41 +0000 (UTC) DMARC-Filter: OpenDMARC Filter v1.4.1 mail.kernel.org 90BB66115B Authentication-Results: mail.kernel.org; dmarc=none (p=none dis=none) header.from=infradead.org Authentication-Results: mail.kernel.org; spf=pass smtp.mailfrom=kvack.org Received: by kanga.kvack.org (Postfix) id 1D755940045; Mon, 4 Oct 2021 10:41:41 -0400 (EDT) Received: by kanga.kvack.org (Postfix, from userid 40) id 186F994000B; Mon, 4 Oct 2021 10:41:41 -0400 (EDT) X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id 07678940045; Mon, 4 Oct 2021 10:41:41 -0400 (EDT) X-Delivered-To: linux-mm@kvack.org Received: from forelay.hostedemail.com (smtprelay0086.hostedemail.com [216.40.44.86]) by kanga.kvack.org (Postfix) with ESMTP id ECC8094000B for ; Mon, 4 Oct 2021 10:41:40 -0400 (EDT) Received: from smtpin17.hostedemail.com (10.5.19.251.rfc1918.com [10.5.19.251]) by forelay02.hostedemail.com (Postfix) with ESMTP id 9A9F12C687 for ; Mon, 4 Oct 2021 14:41:40 +0000 (UTC) X-FDA: 78659018760.17.7048789 Received: from casper.infradead.org (casper.infradead.org [90.155.50.34]) by imf01.hostedemail.com (Postfix) with ESMTP id 623795071818 for ; Mon, 4 Oct 2021 14:41:40 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=infradead.org; s=casper.20170209; h=Content-Transfer-Encoding:MIME-Version: References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Sender:Reply-To: Content-Type:Content-ID:Content-Description; bh=FvJWTd9k9lSKs6uvJ381PRrh6tayArFgy81W9O2SvVU=; b=g/5a+5Mz2Oa+g7DboktlQPIiAq fdIU50Kqa1TfoGLPY0jXwhOgFk95EEnbQZx2IgVwPVRbtNzJyqiaRTIdI7cQO9TyemeOmCvOeLJpv fW9aefomC0jr2cQcQevrAn8PAQWIUlpQ/RovVaFtpSx0lx7sj8zhSTDDPr/Wb8m4E+7i0swoWFDmm uzK9wEuQGar+wuthRqZxM6KWtIRbty7zZhYHnORi2abdA3MFs0fyoajRVxJstkocP2bYF4rEdeaao 5xRxJ7mBSHZ4Fak7RrdYi8TvcId1KvHJxvoryoXMOjkYjwEua9Uuj5tTMy15t3t1UNSm5257W6opg ti7pRCTQ==; Received: from willy by casper.infradead.org with local (Exim 4.94.2 #2 (Red Hat Linux)) id 1mXP8I-00H0tC-N4; Mon, 04 Oct 2021 14:40:13 +0000 From: "Matthew Wilcox (Oracle)" To: linux-mm@kvack.org Cc: "Matthew Wilcox (Oracle)" Subject: [PATCH 41/62] mm/slub: Convert 
check_slab() to struct slab Date: Mon, 4 Oct 2021 14:46:29 +0100 Message-Id: <20211004134650.4031813-42-willy@infradead.org> X-Mailer: git-send-email 2.31.1 In-Reply-To: <20211004134650.4031813-1-willy@infradead.org> References: <20211004134650.4031813-1-willy@infradead.org> MIME-Version: 1.0 X-Rspamd-Server: rspam02 X-Rspamd-Queue-Id: 623795071818 X-Stat-Signature: oonjd8i3upa78ta5aauktab87zynharo Authentication-Results: imf01.hostedemail.com; dkim=pass header.d=infradead.org header.s=casper.20170209 header.b="g/5a+5Mz"; spf=none (imf01.hostedemail.com: domain of willy@infradead.org has no SPF policy when checking 90.155.50.34) smtp.mailfrom=willy@infradead.org; dmarc=none X-HE-Tag: 1633358500-324111 X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: Also convert slab_pad_check() to struct slab. Improves type safety and pushes down a few calls to slab_page(). Signed-off-by: Matthew Wilcox (Oracle) --- mm/slub.c | 44 ++++++++++++++++++++++---------------------- 1 file changed, 22 insertions(+), 22 deletions(-) diff --git a/mm/slub.c b/mm/slub.c index 10db0ce7fe2a..b1122b8cb36f 100644 --- a/mm/slub.c +++ b/mm/slub.c @@ -992,8 +992,8 @@ static int check_pad_bytes(struct kmem_cache *s, struct slab *slab, u8 *p) p + off, POISON_INUSE, size_from_object(s) - off); } -/* Check the pad bytes at the end of a slab page */ -static int slab_pad_check(struct kmem_cache *s, struct page *page) +/* Check the pad bytes at the end of a slab */ +static int slab_pad_check(struct kmem_cache *s, struct slab *slab) { u8 *start; u8 *fault; @@ -1005,8 +1005,8 @@ static int slab_pad_check(struct kmem_cache *s, struct page *page) if (!(s->flags & SLAB_POISON)) return 1; - start = page_address(page); - length = page_size(page); + start = slab_address(slab); + length = slab_size(slab); end = start + length; remainder = length % s->size; if (!remainder) @@ -1021,7 +1021,7 @@ static int slab_pad_check(struct kmem_cache *s, struct page *page) while (end > fault && end[-1] == POISON_INUSE) end--; - slab_err(s, page, "Padding overwritten. 0x%p-0x%p @offset=%tu", + slab_err(s, slab_page(slab), "Padding overwritten. 
0x%p-0x%p @offset=%tu", fault, end - 1, fault - start); print_section(KERN_ERR, "Padding ", pad, remainder); @@ -1085,28 +1085,28 @@ static int check_object(struct kmem_cache *s, struct slab *slab, return 1; } -static int check_slab(struct kmem_cache *s, struct page *page) +static int check_slab(struct kmem_cache *s, struct slab *slab) { int maxobj; - if (!PageSlab(page)) { - slab_err(s, page, "Not a valid slab page"); + if (!slab_test_cache(slab)) { + slab_err(s, slab_page(slab), "Not a valid slab page"); return 0; } - maxobj = order_objects(compound_order(page), s->size); - if (page->objects > maxobj) { - slab_err(s, page, "objects %u > max %u", - page->objects, maxobj); + maxobj = order_objects(slab_order(slab), s->size); + if (slab->objects > maxobj) { + slab_err(s, slab_page(slab), "objects %u > max %u", + slab->objects, maxobj); return 0; } - if (page->inuse > page->objects) { - slab_err(s, page, "inuse %u > max %u", - page->inuse, page->objects); + if (slab->inuse > slab->objects) { + slab_err(s, slab_page(slab), "inuse %u > max %u", + slab->inuse, slab->objects); return 0; } - /* Slab_pad_check fixes things up after itself */ - slab_pad_check(s, page); + /* slab_pad_check fixes things up after itself */ + slab_pad_check(s, slab); return 1; } @@ -1263,7 +1263,7 @@ void setup_page_debug(struct kmem_cache *s, struct page *page, void *addr) static inline int alloc_consistency_checks(struct kmem_cache *s, struct slab *slab, void *object) { - if (!check_slab(s, slab_page(slab))) + if (!check_slab(s, slab)) return 0; if (!check_valid_pointer(s, slab_page(slab), object)) { @@ -1355,7 +1355,7 @@ static noinline int free_debug_processing( slab_lock(slab_page(slab), &flags2); if (s->flags & SLAB_CONSISTENCY_CHECKS) { - if (!check_slab(s, slab_page(slab))) + if (!check_slab(s, slab)) goto out; } @@ -1611,7 +1611,7 @@ static inline int free_debug_processing( void *head, void *tail, int bulk_cnt, unsigned long addr) { return 0; } -static inline int slab_pad_check(struct kmem_cache *s, struct page *page) +static inline int slab_pad_check(struct kmem_cache *s, struct slab *slab) { return 1; } static inline int check_object(struct kmem_cache *s, struct slab *slab, void *object, u8 val) { return 1; } @@ -1969,7 +1969,7 @@ static void __free_slab(struct kmem_cache *s, struct slab *slab) if (kmem_cache_debug_flags(s, SLAB_CONSISTENCY_CHECKS)) { void *p; - slab_pad_check(s, slab_page(slab)); + slab_pad_check(s, slab); for_each_object(p, s, slab_address(slab), slab->objects) check_object(s, slab, p, SLUB_RED_INACTIVE); } @@ -4959,7 +4959,7 @@ static void validate_slab(struct kmem_cache *s, struct slab *slab, slab_lock(slab_page(slab), &flags); - if (!check_slab(s, slab_page(slab)) || !on_freelist(s, slab, NULL)) + if (!check_slab(s, slab) || !on_freelist(s, slab, NULL)) goto unlock; /* Now we know that a valid freelist exists */ From patchwork Mon Oct 4 13:46:30 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Matthew Wilcox X-Patchwork-Id: 12534183 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 9B1EAC4332F for ; Mon, 4 Oct 2021 14:42:44 +0000 (UTC) Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by mail.kernel.org (Postfix) with ESMTP id 30A1A613A2 for ; Mon, 4 Oct 2021 14:42:44 +0000 (UTC) DMARC-Filter: OpenDMARC Filter v1.4.1 
mail.kernel.org 30A1A613A2 Authentication-Results: mail.kernel.org; dmarc=none (p=none dis=none) header.from=infradead.org Authentication-Results: mail.kernel.org; spf=pass smtp.mailfrom=kvack.org Received: by kanga.kvack.org (Postfix) id B4C3B940046; Mon, 4 Oct 2021 10:42:43 -0400 (EDT) Received: by kanga.kvack.org (Postfix, from userid 40) id AFBC394000B; Mon, 4 Oct 2021 10:42:43 -0400 (EDT) X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id A1230940046; Mon, 4 Oct 2021 10:42:43 -0400 (EDT) X-Delivered-To: linux-mm@kvack.org Received: from forelay.hostedemail.com (smtprelay0226.hostedemail.com [216.40.44.226]) by kanga.kvack.org (Postfix) with ESMTP id 9217894000B for ; Mon, 4 Oct 2021 10:42:43 -0400 (EDT) Received: from smtpin37.hostedemail.com (10.5.19.251.rfc1918.com [10.5.19.251]) by forelay04.hostedemail.com (Postfix) with ESMTP id 4B75F2DEBA for ; Mon, 4 Oct 2021 14:42:43 +0000 (UTC) X-FDA: 78659021406.37.8169D57 Received: from casper.infradead.org (casper.infradead.org [90.155.50.34]) by imf01.hostedemail.com (Postfix) with ESMTP id 0608250714E0 for ; Mon, 4 Oct 2021 14:42:42 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=infradead.org; s=casper.20170209; h=Content-Transfer-Encoding:MIME-Version: References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Sender:Reply-To: Content-Type:Content-ID:Content-Description; bh=qc/45yb1X633dZ86WEJwm+UIp3+I+JOTpOrA5rVUcsQ=; b=HhaPDBEv5xqIaT8qh/NUBl/ssJ RGhOOioizqTnd3wBumySL5A1wXpgCx4j+nNYRYjQ/ApvcNbLY61QfpMG1EYrkqAHouuwLqva2I1di EfHJD7bhTK+zzN1Nk5ENja9K8NL8+TUFADDO8bnH41ZNuiSoAHYmlOgtw/lI+dzpal8RBcERmk53X horYAVr/r9OYSmvYB26r74OvLnmQYV2aUz+w0AKlQNTDo2ba6GyXSwYrRjXZIgjwi9eYLPw5SYgTI N3ElDkV3FAoS7LP9GnSG/XSWdQoYJGxod5xn/bHwDL7X6fGSdXTmdehJLiUZ+w2CfRQlQjsl27rY5 e5As9CDw==; Received: from willy by casper.infradead.org with local (Exim 4.94.2 #2 (Red Hat Linux)) id 1mXP96-00H0vE-6W; Mon, 04 Oct 2021 14:41:14 +0000 From: "Matthew Wilcox (Oracle)" To: linux-mm@kvack.org Cc: "Matthew Wilcox (Oracle)" Subject: [PATCH 42/62] mm/slub: Convert check_valid_pointer() to struct slab Date: Mon, 4 Oct 2021 14:46:30 +0100 Message-Id: <20211004134650.4031813-43-willy@infradead.org> X-Mailer: git-send-email 2.31.1 In-Reply-To: <20211004134650.4031813-1-willy@infradead.org> References: <20211004134650.4031813-1-willy@infradead.org> MIME-Version: 1.0 Authentication-Results: imf01.hostedemail.com; dkim=pass header.d=infradead.org header.s=casper.20170209 header.b=HhaPDBEv; spf=none (imf01.hostedemail.com: domain of willy@infradead.org has no SPF policy when checking 90.155.50.34) smtp.mailfrom=willy@infradead.org; dmarc=none X-Rspamd-Server: rspam04 X-Rspamd-Queue-Id: 0608250714E0 X-Stat-Signature: frsk3stxgtoiun94fk1aucsoiipegwtn X-HE-Tag: 1633358562-38710 X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: Improves type safety and removes a lot of calls to slab_page(). 
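For reference, the rule check_valid_pointer() applies is plain address arithmetic against the slab's base address, object count and object stride. A standalone sketch under simplified, hypothetical types (the real function also undoes red-zone and KASAN tag adjustments before comparing):

#include <stdbool.h>
#include <stdint.h>

/* Hypothetical, simplified stand-ins for the kernel structures. */
struct kmem_cache { unsigned int size; };	/* object stride, incl. metadata */
struct slab { void *base; unsigned int objects; };

/*
 * A pointer is acceptable if it is NULL (legal freelist terminator) or
 * lies inside the slab's payload and is aligned to the object stride.
 */
static bool object_ptr_valid(const struct kmem_cache *s,
			     const struct slab *slab, const void *object)
{
	uintptr_t base = (uintptr_t)slab->base;
	uintptr_t obj = (uintptr_t)object;

	if (!object)
		return true;
	if (obj < base || obj >= base + (uintptr_t)slab->objects * s->size)
		return false;
	return (obj - base) % s->size == 0;
}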
Signed-off-by: Matthew Wilcox (Oracle) --- mm/slub.c | 18 +++++++++--------- 1 file changed, 9 insertions(+), 9 deletions(-) diff --git a/mm/slub.c b/mm/slub.c index b1122b8cb36f..524e3c7eac30 100644 --- a/mm/slub.c +++ b/mm/slub.c @@ -638,19 +638,19 @@ static inline void metadata_access_disable(void) * Object debugging */ -/* Verify that a pointer has an address that is valid within a slab page */ +/* Verify that a pointer has an address that is valid within a slab */ static inline int check_valid_pointer(struct kmem_cache *s, - struct page *page, void *object) + struct slab *slab, void *object) { void *base; if (!object) return 1; - base = page_address(page); + base = slab_address(slab); object = kasan_reset_tag(object); object = restore_red_left(s, object); - if (object < base || object >= base + page->objects * s->size || + if (object < base || object >= base + slab->objects * s->size || (object - base) % s->size) { return 0; } @@ -803,7 +803,7 @@ static bool freelist_corrupted(struct kmem_cache *s, struct slab *slab, void **freelist, void *nextfree) { if ((s->flags & SLAB_CONSISTENCY_CHECKS) && - !check_valid_pointer(s, slab_page(slab), nextfree) && freelist) { + !check_valid_pointer(s, slab, nextfree) && freelist) { object_err(s, slab_page(slab), *freelist, "Freechain corrupt"); *freelist = NULL; slab_fix(s, "Isolate corrupted freechain"); @@ -1072,7 +1072,7 @@ static int check_object(struct kmem_cache *s, struct slab *slab, return 1; /* Check free pointer validity */ - if (!check_valid_pointer(s, slab_page(slab), get_freepointer(s, p))) { + if (!check_valid_pointer(s, slab, get_freepointer(s, p))) { object_err(s, slab_page(slab), p, "Freepointer corrupt"); /* * No choice but to zap it and thus lose the remainder @@ -1125,7 +1125,7 @@ static int on_freelist(struct kmem_cache *s, struct slab *slab, void *search) while (fp && nr <= slab->objects) { if (fp == search) return 1; - if (!check_valid_pointer(s, slab_page(slab), fp)) { + if (!check_valid_pointer(s, slab, fp)) { if (object) { object_err(s, slab_page(slab), object, "Freechain corrupt"); @@ -1266,7 +1266,7 @@ static inline int alloc_consistency_checks(struct kmem_cache *s, if (!check_slab(s, slab)) return 0; - if (!check_valid_pointer(s, slab_page(slab), object)) { + if (!check_valid_pointer(s, slab, object)) { object_err(s, slab_page(slab), object, "Freelist Pointer check fails"); return 0; } @@ -1310,7 +1310,7 @@ static noinline int alloc_debug_processing(struct kmem_cache *s, static inline int free_consistency_checks(struct kmem_cache *s, struct slab *slab, void *object, unsigned long addr) { - if (!check_valid_pointer(s, slab_page(slab), object)) { + if (!check_valid_pointer(s, slab, object)) { slab_err(s, slab_page(slab), "Invalid object pointer 0x%p", object); return 0; } From patchwork Mon Oct 4 13:46:31 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Matthew Wilcox X-Patchwork-Id: 12534185 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 1040BC433EF for ; Mon, 4 Oct 2021 14:44:14 +0000 (UTC) Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by mail.kernel.org (Postfix) with ESMTP id B5D8A6137D for ; Mon, 4 Oct 2021 14:44:13 +0000 (UTC) DMARC-Filter: OpenDMARC Filter v1.4.1 mail.kernel.org B5D8A6137D Authentication-Results: mail.kernel.org; dmarc=none (p=none 
dis=none) header.from=infradead.org Authentication-Results: mail.kernel.org; spf=pass smtp.mailfrom=kvack.org Received: by kanga.kvack.org (Postfix) id 49397940047; Mon, 4 Oct 2021 10:44:13 -0400 (EDT) Received: by kanga.kvack.org (Postfix, from userid 40) id 41D5094000B; Mon, 4 Oct 2021 10:44:13 -0400 (EDT) X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id 2BDA5940047; Mon, 4 Oct 2021 10:44:13 -0400 (EDT) X-Delivered-To: linux-mm@kvack.org Received: from forelay.hostedemail.com (smtprelay0195.hostedemail.com [216.40.44.195]) by kanga.kvack.org (Postfix) with ESMTP id 185BB94000B for ; Mon, 4 Oct 2021 10:44:13 -0400 (EDT) Received: from smtpin38.hostedemail.com (10.5.19.251.rfc1918.com [10.5.19.251]) by forelay02.hostedemail.com (Postfix) with ESMTP id C6AFF30C85 for ; Mon, 4 Oct 2021 14:44:12 +0000 (UTC) X-FDA: 78659025144.38.08690F4 Received: from casper.infradead.org (casper.infradead.org [90.155.50.34]) by imf13.hostedemail.com (Postfix) with ESMTP id 646FF1035DEA for ; Mon, 4 Oct 2021 14:44:12 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=infradead.org; s=casper.20170209; h=Content-Transfer-Encoding:MIME-Version: References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Sender:Reply-To: Content-Type:Content-ID:Content-Description; bh=rByloetF7rpW2bor0wwNhj+zZqrgq7MTCe4F+n5zJi8=; b=Pj1h9aE32FNWD6pqajXa2uOD+2 OcdKRX3wXFaRQyG3wo79a3d3H/v5VihjmWnqlsg+ua/qe4pLzabFfKVKtWN54FeACpebRtyEobtBI jeeJ5DlJ+8YzF3jPdQ6CZzHauvVNUEYTM9kOepHI6pcMb9xZQzvqr4+5WDtoEwMEFu3ip9xu2phvk 3DpPcLQvfLmIZrNkDJgLJTQLpSwjrLRqTzHcURkhsxvrsQRKT9RqGBcs2AUk7kbObgKDr0ybytWhA DY8KRcARdph4bQkObDSg1sDfY0aN65ULRmuSFufW2KfSSJAgSufxAty6FqSPBzyhC+oizrKzO8D43 RNlA+rcQ==; Received: from willy by casper.infradead.org with local (Exim 4.94.2 #2 (Red Hat Linux)) id 1mXP9z-00H0wX-Rz; Mon, 04 Oct 2021 14:41:56 +0000 From: "Matthew Wilcox (Oracle)" To: linux-mm@kvack.org Cc: "Matthew Wilcox (Oracle)" Subject: [PATCH 43/62] mm/slub: Convert object_err() to take a struct slab Date: Mon, 4 Oct 2021 14:46:31 +0100 Message-Id: <20211004134650.4031813-44-willy@infradead.org> X-Mailer: git-send-email 2.31.1 In-Reply-To: <20211004134650.4031813-1-willy@infradead.org> References: <20211004134650.4031813-1-willy@infradead.org> MIME-Version: 1.0 X-Rspamd-Server: rspam03 X-Rspamd-Queue-Id: 646FF1035DEA X-Stat-Signature: top9kgicby6ngwd36jxsgi6hy96pp5ch Authentication-Results: imf13.hostedemail.com; dkim=pass header.d=infradead.org header.s=casper.20170209 header.b=Pj1h9aE3; dmarc=none; spf=none (imf13.hostedemail.com: domain of willy@infradead.org has no SPF policy when checking 90.155.50.34) smtp.mailfrom=willy@infradead.org X-HE-Tag: 1633358652-474557 X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: Improves type safety and removes a lot of calls to slab_page(). Also make object_err() static. 
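Since freelist_corrupted() sits above object_err()'s definition in mm/slub.c, dropping the prototype from slub_def.h means the patch adds a static forward declaration instead. A small sketch of that pattern, with illustrative names rather than the real ones:

struct kmem_cache;
struct slab;

/*
 * Forward-declare the reporting helper as static so its first user can
 * precede the definition, which stays further down with the rest of the
 * debug-printing code.
 */
static void report_bad_object(struct kmem_cache *s, struct slab *slab,
			      unsigned char *object, char *reason);

static int first_user(struct kmem_cache *s, struct slab *slab,
		      unsigned char *object)
{
	report_bad_object(s, slab, object, "Freechain corrupt");
	return 0;
}

static void report_bad_object(struct kmem_cache *s, struct slab *slab,
			      unsigned char *object, char *reason)
{
	/* the real helper prints a trailer and taints the kernel */
	(void)s; (void)slab; (void)object; (void)reason;
}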
Signed-off-by: Matthew Wilcox (Oracle) --- include/linux/slub_def.h | 3 --- mm/slub.c | 20 +++++++++++--------- 2 files changed, 11 insertions(+), 12 deletions(-) diff --git a/include/linux/slub_def.h b/include/linux/slub_def.h index 3cc64e9f988c..63eae033d713 100644 --- a/include/linux/slub_def.h +++ b/include/linux/slub_def.h @@ -165,9 +165,6 @@ static inline void sysfs_slab_release(struct kmem_cache *s) } #endif -void object_err(struct kmem_cache *s, struct page *page, - u8 *object, char *reason); - void *fixup_red_left(struct kmem_cache *s, void *p); static inline void *nearest_obj(struct kmem_cache *cache, struct page *page, diff --git a/mm/slub.c b/mm/slub.c index 524e3c7eac30..a93a6d679de2 100644 --- a/mm/slub.c +++ b/mm/slub.c @@ -799,12 +799,15 @@ static void slab_fix(struct kmem_cache *s, char *fmt, ...) va_end(args); } +static void object_err(struct kmem_cache *s, struct slab *slab, + u8 *object, char *reason); + static bool freelist_corrupted(struct kmem_cache *s, struct slab *slab, void **freelist, void *nextfree) { if ((s->flags & SLAB_CONSISTENCY_CHECKS) && !check_valid_pointer(s, slab, nextfree) && freelist) { - object_err(s, slab_page(slab), *freelist, "Freechain corrupt"); + object_err(s, slab, *freelist, "Freechain corrupt"); *freelist = NULL; slab_fix(s, "Isolate corrupted freechain"); return true; @@ -852,14 +855,14 @@ static void print_trailer(struct kmem_cache *s, struct page *page, u8 *p) dump_stack(); } -void object_err(struct kmem_cache *s, struct page *page, +static void object_err(struct kmem_cache *s, struct slab *slab, u8 *object, char *reason) { if (slab_add_kunit_errors()) return; slab_bug(s, "%s", reason); - print_trailer(s, page, object); + print_trailer(s, slab_page(slab), object); add_taint(TAINT_BAD_PAGE, LOCKDEP_NOW_UNRELIABLE); } @@ -1073,7 +1076,7 @@ static int check_object(struct kmem_cache *s, struct slab *slab, /* Check free pointer validity */ if (!check_valid_pointer(s, slab, get_freepointer(s, p))) { - object_err(s, slab_page(slab), p, "Freepointer corrupt"); + object_err(s, slab, p, "Freepointer corrupt"); /* * No choice but to zap it and thus lose the remainder * of the free objects in this slab. 
May cause @@ -1127,7 +1130,7 @@ static int on_freelist(struct kmem_cache *s, struct slab *slab, void *search) return 1; if (!check_valid_pointer(s, slab, fp)) { if (object) { - object_err(s, slab_page(slab), object, + object_err(s, slab, object, "Freechain corrupt"); set_freepointer(s, object, NULL); } else { @@ -1267,7 +1270,7 @@ static inline int alloc_consistency_checks(struct kmem_cache *s, return 0; if (!check_valid_pointer(s, slab, object)) { - object_err(s, slab_page(slab), object, "Freelist Pointer check fails"); + object_err(s, slab, object, "Freelist Pointer check fails"); return 0; } @@ -1316,7 +1319,7 @@ static inline int free_consistency_checks(struct kmem_cache *s, } if (on_freelist(s, slab, object)) { - object_err(s, slab_page(slab), object, "Object already free"); + object_err(s, slab, object, "Object already free"); return 0; } @@ -1332,8 +1335,7 @@ static inline int free_consistency_checks(struct kmem_cache *s, object); dump_stack(); } else - object_err(s, slab_page(slab), object, - "slab pointer corrupt."); + object_err(s, slab, object, "slab pointer corrupt."); return 0; } return 1; From patchwork Mon Oct 4 13:46:32 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Matthew Wilcox X-Patchwork-Id: 12534199 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id CB9F9C433F5 for ; Mon, 4 Oct 2021 14:45:15 +0000 (UTC) Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by mail.kernel.org (Postfix) with ESMTP id 75FE26139F for ; Mon, 4 Oct 2021 14:45:15 +0000 (UTC) DMARC-Filter: OpenDMARC Filter v1.4.1 mail.kernel.org 75FE26139F Authentication-Results: mail.kernel.org; dmarc=none (p=none dis=none) header.from=infradead.org Authentication-Results: mail.kernel.org; spf=pass smtp.mailfrom=kvack.org Received: by kanga.kvack.org (Postfix) id 11037940048; Mon, 4 Oct 2021 10:45:15 -0400 (EDT) Received: by kanga.kvack.org (Postfix, from userid 40) id 0C10994000B; Mon, 4 Oct 2021 10:45:15 -0400 (EDT) X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id EF131940048; Mon, 4 Oct 2021 10:45:14 -0400 (EDT) X-Delivered-To: linux-mm@kvack.org Received: from forelay.hostedemail.com (smtprelay0211.hostedemail.com [216.40.44.211]) by kanga.kvack.org (Postfix) with ESMTP id E1FF594000B for ; Mon, 4 Oct 2021 10:45:14 -0400 (EDT) Received: from smtpin25.hostedemail.com (10.5.19.251.rfc1918.com [10.5.19.251]) by forelay04.hostedemail.com (Postfix) with ESMTP id A29462FD73 for ; Mon, 4 Oct 2021 14:45:14 +0000 (UTC) X-FDA: 78659027748.25.A6DB2B1 Received: from casper.infradead.org (casper.infradead.org [90.155.50.34]) by imf03.hostedemail.com (Postfix) with ESMTP id 5F3C430007BF for ; Mon, 4 Oct 2021 14:45:14 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=infradead.org; s=casper.20170209; h=Content-Transfer-Encoding:MIME-Version: References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Sender:Reply-To: Content-Type:Content-ID:Content-Description; bh=aEqZaX/FloNxnWiS4pxgrSMgWV8aLzwHmaFIHzXHqPI=; b=AxGltJEjI8FiRRLwxNF825E5j4 AKGVoe83xgUlEUTbbtOOQnXfDlh43Ckfa6xmpu1KdbjWQJPOpOxqqhAk/V0fMN/F9wQQT2kgn0kuh PPpWOLc5trZJltE+1mNsa7eeQuzm4cV9jAI6rcDpOGrJz0Kh8Q8Arkl1y58HkXwL3L0x42cpOXiSr zqSdtDF1djk8DOIaTFdFf03AfXnuldw7Z41WSEJqw0xdBFc1aAww11STgEM/hohAsFDPjfzK+wmGu 
dmVpdUzg17rmy05/cGmLiYK4wSRn1G6vW2rhl6t9tbtoYNQKc7FFNGbFfZ9KLm1F3QcN12rVfPdaM ultc6bnw==; Received: from willy by casper.infradead.org with local (Exim 4.94.2 #2 (Red Hat Linux)) id 1mXPBJ-00H0zR-Pm; Mon, 04 Oct 2021 14:43:48 +0000 From: "Matthew Wilcox (Oracle)" To: linux-mm@kvack.org Cc: "Matthew Wilcox (Oracle)" Subject: [PATCH 44/62] mm/slub: Convert print_trailer() to struct slab Date: Mon, 4 Oct 2021 14:46:32 +0100 Message-Id: <20211004134650.4031813-45-willy@infradead.org> X-Mailer: git-send-email 2.31.1 In-Reply-To: <20211004134650.4031813-1-willy@infradead.org> References: <20211004134650.4031813-1-willy@infradead.org> MIME-Version: 1.0 X-Rspamd-Queue-Id: 5F3C430007BF X-Stat-Signature: zedc4eqb8mzkyy7whssnaw6er9hajkyq Authentication-Results: imf03.hostedemail.com; dkim=pass header.d=infradead.org header.s=casper.20170209 header.b=AxGltJEj; spf=none (imf03.hostedemail.com: domain of willy@infradead.org has no SPF policy when checking 90.155.50.34) smtp.mailfrom=willy@infradead.org; dmarc=none X-Rspamd-Server: rspam06 X-HE-Tag: 1633358714-459306 X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: This is mostly pushing slab_page() calls down. Signed-off-by: Matthew Wilcox (Oracle) --- mm/slub.c | 10 +++++----- 1 file changed, 5 insertions(+), 5 deletions(-) diff --git a/mm/slub.c b/mm/slub.c index a93a6d679de2..9651586a3450 100644 --- a/mm/slub.c +++ b/mm/slub.c @@ -816,14 +816,14 @@ static bool freelist_corrupted(struct kmem_cache *s, struct slab *slab, return false; } -static void print_trailer(struct kmem_cache *s, struct page *page, u8 *p) +static void print_trailer(struct kmem_cache *s, struct slab *slab, u8 *p) { unsigned int off; /* Offset of last byte */ - u8 *addr = page_address(page); + u8 *addr = slab_address(slab); print_tracking(s, p); - print_page_info(page); + print_page_info(slab_page(slab)); pr_err("Object 0x%p @offset=%tu fp=0x%p\n\n", p, p - addr, get_freepointer(s, p)); @@ -862,7 +862,7 @@ static void object_err(struct kmem_cache *s, struct slab *slab, return; slab_bug(s, "%s", reason); - print_trailer(s, slab_page(slab), object); + print_trailer(s, slab, object); add_taint(TAINT_BAD_PAGE, LOCKDEP_NOW_UNRELIABLE); } @@ -932,7 +932,7 @@ static int check_bytes_and_report(struct kmem_cache *s, struct slab *slab, pr_err("0x%p-0x%p @offset=%tu. 
First byte 0x%x instead of 0x%x\n", fault, end - 1, fault - addr, fault[0], value); - print_trailer(s, slab_page(slab), object); + print_trailer(s, slab, object); add_taint(TAINT_BAD_PAGE, LOCKDEP_NOW_UNRELIABLE); skip_bug_print: From patchwork Mon Oct 4 13:46:33 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Matthew Wilcox X-Patchwork-Id: 12534201 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id C30C5C433EF for ; Mon, 4 Oct 2021 14:46:35 +0000 (UTC) Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by mail.kernel.org (Postfix) with ESMTP id 7576E610EA for ; Mon, 4 Oct 2021 14:46:35 +0000 (UTC) DMARC-Filter: OpenDMARC Filter v1.4.1 mail.kernel.org 7576E610EA Authentication-Results: mail.kernel.org; dmarc=none (p=none dis=none) header.from=infradead.org Authentication-Results: mail.kernel.org; spf=pass smtp.mailfrom=kvack.org Received: by kanga.kvack.org (Postfix) id 0E76A940049; Mon, 4 Oct 2021 10:46:35 -0400 (EDT) Received: by kanga.kvack.org (Postfix, from userid 40) id 0975494000B; Mon, 4 Oct 2021 10:46:35 -0400 (EDT) X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id EC94B940049; Mon, 4 Oct 2021 10:46:34 -0400 (EDT) X-Delivered-To: linux-mm@kvack.org Received: from forelay.hostedemail.com (smtprelay0249.hostedemail.com [216.40.44.249]) by kanga.kvack.org (Postfix) with ESMTP id DC09C94000B for ; Mon, 4 Oct 2021 10:46:34 -0400 (EDT) Received: from smtpin02.hostedemail.com (10.5.19.251.rfc1918.com [10.5.19.251]) by forelay03.hostedemail.com (Postfix) with ESMTP id 968F18249980 for ; Mon, 4 Oct 2021 14:46:34 +0000 (UTC) X-FDA: 78659031108.02.964636F Received: from casper.infradead.org (casper.infradead.org [90.155.50.34]) by imf26.hostedemail.com (Postfix) with ESMTP id 5676320061CA for ; Mon, 4 Oct 2021 14:46:34 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=infradead.org; s=casper.20170209; h=Content-Transfer-Encoding:MIME-Version: References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Sender:Reply-To: Content-Type:Content-ID:Content-Description; bh=/RC78lSaGXRDrNrbdXhhvYZOtHHhSvlMsLlFe4T2/xo=; b=qk1zRLApRYKENPFHrvLJ1ySGk7 vTwiK+FXxGzFP440teQey8oAK3VN4P9b1cJe/v6ichmlylHGCJIVa7ezj+1GPJBHud8QPxmUcRTY3 qylxkC5uWI8lv1f/gclHlqEeb1vAH6vh2qmLeJmvH3VF0l4CO5Mqt5+mTsMrPgHnV4QlA7i/s5+OB XF0o6aMraammQGFDlgu7pxIPTqt68NW5fDtXU6GAWQH833cmhhxGZ0ECmA2fUOwDnb31a+9G9pSGX 5MuMT6zW8UqoAmVga+L75pbtttFdLNLsdmT2kH9xDXy6IztxZNZ1mIOA2EIoxEdSI1sqtLS/yRJsQ 3b2Xll+g==; Received: from willy by casper.infradead.org with local (Exim 4.94.2 #2 (Red Hat Linux)) id 1mXPCb-00H144-1l; Mon, 04 Oct 2021 14:44:43 +0000 From: "Matthew Wilcox (Oracle)" To: linux-mm@kvack.org Cc: "Matthew Wilcox (Oracle)" Subject: [PATCH 45/62] mm/slub: Convert slab_err() to take a struct slab Date: Mon, 4 Oct 2021 14:46:33 +0100 Message-Id: <20211004134650.4031813-46-willy@infradead.org> X-Mailer: git-send-email 2.31.1 In-Reply-To: <20211004134650.4031813-1-willy@infradead.org> References: <20211004134650.4031813-1-willy@infradead.org> MIME-Version: 1.0 Authentication-Results: imf26.hostedemail.com; dkim=pass header.d=infradead.org header.s=casper.20170209 header.b=qk1zRLAp; spf=none (imf26.hostedemail.com: domain of willy@infradead.org has no SPF policy when checking 90.155.50.34) 
smtp.mailfrom=willy@infradead.org; dmarc=none X-Rspamd-Server: rspam04 X-Rspamd-Queue-Id: 5676320061CA X-Stat-Signature: jkeptn87ie3ub8zd1rgbpryzruuc6utc X-HE-Tag: 1633358794-236133 X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: Push slab_page() down. Signed-off-by: Matthew Wilcox (Oracle) --- mm/slub.c | 26 +++++++++++++------------- 1 file changed, 13 insertions(+), 13 deletions(-) diff --git a/mm/slub.c b/mm/slub.c index 9651586a3450..98cc2545a9bd 100644 --- a/mm/slub.c +++ b/mm/slub.c @@ -866,7 +866,7 @@ static void object_err(struct kmem_cache *s, struct slab *slab, add_taint(TAINT_BAD_PAGE, LOCKDEP_NOW_UNRELIABLE); } -static __printf(3, 4) void slab_err(struct kmem_cache *s, struct page *page, +static __printf(3, 4) void slab_err(struct kmem_cache *s, struct slab *slab, const char *fmt, ...) { va_list args; @@ -879,7 +879,7 @@ static __printf(3, 4) void slab_err(struct kmem_cache *s, struct page *page, vsnprintf(buf, sizeof(buf), fmt, args); va_end(args); slab_bug(s, "%s", buf); - print_page_info(page); + print_page_info(slab_page(slab)); dump_stack(); add_taint(TAINT_BAD_PAGE, LOCKDEP_NOW_UNRELIABLE); } @@ -1024,7 +1024,7 @@ static int slab_pad_check(struct kmem_cache *s, struct slab *slab) while (end > fault && end[-1] == POISON_INUSE) end--; - slab_err(s, slab_page(slab), "Padding overwritten. 0x%p-0x%p @offset=%tu", + slab_err(s, slab, "Padding overwritten. 0x%p-0x%p @offset=%tu", fault, end - 1, fault - start); print_section(KERN_ERR, "Padding ", pad, remainder); @@ -1093,18 +1093,18 @@ static int check_slab(struct kmem_cache *s, struct slab *slab) int maxobj; if (!slab_test_cache(slab)) { - slab_err(s, slab_page(slab), "Not a valid slab page"); + slab_err(s, slab, "Not a valid slab page"); return 0; } maxobj = order_objects(slab_order(slab), s->size); if (slab->objects > maxobj) { - slab_err(s, slab_page(slab), "objects %u > max %u", + slab_err(s, slab, "objects %u > max %u", slab->objects, maxobj); return 0; } if (slab->inuse > slab->objects) { - slab_err(s, slab_page(slab), "inuse %u > max %u", + slab_err(s, slab, "inuse %u > max %u", slab->inuse, slab->objects); return 0; } @@ -1134,7 +1134,7 @@ static int on_freelist(struct kmem_cache *s, struct slab *slab, void *search) "Freechain corrupt"); set_freepointer(s, object, NULL); } else { - slab_err(s, slab_page(slab), "Freepointer corrupt"); + slab_err(s, slab, "Freepointer corrupt"); slab->freelist = NULL; slab->inuse = slab->objects; slab_fix(s, "Freelist cleared"); @@ -1152,13 +1152,13 @@ static int on_freelist(struct kmem_cache *s, struct slab *slab, void *search) max_objects = MAX_OBJS_PER_PAGE; if (slab->objects != max_objects) { - slab_err(s, slab_page(slab), "Wrong number of objects. Found %d but should be %d", + slab_err(s, slab, "Wrong number of objects. Found %d but should be %d", slab->objects, max_objects); slab->objects = max_objects; slab_fix(s, "Number of objects adjusted"); } if (slab->inuse != slab->objects - nr) { - slab_err(s, slab_page(slab), "Wrong object count. Counter is %d but counted were %d", + slab_err(s, slab, "Wrong object count. 
Counter is %d but counted were %d", slab->inuse, slab->objects - nr); slab->inuse = slab->objects - nr; slab_fix(s, "Object count adjusted"); @@ -1314,7 +1314,7 @@ static inline int free_consistency_checks(struct kmem_cache *s, struct slab *slab, void *object, unsigned long addr) { if (!check_valid_pointer(s, slab, object)) { - slab_err(s, slab_page(slab), "Invalid object pointer 0x%p", object); + slab_err(s, slab, "Invalid object pointer 0x%p", object); return 0; } @@ -1328,7 +1328,7 @@ static inline int free_consistency_checks(struct kmem_cache *s, if (unlikely(s != slab->slab_cache)) { if (!slab_test_cache(slab)) { - slab_err(s, slab_page(slab), "Attempt to free object(0x%p) outside of slab", + slab_err(s, slab, "Attempt to free object(0x%p) outside of slab", object); } else if (!slab->slab_cache) { pr_err("SLUB : no slab for object 0x%p.\n", @@ -1384,7 +1384,7 @@ static noinline int free_debug_processing( out: if (cnt != bulk_cnt) - slab_err(s, slab_page(slab), "Bulk freelist count(%d) invalid(%d)\n", + slab_err(s, slab, "Bulk freelist count(%d) invalid(%d)\n", bulk_cnt, cnt); slab_unlock(slab_page(slab), &flags2); @@ -4214,7 +4214,7 @@ static void list_slab_objects(struct kmem_cache *s, struct slab *slab, unsigned long *map; void *p; - slab_err(s, slab_page(slab), text, s->name); + slab_err(s, slab, text, s->name); slab_lock(slab_page(slab), &flags); map = get_map(s, slab_page(slab)); From patchwork Mon Oct 4 13:46:34 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Matthew Wilcox X-Patchwork-Id: 12534203 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 21B3AC433EF for ; Mon, 4 Oct 2021 14:47:21 +0000 (UTC) Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by mail.kernel.org (Postfix) with ESMTP id C0D276139F for ; Mon, 4 Oct 2021 14:47:20 +0000 (UTC) DMARC-Filter: OpenDMARC Filter v1.4.1 mail.kernel.org C0D276139F Authentication-Results: mail.kernel.org; dmarc=none (p=none dis=none) header.from=infradead.org Authentication-Results: mail.kernel.org; spf=pass smtp.mailfrom=kvack.org Received: by kanga.kvack.org (Postfix) id 57A2994004A; Mon, 4 Oct 2021 10:47:20 -0400 (EDT) Received: by kanga.kvack.org (Postfix, from userid 40) id 52A1E94000B; Mon, 4 Oct 2021 10:47:20 -0400 (EDT) X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id 440A294004A; Mon, 4 Oct 2021 10:47:20 -0400 (EDT) X-Delivered-To: linux-mm@kvack.org Received: from forelay.hostedemail.com (smtprelay0137.hostedemail.com [216.40.44.137]) by kanga.kvack.org (Postfix) with ESMTP id 3348794000B for ; Mon, 4 Oct 2021 10:47:20 -0400 (EDT) Received: from smtpin29.hostedemail.com (10.5.19.251.rfc1918.com [10.5.19.251]) by forelay03.hostedemail.com (Postfix) with ESMTP id E53418249980 for ; Mon, 4 Oct 2021 14:47:19 +0000 (UTC) X-FDA: 78659032998.29.055C88B Received: from casper.infradead.org (casper.infradead.org [90.155.50.34]) by imf06.hostedemail.com (Postfix) with ESMTP id 90737801C341 for ; Mon, 4 Oct 2021 14:47:19 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=infradead.org; s=casper.20170209; h=Content-Transfer-Encoding:MIME-Version: References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Sender:Reply-To: Content-Type:Content-ID:Content-Description; 
bh=qgXZu60O0l8b5tmOOZNb1Kde1E8gMK34R63EDEncCF8=; b=Qr56fSpqhJK3/SgiyGavpkoyIM NX6Mz3DkgQDU8Aqw6ggaant5iF4mQPr5QrJUyQmzCCIA9IgPxFNK7OhVwBCenS7IPMY0NQ60uZY7F 6cyfIEciKyfNM+6wXGbJ4029SdW0j8313gT5MHgZgMSjbJsptpBn/QH8q47wxtgx61b4nL810WDlB yTbN9oLP7eVp7VeBVezt7+ZtsZF7kzGUfnZk7NOrPnCK+yVf6V8JAE1UVttzp5oZR3wrL2Kg0AgnU lXw7tZfwhGvu2jVXKQMqkNfoNZQugN64kyFATkRlTwmobcbTEMLYoLs7Moq/vG/CD926O91kB2VjP IXU3gx/g==; Received: from willy by casper.infradead.org with local (Exim 4.94.2 #2 (Red Hat Linux)) id 1mXPDN-00H184-8Z; Mon, 04 Oct 2021 14:45:42 +0000 From: "Matthew Wilcox (Oracle)" To: linux-mm@kvack.org Cc: "Matthew Wilcox (Oracle)" Subject: [PATCH 46/62] mm/slub: Convert print_page_info() to print_slab_info() Date: Mon, 4 Oct 2021 14:46:34 +0100 Message-Id: <20211004134650.4031813-47-willy@infradead.org> X-Mailer: git-send-email 2.31.1 In-Reply-To: <20211004134650.4031813-1-willy@infradead.org> References: <20211004134650.4031813-1-willy@infradead.org> MIME-Version: 1.0 Authentication-Results: imf06.hostedemail.com; dkim=pass header.d=infradead.org header.s=casper.20170209 header.b=Qr56fSpq; spf=none (imf06.hostedemail.com: domain of willy@infradead.org has no SPF policy when checking 90.155.50.34) smtp.mailfrom=willy@infradead.org; dmarc=none X-Rspamd-Server: rspam04 X-Rspamd-Queue-Id: 90737801C341 X-Stat-Signature: un5bmm14ezkcdnxf4nzbnr6bgbyifhpm X-HE-Tag: 1633358839-10647 X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: Improve the type safety and remove calls to slab_page(). Signed-off-by: Matthew Wilcox (Oracle) --- mm/slub.c | 11 +++++------ 1 file changed, 5 insertions(+), 6 deletions(-) diff --git a/mm/slub.c b/mm/slub.c index 98cc2545a9bd..d941bd188a8e 100644 --- a/mm/slub.c +++ b/mm/slub.c @@ -761,12 +761,11 @@ void print_tracking(struct kmem_cache *s, void *object) print_track("Freed", get_track(s, object, TRACK_FREE), pr_time); } -static void print_page_info(struct page *page) +static void print_slab_info(struct slab *slab) { pr_err("Slab 0x%p objects=%u used=%u fp=0x%p flags=%#lx(%pGp)\n", - page, page->objects, page->inuse, page->freelist, - page->flags, &page->flags); - + slab, slab->objects, slab->inuse, slab->freelist, + slab->flags, &slab->flags); } static void slab_bug(struct kmem_cache *s, char *fmt, ...) 
@@ -823,7 +822,7 @@ static void print_trailer(struct kmem_cache *s, struct slab *slab, u8 *p) print_tracking(s, p); - print_page_info(slab_page(slab)); + print_slab_info(slab); pr_err("Object 0x%p @offset=%tu fp=0x%p\n\n", p, p - addr, get_freepointer(s, p)); @@ -879,7 +878,7 @@ static __printf(3, 4) void slab_err(struct kmem_cache *s, struct slab *slab, vsnprintf(buf, sizeof(buf), fmt, args); va_end(args); slab_bug(s, "%s", buf); - print_page_info(slab_page(slab)); + print_slab_info(slab); dump_stack(); add_taint(TAINT_BAD_PAGE, LOCKDEP_NOW_UNRELIABLE); } From patchwork Mon Oct 4 13:46:35 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Matthew Wilcox X-Patchwork-Id: 12534205 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 41990C433F5 for ; Mon, 4 Oct 2021 14:48:12 +0000 (UTC) Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by mail.kernel.org (Postfix) with ESMTP id E46A3613AD for ; Mon, 4 Oct 2021 14:48:11 +0000 (UTC) DMARC-Filter: OpenDMARC Filter v1.4.1 mail.kernel.org E46A3613AD Authentication-Results: mail.kernel.org; dmarc=none (p=none dis=none) header.from=infradead.org Authentication-Results: mail.kernel.org; spf=pass smtp.mailfrom=kvack.org Received: by kanga.kvack.org (Postfix) id 885DC94004B; Mon, 4 Oct 2021 10:48:11 -0400 (EDT) Received: by kanga.kvack.org (Postfix, from userid 40) id 835D494000B; Mon, 4 Oct 2021 10:48:11 -0400 (EDT) X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id 6FDF394004B; Mon, 4 Oct 2021 10:48:11 -0400 (EDT) X-Delivered-To: linux-mm@kvack.org Received: from forelay.hostedemail.com (smtprelay0235.hostedemail.com [216.40.44.235]) by kanga.kvack.org (Postfix) with ESMTP id 617AB94000B for ; Mon, 4 Oct 2021 10:48:11 -0400 (EDT) Received: from smtpin24.hostedemail.com (10.5.19.251.rfc1918.com [10.5.19.251]) by forelay05.hostedemail.com (Postfix) with ESMTP id 1B7CC1828DEBC for ; Mon, 4 Oct 2021 14:48:11 +0000 (UTC) X-FDA: 78659035182.24.40665ED Received: from casper.infradead.org (casper.infradead.org [90.155.50.34]) by imf11.hostedemail.com (Postfix) with ESMTP id C4B21F000BD0 for ; Mon, 4 Oct 2021 14:48:10 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=infradead.org; s=casper.20170209; h=Content-Transfer-Encoding:MIME-Version: References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Sender:Reply-To: Content-Type:Content-ID:Content-Description; bh=ofGLTMpoQ5pBF8mYfiYstuiwmeeCrHVfwPHbw1ktlMs=; b=IhJasQ0w8FtV+dUNWEYQBwn0n9 65dZl96si4RKU8obFdkhdpAaHvktjy4WJJwqSq7YaCqkF6JreIIRVhENPYiUNZfvF4CnI+Q5C6IRw Rp4hfOGYxfjVZxWbhlvZ+uHDRJLngS6wein8ky9MJnfi8OXu4/Pja4YAgC123ROFyxSGzQD6bAkv3 ZyvT1kBiYLBP5Q02tuQcc4ZAOxkb4FGNS9Ms21lImyMRf4NIN+5yCBMv2ptuM+3vZn6q60keb6TEI s8FSnPAWLsWmn+lAPKasactlSZAbQwZxv7QVUMbrKQL6rNXYeKz/frH5e5LZgZpMRZoqSbv5iSXn3 lNFX/zWg==; Received: from willy by casper.infradead.org with local (Exim 4.94.2 #2 (Red Hat Linux)) id 1mXPF3-00H1EC-Ny; Mon, 04 Oct 2021 14:47:06 +0000 From: "Matthew Wilcox (Oracle)" To: linux-mm@kvack.org Cc: "Matthew Wilcox (Oracle)" Subject: [PATCH 47/62] mm/slub: Convert trace() to take a struct slab Date: Mon, 4 Oct 2021 14:46:35 +0100 Message-Id: <20211004134650.4031813-48-willy@infradead.org> X-Mailer: git-send-email 2.31.1 In-Reply-To: 
<20211004134650.4031813-1-willy@infradead.org> References: <20211004134650.4031813-1-willy@infradead.org> MIME-Version: 1.0 X-Rspamd-Queue-Id: C4B21F000BD0 X-Stat-Signature: 7bsctunejh3z1z91jikb78dmjx8zegmt Authentication-Results: imf11.hostedemail.com; dkim=pass header.d=infradead.org header.s=casper.20170209 header.b=IhJasQ0w; spf=none (imf11.hostedemail.com: domain of willy@infradead.org has no SPF policy when checking 90.155.50.34) smtp.mailfrom=willy@infradead.org; dmarc=none X-Rspamd-Server: rspam06 X-HE-Tag: 1633358890-700009 X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: Improves type safety and removes calls to slab_page(). Signed-off-by: Matthew Wilcox (Oracle) --- mm/slub.c | 10 +++++----- 1 file changed, 5 insertions(+), 5 deletions(-) diff --git a/mm/slub.c b/mm/slub.c index d941bd188a8e..72a50fab64b5 100644 --- a/mm/slub.c +++ b/mm/slub.c @@ -1165,15 +1165,15 @@ static int on_freelist(struct kmem_cache *s, struct slab *slab, void *search) return search == NULL; } -static void trace(struct kmem_cache *s, struct page *page, void *object, +static void trace(struct kmem_cache *s, struct slab *slab, void *object, int alloc) { if (s->flags & SLAB_TRACE) { pr_info("TRACE %s %s 0x%p inuse=%d fp=0x%p\n", s->name, alloc ? "alloc" : "free", - object, page->inuse, - page->freelist); + object, slab->inuse, + slab->freelist); if (!alloc) print_section(KERN_INFO, "Object ", (void *)object, @@ -1291,7 +1291,7 @@ static noinline int alloc_debug_processing(struct kmem_cache *s, /* Success perform special debug activities for allocs */ if (s->flags & SLAB_STORE_USER) set_track(s, object, TRACK_ALLOC, addr); - trace(s, slab_page(slab), object, 1); + trace(s, slab, object, 1); init_object(s, object, SLUB_RED_ACTIVE); return 1; @@ -1370,7 +1370,7 @@ static noinline int free_debug_processing( if (s->flags & SLAB_STORE_USER) set_track(s, object, TRACK_FREE, addr); - trace(s, slab_page(slab), object, 0); + trace(s, slab, object, 0); /* Freepointer not overwritten by init_object(), SLAB_POISON moved it */ init_object(s, object, SLUB_RED_INACTIVE); From patchwork Mon Oct 4 13:46:36 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Matthew Wilcox X-Patchwork-Id: 12534235 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 46166C433F5 for ; Mon, 4 Oct 2021 14:49:18 +0000 (UTC) Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by mail.kernel.org (Postfix) with ESMTP id EDA9B613A8 for ; Mon, 4 Oct 2021 14:49:17 +0000 (UTC) DMARC-Filter: OpenDMARC Filter v1.4.1 mail.kernel.org EDA9B613A8 Authentication-Results: mail.kernel.org; dmarc=none (p=none dis=none) header.from=infradead.org Authentication-Results: mail.kernel.org; spf=pass smtp.mailfrom=kvack.org Received: by kanga.kvack.org (Postfix) id 8AB1B94004C; Mon, 4 Oct 2021 10:49:17 -0400 (EDT) Received: by kanga.kvack.org (Postfix, from userid 40) id 85A7594000B; Mon, 4 Oct 2021 10:49:17 -0400 (EDT) X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id 74A9694004C; Mon, 4 Oct 2021 10:49:17 -0400 (EDT) X-Delivered-To: linux-mm@kvack.org Received: from forelay.hostedemail.com (smtprelay0206.hostedemail.com [216.40.44.206]) by 
kanga.kvack.org (Postfix) with ESMTP id 6723194000B for ; Mon, 4 Oct 2021 10:49:17 -0400 (EDT) Received: from smtpin03.hostedemail.com (10.5.19.251.rfc1918.com [10.5.19.251]) by forelay02.hostedemail.com (Postfix) with ESMTP id 191862D3B1 for ; Mon, 4 Oct 2021 14:49:17 +0000 (UTC) X-FDA: 78659037954.03.44E539C Received: from casper.infradead.org (casper.infradead.org [90.155.50.34]) by imf26.hostedemail.com (Postfix) with ESMTP id CE9C52002835 for ; Mon, 4 Oct 2021 14:49:16 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=infradead.org; s=casper.20170209; h=Content-Transfer-Encoding:MIME-Version: References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Sender:Reply-To: Content-Type:Content-ID:Content-Description; bh=JXf1y2FhZPKSvbmR8SXUgl2IQsIcI5Cle91jKNKGrCc=; b=oFsRDF2ZXsJeL2F5mxFJ7FFD9D ng64rUuYpuGLP2ZxQZrNOdF/EUL/G7nsWaA4/6XcMEs2rLOainICSjo+CNw9dnpOg2qu4X5PDAKEY H+D1WJe6dhg/xz4uh+28o2R2cyuDEz4EJJGEr3cYh/S1SqEMYZbg4cJoghK5EixBNmgBYcz62qkF4 U6b67B62TvnFBtnXdjb5WAQvlGoJwcxe3QURWIuPnzBOyoatNJycMwNy5OnPasW1f+zQLsIVDWg0z uUaF1IDwc9rTMCWHQRKtUWvJMgO/foaw74j0JR86I3HNyscenfYTk8EBAcclC7j66s3/esXh/+3ms JwOoFOfw==; Received: from willy by casper.infradead.org with local (Exim 4.94.2 #2 (Red Hat Linux)) id 1mXPFf-00H1Gf-Ei; Mon, 04 Oct 2021 14:47:51 +0000 From: "Matthew Wilcox (Oracle)" To: linux-mm@kvack.org Cc: "Matthew Wilcox (Oracle)" Subject: [PATCH 48/62] mm/slub: Convert cmpxchg_double_slab to struct slab Date: Mon, 4 Oct 2021 14:46:36 +0100 Message-Id: <20211004134650.4031813-49-willy@infradead.org> X-Mailer: git-send-email 2.31.1 In-Reply-To: <20211004134650.4031813-1-willy@infradead.org> References: <20211004134650.4031813-1-willy@infradead.org> MIME-Version: 1.0 X-Rspamd-Queue-Id: CE9C52002835 X-Stat-Signature: azezng85xwfhou8rtg9h48x8uc4k3d8j Authentication-Results: imf26.hostedemail.com; dkim=pass header.d=infradead.org header.s=casper.20170209 header.b=oFsRDF2Z; spf=none (imf26.hostedemail.com: domain of willy@infradead.org has no SPF policy when checking 90.155.50.34) smtp.mailfrom=willy@infradead.org; dmarc=none X-Rspamd-Server: rspam06 X-HE-Tag: 1633358956-272903 X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: Improve type safety for both cmpxchg_double_slab() and __cmpxchg_double_slab(). Signed-off-by: Matthew Wilcox (Oracle) --- mm/slub.c | 46 +++++++++++++++++++++++----------------------- 1 file changed, 23 insertions(+), 23 deletions(-) diff --git a/mm/slub.c b/mm/slub.c index 72a50fab64b5..0d9299679ea2 100644 --- a/mm/slub.c +++ b/mm/slub.c @@ -448,7 +448,7 @@ static __always_inline void slab_unlock(struct page *page, unsigned long *flags) * by an _irqsave() lock variant. Except on PREEMPT_RT where locks are different * so we disable interrupts as part of slab_[un]lock(). 
*/ -static inline bool __cmpxchg_double_slab(struct kmem_cache *s, struct page *page, +static inline bool __cmpxchg_double_slab(struct kmem_cache *s, struct slab *slab, void *freelist_old, unsigned long counters_old, void *freelist_new, unsigned long counters_new, const char *n) @@ -458,7 +458,7 @@ static inline bool __cmpxchg_double_slab(struct kmem_cache *s, struct page *page #if defined(CONFIG_HAVE_CMPXCHG_DOUBLE) && \ defined(CONFIG_HAVE_ALIGNED_STRUCT_PAGE) if (s->flags & __CMPXCHG_DOUBLE) { - if (cmpxchg_double(&page->freelist, &page->counters, + if (cmpxchg_double(&slab->freelist, &slab->counters, freelist_old, counters_old, freelist_new, counters_new)) return true; @@ -468,15 +468,15 @@ static inline bool __cmpxchg_double_slab(struct kmem_cache *s, struct page *page /* init to 0 to prevent spurious warnings */ unsigned long flags = 0; - slab_lock(page, &flags); - if (page->freelist == freelist_old && - page->counters == counters_old) { - page->freelist = freelist_new; - page->counters = counters_new; - slab_unlock(page, &flags); + slab_lock(slab_page(slab), &flags); + if (slab->freelist == freelist_old && + slab->counters == counters_old) { + slab->freelist = freelist_new; + slab->counters = counters_new; + slab_unlock(slab_page(slab), &flags); return true; } - slab_unlock(page, &flags); + slab_unlock(slab_page(slab), &flags); } cpu_relax(); @@ -489,7 +489,7 @@ static inline bool __cmpxchg_double_slab(struct kmem_cache *s, struct page *page return false; } -static inline bool cmpxchg_double_slab(struct kmem_cache *s, struct page *page, +static inline bool cmpxchg_double_slab(struct kmem_cache *s, struct slab *slab, void *freelist_old, unsigned long counters_old, void *freelist_new, unsigned long counters_new, const char *n) @@ -497,7 +497,7 @@ static inline bool cmpxchg_double_slab(struct kmem_cache *s, struct page *page, #if defined(CONFIG_HAVE_CMPXCHG_DOUBLE) && \ defined(CONFIG_HAVE_ALIGNED_STRUCT_PAGE) if (s->flags & __CMPXCHG_DOUBLE) { - if (cmpxchg_double(&page->freelist, &page->counters, + if (cmpxchg_double(&slab->freelist, &slab->counters, freelist_old, counters_old, freelist_new, counters_new)) return true; @@ -507,16 +507,16 @@ static inline bool cmpxchg_double_slab(struct kmem_cache *s, struct page *page, unsigned long flags; local_irq_save(flags); - __slab_lock(page); - if (page->freelist == freelist_old && - page->counters == counters_old) { - page->freelist = freelist_new; - page->counters = counters_new; - __slab_unlock(page); + __slab_lock(slab_page(slab)); + if (slab->freelist == freelist_old && + slab->counters == counters_old) { + slab->freelist = freelist_new; + slab->counters = counters_new; + __slab_unlock(slab_page(slab)); local_irq_restore(flags); return true; } - __slab_unlock(page); + __slab_unlock(slab_page(slab)); local_irq_restore(flags); } @@ -2068,7 +2068,7 @@ static inline void *acquire_slab(struct kmem_cache *s, VM_BUG_ON(new.frozen); new.frozen = 1; - if (!__cmpxchg_double_slab(s, slab_page(slab), + if (!__cmpxchg_double_slab(s, slab, freelist, counters, new.freelist, new.counters, "acquire_slab")) @@ -2412,7 +2412,7 @@ static void deactivate_slab(struct kmem_cache *s, struct slab *slab, } l = m; - if (!cmpxchg_double_slab(s, slab_page(slab), + if (!cmpxchg_double_slab(s, slab, old.freelist, old.counters, new.freelist, new.counters, "unfreezing slab")) @@ -2466,7 +2466,7 @@ static void __unfreeze_partials(struct kmem_cache *s, struct slab *partial_slab) new.frozen = 0; - } while (!__cmpxchg_double_slab(s, slab_page(slab), + } while 
(!__cmpxchg_double_slab(s, slab, old.freelist, old.counters, new.freelist, new.counters, "unfreezing slab")); @@ -2837,7 +2837,7 @@ static inline void *get_freelist(struct kmem_cache *s, struct slab *slab) new.inuse = slab->objects; new.frozen = freelist != NULL; - } while (!__cmpxchg_double_slab(s, slab_page(slab), + } while (!__cmpxchg_double_slab(s, slab, freelist, counters, NULL, new.counters, "get_freelist")); @@ -3329,7 +3329,7 @@ static void __slab_free(struct kmem_cache *s, struct slab *slab, } } - } while (!cmpxchg_double_slab(s, slab_page(slab), + } while (!cmpxchg_double_slab(s, slab, prior, counters, head, new.counters, "__slab_free")); From patchwork Mon Oct 4 13:46:37 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Matthew Wilcox X-Patchwork-Id: 12534237 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 1E56BC433EF for ; Mon, 4 Oct 2021 14:50:33 +0000 (UTC) Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by mail.kernel.org (Postfix) with ESMTP id C173C613AC for ; Mon, 4 Oct 2021 14:50:32 +0000 (UTC) DMARC-Filter: OpenDMARC Filter v1.4.1 mail.kernel.org C173C613AC Authentication-Results: mail.kernel.org; dmarc=none (p=none dis=none) header.from=infradead.org Authentication-Results: mail.kernel.org; spf=pass smtp.mailfrom=kvack.org Received: by kanga.kvack.org (Postfix) id 5E9F794004D; Mon, 4 Oct 2021 10:50:32 -0400 (EDT) Received: by kanga.kvack.org (Postfix, from userid 40) id 598DF94000B; Mon, 4 Oct 2021 10:50:32 -0400 (EDT) X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id 4AFF394004D; Mon, 4 Oct 2021 10:50:32 -0400 (EDT) X-Delivered-To: linux-mm@kvack.org Received: from forelay.hostedemail.com (smtprelay0191.hostedemail.com [216.40.44.191]) by kanga.kvack.org (Postfix) with ESMTP id 3C71594000B for ; Mon, 4 Oct 2021 10:50:32 -0400 (EDT) Received: from smtpin06.hostedemail.com (10.5.19.251.rfc1918.com [10.5.19.251]) by forelay05.hostedemail.com (Postfix) with ESMTP id EA1C41828000B for ; Mon, 4 Oct 2021 14:50:31 +0000 (UTC) X-FDA: 78659041062.06.B41F279 Received: from casper.infradead.org (casper.infradead.org [90.155.50.34]) by imf29.hostedemail.com (Postfix) with ESMTP id 27B319008004 for ; Mon, 4 Oct 2021 14:50:31 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=infradead.org; s=casper.20170209; h=Content-Transfer-Encoding:MIME-Version: References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Sender:Reply-To: Content-Type:Content-ID:Content-Description; bh=pS0ASi/81sc9R6hwg8umMOFY0NXDtIk1poxPoXfn+RQ=; b=pGluLTuo3uBmLI7Xd0WbYMCfwY udeF2ZxiQsBZIivFkH4v2JbWF7b+7y7sfJdLi1vucv90NtXFuZ/Fyxn2d7RQixuUBeGOHzYIt4hA3 XRK7Pdh7OtNBk2rgvu0O+rBCT0fbmqLVh0F+5sdvSa1XGM8UGUvskGPAN5+0mFEiRckDtDop2qDUy iH8ZT5B8Xyi+NgBXH65XVMsTbW0JfGXly1vuExF4o4vSJDAaX+XB2a28u6TUS9RVjMslUmfzOwmot GKbuCaQRqdyBxRxPh6fTHsYjM5GDf02wadE5WYwA0zjtHoP4BCBoawzkypUc94SMOEQhT9IlzYYq/ 276RaetQ==; Received: from willy by casper.infradead.org with local (Exim 4.94.2 #2 (Red Hat Linux)) id 1mXPGR-00H1Kr-JV; Mon, 04 Oct 2021 14:48:46 +0000 From: "Matthew Wilcox (Oracle)" To: linux-mm@kvack.org Cc: "Matthew Wilcox (Oracle)" Subject: [PATCH 49/62] mm/slub: Convert get_map() and __fill_map() to struct slab Date: Mon, 4 Oct 2021 14:46:37 +0100 Message-Id: 
<20211004134650.4031813-50-willy@infradead.org> X-Mailer: git-send-email 2.31.1 In-Reply-To: <20211004134650.4031813-1-willy@infradead.org> References: <20211004134650.4031813-1-willy@infradead.org> MIME-Version: 1.0 X-Rspamd-Server: rspam05 X-Rspamd-Queue-Id: 27B319008004 X-Stat-Signature: jgpkrpbgbpwb9kbzug3btjtdejtk51zh Authentication-Results: imf29.hostedemail.com; dkim=pass header.d=infradead.org header.s=casper.20170209 header.b=pGluLTuo; dmarc=none; spf=none (imf29.hostedemail.com: domain of willy@infradead.org has no SPF policy when checking 90.155.50.34) smtp.mailfrom=willy@infradead.org X-HE-Tag: 1633359031-254354 X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: Improve type safety and remove calls to slab_page(). Signed-off-by: Matthew Wilcox (Oracle) --- mm/slub.c | 22 +++++++++++----------- 1 file changed, 11 insertions(+), 11 deletions(-) diff --git a/mm/slub.c b/mm/slub.c index 0d9299679ea2..86d06f6aa743 100644 --- a/mm/slub.c +++ b/mm/slub.c @@ -535,14 +535,14 @@ static unsigned long object_map[BITS_TO_LONGS(MAX_OBJS_PER_PAGE)]; static DEFINE_RAW_SPINLOCK(object_map_lock); static void __fill_map(unsigned long *obj_map, struct kmem_cache *s, - struct page *page) + struct slab *slab) { - void *addr = page_address(page); + void *addr = slab_address(slab); void *p; - bitmap_zero(obj_map, page->objects); + bitmap_zero(obj_map, slab->objects); - for (p = page->freelist; p; p = get_freepointer(s, p)) + for (p = slab->freelist; p; p = get_freepointer(s, p)) set_bit(__obj_to_index(s, addr, p), obj_map); } @@ -567,19 +567,19 @@ static inline bool slab_add_kunit_errors(void) { return false; } #endif /* - * Determine a map of object in use on a page. + * Determine a map of objects in use in a slab. * - * Node listlock must be held to guarantee that the page does + * Node listlock must be held to guarantee that the slab does * not vanish from under us. */ -static unsigned long *get_map(struct kmem_cache *s, struct page *page) +static unsigned long *get_map(struct kmem_cache *s, struct slab *slab) __acquires(&object_map_lock) { VM_BUG_ON(!irqs_disabled()); raw_spin_lock(&object_map_lock); - __fill_map(object_map, s, page); + __fill_map(object_map, s, slab); return object_map; } @@ -4216,7 +4216,7 @@ static void list_slab_objects(struct kmem_cache *s, struct slab *slab, slab_err(s, slab, text, s->name); slab_lock(slab_page(slab), &flags); - map = get_map(s, slab_page(slab)); + map = get_map(s, slab); for_each_object(p, s, addr, slab->objects) { if (!test_bit(__obj_to_index(s, addr, p), map)) { @@ -4964,7 +4964,7 @@ static void validate_slab(struct kmem_cache *s, struct slab *slab, goto unlock; /* Now we know that a valid freelist exists */ - __fill_map(obj_map, s, slab_page(slab)); + __fill_map(obj_map, s, slab); for_each_object(p, s, addr, slab->objects) { u8 val = test_bit(__obj_to_index(s, addr, p), obj_map) ? 
SLUB_RED_INACTIVE : SLUB_RED_ACTIVE; @@ -5170,7 +5170,7 @@ static void process_slab(struct loc_track *t, struct kmem_cache *s, void *addr = slab_address(slab); void *p; - __fill_map(obj_map, s, slab_page(slab)); + __fill_map(obj_map, s, slab); for_each_object(p, s, addr, slab->objects) if (!test_bit(__obj_to_index(s, addr, p), obj_map)) From patchwork Mon Oct 4 13:46:38 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Matthew Wilcox X-Patchwork-Id: 12534239 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id B2C4EC433F5 for ; Mon, 4 Oct 2021 14:52:18 +0000 (UTC) Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by mail.kernel.org (Postfix) with ESMTP id 6403C61175 for ; Mon, 4 Oct 2021 14:52:18 +0000 (UTC) DMARC-Filter: OpenDMARC Filter v1.4.1 mail.kernel.org 6403C61175 Authentication-Results: mail.kernel.org; dmarc=none (p=none dis=none) header.from=infradead.org Authentication-Results: mail.kernel.org; spf=pass smtp.mailfrom=kvack.org Received: by kanga.kvack.org (Postfix) id D60DD94004E; Mon, 4 Oct 2021 10:52:17 -0400 (EDT) Received: by kanga.kvack.org (Postfix, from userid 40) id D10E694000B; Mon, 4 Oct 2021 10:52:17 -0400 (EDT) X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id BD96394004E; Mon, 4 Oct 2021 10:52:17 -0400 (EDT) X-Delivered-To: linux-mm@kvack.org Received: from forelay.hostedemail.com (smtprelay0113.hostedemail.com [216.40.44.113]) by kanga.kvack.org (Postfix) with ESMTP id AA90994000B for ; Mon, 4 Oct 2021 10:52:17 -0400 (EDT) Received: from smtpin02.hostedemail.com (10.5.19.251.rfc1918.com [10.5.19.251]) by forelay02.hostedemail.com (Postfix) with ESMTP id 701792DEBE for ; Mon, 4 Oct 2021 14:52:17 +0000 (UTC) X-FDA: 78659045514.02.CD3FE36 Received: from casper.infradead.org (casper.infradead.org [90.155.50.34]) by imf12.hostedemail.com (Postfix) with ESMTP id 297431000A80 for ; Mon, 4 Oct 2021 14:52:17 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=infradead.org; s=casper.20170209; h=Content-Transfer-Encoding:MIME-Version: References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Sender:Reply-To: Content-Type:Content-ID:Content-Description; bh=RWRCIWW/YKFbWLDMrwFgdy3wJdFZ6F9Jn8ZHDko0M80=; b=PK/62uzhX4XLdTtLoWAd3VJNJu e1oHl7ScZ1fNgWOJa/sWw7tdBwSPT8L2lnlNqhWfGROtmm+wm43zKfnIPjcxkFK0fl6S6yB1Ct5PF aFZFv31dS1x+ZjFd8U5acJ/Yyoz1Vpp6JgmzLWR0yVIUBaeJd2r5yZT1IE/egG771oEFj90jsghWd ePUU2qr05hWflneR1T/IRlvfSSWrpcphvSPYhpgnbo9zXsBhkevJQJBNn4HaoJ8qnfHD2ZYigkWji kzl8/LLAnop24Y9P9DbjrXs5p8L9pPlYA4rvTMf5V9iPQkQSrWCjui3PzFG+UaeOnpxK7wLbOT45v kFDGe5uw==; Received: from willy by casper.infradead.org with local (Exim 4.94.2 #2 (Red Hat Linux)) id 1mXPHg-00H1Yp-MS; Mon, 04 Oct 2021 14:50:04 +0000 From: "Matthew Wilcox (Oracle)" To: linux-mm@kvack.org Cc: "Matthew Wilcox (Oracle)" Subject: [PATCH 50/62] mm/slub: Convert slab_lock() and slab_unlock() to struct slab Date: Mon, 4 Oct 2021 14:46:38 +0100 Message-Id: <20211004134650.4031813-51-willy@infradead.org> X-Mailer: git-send-email 2.31.1 In-Reply-To: <20211004134650.4031813-1-willy@infradead.org> References: <20211004134650.4031813-1-willy@infradead.org> MIME-Version: 1.0 X-Rspamd-Server: rspam03 X-Rspamd-Queue-Id: 297431000A80 X-Stat-Signature: p6nh8niup43wncapruo33r91mtokqsnr 
Authentication-Results: imf12.hostedemail.com; dkim=pass header.d=infradead.org header.s=casper.20170209 header.b="PK/62uzh"; dmarc=none; spf=none (imf12.hostedemail.com: domain of willy@infradead.org has no SPF policy when checking 90.155.50.34) smtp.mailfrom=willy@infradead.org X-HE-Tag: 1633359137-910333 X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: Improve type safety to the point where we can get rid of the assertions that this is not a tail page. Remove a lot of calls to slab_page(). Signed-off-by: Matthew Wilcox (Oracle) --- mm/slub.c | 52 +++++++++++++++++++++++++--------------------------- 1 file changed, 25 insertions(+), 27 deletions(-) diff --git a/mm/slub.c b/mm/slub.c index 86d06f6aa743..5cf305b2b8da 100644 --- a/mm/slub.c +++ b/mm/slub.c @@ -48,7 +48,7 @@ * 1. slab_mutex (Global Mutex) * 2. node->list_lock (Spinlock) * 3. kmem_cache->cpu_slab->lock (Local lock) - * 4. slab_lock(page) (Only on some arches or for debugging) + * 4. slab_lock() (Only on some arches or for debugging) * 5. object_map_lock (Only for debugging) * * slab_mutex @@ -64,10 +64,10 @@ * * The slab_lock is only used for debugging and on arches that do not * have the ability to do a cmpxchg_double. It only protects: - * A. page->freelist -> List of object free in a page - * B. page->inuse -> Number of objects in use - * C. page->objects -> Number of objects in page - * D. page->frozen -> frozen state + * A. slab->freelist -> List of object free in a page + * B. slab->inuse -> Number of objects in use + * C. slab->objects -> Number of objects in page + * D. slab->frozen -> frozen state * * Frozen slabs * @@ -417,28 +417,26 @@ static inline unsigned int oo_objects(struct kmem_cache_order_objects x) /* * Per slab locking using the pagelock */ -static __always_inline void __slab_lock(struct page *page) +static __always_inline void __slab_lock(struct slab *slab) { - VM_BUG_ON_PAGE(PageTail(page), page); - bit_spin_lock(PG_locked, &page->flags); + bit_spin_lock(PG_locked, &slab->flags); } -static __always_inline void __slab_unlock(struct page *page) +static __always_inline void __slab_unlock(struct slab *slab) { - VM_BUG_ON_PAGE(PageTail(page), page); - __bit_spin_unlock(PG_locked, &page->flags); + __bit_spin_unlock(PG_locked, &slab->flags); } -static __always_inline void slab_lock(struct page *page, unsigned long *flags) +static __always_inline void slab_lock(struct slab *slab, unsigned long *flags) { if (IS_ENABLED(CONFIG_PREEMPT_RT)) local_irq_save(*flags); - __slab_lock(page); + __slab_lock(slab); } -static __always_inline void slab_unlock(struct page *page, unsigned long *flags) +static __always_inline void slab_unlock(struct slab *slab, unsigned long *flags) { - __slab_unlock(page); + __slab_unlock(slab); if (IS_ENABLED(CONFIG_PREEMPT_RT)) local_irq_restore(*flags); } @@ -468,15 +466,15 @@ static inline bool __cmpxchg_double_slab(struct kmem_cache *s, struct slab *slab /* init to 0 to prevent spurious warnings */ unsigned long flags = 0; - slab_lock(slab_page(slab), &flags); + slab_lock(slab, &flags); if (slab->freelist == freelist_old && slab->counters == counters_old) { slab->freelist = freelist_new; slab->counters = counters_new; - slab_unlock(slab_page(slab), &flags); + slab_unlock(slab, &flags); return true; } - slab_unlock(slab_page(slab), &flags); + slab_unlock(slab, &flags); } cpu_relax(); @@ -507,16 +505,16 @@ static inline bool cmpxchg_double_slab(struct kmem_cache *s, struct slab 
*slab, unsigned long flags; local_irq_save(flags); - __slab_lock(slab_page(slab)); + __slab_lock(slab); if (slab->freelist == freelist_old && slab->counters == counters_old) { slab->freelist = freelist_new; slab->counters = counters_new; - __slab_unlock(slab_page(slab)); + __slab_unlock(slab); local_irq_restore(flags); return true; } - __slab_unlock(slab_page(slab)); + __slab_unlock(slab); local_irq_restore(flags); } @@ -1353,7 +1351,7 @@ static noinline int free_debug_processing( int ret = 0; spin_lock_irqsave(&n->list_lock, flags); - slab_lock(slab_page(slab), &flags2); + slab_lock(slab, &flags2); if (s->flags & SLAB_CONSISTENCY_CHECKS) { if (!check_slab(s, slab)) @@ -1386,7 +1384,7 @@ static noinline int free_debug_processing( slab_err(s, slab, "Bulk freelist count(%d) invalid(%d)\n", bulk_cnt, cnt); - slab_unlock(slab_page(slab), &flags2); + slab_unlock(slab, &flags2); spin_unlock_irqrestore(&n->list_lock, flags); if (!ret) slab_fix(s, "Object at 0x%p not freed", object); @@ -4214,7 +4212,7 @@ static void list_slab_objects(struct kmem_cache *s, struct slab *slab, void *p; slab_err(s, slab, text, s->name); - slab_lock(slab_page(slab), &flags); + slab_lock(slab, &flags); map = get_map(s, slab); for_each_object(p, s, addr, slab->objects) { @@ -4225,7 +4223,7 @@ static void list_slab_objects(struct kmem_cache *s, struct slab *slab, } } put_map(map); - slab_unlock(slab_page(slab), &flags); + slab_unlock(slab, &flags); #endif } @@ -4958,7 +4956,7 @@ static void validate_slab(struct kmem_cache *s, struct slab *slab, void *addr = slab_address(slab); unsigned long flags; - slab_lock(slab_page(slab), &flags); + slab_lock(slab, &flags); if (!check_slab(s, slab) || !on_freelist(s, slab, NULL)) goto unlock; @@ -4973,7 +4971,7 @@ static void validate_slab(struct kmem_cache *s, struct slab *slab, break; } unlock: - slab_unlock(slab_page(slab), &flags); + slab_unlock(slab, &flags); } static int validate_slab_node(struct kmem_cache *s, From patchwork Mon Oct 4 13:46:39 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Matthew Wilcox X-Patchwork-Id: 12534241 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id C7A98C433F5 for ; Mon, 4 Oct 2021 14:52:41 +0000 (UTC) Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by mail.kernel.org (Postfix) with ESMTP id 5DBCC61373 for ; Mon, 4 Oct 2021 14:52:41 +0000 (UTC) DMARC-Filter: OpenDMARC Filter v1.4.1 mail.kernel.org 5DBCC61373 Authentication-Results: mail.kernel.org; dmarc=none (p=none dis=none) header.from=infradead.org Authentication-Results: mail.kernel.org; spf=pass smtp.mailfrom=kvack.org Received: by kanga.kvack.org (Postfix) id F353694004F; Mon, 4 Oct 2021 10:52:40 -0400 (EDT) Received: by kanga.kvack.org (Postfix, from userid 40) id EE58194000B; Mon, 4 Oct 2021 10:52:40 -0400 (EDT) X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id DD54694004F; Mon, 4 Oct 2021 10:52:40 -0400 (EDT) X-Delivered-To: linux-mm@kvack.org Received: from forelay.hostedemail.com (smtprelay0003.hostedemail.com [216.40.44.3]) by kanga.kvack.org (Postfix) with ESMTP id C7DDE94000B for ; Mon, 4 Oct 2021 10:52:40 -0400 (EDT) Received: from smtpin32.hostedemail.com (10.5.19.251.rfc1918.com [10.5.19.251]) by forelay04.hostedemail.com (Postfix) with ESMTP id 8B5F1250D3 for 
; Mon, 4 Oct 2021 14:52:40 +0000 (UTC) X-FDA: 78659046480.32.F057207 Received: from casper.infradead.org (casper.infradead.org [90.155.50.34]) by imf04.hostedemail.com (Postfix) with ESMTP id 41A0B50009D6 for ; Mon, 4 Oct 2021 14:52:40 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=infradead.org; s=casper.20170209; h=Content-Transfer-Encoding:MIME-Version: References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Sender:Reply-To: Content-Type:Content-ID:Content-Description; bh=FF2KZ97aw0nGufx3iFvBniAMSz+RDBFYPAt9qka6On8=; b=wAbBM8MOKBwpDrib9tFTfZNgoO Zx/HfItUgbKJkfP4nIL8IU6+gahRRRrdaEcBi/krHP6l8ccPF35nt4IJ2gor+2mjvw9dmd22SXQVz 36Pivk0EmyoUyL/zOZ9oV32cesFxpNqxoYbITUKtsDEWcUeOlhwavif2/ZfKGefNf2wRd+PYA0Wf5 eRPinVo2d5kxthGbU4Awxu3CFN6a1ArKSVJdOblFFbnh8P843V2M+KdJ8pLpMqXbzZEAZQ/WLhDgT gHdboViwQbra3QjQACwXgLoJ+TP4+h/5GFM7jVq2aZN5OXAnFqHjtNh0iOmpT0r9J+6EIsD+eGcvC /MjP6k+w==; Received: from willy by casper.infradead.org with local (Exim 4.94.2 #2 (Red Hat Linux)) id 1mXPJW-00H1ez-JO; Mon, 04 Oct 2021 14:51:44 +0000 From: "Matthew Wilcox (Oracle)" To: linux-mm@kvack.org Cc: "Matthew Wilcox (Oracle)" Subject: [PATCH 51/62] mm/slub: Convert setup_page_debug() to setup_slab_debug() Date: Mon, 4 Oct 2021 14:46:39 +0100 Message-Id: <20211004134650.4031813-52-willy@infradead.org> X-Mailer: git-send-email 2.31.1 In-Reply-To: <20211004134650.4031813-1-willy@infradead.org> References: <20211004134650.4031813-1-willy@infradead.org> MIME-Version: 1.0 X-Rspamd-Server: rspam05 X-Rspamd-Queue-Id: 41A0B50009D6 X-Stat-Signature: 1i9niuut43zkoy56erfiwzfworwcer7m Authentication-Results: imf04.hostedemail.com; dkim=pass header.d=infradead.org header.s=casper.20170209 header.b=wAbBM8MO; dmarc=none; spf=none (imf04.hostedemail.com: domain of willy@infradead.org has no SPF policy when checking 90.155.50.34) smtp.mailfrom=willy@infradead.org X-HE-Tag: 1633359160-202674 X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: Removes a call to slab_page() Signed-off-by: Matthew Wilcox (Oracle) --- mm/slub.c | 8 ++++---- 1 file changed, 4 insertions(+), 4 deletions(-) diff --git a/mm/slub.c b/mm/slub.c index 5cf305b2b8da..24111e30c7a2 100644 --- a/mm/slub.c +++ b/mm/slub.c @@ -1250,13 +1250,13 @@ static void setup_object_debug(struct kmem_cache *s, void *object) } static -void setup_page_debug(struct kmem_cache *s, struct page *page, void *addr) +void setup_slab_debug(struct kmem_cache *s, struct slab *slab, void *addr) { if (!kmem_cache_debug_flags(s, SLAB_POISON)) return; metadata_access_enable(); - memset(kasan_reset_tag(addr), POISON_INUSE, page_size(page)); + memset(kasan_reset_tag(addr), POISON_INUSE, slab_size(slab)); metadata_access_disable(); } @@ -1600,7 +1600,7 @@ slab_flags_t kmem_cache_flags(unsigned int object_size, #else /* !CONFIG_SLUB_DEBUG */ static inline void setup_object_debug(struct kmem_cache *s, void *object) {} static inline -void setup_page_debug(struct kmem_cache *s, struct page *page, void *addr) {} +void setup_slab_debug(struct kmem_cache *s, struct slab *slab, void *addr) {} static inline int alloc_debug_processing(struct kmem_cache *s, struct slab *slab, void *object, unsigned long addr) { return 0; } @@ -1919,7 +1919,7 @@ static struct slab *allocate_slab(struct kmem_cache *s, gfp_t flags, int node) start = slab_address(slab); - setup_page_debug(s, slab_page(slab), start); + setup_slab_debug(s, slab, start); shuffle = shuffle_freelist(s, slab); From 
patchwork Mon Oct 4 13:46:40 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Matthew Wilcox X-Patchwork-Id: 12534243 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 6A748C433F5 for ; Mon, 4 Oct 2021 14:54:00 +0000 (UTC) Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by mail.kernel.org (Postfix) with ESMTP id E5585611C0 for ; Mon, 4 Oct 2021 14:53:59 +0000 (UTC) DMARC-Filter: OpenDMARC Filter v1.4.1 mail.kernel.org E5585611C0 Authentication-Results: mail.kernel.org; dmarc=none (p=none dis=none) header.from=infradead.org Authentication-Results: mail.kernel.org; spf=pass smtp.mailfrom=kvack.org Received: by kanga.kvack.org (Postfix) id 82F3A940050; Mon, 4 Oct 2021 10:53:59 -0400 (EDT) Received: by kanga.kvack.org (Postfix, from userid 40) id 7DE6D94000B; Mon, 4 Oct 2021 10:53:59 -0400 (EDT) X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id 6CDFB940050; Mon, 4 Oct 2021 10:53:59 -0400 (EDT) X-Delivered-To: linux-mm@kvack.org Received: from forelay.hostedemail.com (smtprelay0075.hostedemail.com [216.40.44.75]) by kanga.kvack.org (Postfix) with ESMTP id 5938094000B for ; Mon, 4 Oct 2021 10:53:59 -0400 (EDT) Received: from smtpin02.hostedemail.com (10.5.19.251.rfc1918.com [10.5.19.251]) by forelay05.hostedemail.com (Postfix) with ESMTP id 1E9F2181AF5C1 for ; Mon, 4 Oct 2021 14:53:59 +0000 (UTC) X-FDA: 78659049798.02.43AB107 Received: from casper.infradead.org (casper.infradead.org [90.155.50.34]) by imf21.hostedemail.com (Postfix) with ESMTP id CEB96D0389FF for ; Mon, 4 Oct 2021 14:53:58 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=infradead.org; s=casper.20170209; h=Content-Transfer-Encoding:MIME-Version: References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Sender:Reply-To: Content-Type:Content-ID:Content-Description; bh=dxzaPSBzfY0SrjP2w2cNw5gqRiPW5D1VARcKJPnKPv0=; b=oW2iIvdE9U97oMshStcSVtDteN vXqi5Q14GotqcBDfKSCex4YQJ2Ip/7V6YwtzwvGRv9YLX1gXA0aGcYSQlvC/AUUM1qtlyPmATSCxV cX00r5hxBfF1FreJ42Gk1M5T3yoO2e8psGcA6VuEbcxt4Y85bw7Vn1YaN8lbwPjNKllSR5iVjxwbA vuV33udEqrDQRhBg28tsTi+QVFuRu4nxrec4s0d005B+8bVswBI4dtDeGid23IPkmPtOteZ/veDcv USUtl4ocrtYor4MjypO0fHGvS7jKkD8VtvUWnknXpwVh09Ad0OGG+UimhWsz+ls3/x1X3TgJ08bLA aSbacz2A==; Received: from willy by casper.infradead.org with local (Exim 4.94.2 #2 (Red Hat Linux)) id 1mXPJy-00H1hV-Vc; Mon, 04 Oct 2021 14:52:20 +0000 From: "Matthew Wilcox (Oracle)" To: linux-mm@kvack.org Cc: "Matthew Wilcox (Oracle)" Subject: [PATCH 52/62] mm/slub: Convert pfmemalloc_match() to take a struct slab Date: Mon, 4 Oct 2021 14:46:40 +0100 Message-Id: <20211004134650.4031813-53-willy@infradead.org> X-Mailer: git-send-email 2.31.1 In-Reply-To: <20211004134650.4031813-1-willy@infradead.org> References: <20211004134650.4031813-1-willy@infradead.org> MIME-Version: 1.0 X-Rspamd-Server: rspam01 X-Rspamd-Queue-Id: CEB96D0389FF X-Stat-Signature: 7pj7cbzuw6no7zdefn435ne5js85s59h Authentication-Results: imf21.hostedemail.com; dkim=pass header.d=infradead.org header.s=casper.20170209 header.b=oW2iIvdE; dmarc=none; spf=none (imf21.hostedemail.com: domain of willy@infradead.org has no SPF policy when checking 90.155.50.34) smtp.mailfrom=willy@infradead.org X-HE-Tag: 1633359238-779704 X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, 
version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: Improves type safety and removes calls to slab_page(). Signed-off-by: Matthew Wilcox (Oracle) --- mm/slub.c | 10 +++++----- 1 file changed, 5 insertions(+), 5 deletions(-) diff --git a/mm/slub.c b/mm/slub.c index 24111e30c7a2..7e2c5342196a 100644 --- a/mm/slub.c +++ b/mm/slub.c @@ -2083,7 +2083,7 @@ static void put_cpu_partial(struct kmem_cache *s, struct slab *slab, int drain); static inline void put_cpu_partial(struct kmem_cache *s, struct slab *slab, int drain) { } #endif -static inline bool pfmemalloc_match(struct page *page, gfp_t gfpflags); +static inline bool pfmemalloc_match(struct slab *slab, gfp_t gfpflags); /* * Try to allocate a partial slab from a specific node. @@ -2110,7 +2110,7 @@ static void *get_partial_node(struct kmem_cache *s, struct kmem_cache_node *n, list_for_each_entry_safe(slab, slab2, &n->partial, slab_list) { void *t; - if (!pfmemalloc_match(slab_page(slab), gfpflags)) + if (!pfmemalloc_match(slab, gfpflags)) continue; t = acquire_slab(s, n, slab, object == NULL, &objects); @@ -2788,9 +2788,9 @@ slab_out_of_memory(struct kmem_cache *s, gfp_t gfpflags, int nid) #endif } -static inline bool pfmemalloc_match(struct page *page, gfp_t gfpflags) +static inline bool pfmemalloc_match(struct slab *slab, gfp_t gfpflags) { - if (unlikely(PageSlabPfmemalloc(page))) + if (unlikely(slab_test_pfmemalloc(slab))) return gfp_pfmemalloc_allowed(gfpflags); return true; @@ -3017,7 +3017,7 @@ static void *___slab_alloc(struct kmem_cache *s, gfp_t gfpflags, int node, } } - if (unlikely(!pfmemalloc_match(slab_page(slab), gfpflags))) + if (unlikely(!pfmemalloc_match(slab, gfpflags))) /* * For !pfmemalloc_match() case we don't load freelist so that * we don't make further mismatched allocations easier. 
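
The conversions in the patches above all follow one shape: a helper that used to take a struct page now takes a struct slab, and the remaining page-level access is absorbed by accessors such as slab_page(), slab_address() and slab_test_pfmemalloc(), so most call sites stop converting back and forth. Below is a minimal user-space sketch of that calling-convention change, using pfmemalloc_match() from patch 52 as the example. The struct slab layout, the flag bits and the helper bodies here are simplified stand-ins for illustration only; they are not the kernel's actual definitions.

/*
 * Simplified stand-ins -- NOT the kernel's real struct slab or flag layout.
 * They only illustrate the interface change made by these patches:
 * callers hand the allocator a struct slab and never reach for the page.
 */
#include <stdbool.h>
#include <stdio.h>

#define SLAB_PFMEMALLOC (1UL << 0)	/* stand-in for the pfmemalloc flag bit */
#define __GFP_MEMALLOC  (1UL << 1)	/* stand-in for the gfp reserve flag */

struct slab {
	unsigned long flags;
	unsigned int inuse, objects;
};

/*
 * Stand-in for slab_test_pfmemalloc(): it reads slab->flags directly,
 * so no PageSlab() assertion is needed (cf. patch 53 removing
 * pfmemalloc_match_unsafe()).
 */
static bool slab_test_pfmemalloc(const struct slab *slab)
{
	return slab->flags & SLAB_PFMEMALLOC;
}

static bool gfp_pfmemalloc_allowed(unsigned long gfpflags)
{
	return gfpflags & __GFP_MEMALLOC;
}

/* After the conversion: takes a struct slab, mirroring patch 52. */
static bool pfmemalloc_match(const struct slab *slab, unsigned long gfpflags)
{
	if (slab_test_pfmemalloc(slab))
		return gfp_pfmemalloc_allowed(gfpflags);
	return true;
}

int main(void)
{
	struct slab reserve = { .flags = SLAB_PFMEMALLOC };
	struct slab normal  = { .flags = 0 };

	/*
	 * A pfmemalloc slab only satisfies allocations allowed to dip
	 * into reserves; an ordinary slab satisfies anything.
	 * Prints: 0 1 1
	 */
	printf("%d %d %d\n",
	       pfmemalloc_match(&reserve, 0),
	       pfmemalloc_match(&reserve, __GFP_MEMALLOC),
	       pfmemalloc_match(&normal, 0));
	return 0;
}

The same shape applies to object_err(), print_trailer(), slab_err(), trace(), the cmpxchg_double helpers and slab_lock()/slab_unlock() converted above: the struct page parameter becomes struct slab, and slab_page() survives only inside the few helpers that still genuinely need the page.
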
From patchwork Mon Oct 4 13:46:41 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Matthew Wilcox X-Patchwork-Id: 12534247 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id B46D3C433F5 for ; Mon, 4 Oct 2021 14:54:19 +0000 (UTC) Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by mail.kernel.org (Postfix) with ESMTP id 6250761373 for ; Mon, 4 Oct 2021 14:54:19 +0000 (UTC) DMARC-Filter: OpenDMARC Filter v1.4.1 mail.kernel.org 6250761373 Authentication-Results: mail.kernel.org; dmarc=none (p=none dis=none) header.from=infradead.org Authentication-Results: mail.kernel.org; spf=pass smtp.mailfrom=kvack.org Received: by kanga.kvack.org (Postfix) id EE695940051; Mon, 4 Oct 2021 10:54:18 -0400 (EDT) Received: by kanga.kvack.org (Postfix, from userid 40) id E950F94000B; Mon, 4 Oct 2021 10:54:18 -0400 (EDT) X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id D5D1E940051; Mon, 4 Oct 2021 10:54:18 -0400 (EDT) X-Delivered-To: linux-mm@kvack.org Received: from forelay.hostedemail.com (smtprelay0183.hostedemail.com [216.40.44.183]) by kanga.kvack.org (Postfix) with ESMTP id C411494000B for ; Mon, 4 Oct 2021 10:54:18 -0400 (EDT) Received: from smtpin33.hostedemail.com (10.5.19.251.rfc1918.com [10.5.19.251]) by forelay04.hostedemail.com (Postfix) with ESMTP id 849952DEC8 for ; Mon, 4 Oct 2021 14:54:18 +0000 (UTC) X-FDA: 78659050596.33.0DB5539 Received: from casper.infradead.org (casper.infradead.org [90.155.50.34]) by imf23.hostedemail.com (Postfix) with ESMTP id 2CE83900070C for ; Mon, 4 Oct 2021 14:54:18 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=infradead.org; s=casper.20170209; h=Content-Transfer-Encoding:MIME-Version: References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Sender:Reply-To: Content-Type:Content-ID:Content-Description; bh=/qXjlT9umJV3/fh5rGiESQtEI1YS5Uel0xJHsvq5R+0=; b=oCdyqFHpgJGUlYKoStAtkfIg13 Np44sNRkLsXnE/EyauG5/5CGoDaiJEnmZYlwDzijqnptep1N/6O6g91GIQoQ35u1n3VEgTQSaGYxV yEFETI0g0mrNe/4MUYqoTxM5b67ceCOLNg6arJg8cmvhF/JY3ZRzY/HxDlAOlzKAvovRpPffHRIbu bdpYwFWuoXLD5tOZXWhQ+rXxdNwSBrhJ2sDz7YGcdEpA7JRbP/Xnc8xLbq0AmiuqaCLtSGYjBlKrA IecNtuYu4bfjG6fI49ikUn9YkDEr8rF/NscrCD1S3hRoAPSuqW5rTiS7DAqqIPV9FLbjOC7tJcy9/ YgBVyU9w==; Received: from willy by casper.infradead.org with local (Exim 4.94.2 #2 (Red Hat Linux)) id 1mXPKi-00H1jC-Nr; Mon, 04 Oct 2021 14:53:14 +0000 From: "Matthew Wilcox (Oracle)" To: linux-mm@kvack.org Cc: "Matthew Wilcox (Oracle)" Subject: [PATCH 53/62] mm/slub: Remove pfmemalloc_match_unsafe() Date: Mon, 4 Oct 2021 14:46:41 +0100 Message-Id: <20211004134650.4031813-54-willy@infradead.org> X-Mailer: git-send-email 2.31.1 In-Reply-To: <20211004134650.4031813-1-willy@infradead.org> References: <20211004134650.4031813-1-willy@infradead.org> MIME-Version: 1.0 X-Rspamd-Server: rspam03 X-Rspamd-Queue-Id: 2CE83900070C X-Stat-Signature: uea5qww1x93dfi1joq5yj7ceu5dfnd7f Authentication-Results: imf23.hostedemail.com; dkim=pass header.d=infradead.org header.s=casper.20170209 header.b=oCdyqFHp; dmarc=none; spf=none (imf23.hostedemail.com: domain of willy@infradead.org has no SPF policy when checking 90.155.50.34) smtp.mailfrom=willy@infradead.org X-HE-Tag: 1633359258-563489 X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 
Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: slab_test_pfmemalloc() doesn't need to check PageSlab() (unlike PageSlabPfmemalloc()), so we don't need a pfmemalloc_match_unsafe() variant any more. Signed-off-by: Matthew Wilcox (Oracle) --- mm/slub.c | 15 +-------------- 1 file changed, 1 insertion(+), 14 deletions(-) diff --git a/mm/slub.c b/mm/slub.c index 7e2c5342196a..229fc56809c2 100644 --- a/mm/slub.c +++ b/mm/slub.c @@ -2796,19 +2796,6 @@ static inline bool pfmemalloc_match(struct slab *slab, gfp_t gfpflags) return true; } -/* - * A variant of pfmemalloc_match() that tests page flags without asserting - * PageSlab. Intended for opportunistic checks before taking a lock and - * rechecking that nobody else freed the page under us. - */ -static inline bool pfmemalloc_match_unsafe(struct page *page, gfp_t gfpflags) -{ - if (unlikely(__PageSlabPfmemalloc(page))) - return gfp_pfmemalloc_allowed(gfpflags); - - return true; -} - /* * Check the freelist of a slab and either transfer the freelist to the * per cpu freelist or deactivate the slab @@ -2905,7 +2892,7 @@ static void *___slab_alloc(struct kmem_cache *s, gfp_t gfpflags, int node, * PFMEMALLOC but right now, we lose the pfmemalloc * information when the page leaves the per-cpu allocator */ - if (unlikely(!pfmemalloc_match_unsafe(slab_page(slab), gfpflags))) + if (unlikely(!pfmemalloc_match(slab, gfpflags))) goto deactivate_slab; /* must check again c->slab in case we got preempted and it changed */ From patchwork Mon Oct 4 13:46:42 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Matthew Wilcox X-Patchwork-Id: 12534249 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id C10FCC433EF for ; Mon, 4 Oct 2021 14:55:16 +0000 (UTC) Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by mail.kernel.org (Postfix) with ESMTP id 585B861373 for ; Mon, 4 Oct 2021 14:55:16 +0000 (UTC) DMARC-Filter: OpenDMARC Filter v1.4.1 mail.kernel.org 585B861373 Authentication-Results: mail.kernel.org; dmarc=none (p=none dis=none) header.from=infradead.org Authentication-Results: mail.kernel.org; spf=pass smtp.mailfrom=kvack.org Received: by kanga.kvack.org (Postfix) id E215C940053; Mon, 4 Oct 2021 10:55:15 -0400 (EDT) Received: by kanga.kvack.org (Postfix, from userid 40) id DD0DF94000B; Mon, 4 Oct 2021 10:55:15 -0400 (EDT) X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id BFD13940053; Mon, 4 Oct 2021 10:55:15 -0400 (EDT) X-Delivered-To: linux-mm@kvack.org Received: from forelay.hostedemail.com (smtprelay0063.hostedemail.com [216.40.44.63]) by kanga.kvack.org (Postfix) with ESMTP id ADA0794000B for ; Mon, 4 Oct 2021 10:55:15 -0400 (EDT) Received: from smtpin34.hostedemail.com (10.5.19.251.rfc1918.com [10.5.19.251]) by forelay03.hostedemail.com (Postfix) with ESMTP id 68B178249980 for ; Mon, 4 Oct 2021 14:55:15 +0000 (UTC) X-FDA: 78659052990.34.3DBE04B Received: from casper.infradead.org (casper.infradead.org [90.155.50.34]) by imf18.hostedemail.com (Postfix) with ESMTP id CE3474002811 for ; Mon, 4 Oct 2021 14:55:14 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=infradead.org; s=casper.20170209; h=Content-Transfer-Encoding:MIME-Version: 
References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Sender:Reply-To: Content-Type:Content-ID:Content-Description; bh=V/lL4x2V3BPu5KCV3fcrW6Ht6889o/HF9D0NlclwwM4=; b=Srg/3dpgeBqu9BPdVizG9oK7/t zz+eK214rsleInjWcjeZaTFThN+6Q/zXWLBEoR5kq6bvLVm9OI+lUc0wk06k9g26EaeqxW9PPatnb V/Umfixh9o3yEUkZAabKQQVBL0EPq5TTzioWB9VJlGI40R5qlmb3tAjA+5Bo2bia69EoUD2usn3i4 aQPZjG8TmbNbQfOdCoXFVeGD2GFrsnZmvzJU3Ra4IYGRU445MK47sX8O/UMR0T2HFTHhJGwI/n2wf JAo5xviioWSM39uiO0n2HzHCzZDsEnMMeZ92x3TGuQ2QTAX7YETc0qdqVh65EUfZUp6kyDy2qAxfx tXaa+6xw==; Received: from willy by casper.infradead.org with local (Exim 4.94.2 #2 (Red Hat Linux)) id 1mXPLp-00H1nF-Nm; Mon, 04 Oct 2021 14:54:07 +0000 From: "Matthew Wilcox (Oracle)" To: linux-mm@kvack.org Cc: "Matthew Wilcox (Oracle)" Subject: [PATCH 54/62] mm: Convert slab to use struct slab Date: Mon, 4 Oct 2021 14:46:42 +0100 Message-Id: <20211004134650.4031813-55-willy@infradead.org> X-Mailer: git-send-email 2.31.1 In-Reply-To: <20211004134650.4031813-1-willy@infradead.org> References: <20211004134650.4031813-1-willy@infradead.org> MIME-Version: 1.0 X-Rspamd-Queue-Id: CE3474002811 X-Stat-Signature: zhzoensza39y3kayk6dxftpiye38895b Authentication-Results: imf18.hostedemail.com; dkim=pass header.d=infradead.org header.s=casper.20170209 header.b="Srg/3dpg"; spf=none (imf18.hostedemail.com: domain of willy@infradead.org has no SPF policy when checking 90.155.50.34) smtp.mailfrom=willy@infradead.org; dmarc=none X-Rspamd-Server: rspam06 X-HE-Tag: 1633359314-413657 X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: Use struct slab throughout the slab allocator. Signed-off-by: Matthew Wilcox (Oracle) --- mm/slab.c | 405 +++++++++++++++++++++++++++--------------------------- mm/slab.h | 24 +--- 2 files changed, 208 insertions(+), 221 deletions(-) diff --git a/mm/slab.c b/mm/slab.c index 0d515fd697a0..29dc09e784b8 100644 --- a/mm/slab.c +++ b/mm/slab.c @@ -218,7 +218,7 @@ static void cache_reap(struct work_struct *unused); static inline void fixup_objfreelist_debug(struct kmem_cache *cachep, void **list); static inline void fixup_slab_list(struct kmem_cache *cachep, - struct kmem_cache_node *n, struct page *page, + struct kmem_cache_node *n, struct slab *slab, void **list); static int slab_early_init = 1; @@ -372,10 +372,10 @@ static void **dbg_userword(struct kmem_cache *cachep, void *objp) static int slab_max_order = SLAB_MAX_ORDER_LO; static bool slab_max_order_set __initdata; -static inline void *index_to_obj(struct kmem_cache *cache, - const struct page *page, unsigned int idx) +static inline void *index_to_obj(const struct kmem_cache *cache, + const struct slab *slab, unsigned int idx) { - return page->s_mem + cache->size * idx; + return slab->s_mem + cache->size * idx; } #define BOOT_CPUCACHE_ENTRIES 1 @@ -418,7 +418,7 @@ static unsigned int cache_estimate(unsigned long gfporder, size_t buffer_size, * * If the slab management structure is off the slab, then the * alignment will already be calculated into the size. Because - * the slabs are all pages aligned, the objects will be at the + * the slabs are all page aligned, the objects will be at the * correct alignment when allocated. 
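The index_to_obj() conversion above keeps SLAB's object addressing scheme intact: object i lives at s_mem + size * i, and obj_to_index() inverts that mapping. The following userspace sketch (not kernel code; toy_cache and toy_slab are made-up stand-ins carrying only the fields the diff touches) models the round trip, with a plain division standing in for the kernel's reciprocal_divide().

#include <assert.h>
#include <stdio.h>
#include <stdlib.h>

struct toy_cache { size_t size; unsigned int num; };
struct toy_slab { void *s_mem; unsigned int active; };

static void *index_to_obj(const struct toy_cache *cache,
			  const struct toy_slab *slab, unsigned int idx)
{
	return (char *)slab->s_mem + cache->size * idx;
}

static unsigned int obj_to_index(const struct toy_cache *cache,
				 const struct toy_slab *slab, void *obj)
{
	return (unsigned int)(((char *)obj - (char *)slab->s_mem) / cache->size);
}

int main(void)
{
	struct toy_cache cache = { .size = 64, .num = 8 };
	struct toy_slab slab = { .s_mem = malloc(cache.size * cache.num) };
	unsigned int i;

	for (i = 0; i < cache.num; i++)
		assert(obj_to_index(&cache, &slab, index_to_obj(&cache, &slab, i)) == i);
	printf("all %u objects round-trip\n", cache.num);
	free(slab.s_mem);
	return 0;
}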
*/ if (flags & (CFLGS_OBJFREELIST_SLAB | CFLGS_OFF_SLAB)) { @@ -550,17 +550,17 @@ static struct array_cache *alloc_arraycache(int node, int entries, } static noinline void cache_free_pfmemalloc(struct kmem_cache *cachep, - struct page *page, void *objp) + struct slab *slab, void *objp) { struct kmem_cache_node *n; - int page_node; + int slab_node; LIST_HEAD(list); - page_node = page_to_nid(page); - n = get_node(cachep, page_node); + slab_node = slab_nid(slab); + n = get_node(cachep, slab_node); spin_lock(&n->list_lock); - free_block(cachep, &objp, 1, page_node, &list); + free_block(cachep, &objp, 1, slab_node, &list); spin_unlock(&n->list_lock); slabs_destroy(cachep, &list); @@ -1367,10 +1367,11 @@ slab_out_of_memory(struct kmem_cache *cachep, gfp_t gfpflags, int nodeid) * did not request dmaable memory, we might get it, but that * would be relatively rare and ignorable. */ -static struct page *kmem_getpages(struct kmem_cache *cachep, gfp_t flags, +static struct slab *kmem_getpages(struct kmem_cache *cachep, gfp_t flags, int nodeid) { struct page *page; + struct slab *slab; flags |= cachep->allocflags; @@ -1380,44 +1381,42 @@ static struct page *kmem_getpages(struct kmem_cache *cachep, gfp_t flags, return NULL; } - account_slab_page(page, cachep->gfporder, cachep, flags); + slab = (struct slab *)page; + account_slab(slab, cachep->gfporder, cachep, flags); __SetPageSlab(page); /* Record if ALLOC_NO_WATERMARKS was set when allocating the slab */ if (sk_memalloc_socks() && page_is_pfmemalloc(page)) - SetPageSlabPfmemalloc(page); + slab_set_pfmemalloc(slab); - return page; + return slab; } /* * Interface to system's page release. */ -static void kmem_freepages(struct kmem_cache *cachep, struct page *page) +static void kmem_freepages(struct kmem_cache *cachep, struct slab *slab) { + struct page *page = slab_page(slab); int order = cachep->gfporder; - BUG_ON(!PageSlab(page)); - __ClearPageSlabPfmemalloc(page); + BUG_ON(!slab_test_cache(slab)); + __slab_clear_pfmemalloc(slab); __ClearPageSlab(page); page_mapcount_reset(page); - /* In union with page->mapping where page allocator expects NULL */ - page->slab_cache = NULL; + page->mapping = NULL; if (current->reclaim_state) current->reclaim_state->reclaimed_slab += 1 << order; - unaccount_slab_page(page, order, cachep); + unaccount_slab(slab, order, cachep); __free_pages(page, order); } static void kmem_rcu_free(struct rcu_head *head) { - struct kmem_cache *cachep; - struct page *page; + struct slab *slab = container_of(head, struct slab, rcu_head); + struct kmem_cache *cachep = slab->slab_cache; - page = container_of(head, struct page, rcu_head); - cachep = page->slab_cache; - - kmem_freepages(cachep, page); + kmem_freepages(cachep, slab); } #if DEBUG @@ -1553,18 +1552,18 @@ static void check_poison_obj(struct kmem_cache *cachep, void *objp) /* Print some data about the neighboring objects, if they * exist: */ - struct page *page = virt_to_head_page(objp); + struct slab *slab = virt_to_slab(objp); unsigned int objnr; - objnr = obj_to_index(cachep, page, objp); + objnr = obj_to_index(cachep, slab_page(slab), objp); if (objnr) { - objp = index_to_obj(cachep, page, objnr - 1); + objp = index_to_obj(cachep, slab, objnr - 1); realobj = (char *)objp + obj_offset(cachep); pr_err("Prev obj: start=%px, len=%d\n", realobj, size); print_objinfo(cachep, objp, 2); } if (objnr + 1 < cachep->num) { - objp = index_to_obj(cachep, page, objnr + 1); + objp = index_to_obj(cachep, slab, objnr + 1); realobj = (char *)objp + obj_offset(cachep); pr_err("Next obj: 
start=%px, len=%d\n", realobj, size); print_objinfo(cachep, objp, 2); @@ -1575,17 +1574,17 @@ static void check_poison_obj(struct kmem_cache *cachep, void *objp) #if DEBUG static void slab_destroy_debugcheck(struct kmem_cache *cachep, - struct page *page) + struct slab *slab) { int i; if (OBJFREELIST_SLAB(cachep) && cachep->flags & SLAB_POISON) { - poison_obj(cachep, page->freelist - obj_offset(cachep), + poison_obj(cachep, slab->freelist - obj_offset(cachep), POISON_FREE); } for (i = 0; i < cachep->num; i++) { - void *objp = index_to_obj(cachep, page, i); + void *objp = index_to_obj(cachep, slab, i); if (cachep->flags & SLAB_POISON) { check_poison_obj(cachep, objp); @@ -1601,7 +1600,7 @@ static void slab_destroy_debugcheck(struct kmem_cache *cachep, } #else static void slab_destroy_debugcheck(struct kmem_cache *cachep, - struct page *page) + struct slab *slab) { } #endif @@ -1609,26 +1608,26 @@ static void slab_destroy_debugcheck(struct kmem_cache *cachep, /** * slab_destroy - destroy and release all objects in a slab * @cachep: cache pointer being destroyed - * @page: page pointer being destroyed + * @slab: slab being destroyed * - * Destroy all the objs in a slab page, and release the mem back to the system. - * Before calling the slab page must have been unlinked from the cache. The + * Destroy all the objs in a slab, and release the mem back to the system. + * Before calling the slab must have been unlinked from the cache. The * kmem_cache_node ->list_lock is not held/needed. */ -static void slab_destroy(struct kmem_cache *cachep, struct page *page) +static void slab_destroy(struct kmem_cache *cachep, struct slab *slab) { void *freelist; - freelist = page->freelist; - slab_destroy_debugcheck(cachep, page); + freelist = slab->freelist; + slab_destroy_debugcheck(cachep, slab); if (unlikely(cachep->flags & SLAB_TYPESAFE_BY_RCU)) - call_rcu(&page->rcu_head, kmem_rcu_free); + call_rcu(&slab->rcu_head, kmem_rcu_free); else - kmem_freepages(cachep, page); + kmem_freepages(cachep, slab); /* * From now on, we don't use freelist - * although actual page can be freed in rcu context + * although actual slab can be freed in rcu context */ if (OFF_SLAB(cachep)) kmem_cache_free(cachep->freelist_cache, freelist); @@ -1640,11 +1639,11 @@ static void slab_destroy(struct kmem_cache *cachep, struct page *page) */ static void slabs_destroy(struct kmem_cache *cachep, struct list_head *list) { - struct page *page, *n; + struct slab *slab, *n; - list_for_each_entry_safe(page, n, list, slab_list) { - list_del(&page->slab_list); - slab_destroy(cachep, page); + list_for_each_entry_safe(slab, n, list, slab_list) { + list_del(&slab->slab_list); + slab_destroy(cachep, slab); } } @@ -2194,7 +2193,7 @@ static int drain_freelist(struct kmem_cache *cache, { struct list_head *p; int nr_freed; - struct page *page; + struct slab *slab; nr_freed = 0; while (nr_freed < tofree && !list_empty(&n->slabs_free)) { @@ -2206,8 +2205,8 @@ static int drain_freelist(struct kmem_cache *cache, goto out; } - page = list_entry(p, struct page, slab_list); - list_del(&page->slab_list); + slab = list_entry(p, struct slab, slab_list); + list_del(&slab->slab_list); n->free_slabs--; n->total_slabs--; /* @@ -2216,7 +2215,7 @@ static int drain_freelist(struct kmem_cache *cache, */ n->free_objects -= cache->num; spin_unlock_irq(&n->list_lock); - slab_destroy(cache, page); + slab_destroy(cache, slab); nr_freed++; } out: @@ -2291,14 +2290,14 @@ void __kmem_cache_release(struct kmem_cache *cachep) * which are all initialized during 
kmem_cache_init(). */ static void *alloc_slabmgmt(struct kmem_cache *cachep, - struct page *page, int colour_off, + struct slab *slab, int colour_off, gfp_t local_flags, int nodeid) { void *freelist; - void *addr = page_address(page); + void *addr = slab_address(slab); - page->s_mem = addr + colour_off; - page->active = 0; + slab->s_mem = addr + colour_off; + slab->active = 0; if (OBJFREELIST_SLAB(cachep)) freelist = NULL; @@ -2315,24 +2314,24 @@ static void *alloc_slabmgmt(struct kmem_cache *cachep, return freelist; } -static inline freelist_idx_t get_free_obj(struct page *page, unsigned int idx) +static inline freelist_idx_t get_free_obj(struct slab *slab, unsigned int idx) { - return ((freelist_idx_t *)page->freelist)[idx]; + return ((freelist_idx_t *)slab->freelist)[idx]; } -static inline void set_free_obj(struct page *page, +static inline void set_free_obj(struct slab *slab, unsigned int idx, freelist_idx_t val) { - ((freelist_idx_t *)(page->freelist))[idx] = val; + ((freelist_idx_t *)(slab->freelist))[idx] = val; } -static void cache_init_objs_debug(struct kmem_cache *cachep, struct page *page) +static void cache_init_objs_debug(struct kmem_cache *cachep, struct slab *slab) { #if DEBUG int i; for (i = 0; i < cachep->num; i++) { - void *objp = index_to_obj(cachep, page, i); + void *objp = index_to_obj(cachep, slab, i); if (cachep->flags & SLAB_STORE_USER) *dbg_userword(cachep, objp) = NULL; @@ -2416,17 +2415,17 @@ static freelist_idx_t next_random_slot(union freelist_init_state *state) } /* Swap two freelist entries */ -static void swap_free_obj(struct page *page, unsigned int a, unsigned int b) +static void swap_free_obj(struct slab *slab, unsigned int a, unsigned int b) { - swap(((freelist_idx_t *)page->freelist)[a], - ((freelist_idx_t *)page->freelist)[b]); + swap(((freelist_idx_t *)slab->freelist)[a], + ((freelist_idx_t *)slab->freelist)[b]); } /* * Shuffle the freelist initialization state based on pre-computed lists. * return true if the list was successfully shuffled, false otherwise. 
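The helpers above treat slab->freelist as an array of freelist_idx_t, one entry per object, and the shuffle that follows randomises the order in which those entries hand out objects. A small userspace sketch of the same scheme, with rand() standing in for prandom_u32_state() and the identity-then-Fisher-Yates steps mirroring the !precomputed path:

#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

typedef uint16_t freelist_idx_t;	/* assumption: a small per-object index type */

static void swap_free_obj(freelist_idx_t *freelist, unsigned int a, unsigned int b)
{
	freelist_idx_t tmp = freelist[a];

	freelist[a] = freelist[b];
	freelist[b] = tmp;
}

static void shuffle_freelist(freelist_idx_t *freelist, unsigned int count)
{
	unsigned int i;

	/* Identity mapping first, then a Fisher-Yates shuffle. */
	for (i = 0; i < count; i++)
		freelist[i] = i;
	for (i = count - 1; i > 0; i--)
		swap_free_obj(freelist, i, (unsigned int)rand() % (i + 1));
}

int main(void)
{
	freelist_idx_t freelist[8];
	unsigned int i;

	shuffle_freelist(freelist, 8);
	for (i = 0; i < 8; i++)
		printf("allocation %u hands out object %u\n", i, freelist[i]);
	return 0;
}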
*/ -static bool shuffle_freelist(struct kmem_cache *cachep, struct page *page) +static bool shuffle_freelist(struct kmem_cache *cachep, struct slab *slab) { unsigned int objfreelist = 0, i, rand, count = cachep->num; union freelist_init_state state; @@ -2443,7 +2442,7 @@ static bool shuffle_freelist(struct kmem_cache *cachep, struct page *page) objfreelist = count - 1; else objfreelist = next_random_slot(&state); - page->freelist = index_to_obj(cachep, page, objfreelist) + + slab->freelist = index_to_obj(cachep, slab, objfreelist) + obj_offset(cachep); count--; } @@ -2454,51 +2453,51 @@ static bool shuffle_freelist(struct kmem_cache *cachep, struct page *page) */ if (!precomputed) { for (i = 0; i < count; i++) - set_free_obj(page, i, i); + set_free_obj(slab, i, i); /* Fisher-Yates shuffle */ for (i = count - 1; i > 0; i--) { rand = prandom_u32_state(&state.rnd_state); rand %= (i + 1); - swap_free_obj(page, i, rand); + swap_free_obj(slab, i, rand); } } else { for (i = 0; i < count; i++) - set_free_obj(page, i, next_random_slot(&state)); + set_free_obj(slab, i, next_random_slot(&state)); } if (OBJFREELIST_SLAB(cachep)) - set_free_obj(page, cachep->num - 1, objfreelist); + set_free_obj(slab, cachep->num - 1, objfreelist); return true; } #else static inline bool shuffle_freelist(struct kmem_cache *cachep, - struct page *page) + struct slab *slab) { return false; } #endif /* CONFIG_SLAB_FREELIST_RANDOM */ static void cache_init_objs(struct kmem_cache *cachep, - struct page *page) + struct slab *slab) { int i; void *objp; bool shuffled; - cache_init_objs_debug(cachep, page); + cache_init_objs_debug(cachep, slab); /* Try to randomize the freelist if enabled */ - shuffled = shuffle_freelist(cachep, page); + shuffled = shuffle_freelist(cachep, slab); if (!shuffled && OBJFREELIST_SLAB(cachep)) { - page->freelist = index_to_obj(cachep, page, cachep->num - 1) + + slab->freelist = index_to_obj(cachep, slab, cachep->num - 1) + obj_offset(cachep); } for (i = 0; i < cachep->num; i++) { - objp = index_to_obj(cachep, page, i); + objp = index_to_obj(cachep, slab, i); objp = kasan_init_slab_obj(cachep, objp); /* constructor could break poison info */ @@ -2509,41 +2508,41 @@ static void cache_init_objs(struct kmem_cache *cachep, } if (!shuffled) - set_free_obj(page, i, i); + set_free_obj(slab, i, i); } } -static void *slab_get_obj(struct kmem_cache *cachep, struct page *page) +static void *slab_get_obj(struct kmem_cache *cachep, struct slab *slab) { void *objp; - objp = index_to_obj(cachep, page, get_free_obj(page, page->active)); - page->active++; + objp = index_to_obj(cachep, slab, get_free_obj(slab, slab->active)); + slab->active++; return objp; } static void slab_put_obj(struct kmem_cache *cachep, - struct page *page, void *objp) + struct slab *slab, void *objp) { - unsigned int objnr = obj_to_index(cachep, page, objp); + unsigned int objnr = obj_to_index(cachep, slab_page(slab), objp); #if DEBUG unsigned int i; /* Verify double free bug */ - for (i = page->active; i < cachep->num; i++) { - if (get_free_obj(page, i) == objnr) { + for (i = slab->active; i < cachep->num; i++) { + if (get_free_obj(slab, i) == objnr) { pr_err("slab: double free detected in cache '%s', objp %px\n", cachep->name, objp); BUG(); } } #endif - page->active--; - if (!page->freelist) - page->freelist = objp + obj_offset(cachep); + slab->active--; + if (!slab->freelist) + slab->freelist = objp + obj_offset(cachep); - set_free_obj(page, page->active, objnr); + set_free_obj(slab, slab->active, objnr); } /* @@ -2551,26 +2550,26 @@ static 
void slab_put_obj(struct kmem_cache *cachep, * for the slab allocator to be able to lookup the cache and slab of a * virtual address for kfree, ksize, and slab debugging. */ -static void slab_map_pages(struct kmem_cache *cache, struct page *page, +static void slab_map_pages(struct kmem_cache *cache, struct slab *slab, void *freelist) { - page->slab_cache = cache; - page->freelist = freelist; + slab->slab_cache = cache; + slab->freelist = freelist; } /* * Grow (by 1) the number of slabs within a cache. This is called by * kmem_cache_alloc() when there are no active objs left in a cache. */ -static struct page *cache_grow_begin(struct kmem_cache *cachep, +static struct slab *cache_grow_begin(struct kmem_cache *cachep, gfp_t flags, int nodeid) { void *freelist; size_t offset; gfp_t local_flags; - int page_node; + int slab_node; struct kmem_cache_node *n; - struct page *page; + struct slab *slab; /* * Be lazy and only check for valid flags here, keeping it out of the @@ -2590,12 +2589,12 @@ static struct page *cache_grow_begin(struct kmem_cache *cachep, * Get mem for the objs. Attempt to allocate a physical page from * 'nodeid'. */ - page = kmem_getpages(cachep, local_flags, nodeid); - if (!page) + slab = kmem_getpages(cachep, local_flags, nodeid); + if (!slab) goto failed; - page_node = page_to_nid(page); - n = get_node(cachep, page_node); + slab_node = slab_nid(slab); + n = get_node(cachep, slab_node); /* Get colour for the slab, and cal the next value. */ n->colour_next++; @@ -2610,57 +2609,57 @@ static struct page *cache_grow_begin(struct kmem_cache *cachep, /* * Call kasan_poison_slab() before calling alloc_slabmgmt(), so - * page_address() in the latter returns a non-tagged pointer, + * slab_address() in the latter returns a non-tagged pointer, * as it should be for slab pages. */ - kasan_poison_slab(page); + kasan_poison_slab(slab_page(slab)); /* Get slab management. 
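The colour handling just above (n->colour_next together with the cache's colour_off) is SLAB's cache colouring: each newly grown slab places its objects at a slightly different starting offset so objects from different slabs spread across cache lines. A rough userspace sketch of the cycling, with illustrative values and a simplified computation that may not match slab.c exactly:

#include <stdio.h>

struct toy_cache {
	unsigned int colour;		/* number of distinct colours */
	unsigned int colour_off;	/* bytes per colour step */
};

struct toy_node {
	unsigned int colour_next;
};

static unsigned int next_colour_offset(const struct toy_cache *c, struct toy_node *n)
{
	unsigned int offset = n->colour_next++;

	if (n->colour_next >= c->colour)
		n->colour_next = 0;	/* wrap and start reusing colours */
	return offset * c->colour_off;
}

int main(void)
{
	struct toy_cache cache = { .colour = 4, .colour_off = 64 };
	struct toy_node node = { .colour_next = 0 };
	int i;

	for (i = 0; i < 6; i++)
		printf("slab %d starts at byte offset %u\n", i,
		       next_colour_offset(&cache, &node));
	return 0;	/* prints 0, 64, 128, 192, 0, 64 */
}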
*/ - freelist = alloc_slabmgmt(cachep, page, offset, - local_flags & ~GFP_CONSTRAINT_MASK, page_node); + freelist = alloc_slabmgmt(cachep, slab, offset, + local_flags & ~GFP_CONSTRAINT_MASK, slab_node); if (OFF_SLAB(cachep) && !freelist) goto opps1; - slab_map_pages(cachep, page, freelist); + slab_map_pages(cachep, slab, freelist); - cache_init_objs(cachep, page); + cache_init_objs(cachep, slab); if (gfpflags_allow_blocking(local_flags)) local_irq_disable(); - return page; + return slab; opps1: - kmem_freepages(cachep, page); + kmem_freepages(cachep, slab); failed: if (gfpflags_allow_blocking(local_flags)) local_irq_disable(); return NULL; } -static void cache_grow_end(struct kmem_cache *cachep, struct page *page) +static void cache_grow_end(struct kmem_cache *cachep, struct slab *slab) { struct kmem_cache_node *n; void *list = NULL; check_irq_off(); - if (!page) + if (!slab) return; - INIT_LIST_HEAD(&page->slab_list); - n = get_node(cachep, page_to_nid(page)); + INIT_LIST_HEAD(&slab->slab_list); + n = get_node(cachep, slab_nid(slab)); spin_lock(&n->list_lock); n->total_slabs++; - if (!page->active) { - list_add_tail(&page->slab_list, &n->slabs_free); + if (!slab->active) { + list_add_tail(&slab->slab_list, &n->slabs_free); n->free_slabs++; } else - fixup_slab_list(cachep, n, page, &list); + fixup_slab_list(cachep, n, slab, &list); STATS_INC_GROWN(cachep); - n->free_objects += cachep->num - page->active; + n->free_objects += cachep->num - slab->active; spin_unlock(&n->list_lock); fixup_objfreelist_debug(cachep, &list); @@ -2708,13 +2707,13 @@ static void *cache_free_debugcheck(struct kmem_cache *cachep, void *objp, unsigned long caller) { unsigned int objnr; - struct page *page; + struct slab *slab; BUG_ON(virt_to_cache(objp) != cachep); objp -= obj_offset(cachep); kfree_debugcheck(objp); - page = virt_to_head_page(objp); + slab = virt_to_slab(objp); if (cachep->flags & SLAB_RED_ZONE) { verify_redzone_free(cachep, objp); @@ -2724,10 +2723,10 @@ static void *cache_free_debugcheck(struct kmem_cache *cachep, void *objp, if (cachep->flags & SLAB_STORE_USER) *dbg_userword(cachep, objp) = (void *)caller; - objnr = obj_to_index(cachep, page, objp); + objnr = obj_to_index(cachep, slab_page(slab), objp); BUG_ON(objnr >= cachep->num); - BUG_ON(objp != index_to_obj(cachep, page, objnr)); + BUG_ON(objp != index_to_obj(cachep, slab, objnr)); if (cachep->flags & SLAB_POISON) { poison_obj(cachep, objp, POISON_FREE); @@ -2757,97 +2756,97 @@ static inline void fixup_objfreelist_debug(struct kmem_cache *cachep, } static inline void fixup_slab_list(struct kmem_cache *cachep, - struct kmem_cache_node *n, struct page *page, + struct kmem_cache_node *n, struct slab *slab, void **list) { - /* move slabp to correct slabp list: */ - list_del(&page->slab_list); - if (page->active == cachep->num) { - list_add(&page->slab_list, &n->slabs_full); + /* move slab to correct slab list: */ + list_del(&slab->slab_list); + if (slab->active == cachep->num) { + list_add(&slab->slab_list, &n->slabs_full); if (OBJFREELIST_SLAB(cachep)) { #if DEBUG /* Poisoning will be done without holding the lock */ if (cachep->flags & SLAB_POISON) { - void **objp = page->freelist; + void **objp = slab->freelist; *objp = *list; *list = objp; } #endif - page->freelist = NULL; + slab->freelist = NULL; } } else - list_add(&page->slab_list, &n->slabs_partial); + list_add(&slab->slab_list, &n->slabs_partial); } /* Try to find non-pfmemalloc slab if needed */ -static noinline struct page *get_valid_first_slab(struct kmem_cache_node *n, - struct page 
*page, bool pfmemalloc) +static noinline struct slab *get_valid_first_slab(struct kmem_cache_node *n, + struct slab *slab, bool pfmemalloc) { - if (!page) + if (!slab) return NULL; if (pfmemalloc) - return page; + return slab; - if (!PageSlabPfmemalloc(page)) - return page; + if (!slab_test_pfmemalloc(slab)) + return slab; /* No need to keep pfmemalloc slab if we have enough free objects */ if (n->free_objects > n->free_limit) { - ClearPageSlabPfmemalloc(page); - return page; + slab_clear_pfmemalloc(slab); + return slab; } /* Move pfmemalloc slab to the end of list to speed up next search */ - list_del(&page->slab_list); - if (!page->active) { - list_add_tail(&page->slab_list, &n->slabs_free); + list_del(&slab->slab_list); + if (!slab->active) { + list_add_tail(&slab->slab_list, &n->slabs_free); n->free_slabs++; } else - list_add_tail(&page->slab_list, &n->slabs_partial); + list_add_tail(&slab->slab_list, &n->slabs_partial); - list_for_each_entry(page, &n->slabs_partial, slab_list) { - if (!PageSlabPfmemalloc(page)) - return page; + list_for_each_entry(slab, &n->slabs_partial, slab_list) { + if (!slab_test_pfmemalloc(slab)) + return slab; } n->free_touched = 1; - list_for_each_entry(page, &n->slabs_free, slab_list) { - if (!PageSlabPfmemalloc(page)) { + list_for_each_entry(slab, &n->slabs_free, slab_list) { + if (!slab_test_pfmemalloc(slab)) { n->free_slabs--; - return page; + return slab; } } return NULL; } -static struct page *get_first_slab(struct kmem_cache_node *n, bool pfmemalloc) +static struct slab *get_first_slab(struct kmem_cache_node *n, bool pfmemalloc) { - struct page *page; + struct slab *slab; assert_spin_locked(&n->list_lock); - page = list_first_entry_or_null(&n->slabs_partial, struct page, + slab = list_first_entry_or_null(&n->slabs_partial, struct slab, slab_list); - if (!page) { + if (!slab) { n->free_touched = 1; - page = list_first_entry_or_null(&n->slabs_free, struct page, + slab = list_first_entry_or_null(&n->slabs_free, struct slab, slab_list); - if (page) + if (slab) n->free_slabs--; } if (sk_memalloc_socks()) - page = get_valid_first_slab(n, page, pfmemalloc); + slab = get_valid_first_slab(n, slab, pfmemalloc); - return page; + return slab; } static noinline void *cache_alloc_pfmemalloc(struct kmem_cache *cachep, struct kmem_cache_node *n, gfp_t flags) { - struct page *page; + struct slab *slab; void *obj; void *list = NULL; @@ -2855,16 +2854,16 @@ static noinline void *cache_alloc_pfmemalloc(struct kmem_cache *cachep, return NULL; spin_lock(&n->list_lock); - page = get_first_slab(n, true); - if (!page) { + slab = get_first_slab(n, true); + if (!slab) { spin_unlock(&n->list_lock); return NULL; } - obj = slab_get_obj(cachep, page); + obj = slab_get_obj(cachep, slab); n->free_objects--; - fixup_slab_list(cachep, n, page, &list); + fixup_slab_list(cachep, n, slab, &list); spin_unlock(&n->list_lock); fixup_objfreelist_debug(cachep, &list); @@ -2877,20 +2876,20 @@ static noinline void *cache_alloc_pfmemalloc(struct kmem_cache *cachep, * or cache_grow_end() for new slab */ static __always_inline int alloc_block(struct kmem_cache *cachep, - struct array_cache *ac, struct page *page, int batchcount) + struct array_cache *ac, struct slab *slab, int batchcount) { /* * There must be at least one object available for * allocation. 
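get_first_slab() and fixup_slab_list() above maintain the three per-node lists; which list a slab belongs on is determined entirely by its ->active count. A tiny userspace sketch of that invariant (names and numbers are illustrative only, not kernel code):

#include <stdio.h>

enum slab_list_id { SLABS_FREE, SLABS_PARTIAL, SLABS_FULL };

static enum slab_list_id list_for_slab(unsigned int active, unsigned int num)
{
	if (active == 0)
		return SLABS_FREE;	/* no objects handed out */
	if (active == num)
		return SLABS_FULL;	/* every object handed out */
	return SLABS_PARTIAL;
}

int main(void)
{
	static const char *names[] = { "slabs_free", "slabs_partial", "slabs_full" };
	unsigned int num = 4;	/* objects per slab, illustrative */
	unsigned int active;

	for (active = 0; active <= num; active++)
		printf("active=%u/%u -> %s\n", active, num,
		       names[list_for_slab(active, num)]);
	return 0;
}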
*/ - BUG_ON(page->active >= cachep->num); + BUG_ON(slab->active >= cachep->num); - while (page->active < cachep->num && batchcount--) { + while (slab->active < cachep->num && batchcount--) { STATS_INC_ALLOCED(cachep); STATS_INC_ACTIVE(cachep); STATS_SET_HIGH(cachep); - ac->entry[ac->avail++] = slab_get_obj(cachep, page); + ac->entry[ac->avail++] = slab_get_obj(cachep, slab); } return batchcount; @@ -2903,7 +2902,7 @@ static void *cache_alloc_refill(struct kmem_cache *cachep, gfp_t flags) struct array_cache *ac, *shared; int node; void *list = NULL; - struct page *page; + struct slab *slab; check_irq_off(); node = numa_mem_id(); @@ -2936,14 +2935,14 @@ static void *cache_alloc_refill(struct kmem_cache *cachep, gfp_t flags) while (batchcount > 0) { /* Get slab alloc is to come from. */ - page = get_first_slab(n, false); - if (!page) + slab = get_first_slab(n, false); + if (!slab) goto must_grow; check_spinlock_acquired(cachep); - batchcount = alloc_block(cachep, ac, page, batchcount); - fixup_slab_list(cachep, n, page, &list); + batchcount = alloc_block(cachep, ac, slab, batchcount); + fixup_slab_list(cachep, n, slab, &list); } must_grow: @@ -2962,16 +2961,16 @@ static void *cache_alloc_refill(struct kmem_cache *cachep, gfp_t flags) return obj; } - page = cache_grow_begin(cachep, gfp_exact_node(flags), node); + slab = cache_grow_begin(cachep, gfp_exact_node(flags), node); /* * cache_grow_begin() can reenable interrupts, * then ac could change. */ ac = cpu_cache_get(cachep); - if (!ac->avail && page) - alloc_block(cachep, ac, page, batchcount); - cache_grow_end(cachep, page); + if (!ac->avail && slab) + alloc_block(cachep, ac, slab, batchcount); + cache_grow_end(cachep, slab); if (!ac->avail) return NULL; @@ -3101,7 +3100,7 @@ static void *fallback_alloc(struct kmem_cache *cache, gfp_t flags) struct zone *zone; enum zone_type highest_zoneidx = gfp_zone(flags); void *obj = NULL; - struct page *page; + struct slab *slab; int nid; unsigned int cpuset_mems_cookie; @@ -3137,10 +3136,10 @@ static void *fallback_alloc(struct kmem_cache *cache, gfp_t flags) * We may trigger various forms of reclaim on the allowed * set and go into memory reserves if necessary. 
*/ - page = cache_grow_begin(cache, flags, numa_mem_id()); - cache_grow_end(cache, page); - if (page) { - nid = page_to_nid(page); + slab = cache_grow_begin(cache, flags, numa_mem_id()); + cache_grow_end(cache, slab); + if (slab) { + nid = slab_nid(slab); obj = ____cache_alloc_node(cache, gfp_exact_node(flags), nid); @@ -3164,7 +3163,7 @@ static void *fallback_alloc(struct kmem_cache *cache, gfp_t flags) static void *____cache_alloc_node(struct kmem_cache *cachep, gfp_t flags, int nodeid) { - struct page *page; + struct slab *slab; struct kmem_cache_node *n; void *obj = NULL; void *list = NULL; @@ -3175,8 +3174,8 @@ static void *____cache_alloc_node(struct kmem_cache *cachep, gfp_t flags, check_irq_off(); spin_lock(&n->list_lock); - page = get_first_slab(n, false); - if (!page) + slab = get_first_slab(n, false); + if (!slab) goto must_grow; check_spinlock_acquired_node(cachep, nodeid); @@ -3185,12 +3184,12 @@ static void *____cache_alloc_node(struct kmem_cache *cachep, gfp_t flags, STATS_INC_ACTIVE(cachep); STATS_SET_HIGH(cachep); - BUG_ON(page->active == cachep->num); + BUG_ON(slab->active == cachep->num); - obj = slab_get_obj(cachep, page); + obj = slab_get_obj(cachep, slab); n->free_objects--; - fixup_slab_list(cachep, n, page, &list); + fixup_slab_list(cachep, n, slab, &list); spin_unlock(&n->list_lock); fixup_objfreelist_debug(cachep, &list); @@ -3198,12 +3197,12 @@ static void *____cache_alloc_node(struct kmem_cache *cachep, gfp_t flags, must_grow: spin_unlock(&n->list_lock); - page = cache_grow_begin(cachep, gfp_exact_node(flags), nodeid); - if (page) { + slab = cache_grow_begin(cachep, gfp_exact_node(flags), nodeid); + if (slab) { /* This slab isn't counted yet so don't update free_objects */ - obj = slab_get_obj(cachep, page); + obj = slab_get_obj(cachep, slab); } - cache_grow_end(cachep, page); + cache_grow_end(cachep, slab); return obj ? obj : fallback_alloc(cachep, flags); } @@ -3333,40 +3332,40 @@ static void free_block(struct kmem_cache *cachep, void **objpp, { int i; struct kmem_cache_node *n = get_node(cachep, node); - struct page *page; + struct slab *slab; n->free_objects += nr_objects; for (i = 0; i < nr_objects; i++) { void *objp; - struct page *page; + struct slab *slab; objp = objpp[i]; - page = virt_to_head_page(objp); - list_del(&page->slab_list); + slab = virt_to_slab(objp); + list_del(&slab->slab_list); check_spinlock_acquired_node(cachep, node); - slab_put_obj(cachep, page, objp); + slab_put_obj(cachep, slab, objp); STATS_DEC_ACTIVE(cachep); /* fixup slab chains */ - if (page->active == 0) { - list_add(&page->slab_list, &n->slabs_free); + if (slab->active == 0) { + list_add(&slab->slab_list, &n->slabs_free); n->free_slabs++; } else { /* Unconditionally move a slab to the end of the * partial list on free - maximum time for the * other objects to be freed, too. 
*/ - list_add_tail(&page->slab_list, &n->slabs_partial); + list_add_tail(&slab->slab_list, &n->slabs_partial); } } while (n->free_objects > n->free_limit && !list_empty(&n->slabs_free)) { n->free_objects -= cachep->num; - page = list_last_entry(&n->slabs_free, struct page, slab_list); - list_move(&page->slab_list, list); + slab = list_last_entry(&n->slabs_free, struct slab, slab_list); + list_move(&slab->slab_list, list); n->free_slabs--; n->total_slabs--; } @@ -3402,10 +3401,10 @@ static void cache_flusharray(struct kmem_cache *cachep, struct array_cache *ac) #if STATS { int i = 0; - struct page *page; + struct slab *slab; - list_for_each_entry(page, &n->slabs_free, slab_list) { - BUG_ON(page->active); + list_for_each_entry(slab, &n->slabs_free, slab_list) { + BUG_ON(slab->active); i++; } @@ -3481,10 +3480,10 @@ void ___cache_free(struct kmem_cache *cachep, void *objp, } if (sk_memalloc_socks()) { - struct page *page = virt_to_head_page(objp); + struct slab *slab = virt_to_slab(objp); - if (unlikely(PageSlabPfmemalloc(page))) { - cache_free_pfmemalloc(cachep, page, objp); + if (unlikely(slab_test_pfmemalloc(slab))) { + cache_free_pfmemalloc(cachep, slab, objp); return; } } @@ -3671,7 +3670,7 @@ void kmem_obj_info(struct kmem_obj_info *kpp, void *object, struct slab *slab) kpp->kp_data_offset = obj_offset(cachep); slab = virt_to_slab(objp); objnr = obj_to_index(cachep, slab_page(slab), objp); - objp = index_to_obj(cachep, slab_page(slab), objnr); + objp = index_to_obj(cachep, slab, objnr); kpp->kp_objp = objp; if (DEBUG && cachep->flags & SLAB_STORE_USER) kpp->kp_ret = *dbg_userword(cachep, objp); @@ -4199,7 +4198,7 @@ void __check_heap_object(const void *ptr, unsigned long n, if (is_kfence_address(ptr)) offset = ptr - kfence_object_start(ptr); else - offset = ptr - index_to_obj(cachep, slab_page(slab), objnr) - obj_offset(cachep); + offset = ptr - index_to_obj(cachep, slab, objnr) - obj_offset(cachep); /* Allow address range falling entirely within usercopy region. 
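The usercopy check that follows compares the copy window, expressed as an offset into the object, against the cache's useroffset/usersize region. A simplified userspace equivalent (not the kernel's exact expression) for reference:

#include <stdbool.h>
#include <stdio.h>

struct toy_cache {
	unsigned long useroffset;
	unsigned long usersize;
};

static bool usercopy_allowed(const struct toy_cache *c,
			     unsigned long offset, unsigned long n)
{
	return offset >= c->useroffset &&
	       offset - c->useroffset <= c->usersize &&
	       n <= c->usersize - (offset - c->useroffset);
}

int main(void)
{
	struct toy_cache c = { .useroffset = 16, .usersize = 32 };

	printf("%d\n", usercopy_allowed(&c, 16, 32));	/* 1: exactly the window */
	printf("%d\n", usercopy_allowed(&c, 24, 8));	/* 1: inside the window  */
	printf("%d\n", usercopy_allowed(&c, 8, 8));	/* 0: before the window  */
	printf("%d\n", usercopy_allowed(&c, 40, 16));	/* 0: runs past the end  */
	return 0;
}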
*/ if (offset >= cachep->useroffset && diff --git a/mm/slab.h b/mm/slab.h index 53fe3a746973..7631e274a840 100644 --- a/mm/slab.h +++ b/mm/slab.h @@ -489,39 +489,27 @@ static inline struct kmem_cache *virt_to_cache(const void *obj) return slab->slab_cache; } -static __always_inline void account_slab_page(struct page *page, int order, +static __always_inline void account_slab(struct slab *slab, int order, struct kmem_cache *s, gfp_t gfp) { if (memcg_kmem_enabled() && (s->flags & SLAB_ACCOUNT)) - memcg_alloc_page_obj_cgroups(page, s, gfp, true); + memcg_alloc_page_obj_cgroups(slab_page(slab), s, gfp, true); - mod_node_page_state(page_pgdat(page), cache_vmstat_idx(s), + mod_node_page_state(slab_pgdat(slab), cache_vmstat_idx(s), PAGE_SIZE << order); } -static __always_inline void unaccount_slab_page(struct page *page, int order, +static __always_inline void unaccount_slab(struct slab *slab, int order, struct kmem_cache *s) { if (memcg_kmem_enabled()) - memcg_free_page_obj_cgroups(page); + memcg_free_page_obj_cgroups(slab_page(slab)); - mod_node_page_state(page_pgdat(page), cache_vmstat_idx(s), + mod_node_page_state(slab_pgdat(slab), cache_vmstat_idx(s), -(PAGE_SIZE << order)); } -static __always_inline void account_slab(struct slab *slab, int order, - struct kmem_cache *s, gfp_t gfp) -{ - account_slab_page(slab_page(slab), order, s, gfp); -} - -static __always_inline void unaccount_slab(struct slab *slab, int order, - struct kmem_cache *s) -{ - unaccount_slab_page(slab_page(slab), order, s); -} - static inline struct kmem_cache *cache_from_obj(struct kmem_cache *s, void *x) { struct kmem_cache *cachep; From patchwork Mon Oct 4 13:46:43 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Matthew Wilcox X-Patchwork-Id: 12534251 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 7B791C433EF for ; Mon, 4 Oct 2021 14:56:22 +0000 (UTC) Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by mail.kernel.org (Postfix) with ESMTP id 2FA1361175 for ; Mon, 4 Oct 2021 14:56:22 +0000 (UTC) DMARC-Filter: OpenDMARC Filter v1.4.1 mail.kernel.org 2FA1361175 Authentication-Results: mail.kernel.org; dmarc=none (p=none dis=none) header.from=infradead.org Authentication-Results: mail.kernel.org; spf=pass smtp.mailfrom=kvack.org Received: by kanga.kvack.org (Postfix) id C242F940054; Mon, 4 Oct 2021 10:56:21 -0400 (EDT) Received: by kanga.kvack.org (Postfix, from userid 40) id BD3B994000B; Mon, 4 Oct 2021 10:56:21 -0400 (EDT) X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id AC295940054; Mon, 4 Oct 2021 10:56:21 -0400 (EDT) X-Delivered-To: linux-mm@kvack.org Received: from forelay.hostedemail.com (smtprelay0070.hostedemail.com [216.40.44.70]) by kanga.kvack.org (Postfix) with ESMTP id 9D61E94000B for ; Mon, 4 Oct 2021 10:56:21 -0400 (EDT) Received: from smtpin30.hostedemail.com (10.5.19.251.rfc1918.com [10.5.19.251]) by forelay05.hostedemail.com (Postfix) with ESMTP id 61CA8181D3043 for ; Mon, 4 Oct 2021 14:56:21 +0000 (UTC) X-FDA: 78659055762.30.A452180 Received: from casper.infradead.org (casper.infradead.org [90.155.50.34]) by imf09.hostedemail.com (Postfix) with ESMTP id 136EC3002E74 for ; Mon, 4 Oct 2021 14:56:20 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; 
d=infradead.org; s=casper.20170209; h=Content-Transfer-Encoding:MIME-Version: References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Sender:Reply-To: Content-Type:Content-ID:Content-Description; bh=DzertxoMJbHD5EvugZfzOqtrEn4HUjqp/EGYE9blCy8=; b=E7F5tDlfp2KOFJRneGhjq7kQbi pUOJNsfHjGO0opy/46aBvtRPqn6oTTSwGFcSKgTmzxOHwu3w7b8BvSuwKJdXMtcX+qyo5cpuPd6XQ wp/OdVBWR/s+gjRycQ++KtyLMfmEGqwNVN3HK7U6rrjuxF2nT9a0S1jQspSOipCxiZwEweGfNiuSW Xnd1nRSQhsCnCtp1SHwBCjHEJmYhp+GF59nAxg6DCdozxO/34BuQtTlfb/ZNdGlGPicXZ2UhtBW7X /zTgaPF9CF6uaCQH+VHwE6rBkRNHllcz3lnDHKzHBI5s1pN5fRr5fpc+UOAg3VwgCQ92OVGrztVcj gkVYVqRA==; Received: from willy by casper.infradead.org with local (Exim 4.94.2 #2 (Red Hat Linux)) id 1mXPMS-00H1re-Q4; Mon, 04 Oct 2021 14:54:49 +0000 From: "Matthew Wilcox (Oracle)" To: linux-mm@kvack.org Cc: "Matthew Wilcox (Oracle)" Subject: [PATCH 55/62] mm: Convert slob to use struct slab Date: Mon, 4 Oct 2021 14:46:43 +0100 Message-Id: <20211004134650.4031813-56-willy@infradead.org> X-Mailer: git-send-email 2.31.1 In-Reply-To: <20211004134650.4031813-1-willy@infradead.org> References: <20211004134650.4031813-1-willy@infradead.org> MIME-Version: 1.0 X-Rspamd-Queue-Id: 136EC3002E74 X-Stat-Signature: hwn7te6azjeq1qcozb4946h8wbj7iu3k Authentication-Results: imf09.hostedemail.com; dkim=pass header.d=infradead.org header.s=casper.20170209 header.b=E7F5tDlf; spf=none (imf09.hostedemail.com: domain of willy@infradead.org has no SPF policy when checking 90.155.50.34) smtp.mailfrom=willy@infradead.org; dmarc=none X-Rspamd-Server: rspam06 X-HE-Tag: 1633359380-638521 X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: Use struct slab throughout the slob allocator. Signed-off-by: Matthew Wilcox (Oracle) --- mm/slab.h | 15 +++++++++++++++ mm/slob.c | 30 +++++++++++++++--------------- 2 files changed, 30 insertions(+), 15 deletions(-) diff --git a/mm/slab.h b/mm/slab.h index 7631e274a840..5eabc9352bbf 100644 --- a/mm/slab.h +++ b/mm/slab.h @@ -43,6 +43,21 @@ static inline void __slab_clear_pfmemalloc(struct slab *slab) __clear_bit(PG_pfmemalloc, &slab->flags); } +static inline bool slab_test_free(const struct slab *slab) +{ + return test_bit(PG_slob_free, &slab->flags); +} + +static inline void __slab_set_free(struct slab *slab) +{ + __set_bit(PG_slob_free, &slab->flags); +} + +static inline void __slab_clear_free(struct slab *slab) +{ + __clear_bit(PG_slob_free, &slab->flags); +} + static inline void *slab_address(const struct slab *slab) { return page_address(slab_page(slab)); diff --git a/mm/slob.c b/mm/slob.c index 8cede39054fc..be5c9c472bbb 100644 --- a/mm/slob.c +++ b/mm/slob.c @@ -105,21 +105,21 @@ static LIST_HEAD(free_slob_large); /* * slob_page_free: true for pages on free_slob_pages list. 
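The slab_test_free()/__slab_set_free()/__slab_clear_free() helpers added above are thin wrappers around a single bit in slab->flags (PG_slob_free). A userspace model of the same idea, with an arbitrary stand-in bit position rather than the kernel's:

#include <stdbool.h>
#include <stdio.h>

#define PG_SLOB_FREE	7UL	/* illustrative bit position only */

struct toy_slab {
	unsigned long flags;
};

static bool slab_test_free(const struct toy_slab *s)
{
	return s->flags & (1UL << PG_SLOB_FREE);
}

static void slab_set_free(struct toy_slab *s)
{
	s->flags |= 1UL << PG_SLOB_FREE;
}

static void slab_clear_free(struct toy_slab *s)
{
	s->flags &= ~(1UL << PG_SLOB_FREE);
}

int main(void)
{
	struct toy_slab s = { .flags = 0 };

	slab_set_free(&s);
	printf("on a free list? %d\n", slab_test_free(&s));	/* 1 */
	slab_clear_free(&s);
	printf("on a free list? %d\n", slab_test_free(&s));	/* 0 */
	return 0;
}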
*/ -static inline int slob_page_free(struct page *sp) +static inline int slob_page_free(struct slab *sp) { - return PageSlobFree(sp); + return slab_test_free(sp); } -static void set_slob_page_free(struct page *sp, struct list_head *list) +static void set_slob_page_free(struct slab *sp, struct list_head *list) { list_add(&sp->slab_list, list); - __SetPageSlobFree(sp); + __slab_set_free(sp); } -static inline void clear_slob_page_free(struct page *sp) +static inline void clear_slob_page_free(struct slab *sp) { list_del(&sp->slab_list); - __ClearPageSlobFree(sp); + __slab_clear_free(sp); } #define SLOB_UNIT sizeof(slob_t) @@ -234,7 +234,7 @@ static void slob_free_pages(void *b, int order) * freelist, in this case @page_removed_from_list will be set to * true (set to false otherwise). */ -static void *slob_page_alloc(struct page *sp, size_t size, int align, +static void *slob_page_alloc(struct slab *sp, size_t size, int align, int align_offset, bool *page_removed_from_list) { slob_t *prev, *cur, *aligned = NULL; @@ -301,7 +301,7 @@ static void *slob_page_alloc(struct page *sp, size_t size, int align, static void *slob_alloc(size_t size, gfp_t gfp, int align, int node, int align_offset) { - struct page *sp; + struct slab *sp; struct list_head *slob_list; slob_t *b = NULL; unsigned long flags; @@ -323,7 +323,7 @@ static void *slob_alloc(size_t size, gfp_t gfp, int align, int node, * If there's a node specification, search for a partial * page with a matching node id in the freelist. */ - if (node != NUMA_NO_NODE && page_to_nid(sp) != node) + if (node != NUMA_NO_NODE && slab_nid(sp) != node) continue; #endif /* Enough room on this page? */ @@ -358,8 +358,8 @@ static void *slob_alloc(size_t size, gfp_t gfp, int align, int node, b = slob_new_pages(gfp & ~__GFP_ZERO, 0, node); if (!b) return NULL; - sp = virt_to_page(b); - __SetPageSlab(sp); + sp = virt_to_slab(b); + __SetPageSlab(slab_page(sp)); spin_lock_irqsave(&slob_lock, flags); sp->units = SLOB_UNITS(PAGE_SIZE); @@ -381,7 +381,7 @@ static void *slob_alloc(size_t size, gfp_t gfp, int align, int node, */ static void slob_free(void *block, int size) { - struct page *sp; + struct slab *sp; slob_t *prev, *next, *b = (slob_t *)block; slobidx_t units; unsigned long flags; @@ -391,7 +391,7 @@ static void slob_free(void *block, int size) return; BUG_ON(!size); - sp = virt_to_page(block); + sp = virt_to_slab(block); units = SLOB_UNITS(size); spin_lock_irqsave(&slob_lock, flags); @@ -401,8 +401,8 @@ static void slob_free(void *block, int size) if (slob_page_free(sp)) clear_slob_page_free(sp); spin_unlock_irqrestore(&slob_lock, flags); - __ClearPageSlab(sp); - page_mapcount_reset(sp); + __ClearPageSlab(slab_page(sp)); + page_mapcount_reset(slab_page(sp)); slob_free_pages(b, 0); return; } From patchwork Mon Oct 4 13:46:44 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Matthew Wilcox X-Patchwork-Id: 12534253 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 72B08C433F5 for ; Mon, 4 Oct 2021 14:57:34 +0000 (UTC) Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by mail.kernel.org (Postfix) with ESMTP id 1304D61186 for ; Mon, 4 Oct 2021 14:57:34 +0000 (UTC) DMARC-Filter: OpenDMARC Filter v1.4.1 mail.kernel.org 1304D61186 Authentication-Results: mail.kernel.org; dmarc=none (p=none dis=none) 
header.from=infradead.org Authentication-Results: mail.kernel.org; spf=pass smtp.mailfrom=kvack.org Received: by kanga.kvack.org (Postfix) id A4DCD940056; Mon, 4 Oct 2021 10:57:33 -0400 (EDT) Received: by kanga.kvack.org (Postfix, from userid 40) id 9FD8894000B; Mon, 4 Oct 2021 10:57:33 -0400 (EDT) X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id 8ECDB940056; Mon, 4 Oct 2021 10:57:33 -0400 (EDT) X-Delivered-To: linux-mm@kvack.org Received: from forelay.hostedemail.com (smtprelay0089.hostedemail.com [216.40.44.89]) by kanga.kvack.org (Postfix) with ESMTP id 8104B94000B for ; Mon, 4 Oct 2021 10:57:33 -0400 (EDT) Received: from smtpin16.hostedemail.com (10.5.19.251.rfc1918.com [10.5.19.251]) by forelay04.hostedemail.com (Postfix) with ESMTP id 27CA327DD8 for ; Mon, 4 Oct 2021 14:57:33 +0000 (UTC) X-FDA: 78659058786.16.52DBC2F Received: from casper.infradead.org (casper.infradead.org [90.155.50.34]) by imf26.hostedemail.com (Postfix) with ESMTP id D9DBD20061CB for ; Mon, 4 Oct 2021 14:57:32 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=infradead.org; s=casper.20170209; h=Content-Transfer-Encoding:MIME-Version: References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Sender:Reply-To: Content-Type:Content-ID:Content-Description; bh=L7XbtYBRFHqbQAfsnTuRU8rzMVBdIuXtm2jFCwGgkoo=; b=bTydWZiKgvEJxq+sGzqYnTWqAp auk+SKVy+hTsY1xoFYPoa6cTv4uhh1jP3E1voky3kapdmF695BS03zqrgwm+vRk17+oKGrwEycO3C iCfINr4Fi/IOig/EnA5nJBhdL9pFdedi2CWQKugQPAC/7WMChhUCJssN426pGWim0TqaHDu5krqIV Of2s7514AnFfMhIa3hWa3CY8a2xNpwZB3DtjJmdJYA0PPJnyZ67vFP+jN475zOkFIrulfZp1bB4mU P3w2FCfluypniksEk2/8YWUqiF2uYSVbNdUDsjNCVI1dckAOuEu+w1ai/i3Lkm9vujnlUfowsGEcN 0zk4UExg==; Received: from willy by casper.infradead.org with local (Exim 4.94.2 #2 (Red Hat Linux)) id 1mXPNG-00H1uy-UT; Mon, 04 Oct 2021 14:55:52 +0000 From: "Matthew Wilcox (Oracle)" To: linux-mm@kvack.org Cc: "Matthew Wilcox (Oracle)" Subject: [PATCH 56/62] mm: Convert slub to use struct slab Date: Mon, 4 Oct 2021 14:46:44 +0100 Message-Id: <20211004134650.4031813-57-willy@infradead.org> X-Mailer: git-send-email 2.31.1 In-Reply-To: <20211004134650.4031813-1-willy@infradead.org> References: <20211004134650.4031813-1-willy@infradead.org> MIME-Version: 1.0 X-Rspamd-Server: rspam05 X-Rspamd-Queue-Id: D9DBD20061CB X-Stat-Signature: xhh4s9k3uzbm191y9ys11ogmyckr56db Authentication-Results: imf26.hostedemail.com; dkim=pass header.d=infradead.org header.s=casper.20170209 header.b=bTydWZiK; dmarc=none; spf=none (imf26.hostedemail.com: domain of willy@infradead.org has no SPF policy when checking 90.155.50.34) smtp.mailfrom=willy@infradead.org X-HE-Tag: 1633359452-954207 X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: Remaining bits & pieces. Signed-off-by: Matthew Wilcox (Oracle) --- mm/slub.c | 29 ++++++++++++++++------------- 1 file changed, 16 insertions(+), 13 deletions(-) diff --git a/mm/slub.c b/mm/slub.c index 229fc56809c2..51ead3838fc1 100644 --- a/mm/slub.c +++ b/mm/slub.c @@ -64,19 +64,19 @@ * * The slab_lock is only used for debugging and on arches that do not * have the ability to do a cmpxchg_double. It only protects: - * A. slab->freelist -> List of object free in a page + * A. slab->freelist -> List of object free in a slab * B. slab->inuse -> Number of objects in use - * C. slab->objects -> Number of objects in page + * C. slab->objects -> Number of objects in slab * D. 
slab->frozen -> frozen state * * Frozen slabs * * If a slab is frozen then it is exempt from list management. It is not * on any list except per cpu partial list. The processor that froze the - * slab is the one who can perform list operations on the page. Other + * slab is the one who can perform list operations on the slab. Other * processors may put objects onto the freelist but the processor that * froze the slab is the only one that can retrieve the objects from the - * page's freelist. + * slab's freelist. * * list_lock * @@ -135,7 +135,7 @@ * minimal so we rely on the page allocators per cpu caches for * fast frees and allocs. * - * page->frozen The slab is frozen and exempt from list processing. + * slab->frozen The slab is frozen and exempt from list processing. * This means that the slab is dedicated to a purpose * such as satisfying allocations for a specific * processor. Objects may be freed in the slab while @@ -250,7 +250,7 @@ static inline bool kmem_cache_has_cpu_partial(struct kmem_cache *s) #define OO_SHIFT 16 #define OO_MASK ((1 << OO_SHIFT) - 1) -#define MAX_OBJS_PER_PAGE 32767 /* since page.objects is u15 */ +#define MAX_OBJS_PER_PAGE 32767 /* since slab.objects is u15 */ /* Internal SLUB flags */ /* Poison object */ @@ -1753,14 +1753,21 @@ static inline struct slab *alloc_slab(struct kmem_cache *s, gfp_t flags, int node, struct kmem_cache_order_objects oo) { struct page *page; + struct slab *slab; unsigned int order = oo_order(oo); if (node == NUMA_NO_NODE) page = alloc_pages(flags, order); else page = __alloc_pages_node(node, flags, order); + if (!page) + return NULL; - return (struct slab *)page; + __SetPageSlab(page); + slab = (struct slab *)page; + if (page_is_pfmemalloc(page)) + slab_set_pfmemalloc(slab); + return slab; } #ifdef CONFIG_SLAB_FREELIST_RANDOM @@ -1781,7 +1788,7 @@ static int init_cache_random_seq(struct kmem_cache *s) return err; } - /* Transform to an offset on the set of pages */ + /* Transform to an offset on the set of slabs */ if (s->random_seq) { unsigned int i; @@ -1911,10 +1918,6 @@ static struct slab *allocate_slab(struct kmem_cache *s, gfp_t flags, int node) account_slab(slab, oo_order(oo), s, flags); slab->slab_cache = s; - __SetPageSlab(slab_page(slab)); - if (page_is_pfmemalloc(slab_page(slab))) - slab_set_pfmemalloc(slab); - kasan_poison_slab(slab_page(slab)); start = slab_address(slab); @@ -3494,7 +3497,7 @@ static inline void free_nonslab_page(struct page *page, void *object) { unsigned int order = compound_order(page); - VM_BUG_ON_PAGE(!PageCompound(page), page); + VM_BUG_ON_PAGE(!PageHead(page), page); kfree_hook(object); mod_lruvec_page_state(page, NR_SLAB_UNRECLAIMABLE_B, -(PAGE_SIZE << order)); __free_pages(page, order); From patchwork Mon Oct 4 13:46:45 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Matthew Wilcox X-Patchwork-Id: 12534255 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id E4C85C433F5 for ; Mon, 4 Oct 2021 14:58:04 +0000 (UTC) Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by mail.kernel.org (Postfix) with ESMTP id 7D383611C0 for ; Mon, 4 Oct 2021 14:58:04 +0000 (UTC) DMARC-Filter: OpenDMARC Filter v1.4.1 mail.kernel.org 7D383611C0 Authentication-Results: mail.kernel.org; dmarc=none (p=none dis=none) header.from=infradead.org 
Authentication-Results: mail.kernel.org; spf=pass smtp.mailfrom=kvack.org Received: by kanga.kvack.org (Postfix) id 23789940057; Mon, 4 Oct 2021 10:58:04 -0400 (EDT) Received: by kanga.kvack.org (Postfix, from userid 40) id 1E78094000B; Mon, 4 Oct 2021 10:58:04 -0400 (EDT) X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id 0AFAA940057; Mon, 4 Oct 2021 10:58:04 -0400 (EDT) X-Delivered-To: linux-mm@kvack.org Received: from forelay.hostedemail.com (smtprelay0160.hostedemail.com [216.40.44.160]) by kanga.kvack.org (Postfix) with ESMTP id ED99594000B for ; Mon, 4 Oct 2021 10:58:03 -0400 (EDT) Received: from smtpin37.hostedemail.com (10.5.19.251.rfc1918.com [10.5.19.251]) by forelay05.hostedemail.com (Postfix) with ESMTP id 8C524182C1615 for ; Mon, 4 Oct 2021 14:58:03 +0000 (UTC) X-FDA: 78659060046.37.A3BAAED Received: from casper.infradead.org (casper.infradead.org [90.155.50.34]) by imf11.hostedemail.com (Postfix) with ESMTP id 07040F001D30 for ; Mon, 4 Oct 2021 14:58:02 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=infradead.org; s=casper.20170209; h=Content-Transfer-Encoding:MIME-Version: References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Sender:Reply-To: Content-Type:Content-ID:Content-Description; bh=R5ynLC6ZNEshVCWiLkxY4+ieqsnRVreKIz0ksK/c6YY=; b=Ju3TdA1ZOWlaZlDWScwMVcEfVT QshXLfLsT45TMy0RhbgC12PfZKNlhiUvGrhCqXxmjAA/p4mIXLXdmHv+RQBKZR2sZz9brfAdXcThl /xrPn5YHrw3FfIzh54M9AHReFgBGXLZZxNpjqQJ0o8jSAIMcDN2PIk2LZaIphM81Bp0vBRqI6ns7y kRpe7YI2wMXTFqve/NTp9di1DHiGkmZ38fr8CwSbiZwQ1yrgtbkODLdt3TuWUmLd5cyeH5lVr29+z uFH7nSlD2cMuoNC64xeslKi6pqwoyNigZN4hyhDJ/XbYtGUlRPqU2TJa/6NA5ADryWcD3XMSYAgOl Q+022giw==; Received: from willy by casper.infradead.org with local (Exim 4.94.2 #2 (Red Hat Linux)) id 1mXPO7-00H1xK-Se; Mon, 04 Oct 2021 14:56:33 +0000 From: "Matthew Wilcox (Oracle)" To: linux-mm@kvack.org Cc: "Matthew Wilcox (Oracle)" Subject: [PATCH 57/62] memcg: Convert object cgroups from struct page to struct slab Date: Mon, 4 Oct 2021 14:46:45 +0100 Message-Id: <20211004134650.4031813-58-willy@infradead.org> X-Mailer: git-send-email 2.31.1 In-Reply-To: <20211004134650.4031813-1-willy@infradead.org> References: <20211004134650.4031813-1-willy@infradead.org> MIME-Version: 1.0 Authentication-Results: imf11.hostedemail.com; dkim=pass header.d=infradead.org header.s=casper.20170209 header.b=Ju3TdA1Z; spf=none (imf11.hostedemail.com: domain of willy@infradead.org has no SPF policy when checking 90.155.50.34) smtp.mailfrom=willy@infradead.org; dmarc=none X-Rspamd-Server: rspam04 X-Rspamd-Queue-Id: 07040F001D30 X-Stat-Signature: hkhcsiix9c4y9hkbfew4cf91oqapoif4 X-HE-Tag: 1633359482-304642 X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: Now that slab and slub are converted to use struct slab throughout, convert the memcg infrastructure that they use. There is a comment in here that I would appreciate being cleared up before this patch is merged. 
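For readers following the conversion below: slab->memcg_data packs a pointer to the per-object obj_cgroup vector together with type flags in its low bits, and slab_objcgs()/slab_objcgs_check() mask those flags off before dereferencing. A userspace sketch of that encoding, with illustrative flag values (the real definitions live in memcontrol.h):

#include <assert.h>
#include <stdio.h>
#include <stdlib.h>

#define MEMCG_DATA_OBJCGS	0x1UL	/* illustrative flag values */
#define MEMCG_DATA_KMEM		0x2UL
#define MEMCG_DATA_FLAGS_MASK	(MEMCG_DATA_OBJCGS | MEMCG_DATA_KMEM)

struct obj_cgroup {
	int dummy;
};

struct toy_slab {
	unsigned long memcg_data;
};

static struct obj_cgroup **slab_objcgs(const struct toy_slab *slab)
{
	unsigned long memcg_data = slab->memcg_data;

	/* Mirrors the _check() variant: no vector set, or not an OBJCGS word. */
	if (!memcg_data || !(memcg_data & MEMCG_DATA_OBJCGS))
		return NULL;
	return (struct obj_cgroup **)(memcg_data & ~MEMCG_DATA_FLAGS_MASK);
}

int main(void)
{
	/* One obj_cgroup pointer per object in the slab (8 here). */
	struct obj_cgroup **vec = calloc(8, sizeof(*vec));
	struct toy_slab slab = { .memcg_data = (unsigned long)vec | MEMCG_DATA_OBJCGS };

	assert(slab_objcgs(&slab) == vec);
	puts("objcg vector decoded from memcg_data");
	free(vec);
	return 0;
}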
Signed-off-by: Matthew Wilcox (Oracle) --- include/linux/memcontrol.h | 34 +++++++++++++-------------- include/linux/slab_def.h | 10 ++++---- include/linux/slub_def.h | 10 ++++---- mm/kasan/common.c | 2 +- mm/memcontrol.c | 33 +++++++++++++------------- mm/slab.c | 10 ++++---- mm/slab.h | 47 +++++++++++++++++++------------------- mm/slub.c | 2 +- 8 files changed, 74 insertions(+), 74 deletions(-) diff --git a/include/linux/memcontrol.h b/include/linux/memcontrol.h index 3096c9a0ee01..3ddc7a980fda 100644 --- a/include/linux/memcontrol.h +++ b/include/linux/memcontrol.h @@ -537,41 +537,41 @@ static inline bool PageMemcgKmem(struct page *page) } /* - * page_objcgs - get the object cgroups vector associated with a page - * @page: a pointer to the page struct + * slab_objcgs - get the object cgroups vector associated with a page + * @slab: a pointer to the slab struct * - * Returns a pointer to the object cgroups vector associated with the page, - * or NULL. This function assumes that the page is known to have an + * Returns a pointer to the object cgroups vector associated with the slab, + * or NULL. This function assumes that the slab is known to have an * associated object cgroups vector. It's not safe to call this function * against pages, which might have an associated memory cgroup: e.g. * kernel stack pages. */ -static inline struct obj_cgroup **page_objcgs(struct page *page) +static inline struct obj_cgroup **slab_objcgs(struct slab *slab) { - unsigned long memcg_data = READ_ONCE(page->memcg_data); + unsigned long memcg_data = READ_ONCE(slab->memcg_data); - VM_BUG_ON_PAGE(memcg_data && !(memcg_data & MEMCG_DATA_OBJCGS), page); - VM_BUG_ON_PAGE(memcg_data & MEMCG_DATA_KMEM, page); + VM_BUG_ON_PAGE(memcg_data && !(memcg_data & MEMCG_DATA_OBJCGS), slab_page(slab)); + VM_BUG_ON_PAGE(memcg_data & MEMCG_DATA_KMEM, slab_page(slab)); return (struct obj_cgroup **)(memcg_data & ~MEMCG_DATA_FLAGS_MASK); } /* - * page_objcgs_check - get the object cgroups vector associated with a page - * @page: a pointer to the page struct + * slab_objcgs_check - get the object cgroups vector associated with a page + * @slab: a pointer to the slab struct * - * Returns a pointer to the object cgroups vector associated with the page, - * or NULL. This function is safe to use if the page can be directly associated + * Returns a pointer to the object cgroups vector associated with the slab, + * or NULL. This function is safe to use if the slab can be directly associated * with a memory cgroup. 
*/ -static inline struct obj_cgroup **page_objcgs_check(struct page *page) +static inline struct obj_cgroup **slab_objcgs_check(struct slab *slab) { - unsigned long memcg_data = READ_ONCE(page->memcg_data); + unsigned long memcg_data = READ_ONCE(slab->memcg_data); if (!memcg_data || !(memcg_data & MEMCG_DATA_OBJCGS)) return NULL; - VM_BUG_ON_PAGE(memcg_data & MEMCG_DATA_KMEM, page); + VM_BUG_ON_PAGE(memcg_data & MEMCG_DATA_KMEM, slab_page(slab)); return (struct obj_cgroup **)(memcg_data & ~MEMCG_DATA_FLAGS_MASK); } @@ -582,12 +582,12 @@ static inline bool PageMemcgKmem(struct page *page) return false; } -static inline struct obj_cgroup **page_objcgs(struct page *page) +static inline struct obj_cgroup **slab_objcgs(struct slab *slab) { return NULL; } -static inline struct obj_cgroup **page_objcgs_check(struct page *page) +static inline struct obj_cgroup **slab_objcgs_check(struct slab *slab) { return NULL; } diff --git a/include/linux/slab_def.h b/include/linux/slab_def.h index 3aa5e1e73ab6..f81a41f9d5d1 100644 --- a/include/linux/slab_def.h +++ b/include/linux/slab_def.h @@ -106,16 +106,16 @@ static inline void *nearest_obj(struct kmem_cache *cache, struct page *page, * reciprocal_divide(offset, cache->reciprocal_buffer_size) */ static inline unsigned int obj_to_index(const struct kmem_cache *cache, - const struct page *page, void *obj) + const struct slab *slab, void *obj) { - u32 offset = (obj - page->s_mem); + u32 offset = (obj - slab->s_mem); return reciprocal_divide(offset, cache->reciprocal_buffer_size); } -static inline int objs_per_slab_page(const struct kmem_cache *cache, - const struct page *page) +static inline int objs_per_slab(const struct kmem_cache *cache, + const struct slab *slab) { - if (is_kfence_address(page_address(page))) + if (is_kfence_address(slab_address(slab))) return 1; return cache->num; } diff --git a/include/linux/slub_def.h b/include/linux/slub_def.h index 63eae033d713..994a60da2f2e 100644 --- a/include/linux/slub_def.h +++ b/include/linux/slub_def.h @@ -187,16 +187,16 @@ static inline unsigned int __obj_to_index(const struct kmem_cache *cache, } static inline unsigned int obj_to_index(const struct kmem_cache *cache, - const struct page *page, void *obj) + const struct slab *slab, void *obj) { if (is_kfence_address(obj)) return 0; - return __obj_to_index(cache, page_address(page), obj); + return __obj_to_index(cache, slab_address(slab), obj); } -static inline int objs_per_slab_page(const struct kmem_cache *cache, - const struct page *page) +static inline int objs_per_slab(const struct kmem_cache *cache, + const struct slab *slab) { - return page->objects; + return slab->objects; } #endif /* _LINUX_SLUB_DEF_H */ diff --git a/mm/kasan/common.c b/mm/kasan/common.c index 41779ad109cd..f3972af7fa1b 100644 --- a/mm/kasan/common.c +++ b/mm/kasan/common.c @@ -298,7 +298,7 @@ static inline u8 assign_tag(struct kmem_cache *cache, /* For caches that either have a constructor or SLAB_TYPESAFE_BY_RCU: */ #ifdef CONFIG_SLAB /* For SLAB assign tags based on the object index in the freelist. 
*/ - return (u8)obj_to_index(cache, virt_to_head_page(object), (void *)object); + return (u8)obj_to_index(cache, virt_to_slab(object), (void *)object); #else /* * For SLUB assign a random tag during slab creation, otherwise reuse diff --git a/mm/memcontrol.c b/mm/memcontrol.c index 6da5020a8656..fb15325549c1 100644 --- a/mm/memcontrol.c +++ b/mm/memcontrol.c @@ -2770,16 +2770,16 @@ static struct mem_cgroup *get_mem_cgroup_from_objcg(struct obj_cgroup *objcg) */ #define OBJCGS_CLEAR_MASK (__GFP_DMA | __GFP_RECLAIMABLE | __GFP_ACCOUNT) -int memcg_alloc_page_obj_cgroups(struct page *page, struct kmem_cache *s, - gfp_t gfp, bool new_page) +int memcg_alloc_slab_cgroups(struct slab *slab, struct kmem_cache *s, + gfp_t gfp, bool new_page) { - unsigned int objects = objs_per_slab_page(s, page); + unsigned int objects = objs_per_slab(s, slab); unsigned long memcg_data; void *vec; gfp &= ~OBJCGS_CLEAR_MASK; vec = kcalloc_node(objects, sizeof(struct obj_cgroup *), gfp, - page_to_nid(page)); + slab_nid(slab)); if (!vec) return -ENOMEM; @@ -2790,10 +2790,10 @@ int memcg_alloc_page_obj_cgroups(struct page *page, struct kmem_cache *s, * it's memcg_data, no synchronization is required and * memcg_data can be simply assigned. */ - page->memcg_data = memcg_data; - } else if (cmpxchg(&page->memcg_data, 0, memcg_data)) { + slab->memcg_data = memcg_data; + } else if (cmpxchg(&slab->memcg_data, 0, memcg_data)) { /* - * If the slab page is already in use, somebody can allocate + * If the slab is already in use, somebody can allocate * and assign obj_cgroups in parallel. In this case the existing * objcg vector should be reused. */ @@ -2819,38 +2819,39 @@ int memcg_alloc_page_obj_cgroups(struct page *page, struct kmem_cache *s, */ struct mem_cgroup *mem_cgroup_from_obj(void *p) { - struct page *page; + struct slab *slab; if (mem_cgroup_disabled()) return NULL; - page = virt_to_head_page(p); + slab = virt_to_slab(p); /* * Slab objects are accounted individually, not per-page. * Memcg membership data for each individual object is saved in - * the page->obj_cgroups. + * the slab->obj_cgroups. */ - if (page_objcgs_check(page)) { + if (slab_objcgs_check(slab)) { struct obj_cgroup *objcg; unsigned int off; - off = obj_to_index(page->slab_cache, page, p); - objcg = page_objcgs(page)[off]; + off = obj_to_index(slab->slab_cache, slab, p); + objcg = slab_objcgs(slab)[off]; if (objcg) return obj_cgroup_memcg(objcg); return NULL; } + /* I am pretty sure this could just be 'return NULL' */ /* - * page_memcg_check() is used here, because page_has_obj_cgroups() + * page_memcg_check() is used here, because slab_has_obj_cgroups() * check above could fail because the object cgroups vector wasn't set * at that moment, but it can be set concurrently. - * page_memcg_check(page) will guarantee that a proper memory + * page_memcg_check() will guarantee that a proper memory * cgroup pointer or NULL will be returned. 
*/ - return page_memcg_check(page); + return page_memcg_check((struct page *)slab); } __always_inline struct obj_cgroup *get_obj_cgroup_from_current(void) diff --git a/mm/slab.c b/mm/slab.c index 29dc09e784b8..3e9cd3ecc9ab 100644 --- a/mm/slab.c +++ b/mm/slab.c @@ -1555,7 +1555,7 @@ static void check_poison_obj(struct kmem_cache *cachep, void *objp) struct slab *slab = virt_to_slab(objp); unsigned int objnr; - objnr = obj_to_index(cachep, slab_page(slab), objp); + objnr = obj_to_index(cachep, slab, objp); if (objnr) { objp = index_to_obj(cachep, slab, objnr - 1); realobj = (char *)objp + obj_offset(cachep); @@ -2525,7 +2525,7 @@ static void *slab_get_obj(struct kmem_cache *cachep, struct slab *slab) static void slab_put_obj(struct kmem_cache *cachep, struct slab *slab, void *objp) { - unsigned int objnr = obj_to_index(cachep, slab_page(slab), objp); + unsigned int objnr = obj_to_index(cachep, slab, objp); #if DEBUG unsigned int i; @@ -2723,7 +2723,7 @@ static void *cache_free_debugcheck(struct kmem_cache *cachep, void *objp, if (cachep->flags & SLAB_STORE_USER) *dbg_userword(cachep, objp) = (void *)caller; - objnr = obj_to_index(cachep, slab_page(slab), objp); + objnr = obj_to_index(cachep, slab, objp); BUG_ON(objnr >= cachep->num); BUG_ON(objp != index_to_obj(cachep, slab, objnr)); @@ -3669,7 +3669,7 @@ void kmem_obj_info(struct kmem_obj_info *kpp, void *object, struct slab *slab) objp = object - obj_offset(cachep); kpp->kp_data_offset = obj_offset(cachep); slab = virt_to_slab(objp); - objnr = obj_to_index(cachep, slab_page(slab), objp); + objnr = obj_to_index(cachep, slab, objp); objp = index_to_obj(cachep, slab, objnr); kpp->kp_objp = objp; if (DEBUG && cachep->flags & SLAB_STORE_USER) @@ -4191,7 +4191,7 @@ void __check_heap_object(const void *ptr, unsigned long n, /* Find and validate object. */ cachep = slab->slab_cache; - objnr = obj_to_index(cachep, slab_page(slab), (void *)ptr); + objnr = obj_to_index(cachep, slab, (void *)ptr); BUG_ON(objnr >= cachep->num); /* Find offset within object. 
*/ diff --git a/mm/slab.h b/mm/slab.h index 5eabc9352bbf..ac9dcdc1bfa9 100644 --- a/mm/slab.h +++ b/mm/slab.h @@ -333,15 +333,15 @@ static inline bool kmem_cache_debug_flags(struct kmem_cache *s, slab_flags_t fla } #ifdef CONFIG_MEMCG_KMEM -int memcg_alloc_page_obj_cgroups(struct page *page, struct kmem_cache *s, +int memcg_alloc_slab_cgroups(struct slab *slab, struct kmem_cache *s, gfp_t gfp, bool new_page); void mod_objcg_state(struct obj_cgroup *objcg, struct pglist_data *pgdat, enum node_stat_item idx, int nr); -static inline void memcg_free_page_obj_cgroups(struct page *page) +static inline void memcg_free_slab_cgroups(struct slab *slab) { - kfree(page_objcgs(page)); - page->memcg_data = 0; + kfree(slab_objcgs(slab)); + slab->memcg_data = 0; } static inline size_t obj_full_size(struct kmem_cache *s) @@ -386,7 +386,7 @@ static inline void memcg_slab_post_alloc_hook(struct kmem_cache *s, gfp_t flags, size_t size, void **p) { - struct page *page; + struct slab *slab; unsigned long off; size_t i; @@ -395,19 +395,18 @@ static inline void memcg_slab_post_alloc_hook(struct kmem_cache *s, for (i = 0; i < size; i++) { if (likely(p[i])) { - page = virt_to_head_page(p[i]); + slab = virt_to_slab(p[i]); - if (!page_objcgs(page) && - memcg_alloc_page_obj_cgroups(page, s, flags, - false)) { + if (!slab_objcgs(slab) && + memcg_alloc_slab_cgroups(slab, s, flags, false)) { obj_cgroup_uncharge(objcg, obj_full_size(s)); continue; } - off = obj_to_index(s, page, p[i]); + off = obj_to_index(s, slab, p[i]); obj_cgroup_get(objcg); - page_objcgs(page)[off] = objcg; - mod_objcg_state(objcg, page_pgdat(page), + slab_objcgs(slab)[off] = objcg; + mod_objcg_state(objcg, slab_pgdat(slab), cache_vmstat_idx(s), obj_full_size(s)); } else { obj_cgroup_uncharge(objcg, obj_full_size(s)); @@ -422,7 +421,7 @@ static inline void memcg_slab_free_hook(struct kmem_cache *s_orig, struct kmem_cache *s; struct obj_cgroup **objcgs; struct obj_cgroup *objcg; - struct page *page; + struct slab *slab; unsigned int off; int i; @@ -433,24 +432,24 @@ static inline void memcg_slab_free_hook(struct kmem_cache *s_orig, if (unlikely(!p[i])) continue; - page = virt_to_head_page(p[i]); - objcgs = page_objcgs_check(page); + slab = virt_to_slab(p[i]); + objcgs = slab_objcgs_check(slab); if (!objcgs) continue; if (!s_orig) - s = page->slab_cache; + s = slab->slab_cache; else s = s_orig; - off = obj_to_index(s, page, p[i]); + off = obj_to_index(s, slab, p[i]); objcg = objcgs[off]; if (!objcg) continue; objcgs[off] = NULL; obj_cgroup_uncharge(objcg, obj_full_size(s)); - mod_objcg_state(objcg, page_pgdat(page), cache_vmstat_idx(s), + mod_objcg_state(objcg, slab_pgdat(slab), cache_vmstat_idx(s), -obj_full_size(s)); obj_cgroup_put(objcg); } @@ -462,14 +461,14 @@ static inline struct mem_cgroup *memcg_from_slab_obj(void *ptr) return NULL; } -static inline int memcg_alloc_page_obj_cgroups(struct page *page, - struct kmem_cache *s, gfp_t gfp, - bool new_page) +static inline int memcg_alloc_slab_cgroups(struct slab *slab, + struct kmem_cache *s, gfp_t gfp, + bool new_page) { return 0; } -static inline void memcg_free_page_obj_cgroups(struct page *page) +static inline void memcg_free_slab_cgroups(struct slab *slab) { } @@ -509,7 +508,7 @@ static __always_inline void account_slab(struct slab *slab, int order, gfp_t gfp) { if (memcg_kmem_enabled() && (s->flags & SLAB_ACCOUNT)) - memcg_alloc_page_obj_cgroups(slab_page(slab), s, gfp, true); + memcg_alloc_slab_cgroups(slab, s, gfp, true); mod_node_page_state(slab_pgdat(slab), cache_vmstat_idx(s), PAGE_SIZE << 
order); @@ -519,7 +518,7 @@ static __always_inline void unaccount_slab(struct slab *slab, int order, struct kmem_cache *s) { if (memcg_kmem_enabled()) - memcg_free_page_obj_cgroups(slab_page(slab)); + memcg_free_slab_cgroups(slab); mod_node_page_state(slab_pgdat(slab), cache_vmstat_idx(s), -(PAGE_SIZE << order)); diff --git a/mm/slub.c b/mm/slub.c index 51ead3838fc1..659b30afbb58 100644 --- a/mm/slub.c +++ b/mm/slub.c @@ -4294,7 +4294,7 @@ void kmem_obj_info(struct kmem_obj_info *kpp, void *object, struct slab *slab) #else objp = objp0; #endif - objnr = obj_to_index(s, slab_page(slab), objp); + objnr = obj_to_index(s, slab, objp); kpp->kp_data_offset = (unsigned long)((char *)objp0 - (char *)objp); objp = base + s->size * objnr; kpp->kp_objp = objp; From patchwork Mon Oct 4 13:46:46 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Matthew Wilcox X-Patchwork-Id: 12534257 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 744D4C433EF for ; Mon, 4 Oct 2021 14:59:29 +0000 (UTC) Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by mail.kernel.org (Postfix) with ESMTP id 099A9611CA for ; Mon, 4 Oct 2021 14:59:29 +0000 (UTC) DMARC-Filter: OpenDMARC Filter v1.4.1 mail.kernel.org 099A9611CA Authentication-Results: mail.kernel.org; dmarc=none (p=none dis=none) header.from=infradead.org Authentication-Results: mail.kernel.org; spf=pass smtp.mailfrom=kvack.org Received: by kanga.kvack.org (Postfix) id 67C74940058; Mon, 4 Oct 2021 10:59:28 -0400 (EDT) Received: by kanga.kvack.org (Postfix, from userid 40) id 6047394000B; Mon, 4 Oct 2021 10:59:28 -0400 (EDT) X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id 4A5DF940058; Mon, 4 Oct 2021 10:59:28 -0400 (EDT) X-Delivered-To: linux-mm@kvack.org Received: from forelay.hostedemail.com (smtprelay0225.hostedemail.com [216.40.44.225]) by kanga.kvack.org (Postfix) with ESMTP id 3743994000B for ; Mon, 4 Oct 2021 10:59:28 -0400 (EDT) Received: from smtpin24.hostedemail.com (10.5.19.251.rfc1918.com [10.5.19.251]) by forelay04.hostedemail.com (Postfix) with ESMTP id E894631E7E for ; Mon, 4 Oct 2021 14:59:27 +0000 (UTC) X-FDA: 78659063574.24.3722434 Received: from casper.infradead.org (casper.infradead.org [90.155.50.34]) by imf21.hostedemail.com (Postfix) with ESMTP id 8DD9DD03885E for ; Mon, 4 Oct 2021 14:59:27 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=infradead.org; s=casper.20170209; h=Content-Transfer-Encoding:MIME-Version: References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Sender:Reply-To: Content-Type:Content-ID:Content-Description; bh=s1TXas+N6RfOF2JRawS5DBEjkpGLTm/HmH2iNZr6p0g=; b=ud6Mhpm1E/+cjRwPX9ALraobgm x/KV4EdElHZ8WhXsQgK6pd2PuRhqkBv/j3EGi3tbStKqE8Jyf5piWEzmMHAOHIYt25EeurH9LZoCX ov9SlQFe/eLc3N6T/6IUDCWMNXLgjPwburHuprTOkl0vfwsyghibxpPhkxZQqESUaDRyD5P3laGxP wEWgQ6P6ngCezUpHRXOUYXidQ4dRoqZmn3UKRYtY2Es4islRoBE1RLmN8O31sSa/jArznqsIAxyUZ Ce2IbyTxupvSM3p9Sao/jneN5REB3AK2kb/B8UO9KMeyli9kXS4Ra4GmnKyZ6lrrXeRq/u0jZ7aGV lJnVhgEQ==; Received: from willy by casper.infradead.org with local (Exim 4.94.2 #2 (Red Hat Linux)) id 1mXPP3-00H259-4L; Mon, 04 Oct 2021 14:57:53 +0000 From: "Matthew Wilcox (Oracle)" To: linux-mm@kvack.org Cc: "Matthew Wilcox (Oracle)" Subject: [PATCH 58/62] mm/kasan: Convert to struct slab 
Date: Mon, 4 Oct 2021 14:46:46 +0100 Message-Id: <20211004134650.4031813-59-willy@infradead.org> X-Mailer: git-send-email 2.31.1 In-Reply-To: <20211004134650.4031813-1-willy@infradead.org> References: <20211004134650.4031813-1-willy@infradead.org> MIME-Version: 1.0 X-Rspamd-Server: rspam05 X-Rspamd-Queue-Id: 8DD9DD03885E X-Stat-Signature: j66ah93jkxyy51jzibi7tjw7h38rjdtg Authentication-Results: imf21.hostedemail.com; dkim=pass header.d=infradead.org header.s=casper.20170209 header.b=ud6Mhpm1; dmarc=none; spf=none (imf21.hostedemail.com: domain of willy@infradead.org has no SPF policy when checking 90.155.50.34) smtp.mailfrom=willy@infradead.org X-HE-Tag: 1633359567-117993 X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: This should all be split up and done better. Signed-off-by: Matthew Wilcox (Oracle) --- include/linux/kasan.h | 8 ++++---- include/linux/slab_def.h | 6 +++--- include/linux/slub_def.h | 8 ++++---- mm/kasan/common.c | 23 ++++++++++++----------- mm/kasan/generic.c | 8 ++++---- mm/kasan/kasan.h | 2 +- mm/kasan/quarantine.c | 2 +- mm/kasan/report.c | 16 ++++++++-------- mm/kasan/report_tags.c | 10 +++++----- mm/slab.c | 2 +- mm/slub.c | 2 +- 11 files changed, 44 insertions(+), 43 deletions(-) diff --git a/include/linux/kasan.h b/include/linux/kasan.h index dd874a1ee862..59c860295618 100644 --- a/include/linux/kasan.h +++ b/include/linux/kasan.h @@ -188,11 +188,11 @@ static __always_inline size_t kasan_metadata_size(struct kmem_cache *cache) return 0; } -void __kasan_poison_slab(struct page *page); -static __always_inline void kasan_poison_slab(struct page *page) +void __kasan_poison_slab(struct slab *slab); +static __always_inline void kasan_poison_slab(struct slab *slab) { if (kasan_enabled()) - __kasan_poison_slab(page); + __kasan_poison_slab(slab); } void __kasan_unpoison_object_data(struct kmem_cache *cache, void *object); @@ -317,7 +317,7 @@ static inline void kasan_cache_create(struct kmem_cache *cache, slab_flags_t *flags) {} static inline void kasan_cache_create_kmalloc(struct kmem_cache *cache) {} static inline size_t kasan_metadata_size(struct kmem_cache *cache) { return 0; } -static inline void kasan_poison_slab(struct page *page) {} +static inline void kasan_poison_slab(struct slab *slab) {} static inline void kasan_unpoison_object_data(struct kmem_cache *cache, void *object) {} static inline void kasan_poison_object_data(struct kmem_cache *cache, diff --git a/include/linux/slab_def.h b/include/linux/slab_def.h index f81a41f9d5d1..f1bfcb10f5e0 100644 --- a/include/linux/slab_def.h +++ b/include/linux/slab_def.h @@ -87,11 +87,11 @@ struct kmem_cache { struct kmem_cache_node *node[MAX_NUMNODES]; }; -static inline void *nearest_obj(struct kmem_cache *cache, struct page *page, +static inline void *nearest_obj(struct kmem_cache *cache, struct slab *slab, void *x) { - void *object = x - (x - page->s_mem) % cache->size; - void *last_object = page->s_mem + (cache->num - 1) * cache->size; + void *object = x - (x - slab->s_mem) % cache->size; + void *last_object = slab->s_mem + (cache->num - 1) * cache->size; if (unlikely(object > last_object)) return last_object; diff --git a/include/linux/slub_def.h b/include/linux/slub_def.h index 994a60da2f2e..4db01470a9e3 100644 --- a/include/linux/slub_def.h +++ b/include/linux/slub_def.h @@ -167,11 +167,11 @@ static inline void sysfs_slab_release(struct kmem_cache *s) void *fixup_red_left(struct kmem_cache *s, void *p); 
-static inline void *nearest_obj(struct kmem_cache *cache, struct page *page, +static inline void *nearest_obj(struct kmem_cache *cache, struct slab *slab, void *x) { - void *object = x - (x - page_address(page)) % cache->size; - void *last_object = page_address(page) + - (page->objects - 1) * cache->size; + void *object = x - (x - slab_address(slab)) % cache->size; + void *last_object = slab_address(slab) + + (slab->objects - 1) * cache->size; void *result = (unlikely(object > last_object)) ? last_object : object; result = fixup_red_left(cache, result); diff --git a/mm/kasan/common.c b/mm/kasan/common.c index f3972af7fa1b..85774174a437 100644 --- a/mm/kasan/common.c +++ b/mm/kasan/common.c @@ -247,8 +247,9 @@ struct kasan_free_meta *kasan_get_free_meta(struct kmem_cache *cache, } #endif -void __kasan_poison_slab(struct page *page) +void __kasan_poison_slab(struct slab *slab) { + struct page *page = slab_page(slab); unsigned long i; for (i = 0; i < compound_nr(page); i++) @@ -341,7 +342,7 @@ static inline bool ____kasan_slab_free(struct kmem_cache *cache, void *object, if (is_kfence_address(object)) return false; - if (unlikely(nearest_obj(cache, virt_to_head_page(object), object) != + if (unlikely(nearest_obj(cache, virt_to_slab(object), object) != object)) { kasan_report_invalid_free(tagged_object, ip); return true; @@ -401,9 +402,9 @@ void __kasan_kfree_large(void *ptr, unsigned long ip) void __kasan_slab_free_mempool(void *ptr, unsigned long ip) { - struct page *page; + struct slab *slab; - page = virt_to_head_page(ptr); + slab = virt_to_slab(ptr); /* * Even though this function is only called for kmem_cache_alloc and @@ -411,12 +412,12 @@ void __kasan_slab_free_mempool(void *ptr, unsigned long ip) * !PageSlab() when the size provided to kmalloc is larger than * KMALLOC_MAX_SIZE, and kmalloc falls back onto page_alloc. */ - if (unlikely(!PageSlab(page))) { + if (unlikely(!slab_test_cache(slab))) { if (____kasan_kfree_large(ptr, ip)) return; - kasan_poison(ptr, page_size(page), KASAN_FREE_PAGE, false); + kasan_poison(ptr, slab_size(slab), KASAN_FREE_PAGE, false); } else { - ____kasan_slab_free(page->slab_cache, ptr, ip, false, false); + ____kasan_slab_free(slab->slab_cache, ptr, ip, false, false); } } @@ -560,7 +561,7 @@ void * __must_check __kasan_kmalloc_large(const void *ptr, size_t size, void * __must_check __kasan_krealloc(const void *object, size_t size, gfp_t flags) { - struct page *page; + struct slab *slab; if (unlikely(object == ZERO_SIZE_PTR)) return (void *)object; @@ -572,13 +573,13 @@ void * __must_check __kasan_krealloc(const void *object, size_t size, gfp_t flag */ kasan_unpoison(object, size, false); - page = virt_to_head_page(object); + slab = virt_to_slab(object); /* Piggy-back on kmalloc() instrumentation to poison the redzone. 
*/ - if (unlikely(!PageSlab(page))) + if (unlikely(!slab_test_cache(slab))) return __kasan_kmalloc_large(object, size, flags); else - return ____kasan_kmalloc(page->slab_cache, object, size, flags); + return ____kasan_kmalloc(slab->slab_cache, object, size, flags); } bool __kasan_check_byte(const void *address, unsigned long ip) diff --git a/mm/kasan/generic.c b/mm/kasan/generic.c index c3f5ba7a294a..6153f85b90cb 100644 --- a/mm/kasan/generic.c +++ b/mm/kasan/generic.c @@ -330,16 +330,16 @@ DEFINE_ASAN_SET_SHADOW(f8); void kasan_record_aux_stack(void *addr) { - struct page *page = kasan_addr_to_page(addr); + struct slab *slab = kasan_addr_to_slab(addr); struct kmem_cache *cache; struct kasan_alloc_meta *alloc_meta; void *object; - if (is_kfence_address(addr) || !(page && PageSlab(page))) + if (is_kfence_address(addr) || !(slab && slab_test_cache(slab))) return; - cache = page->slab_cache; - object = nearest_obj(cache, page, addr); + cache = slab->slab_cache; + object = nearest_obj(cache, slab, addr); alloc_meta = kasan_get_alloc_meta(cache, object); if (!alloc_meta) return; diff --git a/mm/kasan/kasan.h b/mm/kasan/kasan.h index 8bf568a80eb8..8f9aca95db72 100644 --- a/mm/kasan/kasan.h +++ b/mm/kasan/kasan.h @@ -249,7 +249,7 @@ bool kasan_report(unsigned long addr, size_t size, bool is_write, unsigned long ip); void kasan_report_invalid_free(void *object, unsigned long ip); -struct page *kasan_addr_to_page(const void *addr); +struct slab *kasan_addr_to_slab(const void *addr); depot_stack_handle_t kasan_save_stack(gfp_t flags); void kasan_set_track(struct kasan_track *track, gfp_t flags); diff --git a/mm/kasan/quarantine.c b/mm/kasan/quarantine.c index d8ccff4c1275..587da8995f2d 100644 --- a/mm/kasan/quarantine.c +++ b/mm/kasan/quarantine.c @@ -117,7 +117,7 @@ static unsigned long quarantine_batch_size; static struct kmem_cache *qlink_to_cache(struct qlist_node *qlink) { - return virt_to_head_page(qlink)->slab_cache; + return virt_to_slab(qlink)->slab_cache; } static void *qlink_to_object(struct qlist_node *qlink, struct kmem_cache *cache) diff --git a/mm/kasan/report.c b/mm/kasan/report.c index 884a950c7026..49b58221755a 100644 --- a/mm/kasan/report.c +++ b/mm/kasan/report.c @@ -151,11 +151,11 @@ static void print_track(struct kasan_track *track, const char *prefix) } } -struct page *kasan_addr_to_page(const void *addr) +struct slab *kasan_addr_to_slab(const void *addr) { if ((addr >= (void *)PAGE_OFFSET) && (addr < high_memory)) - return virt_to_head_page(addr); + return virt_to_slab(addr); return NULL; } @@ -251,14 +251,14 @@ static inline bool init_task_stack_addr(const void *addr) static void print_address_description(void *addr, u8 tag) { - struct page *page = kasan_addr_to_page(addr); + struct slab *slab = kasan_addr_to_slab(addr); dump_stack_lvl(KERN_ERR); pr_err("\n"); - if (page && PageSlab(page)) { - struct kmem_cache *cache = page->slab_cache; - void *object = nearest_obj(cache, page, addr); + if (slab && slab_test_cache(slab)) { + struct kmem_cache *cache = slab->slab_cache; + void *object = nearest_obj(cache, slab, addr); describe_object(cache, object, addr, tag); } @@ -268,9 +268,9 @@ static void print_address_description(void *addr, u8 tag) pr_err(" %pS\n", addr); } - if (page) { + if (slab) { pr_err("The buggy address belongs to the page:\n"); - dump_page(page, "kasan: bad access detected"); + dump_page(slab_page(slab), "kasan: bad access detected"); } kasan_print_address_stack_frame(addr); diff --git a/mm/kasan/report_tags.c b/mm/kasan/report_tags.c index 
8a319fc16dab..16a3c55ce698 100644 --- a/mm/kasan/report_tags.c +++ b/mm/kasan/report_tags.c @@ -12,7 +12,7 @@ const char *kasan_get_bug_type(struct kasan_access_info *info) #ifdef CONFIG_KASAN_TAGS_IDENTIFY struct kasan_alloc_meta *alloc_meta; struct kmem_cache *cache; - struct page *page; + struct slab *slab; const void *addr; void *object; u8 tag; @@ -20,10 +20,10 @@ const char *kasan_get_bug_type(struct kasan_access_info *info) tag = get_tag(info->access_addr); addr = kasan_reset_tag(info->access_addr); - page = kasan_addr_to_page(addr); - if (page && PageSlab(page)) { - cache = page->slab_cache; - object = nearest_obj(cache, page, (void *)addr); + slab = kasan_addr_to_slab(addr); + if (slab && SlabAllocation(slab)) { + cache = slab->slab_cache; + object = nearest_obj(cache, slab, (void *)addr); alloc_meta = kasan_get_alloc_meta(cache, object); if (alloc_meta) { diff --git a/mm/slab.c b/mm/slab.c index 3e9cd3ecc9ab..8cbb6e91922e 100644 --- a/mm/slab.c +++ b/mm/slab.c @@ -2612,7 +2612,7 @@ static struct slab *cache_grow_begin(struct kmem_cache *cachep, * slab_address() in the latter returns a non-tagged pointer, * as it should be for slab pages. */ - kasan_poison_slab(slab_page(slab)); + kasan_poison_slab(slab); /* Get slab management. */ freelist = alloc_slabmgmt(cachep, slab, offset, diff --git a/mm/slub.c b/mm/slub.c index 659b30afbb58..998c1eefd205 100644 --- a/mm/slub.c +++ b/mm/slub.c @@ -1918,7 +1918,7 @@ static struct slab *allocate_slab(struct kmem_cache *s, gfp_t flags, int node) account_slab(slab, oo_order(oo), s, flags); slab->slab_cache = s; - kasan_poison_slab(slab_page(slab)); + kasan_poison_slab(slab); start = slab_address(slab); From patchwork Mon Oct 4 13:46:47 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Matthew Wilcox X-Patchwork-Id: 12534259 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id A9FCEC433EF for ; Mon, 4 Oct 2021 15:01:22 +0000 (UTC) Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by mail.kernel.org (Postfix) with ESMTP id 3CCC3611C0 for ; Mon, 4 Oct 2021 15:01:22 +0000 (UTC) DMARC-Filter: OpenDMARC Filter v1.4.1 mail.kernel.org 3CCC3611C0 Authentication-Results: mail.kernel.org; dmarc=none (p=none dis=none) header.from=infradead.org Authentication-Results: mail.kernel.org; spf=pass smtp.mailfrom=kvack.org Received: by kanga.kvack.org (Postfix) id C4713940059; Mon, 4 Oct 2021 11:01:21 -0400 (EDT) Received: by kanga.kvack.org (Postfix, from userid 40) id BF4AC94000B; Mon, 4 Oct 2021 11:01:21 -0400 (EDT) X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id ABD19940059; Mon, 4 Oct 2021 11:01:21 -0400 (EDT) X-Delivered-To: linux-mm@kvack.org Received: from forelay.hostedemail.com (smtprelay0204.hostedemail.com [216.40.44.204]) by kanga.kvack.org (Postfix) with ESMTP id 9B42994000B for ; Mon, 4 Oct 2021 11:01:21 -0400 (EDT) Received: from smtpin02.hostedemail.com (10.5.19.251.rfc1918.com [10.5.19.251]) by forelay04.hostedemail.com (Postfix) with ESMTP id 5B10E31E5A for ; Mon, 4 Oct 2021 15:01:21 +0000 (UTC) X-FDA: 78659068362.02.7D5F8DF Received: from casper.infradead.org (casper.infradead.org [90.155.50.34]) by imf13.hostedemail.com (Postfix) with ESMTP id 590E510393D8 for ; Mon, 4 Oct 2021 15:01:15 +0000 (UTC) DKIM-Signature: v=1; 
a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=infradead.org; s=casper.20170209; h=Content-Transfer-Encoding:MIME-Version: References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Sender:Reply-To: Content-Type:Content-ID:Content-Description; bh=u+vBceyo13JEcf0ozoosSbSNeRLOKgeq1n0urmoRMn0=; b=cSnHpjsFmezAxq5CQolfnk4mnm +4JN+rbctnOLjz36BnutTxx779Md/rpTzwzn04f3W75fla6Du+n2pCnWzmxqzYN4pU1jY4z8/nKFv NV82ItsIgLeOfuS5ZhB775iYB/P401uypxqGUVW8/4fJTycE4zlGdw5i3bWTHTSHZqnKXUDimjuVA Exond2QxnBXKur5ZTrfUrREepD8YTyF820zRpXzN9KMgJJ9RAYG4/kgN/19TBXaNEERS/RADOVtKQ gFQipMGDEIzyVc6YGKBGdlGOz+XkbH48qEcduKIwKoKRQ0Htugedp7ysrTD3fkO3sPYv1ctqZDf/u nNVLX6+w==; Received: from willy by casper.infradead.org with local (Exim 4.94.2 #2 (Red Hat Linux)) id 1mXPQd-00H2Al-3j; Mon, 04 Oct 2021 14:59:09 +0000 From: "Matthew Wilcox (Oracle)" To: linux-mm@kvack.org Cc: "Matthew Wilcox (Oracle)" Subject: [PATCH 59/62] zsmalloc: Stop using slab fields in struct page Date: Mon, 4 Oct 2021 14:46:47 +0100 Message-Id: <20211004134650.4031813-60-willy@infradead.org> X-Mailer: git-send-email 2.31.1 In-Reply-To: <20211004134650.4031813-1-willy@infradead.org> References: <20211004134650.4031813-1-willy@infradead.org> MIME-Version: 1.0 X-Rspamd-Server: rspam02 X-Rspamd-Queue-Id: 590E510393D8 X-Stat-Signature: zrtz4mdc9ggkkthn3jbmmdbr9ccrdqg5 Authentication-Results: imf13.hostedemail.com; dkim=pass header.d=infradead.org header.s=casper.20170209 header.b=cSnHpjsF; spf=none (imf13.hostedemail.com: domain of willy@infradead.org has no SPF policy when checking 90.155.50.34) smtp.mailfrom=willy@infradead.org; dmarc=none X-HE-Tag: 1633359675-724931 X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: The ->freelist and ->units members of struct page are for the use of slab only. I'm not particularly familiar with zsmalloc, so generate the same code by using page->index to store 'page' (page->index and page->freelist are at the same offset in struct page). This should be cleaned up properly at some point by somebody who is familiar with zsmalloc. Signed-off-by: Matthew Wilcox (Oracle) --- mm/zsmalloc.c | 18 +++++++++--------- 1 file changed, 9 insertions(+), 9 deletions(-) diff --git a/mm/zsmalloc.c b/mm/zsmalloc.c index 68e8831068f4..fccb28e5b6bb 100644 --- a/mm/zsmalloc.c +++ b/mm/zsmalloc.c @@ -17,10 +17,10 @@ * * Usage of struct page fields: * page->private: points to zspage - * page->freelist(index): links together all component pages of a zspage + * page->index: links together all component pages of a zspage * For the huge page, this is always 0, so we use this field * to store handle. 
- * page->units: first object offset in a subpage of zspage + * page->page_type: first object offset in a subpage of zspage * * Usage of struct page flags: * PG_private: identifies the first component page @@ -489,12 +489,12 @@ static inline struct page *get_first_page(struct zspage *zspage) static inline int get_first_obj_offset(struct page *page) { - return page->units; + return page->page_type; } static inline void set_first_obj_offset(struct page *page, int offset) { - page->units = offset; + page->page_type = offset; } static inline unsigned int get_freeobj(struct zspage *zspage) @@ -827,7 +827,7 @@ static struct page *get_next_page(struct page *page) if (unlikely(PageHugeObject(page))) return NULL; - return page->freelist; + return (struct page *)page->index; } /** @@ -901,7 +901,7 @@ static void reset_page(struct page *page) set_page_private(page, 0); page_mapcount_reset(page); ClearPageHugeObject(page); - page->freelist = NULL; + page->index = 0; } static int trylock_zspage(struct zspage *zspage) @@ -1027,7 +1027,7 @@ static void create_page_chain(struct size_class *class, struct zspage *zspage, /* * Allocate individual pages and link them together as: - * 1. all pages are linked together using page->freelist + * 1. all pages are linked together using page->index * 2. each sub-page point to zspage using page->private * * we set PG_private to identify the first page (i.e. no other sub-page @@ -1036,7 +1036,7 @@ static void create_page_chain(struct size_class *class, struct zspage *zspage, for (i = 0; i < nr_pages; i++) { page = pages[i]; set_page_private(page, (unsigned long)zspage); - page->freelist = NULL; + page->index = 0; if (i == 0) { zspage->first_page = page; SetPagePrivate(page); @@ -1044,7 +1044,7 @@ static void create_page_chain(struct size_class *class, struct zspage *zspage, class->pages_per_zspage == 1)) SetPageHugeObject(page); } else { - prev_page->freelist = page; + prev_page->index = (unsigned long)page; } prev_page = page; } From patchwork Mon Oct 4 13:46:48 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Matthew Wilcox X-Patchwork-Id: 12534261 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id B79C1C433EF for ; Mon, 4 Oct 2021 15:02:05 +0000 (UTC) Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by mail.kernel.org (Postfix) with ESMTP id 3E039611CA for ; Mon, 4 Oct 2021 15:02:05 +0000 (UTC) DMARC-Filter: OpenDMARC Filter v1.4.1 mail.kernel.org 3E039611CA Authentication-Results: mail.kernel.org; dmarc=none (p=none dis=none) header.from=infradead.org Authentication-Results: mail.kernel.org; spf=pass smtp.mailfrom=kvack.org Received: by kanga.kvack.org (Postfix) id D082B94005A; Mon, 4 Oct 2021 11:02:04 -0400 (EDT) Received: by kanga.kvack.org (Postfix, from userid 40) id CB7A794000B; Mon, 4 Oct 2021 11:02:04 -0400 (EDT) X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id BA72F94005A; Mon, 4 Oct 2021 11:02:04 -0400 (EDT) X-Delivered-To: linux-mm@kvack.org Received: from forelay.hostedemail.com (smtprelay0021.hostedemail.com [216.40.44.21]) by kanga.kvack.org (Postfix) with ESMTP id AD07894000B for ; Mon, 4 Oct 2021 11:02:04 -0400 (EDT) Received: from smtpin31.hostedemail.com (10.5.19.251.rfc1918.com [10.5.19.251]) by forelay04.hostedemail.com (Postfix) 
with ESMTP id 6D6902BC39 for ; Mon, 4 Oct 2021 15:02:04 +0000 (UTC) X-FDA: 78659070168.31.44F147A Received: from casper.infradead.org (casper.infradead.org [90.155.50.34]) by imf02.hostedemail.com (Postfix) with ESMTP id 15B7D7001718 for ; Mon, 4 Oct 2021 15:02:03 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=infradead.org; s=casper.20170209; h=Content-Transfer-Encoding:MIME-Version: References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Sender:Reply-To: Content-Type:Content-ID:Content-Description; bh=59s01WMgyDCSHBYY22mVOJcI96lL392acnupwVOiP60=; b=Rg7IgpZp1p0MEE/laY5zaMBe3g rfBG/5JPi2tQsVJ+98QR7MBuNLKQssf3xZKAXtyv3ivuCVaz7Q4p8EztmaAFBKDWm+kTIoiBWhpAY hrVmZaCTLNWK4scflVWASqGtKJAJCRSYasMO0u7nHFTlRODGdRs1EIJeJFWajSudyWOGP6Lm22P9A rtzo7m3tZYk0NiKgjpkqVFjYK1JufWcj+08ytlf0KrlDxhx5OyYAQjNisEBG/TCkGVti8GdwBb+TO /7IPubFXkVpWCh0Vm/dY/vbdiVieREawvEnU1/pjpmNj5u/VI0pcOKFz3kbsj3lgFD2tMcfUSLKj9 seCgSMIA==; Received: from willy by casper.infradead.org with local (Exim 4.94.2 #2 (Red Hat Linux)) id 1mXPRU-00H2H0-I9; Mon, 04 Oct 2021 15:00:32 +0000 From: "Matthew Wilcox (Oracle)" To: linux-mm@kvack.org Cc: "Matthew Wilcox (Oracle)" Subject: [PATCH 60/62] bootmem: Use page->index instead of page->freelist Date: Mon, 4 Oct 2021 14:46:48 +0100 Message-Id: <20211004134650.4031813-61-willy@infradead.org> X-Mailer: git-send-email 2.31.1 In-Reply-To: <20211004134650.4031813-1-willy@infradead.org> References: <20211004134650.4031813-1-willy@infradead.org> MIME-Version: 1.0 X-Rspamd-Server: rspam02 X-Rspamd-Queue-Id: 15B7D7001718 X-Stat-Signature: 31whxb43dmzi69rkm7yr8t3gdwjyhfu4 Authentication-Results: imf02.hostedemail.com; dkim=pass header.d=infradead.org header.s=casper.20170209 header.b=Rg7IgpZp; spf=none (imf02.hostedemail.com: domain of willy@infradead.org has no SPF policy when checking 90.155.50.34) smtp.mailfrom=willy@infradead.org; dmarc=none X-HE-Tag: 1633359723-103288 X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: page->freelist is for the use of slab. Using page->index is the same set of bits as page->freelist, and by using an integer instead of a pointer, we can avoid casts. 
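The cast-avoidance point is easiest to see side by side. Here is a minimal sketch with struct page reduced to the two relevant fields; SECTION_INFO is a stand-in for a bootmem type value, not the kernel's constant.

/*
 * Storing the bootmem "magic" type in an integer field (modeled on
 * page->index) needs no casts; the old pointer field (modeled on
 * page->freelist) needs one on every store and load.
 */
#include <assert.h>

#define SECTION_INFO 0xaa55UL	/* modeled bootmem type value */

struct page {
	unsigned long index;	/* integer field: no casts needed */
	void *freelist;		/* pointer field: casts on both sides */
};

int main(void)
{
	struct page page = { 0 };

	/* Old style, as in get_page_bootmem() before this patch: */
	page.freelist = (void *)SECTION_INFO;
	assert((unsigned long)page.freelist == SECTION_INFO);

	/* New style, as after this patch: */
	page.index = SECTION_INFO;
	assert(page.index == SECTION_INFO);

	return 0;
}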
Signed-off-by: Matthew Wilcox (Oracle) --- arch/x86/mm/init_64.c | 2 +- include/linux/bootmem_info.h | 2 +- mm/bootmem_info.c | 7 +++---- mm/sparse.c | 2 +- 4 files changed, 6 insertions(+), 7 deletions(-) diff --git a/arch/x86/mm/init_64.c b/arch/x86/mm/init_64.c index 36098226a957..96d34ebb20a9 100644 --- a/arch/x86/mm/init_64.c +++ b/arch/x86/mm/init_64.c @@ -981,7 +981,7 @@ static void __meminit free_pagetable(struct page *page, int order) if (PageReserved(page)) { __ClearPageReserved(page); - magic = (unsigned long)page->freelist; + magic = page->index; if (magic == SECTION_INFO || magic == MIX_SECTION_INFO) { while (nr_pages--) put_page_bootmem(page++); diff --git a/include/linux/bootmem_info.h b/include/linux/bootmem_info.h index 2bc8b1f69c93..cc35d010fa94 100644 --- a/include/linux/bootmem_info.h +++ b/include/linux/bootmem_info.h @@ -30,7 +30,7 @@ void put_page_bootmem(struct page *page); */ static inline void free_bootmem_page(struct page *page) { - unsigned long magic = (unsigned long)page->freelist; + unsigned long magic = page->index; /* * The reserve_bootmem_region sets the reserved flag on bootmem diff --git a/mm/bootmem_info.c b/mm/bootmem_info.c index f03f42f426f6..f18a631e7479 100644 --- a/mm/bootmem_info.c +++ b/mm/bootmem_info.c @@ -15,7 +15,7 @@ void get_page_bootmem(unsigned long info, struct page *page, unsigned long type) { - page->freelist = (void *)type; + page->index = type; SetPagePrivate(page); set_page_private(page, info); page_ref_inc(page); @@ -23,14 +23,13 @@ void get_page_bootmem(unsigned long info, struct page *page, unsigned long type) void put_page_bootmem(struct page *page) { - unsigned long type; + unsigned long type = page->index; - type = (unsigned long) page->freelist; BUG_ON(type < MEMORY_HOTPLUG_MIN_BOOTMEM_TYPE || type > MEMORY_HOTPLUG_MAX_BOOTMEM_TYPE); if (page_ref_dec_return(page) == 1) { - page->freelist = NULL; + page->index = 0; ClearPagePrivate(page); set_page_private(page, 0); INIT_LIST_HEAD(&page->lru); diff --git a/mm/sparse.c b/mm/sparse.c index 818bdb84be99..d531c533ee53 100644 --- a/mm/sparse.c +++ b/mm/sparse.c @@ -722,7 +722,7 @@ static void free_map_bootmem(struct page *memmap) >> PAGE_SHIFT; for (i = 0; i < nr_pages; i++, page++) { - magic = (unsigned long) page->freelist; + magic = page->index; BUG_ON(magic == NODE_INFO); From patchwork Mon Oct 4 13:46:49 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Matthew Wilcox X-Patchwork-Id: 12534263 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 6A5F9C433EF for ; Mon, 4 Oct 2021 15:03:04 +0000 (UTC) Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by mail.kernel.org (Postfix) with ESMTP id E0999610FC for ; Mon, 4 Oct 2021 15:03:03 +0000 (UTC) DMARC-Filter: OpenDMARC Filter v1.4.1 mail.kernel.org E0999610FC Authentication-Results: mail.kernel.org; dmarc=none (p=none dis=none) header.from=infradead.org Authentication-Results: mail.kernel.org; spf=pass smtp.mailfrom=kvack.org Received: by kanga.kvack.org (Postfix) id 7D6AD94005B; Mon, 4 Oct 2021 11:03:03 -0400 (EDT) Received: by kanga.kvack.org (Postfix, from userid 40) id 786AE94000B; Mon, 4 Oct 2021 11:03:03 -0400 (EDT) X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id 64E9F94005B; Mon, 4 Oct 2021 11:03:03 -0400 
(EDT) X-Delivered-To: linux-mm@kvack.org Received: from forelay.hostedemail.com (smtprelay0195.hostedemail.com [216.40.44.195]) by kanga.kvack.org (Postfix) with ESMTP id 5373A94000B for ; Mon, 4 Oct 2021 11:03:03 -0400 (EDT) Received: from smtpin33.hostedemail.com (10.5.19.251.rfc1918.com [10.5.19.251]) by forelay03.hostedemail.com (Postfix) with ESMTP id 0450C8249980 for ; Mon, 4 Oct 2021 15:03:03 +0000 (UTC) X-FDA: 78659072646.33.93C1447 Received: from casper.infradead.org (casper.infradead.org [90.155.50.34]) by imf05.hostedemail.com (Postfix) with ESMTP id 4356450714C5 for ; Mon, 4 Oct 2021 15:03:02 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=infradead.org; s=casper.20170209; h=Content-Transfer-Encoding:MIME-Version: References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Sender:Reply-To: Content-Type:Content-ID:Content-Description; bh=ciWg/KWFyf7bSUht3kSFJavaBSwzT+o/5Cj3/UALSYU=; b=qDNzcpPJ8D7dAY/KqeGDlrcCRa UzyM9VP/Tm2rWRwuzFl9U50J3WnNGjJS9UA7WxoFOTxnu5YgW5n9oO2Ah3FvzQG1Vzvaaepb7Evwu lEpnceMasR18JpKpRv//5hKX/mGFTlsgTlYtXNMdGfhBYeEEndGqNjNmKIyCGqUM6rKl5qOfOdyrM C5UShXcRj0d1gSt1ESfk30VyimVvJXX3jUyj5G8phwebHZlhdA3M1kTAt9zxiW+chO7tFnSgopw4c NJFnSIANmClbOuq2MtFnjjH062NvD9i4o//tJBpk0tUB2XcBI6Q8H9TkSW9MEtMJCfAUxr0SL4r6e UvOWNZzA==; Received: from willy by casper.infradead.org with local (Exim 4.94.2 #2 (Red Hat Linux)) id 1mXPT3-00H2Lf-4f; Mon, 04 Oct 2021 15:01:33 +0000 From: "Matthew Wilcox (Oracle)" To: linux-mm@kvack.org Cc: "Matthew Wilcox (Oracle)" Subject: [PATCH 61/62] iommu: Use put_pages_list Date: Mon, 4 Oct 2021 14:46:49 +0100 Message-Id: <20211004134650.4031813-62-willy@infradead.org> X-Mailer: git-send-email 2.31.1 In-Reply-To: <20211004134650.4031813-1-willy@infradead.org> References: <20211004134650.4031813-1-willy@infradead.org> MIME-Version: 1.0 X-Rspamd-Server: rspam03 X-Rspamd-Queue-Id: 4356450714C5 X-Stat-Signature: wj3g91rfk56gdnsdacnosbfpgt1ktukw Authentication-Results: imf05.hostedemail.com; dkim=pass header.d=infradead.org header.s=casper.20170209 header.b=qDNzcpPJ; dmarc=none; spf=none (imf05.hostedemail.com: domain of willy@infradead.org has no SPF policy when checking 90.155.50.34) smtp.mailfrom=willy@infradead.org X-HE-Tag: 1633359782-245497 X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: page->freelist is for the use of slab. We already have the ability to free a list of pages in the core mm, but it requires the use of a list_head and for the pages to be chained together through page->lru. Switch the iommu code over to using free_pages_list(). 
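The structural change is the same on both the AMD and Intel paths: instead of chaining to-be-freed page-table pages through page->freelist and walking that hand-rolled chain, pages are gathered on a list_head and released in one call after the IOTLB flush. A minimal userspace sketch of that pattern follows; struct page, struct list_head and release_page_list() are simplified stand-ins for the kernel's types and for put_pages_list(), kept small enough to compile and run on their own.

/*
 * Gather pages onto an embedded list node (page->lru in the kernel),
 * then free the whole batch with one helper, instead of threading a
 * singly linked chain through page->freelist.
 */
#include <stdio.h>
#include <stdlib.h>
#include <stddef.h>

struct list_head { struct list_head *next, *prev; };

struct page {
	void *freelist;			/* old: hand-rolled chain */
	struct list_head lru;		/* new: generic list node */
	int id;
};

static void list_init(struct list_head *h) { h->next = h->prev = h; }

static void list_add_tail(struct list_head *n, struct list_head *h)
{
	n->prev = h->prev; n->next = h;
	h->prev->next = n; h->prev = n;
}

/* Stand-in for put_pages_list(): free everything on the list at once. */
static void release_page_list(struct list_head *head)
{
	struct list_head *pos = head->next;

	while (pos != head) {
		struct page *pg = (struct page *)((char *)pos -
					offsetof(struct page, lru));
		pos = pos->next;
		printf("freeing page %d\n", pg->id);
		free(pg);
	}
	list_init(head);
}

int main(void)
{
	struct list_head freelist;
	int i;

	list_init(&freelist);
	for (i = 0; i < 3; i++) {
		struct page *pg = malloc(sizeof(*pg));
		pg->id = i;
		list_add_tail(&pg->lru, &freelist);	/* gather phase */
	}
	release_page_list(&freelist);			/* one bulk free */
	return 0;
}

The design win, as the description above notes, is that page->lru already exists for every page, so the drivers stop depending on the slab-owned ->freelist field and can reuse the core mm's bulk-free helper instead of each carrying its own freeing loop.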
Signed-off-by: Matthew Wilcox (Oracle) --- drivers/iommu/amd/io_pgtable.c | 99 +++++++++++++++------------------- drivers/iommu/dma-iommu.c | 11 +--- drivers/iommu/intel/iommu.c | 89 +++++++++++------------------- include/linux/iommu.h | 3 +- 4 files changed, 77 insertions(+), 125 deletions(-) diff --git a/drivers/iommu/amd/io_pgtable.c b/drivers/iommu/amd/io_pgtable.c index 182c93a43efd..8dfa6ee58b76 100644 --- a/drivers/iommu/amd/io_pgtable.c +++ b/drivers/iommu/amd/io_pgtable.c @@ -74,49 +74,37 @@ static u64 *first_pte_l7(u64 *pte, unsigned long *page_size, * ****************************************************************************/ -static void free_page_list(struct page *freelist) -{ - while (freelist != NULL) { - unsigned long p = (unsigned long)page_address(freelist); - - freelist = freelist->freelist; - free_page(p); - } -} - -static struct page *free_pt_page(unsigned long pt, struct page *freelist) +static void free_pt_page(unsigned long pt, struct list_head *list) { struct page *p = virt_to_page((void *)pt); - p->freelist = freelist; - - return p; + list_add_tail(&p->lru, list); } #define DEFINE_FREE_PT_FN(LVL, FN) \ -static struct page *free_pt_##LVL (unsigned long __pt, struct page *freelist) \ -{ \ - unsigned long p; \ - u64 *pt; \ - int i; \ - \ - pt = (u64 *)__pt; \ - \ - for (i = 0; i < 512; ++i) { \ - /* PTE present? */ \ - if (!IOMMU_PTE_PRESENT(pt[i])) \ - continue; \ - \ - /* Large PTE? */ \ - if (PM_PTE_LEVEL(pt[i]) == 0 || \ - PM_PTE_LEVEL(pt[i]) == 7) \ - continue; \ - \ - p = (unsigned long)IOMMU_PTE_PAGE(pt[i]); \ - freelist = FN(p, freelist); \ - } \ - \ - return free_pt_page((unsigned long)pt, freelist); \ +static void free_pt_##LVL (unsigned long __pt, struct list_head *list) \ +{ \ + unsigned long p; \ + u64 *pt; \ + int i; \ + \ + pt = (u64 *)__pt; \ + \ + for (i = 0; i < 512; ++i) { \ + /* PTE present? */ \ + if (!IOMMU_PTE_PRESENT(pt[i])) \ + continue; \ + \ + /* Large PTE? 
*/ \ + if (PM_PTE_LEVEL(pt[i]) == 0 || \ + PM_PTE_LEVEL(pt[i]) == 7) \ + continue; \ + \ + p = (unsigned long)IOMMU_PTE_PAGE(pt[i]); \ + FN(p, list); \ + } \ + \ + free_pt_page((unsigned long)pt, list); \ } DEFINE_FREE_PT_FN(l2, free_pt_page) @@ -125,36 +113,33 @@ DEFINE_FREE_PT_FN(l4, free_pt_l3) DEFINE_FREE_PT_FN(l5, free_pt_l4) DEFINE_FREE_PT_FN(l6, free_pt_l5) -static struct page *free_sub_pt(unsigned long root, int mode, - struct page *freelist) +static void free_sub_pt(unsigned long root, int mode, struct list_head *list) { switch (mode) { case PAGE_MODE_NONE: case PAGE_MODE_7_LEVEL: break; case PAGE_MODE_1_LEVEL: - freelist = free_pt_page(root, freelist); + free_pt_page(root, list); break; case PAGE_MODE_2_LEVEL: - freelist = free_pt_l2(root, freelist); + free_pt_l2(root, list); break; case PAGE_MODE_3_LEVEL: - freelist = free_pt_l3(root, freelist); + free_pt_l3(root, list); break; case PAGE_MODE_4_LEVEL: - freelist = free_pt_l4(root, freelist); + free_pt_l4(root, list); break; case PAGE_MODE_5_LEVEL: - freelist = free_pt_l5(root, freelist); + free_pt_l5(root, list); break; case PAGE_MODE_6_LEVEL: - freelist = free_pt_l6(root, freelist); + free_pt_l6(root, list); break; default: BUG(); } - - return freelist; } void amd_iommu_domain_set_pgtable(struct protection_domain *domain, @@ -362,7 +347,7 @@ static u64 *fetch_pte(struct amd_io_pgtable *pgtable, return pte; } -static struct page *free_clear_pte(u64 *pte, u64 pteval, struct page *freelist) +static void free_clear_pte(u64 *pte, u64 pteval, struct list_head *list) { unsigned long pt; int mode; @@ -373,12 +358,12 @@ static struct page *free_clear_pte(u64 *pte, u64 pteval, struct page *freelist) } if (!IOMMU_PTE_PRESENT(pteval)) - return freelist; + return; pt = (unsigned long)IOMMU_PTE_PAGE(pteval); mode = IOMMU_PTE_MODE(pteval); - return free_sub_pt(pt, mode, freelist); + free_sub_pt(pt, mode, list); } /* @@ -392,7 +377,7 @@ static int iommu_v1_map_page(struct io_pgtable_ops *ops, unsigned long iova, phys_addr_t paddr, size_t size, int prot, gfp_t gfp) { struct protection_domain *dom = io_pgtable_ops_to_domain(ops); - struct page *freelist = NULL; + LIST_HEAD(freelist); bool updated = false; u64 __pte, *pte; int ret, i, count; @@ -412,9 +397,9 @@ static int iommu_v1_map_page(struct io_pgtable_ops *ops, unsigned long iova, goto out; for (i = 0; i < count; ++i) - freelist = free_clear_pte(&pte[i], pte[i], freelist); + free_clear_pte(&pte[i], pte[i], &freelist); - if (freelist != NULL) + if (!list_empty(&freelist)) updated = true; if (count > 1) { @@ -449,7 +434,7 @@ static int iommu_v1_map_page(struct io_pgtable_ops *ops, unsigned long iova, } /* Everything flushed out, free pages now */ - free_page_list(freelist); + put_pages_list(&freelist); return ret; } @@ -511,7 +496,7 @@ static void v1_free_pgtable(struct io_pgtable *iop) { struct amd_io_pgtable *pgtable = container_of(iop, struct amd_io_pgtable, iop); struct protection_domain *dom; - struct page *freelist = NULL; + LIST_HEAD(freelist); unsigned long root; if (pgtable->mode == PAGE_MODE_NONE) @@ -530,9 +515,9 @@ static void v1_free_pgtable(struct io_pgtable *iop) pgtable->mode > PAGE_MODE_6_LEVEL); root = (unsigned long)pgtable->root; - freelist = free_sub_pt(root, pgtable->mode, freelist); + free_sub_pt(root, pgtable->mode, &freelist); - free_page_list(freelist); + put_pages_list(&freelist); } static struct io_pgtable *v1_alloc_pgtable(struct io_pgtable_cfg *cfg, void *cookie) diff --git a/drivers/iommu/dma-iommu.c b/drivers/iommu/dma-iommu.c index 896bea04c347..16742d9d8a1a 
100644 --- a/drivers/iommu/dma-iommu.c +++ b/drivers/iommu/dma-iommu.c @@ -66,14 +66,7 @@ early_param("iommu.forcedac", iommu_dma_forcedac_setup); static void iommu_dma_entry_dtor(unsigned long data) { - struct page *freelist = (struct page *)data; - - while (freelist) { - unsigned long p = (unsigned long)page_address(freelist); - - freelist = freelist->freelist; - free_page(p); - } + put_pages_list((struct list_head *)data); } static inline size_t cookie_msi_granule(struct iommu_dma_cookie *cookie) @@ -481,7 +474,7 @@ static void iommu_dma_free_iova(struct iommu_dma_cookie *cookie, else if (gather && gather->queued) queue_iova(iovad, iova_pfn(iovad, iova), size >> iova_shift(iovad), - (unsigned long)gather->freelist); + (unsigned long)&gather->freelist); else free_iova_fast(iovad, iova_pfn(iovad, iova), size >> iova_shift(iovad)); diff --git a/drivers/iommu/intel/iommu.c b/drivers/iommu/intel/iommu.c index d75f59ae28e6..eaaff646e1b4 100644 --- a/drivers/iommu/intel/iommu.c +++ b/drivers/iommu/intel/iommu.c @@ -1186,35 +1186,30 @@ static void dma_pte_free_pagetable(struct dmar_domain *domain, know the hardware page-walk will no longer touch them. The 'pte' argument is the *parent* PTE, pointing to the page that is to be freed. */ -static struct page *dma_pte_list_pagetables(struct dmar_domain *domain, - int level, struct dma_pte *pte, - struct page *freelist) +static void dma_pte_list_pagetables(struct dmar_domain *domain, + int level, struct dma_pte *pte, + struct list_head *list) { struct page *pg; pg = pfn_to_page(dma_pte_addr(pte) >> PAGE_SHIFT); - pg->freelist = freelist; - freelist = pg; + list_add_tail(&pg->lru, list); if (level == 1) - return freelist; + return; pte = page_address(pg); do { if (dma_pte_present(pte) && !dma_pte_superpage(pte)) - freelist = dma_pte_list_pagetables(domain, level - 1, - pte, freelist); + dma_pte_list_pagetables(domain, level - 1, pte, list); pte++; } while (!first_pte_in_page(pte)); - - return freelist; } -static struct page *dma_pte_clear_level(struct dmar_domain *domain, int level, - struct dma_pte *pte, unsigned long pfn, - unsigned long start_pfn, - unsigned long last_pfn, - struct page *freelist) +static void dma_pte_clear_level(struct dmar_domain *domain, int level, + struct dma_pte *pte, unsigned long pfn, + unsigned long start_pfn, unsigned long last_pfn, + struct list_head *list) { struct dma_pte *first_pte = NULL, *last_pte = NULL; @@ -1235,7 +1230,7 @@ static struct page *dma_pte_clear_level(struct dmar_domain *domain, int level, /* These suborbinate page tables are going away entirely. Don't bother to clear them; we're just going to *free* them. 
*/ if (level > 1 && !dma_pte_superpage(pte)) - freelist = dma_pte_list_pagetables(domain, level - 1, pte, freelist); + dma_pte_list_pagetables(domain, level - 1, pte, list); dma_clear_pte(pte); if (!first_pte) @@ -1243,10 +1238,10 @@ static struct page *dma_pte_clear_level(struct dmar_domain *domain, int level, last_pte = pte; } else if (level > 1) { /* Recurse down into a level that isn't *entirely* obsolete */ - freelist = dma_pte_clear_level(domain, level - 1, - phys_to_virt(dma_pte_addr(pte)), - level_pfn, start_pfn, last_pfn, - freelist); + dma_pte_clear_level(domain, level - 1, + phys_to_virt(dma_pte_addr(pte)), + level_pfn, start_pfn, last_pfn, + list); } next: pfn += level_size(level); @@ -1255,47 +1250,28 @@ static struct page *dma_pte_clear_level(struct dmar_domain *domain, int level, if (first_pte) domain_flush_cache(domain, first_pte, (void *)++last_pte - (void *)first_pte); - - return freelist; } /* We can't just free the pages because the IOMMU may still be walking the page tables, and may have cached the intermediate levels. The pages can only be freed after the IOTLB flush has been done. */ -static struct page *domain_unmap(struct dmar_domain *domain, - unsigned long start_pfn, - unsigned long last_pfn, - struct page *freelist) +static void domain_unmap(struct dmar_domain *domain, unsigned long start_pfn, + unsigned long last_pfn, struct list_head *list) { BUG_ON(!domain_pfn_supported(domain, start_pfn)); BUG_ON(!domain_pfn_supported(domain, last_pfn)); BUG_ON(start_pfn > last_pfn); /* we don't need lock here; nobody else touches the iova range */ - freelist = dma_pte_clear_level(domain, agaw_to_level(domain->agaw), - domain->pgd, 0, start_pfn, last_pfn, - freelist); + dma_pte_clear_level(domain, agaw_to_level(domain->agaw), + domain->pgd, 0, start_pfn, last_pfn, list); /* free pgd */ if (start_pfn == 0 && last_pfn == DOMAIN_MAX_PFN(domain->gaw)) { struct page *pgd_page = virt_to_page(domain->pgd); - pgd_page->freelist = freelist; - freelist = pgd_page; - + list_add_tail(&pgd_page->lru, list); domain->pgd = NULL; } - - return freelist; -} - -static void dma_free_pagelist(struct page *freelist) -{ - struct page *pg; - - while ((pg = freelist)) { - freelist = pg->freelist; - free_pgtable_page(page_address(pg)); - } } /* iommu handling */ @@ -1972,11 +1948,10 @@ static void domain_exit(struct dmar_domain *domain) domain_remove_dev_info(domain); if (domain->pgd) { - struct page *freelist; + LIST_HEAD(pages); - freelist = domain_unmap(domain, 0, - DOMAIN_MAX_PFN(domain->gaw), NULL); - dma_free_pagelist(freelist); + domain_unmap(domain, 0, DOMAIN_MAX_PFN(domain->gaw), &pages); + put_pages_list(&pages); } free_domain_mem(domain); @@ -4068,19 +4043,17 @@ static int intel_iommu_memory_notifier(struct notifier_block *nb, { struct dmar_drhd_unit *drhd; struct intel_iommu *iommu; - struct page *freelist; + LIST_HEAD(pages); - freelist = domain_unmap(si_domain, - start_vpfn, last_vpfn, - NULL); + domain_unmap(si_domain, start_vpfn, last_vpfn, &pages); rcu_read_lock(); for_each_active_iommu(iommu, drhd) iommu_flush_iotlb_psi(iommu, si_domain, start_vpfn, mhp->nr_pages, - !freelist, 0); + list_empty(&pages), 0); rcu_read_unlock(); - dma_free_pagelist(freelist); + put_pages_list(&pages); } break; } @@ -5087,8 +5060,7 @@ static size_t intel_iommu_unmap(struct iommu_domain *domain, start_pfn = iova >> VTD_PAGE_SHIFT; last_pfn = (iova + size - 1) >> VTD_PAGE_SHIFT; - gather->freelist = domain_unmap(dmar_domain, start_pfn, - last_pfn, gather->freelist); + domain_unmap(dmar_domain, start_pfn, 
last_pfn, &gather->freelist); if (dmar_domain->max_addr == iova + size) dmar_domain->max_addr = iova; @@ -5124,9 +5096,10 @@ static void intel_iommu_tlb_sync(struct iommu_domain *domain, for_each_domain_iommu(iommu_id, dmar_domain) iommu_flush_iotlb_psi(g_iommus[iommu_id], dmar_domain, - start_pfn, nrpages, !gather->freelist, 0); + start_pfn, nrpages, + list_empty(&gather->freelist), 0); - dma_free_pagelist(gather->freelist); + put_pages_list(&gather->freelist); } static phys_addr_t intel_iommu_iova_to_phys(struct iommu_domain *domain, diff --git a/include/linux/iommu.h b/include/linux/iommu.h index d2f3435e7d17..de0c57a567c8 100644 --- a/include/linux/iommu.h +++ b/include/linux/iommu.h @@ -186,7 +186,7 @@ struct iommu_iotlb_gather { unsigned long start; unsigned long end; size_t pgsize; - struct page *freelist; + struct list_head freelist; bool queued; }; @@ -399,6 +399,7 @@ static inline void iommu_iotlb_gather_init(struct iommu_iotlb_gather *gather) { *gather = (struct iommu_iotlb_gather) { .start = ULONG_MAX, + .freelist = LIST_HEAD_INIT(gather->freelist), }; } From patchwork Mon Oct 4 13:46:50 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Matthew Wilcox X-Patchwork-Id: 12534265 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id F2271C433EF for ; Mon, 4 Oct 2021 15:03:32 +0000 (UTC) Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by mail.kernel.org (Postfix) with ESMTP id 85BF9611C0 for ; Mon, 4 Oct 2021 15:03:32 +0000 (UTC) DMARC-Filter: OpenDMARC Filter v1.4.1 mail.kernel.org 85BF9611C0 Authentication-Results: mail.kernel.org; dmarc=none (p=none dis=none) header.from=infradead.org Authentication-Results: mail.kernel.org; spf=pass smtp.mailfrom=kvack.org Received: by kanga.kvack.org (Postfix) id 2807194005C; Mon, 4 Oct 2021 11:03:32 -0400 (EDT) Received: by kanga.kvack.org (Postfix, from userid 40) id 22E7694000B; Mon, 4 Oct 2021 11:03:32 -0400 (EDT) X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id 11DCA94005C; Mon, 4 Oct 2021 11:03:32 -0400 (EDT) X-Delivered-To: linux-mm@kvack.org Received: from forelay.hostedemail.com (smtprelay0053.hostedemail.com [216.40.44.53]) by kanga.kvack.org (Postfix) with ESMTP id 035E594000B for ; Mon, 4 Oct 2021 11:03:32 -0400 (EDT) Received: from smtpin11.hostedemail.com (10.5.19.251.rfc1918.com [10.5.19.251]) by forelay01.hostedemail.com (Postfix) with ESMTP id B335E1802748A for ; Mon, 4 Oct 2021 15:03:31 +0000 (UTC) X-FDA: 78659073822.11.F4D51F6 Received: from casper.infradead.org (casper.infradead.org [90.155.50.34]) by imf04.hostedemail.com (Postfix) with ESMTP id 6754F500030C for ; Mon, 4 Oct 2021 15:03:31 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=infradead.org; s=casper.20170209; h=Content-Transfer-Encoding:MIME-Version: References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Sender:Reply-To: Content-Type:Content-ID:Content-Description; bh=yAjAp9IE3yDKt/h7bLA1uPfgDxlkqShPVuyA6C7bPsc=; b=DA16Fly81LrERwOcSyH/F00IAL UnsNl02wm6Kr9VA5nCxgpL3yRpB5zc2RVzPOpzmIu2WvNaLV9pwGFEZ2x0XW1mVdquQp1KZP2sP/A pY0xJXybL3INBOnk0wjEd9Ovxl9Wp74bifpq2Trxzvo4oXH4sm3wzxYNaZ6d16eI/UdKuTd58XlMg B3579v3w7Zv+gHykNvM+IaVIrGuHQDAJrzGJVC1m6df7CDet+cdVYEYWGozEAWcP2QnAq1oUqkxAL 
From: "Matthew Wilcox (Oracle)" <willy@infradead.org>
To: linux-mm@kvack.org
Cc: "Matthew Wilcox (Oracle)" <willy@infradead.org>
Subject: [PATCH 62/62] mm: Remove slab from struct page
Date: Mon, 4 Oct 2021 14:46:50 +0100
Message-Id: <20211004134650.4031813-63-willy@infradead.org>
X-Mailer: git-send-email 2.31.1
In-Reply-To: <20211004134650.4031813-1-willy@infradead.org>
References: <20211004134650.4031813-1-willy@infradead.org>
MIME-Version: 1.0

All members of struct slab can now be removed from struct page.
This shrinks the definition of struct page by 30 LOC, making it
easier to understand.

Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
---
 include/linux/mm_types.h   | 34 ----------------------------------
 include/linux/page-flags.h | 37 -------------------------------------
 2 files changed, 71 deletions(-)
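Because struct slab overlays the same memory as struct page, removing these
fields is only safe while every field the slab allocators still share sits at
the same offset in both types; that is what the SLAB_MATCH() assertions in
the diff below enforce at compile time. A standalone sketch of the idea,
using made-up stand-in types (page_like and slab_like are illustrative, not
the kernel's definitions):

/*
 * Standalone sketch of the layout invariant (not kernel code).
 */
#include <assert.h>
#include <stddef.h>

struct page_like {
	unsigned long flags;
	void *compound_head;	/* also where a slab's list pointer lives */
	int _refcount;
};

struct slab_like {
	unsigned long flags;
	void *slab_list;
	int _refcount;
};

/* Mirrors the role of SLAB_MATCH(): same field, same offset in both views. */
#define MATCH(pg, sl) \
	static_assert(offsetof(struct page_like, pg) == \
		      offsetof(struct slab_like, sl), "offset mismatch: " #pg)

MATCH(flags, flags);
MATCH(compound_head, slab_list);
MATCH(_refcount, _refcount);

int main(void)
{
	return 0;	/* nothing to run; the checks happen at compile time */
}

If a field later moves in one view but not the other, the build fails at the
static_assert instead of corrupting memory at run time.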
diff --git a/include/linux/mm_types.h b/include/linux/mm_types.h
index c2ea71aba84e..417c5e8a3371 100644
--- a/include/linux/mm_types.h
+++ b/include/linux/mm_types.h
@@ -117,33 +117,6 @@ struct page {
 					atomic_long_t pp_frag_count;
 			};
 		};
-		struct {	/* slab, slob and slub */
-			union {
-				struct list_head slab_list;
-				struct {	/* Partial pages */
-					struct page *next;
-#ifdef CONFIG_64BIT
-					int pages;	/* Nr of pages left */
-					int pobjects;	/* Approximate count */
-#else
-					short int pages;
-					short int pobjects;
-#endif
-				};
-			};
-			struct kmem_cache *slab_cache; /* not slob */
-			/* Double-word boundary */
-			void *freelist;		/* first free object */
-			union {
-				void *s_mem;	/* slab: first object */
-				unsigned long counters;		/* SLUB */
-				struct {			/* SLUB */
-					unsigned inuse:16;
-					unsigned objects:15;
-					unsigned frozen:1;
-				};
-			};
-		};
 		struct {	/* Tail pages of compound page */
 			unsigned long compound_head;	/* Bit zero is set */
@@ -207,9 +180,6 @@ struct page {
 	 * which are currently stored here.
 	 */
 	unsigned int page_type;
-
-	unsigned int active;		/* SLAB */
-	int units;			/* SLOB */
 	};
 
 	/* Usage count. *DO NOT USE DIRECTLY*. See page_ref.h */
@@ -283,11 +253,7 @@ struct slab {
 	static_assert(offsetof(struct page, pg) == offsetof(struct slab, sl))
 SLAB_MATCH(flags, flags);
 SLAB_MATCH(compound_head, slab_list);	/* Ensure bit 0 is clear */
-SLAB_MATCH(slab_list, slab_list);
 SLAB_MATCH(rcu_head, rcu_head);
-SLAB_MATCH(slab_cache, slab_cache);
-SLAB_MATCH(s_mem, s_mem);
-SLAB_MATCH(active, active);
 SLAB_MATCH(_refcount, _refcount);
 #ifdef CONFIG_MEMCG
 SLAB_MATCH(memcg_data, memcg_data);
diff --git a/include/linux/page-flags.h b/include/linux/page-flags.h
index 57bdb1eb2f29..d3d0806c1535 100644
--- a/include/linux/page-flags.h
+++ b/include/linux/page-flags.h
@@ -799,43 +799,6 @@ extern bool is_free_buddy_page(struct page *page);
 
 __PAGEFLAG(Isolated, isolated, PF_ANY);
 
-/*
- * If network-based swap is enabled, sl*b must keep track of whether pages
- * were allocated from pfmemalloc reserves.
- */
-static inline int PageSlabPfmemalloc(struct page *page)
-{
-	VM_BUG_ON_PAGE(!PageSlab(page), page);
-	return PageActive(page);
-}
-
-/*
- * A version of PageSlabPfmemalloc() for opportunistic checks where the page
- * might have been freed under us and not be a PageSlab anymore.
- */
-static inline int __PageSlabPfmemalloc(struct page *page)
-{
-	return PageActive(page);
-}
-
-static inline void SetPageSlabPfmemalloc(struct page *page)
-{
-	VM_BUG_ON_PAGE(!PageSlab(page), page);
-	SetPageActive(page);
-}
-
-static inline void __ClearPageSlabPfmemalloc(struct page *page)
-{
-	VM_BUG_ON_PAGE(!PageSlab(page), page);
-	__ClearPageActive(page);
-}
-
-static inline void ClearPageSlabPfmemalloc(struct page *page)
-{
-	VM_BUG_ON_PAGE(!PageSlab(page), page);
-	ClearPageActive(page);
-}
-
 #ifdef CONFIG_MMU
 #define __PG_MLOCKED		(1UL << PG_mlocked)
 #else