From patchwork Wed Dec 1 18:14:38 2021
From: Vlastimil Babka
To: Matthew Wilcox, Christoph Lameter, David Rientjes, Joonsoo Kim, Pekka Enberg
Cc: linux-mm@kvack.org, Andrew Morton, patches@lists.linux.dev, Vlastimil Babka
Subject: [PATCH v2 01/33] mm: add virt_to_folio() and folio_address()
Date: Wed, 1 Dec 2021 19:14:38 +0100
Message-Id: <20211201181510.18784-2-vbabka@suse.cz>
These two wrappers around their respective struct page variants will be
useful in the following patches.

Signed-off-by: Vlastimil Babka
Acked-by: Johannes Weiner
---
 include/linux/mm.h | 12 ++++++++++++
 1 file changed, 12 insertions(+)

diff --git a/include/linux/mm.h b/include/linux/mm.h
index a7e4a9e7d807..4a6cf22483da 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -863,6 +863,13 @@ static inline struct page *virt_to_head_page(const void *x)
 	return compound_head(page);
 }
 
+static inline struct folio *virt_to_folio(const void *x)
+{
+	struct page *page = virt_to_page(x);
+
+	return page_folio(page);
+}
+
 void __put_page(struct page *page);
 
 void put_pages_list(struct list_head *pages);
 
@@ -1753,6 +1760,11 @@ void page_address_init(void);
 #define page_address_init()  do { } while(0)
 #endif
 
+static inline void *folio_address(const struct folio *folio)
+{
+	return page_address(&folio->page);
+}
+
 extern void *page_rmapping(struct page *page);
 extern struct anon_vma *page_anon_vma(struct page *page);
 extern pgoff_t __page_file_index(struct page *page);
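For context, a minimal sketch (not part of the patch) of how the two new
wrappers compose; object_base_address() is a hypothetical caller used only
for illustration:

static void *object_base_address(const void *obj)
{
	/* resolve any address within the allocation to its folio */
	struct folio *folio = virt_to_folio(obj);

	/* first byte of the memory described by the folio */
	return folio_address(folio);
}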
From patchwork Wed Dec 1 18:14:39 2021
From: Vlastimil Babka
Subject: [PATCH v2 02/33] mm/slab: Dissolve slab_map_pages() in its caller
Date: Wed, 1 Dec 2021 19:14:39 +0100
Message-Id: <20211201181510.18784-3-vbabka@suse.cz>
The function no longer does what its name and comment suggest: it just sets
two struct page fields, which can be done directly in its sole caller.

Signed-off-by: Vlastimil Babka
---
 mm/slab.c | 15 ++-------------
 1 file changed, 2 insertions(+), 13 deletions(-)

diff --git a/mm/slab.c b/mm/slab.c
index ca4822f6b2b6..381875e23277 100644
--- a/mm/slab.c
+++ b/mm/slab.c
@@ -2546,18 +2546,6 @@ static void slab_put_obj(struct kmem_cache *cachep,
 	set_free_obj(page, page->active, objnr);
 }
 
-/*
- * Map pages beginning at addr to the given cache and slab. This is required
- * for the slab allocator to be able to lookup the cache and slab of a
- * virtual address for kfree, ksize, and slab debugging.
- */
-static void slab_map_pages(struct kmem_cache *cache, struct page *page,
-			   void *freelist)
-{
-	page->slab_cache = cache;
-	page->freelist = freelist;
-}
-
 /*
  * Grow (by 1) the number of slabs within a cache. This is called by
  * kmem_cache_alloc() when there are no active objs left in a cache.
@@ -2621,7 +2609,8 @@ static struct page *cache_grow_begin(struct kmem_cache *cachep,
 	if (OFF_SLAB(cachep) && !freelist)
 		goto opps1;
 
-	slab_map_pages(cachep, page, freelist);
+	page->slab_cache = cachep;
+	page->freelist = freelist;
 
 	cache_init_objs(cachep, page);
From patchwork Wed Dec 1 18:14:40 2021
From: Vlastimil Babka
Subject: [PATCH v2 03/33] mm/slub: Make object_err() static
Date: Wed, 1 Dec 2021 19:14:40 +0100
Message-Id: <20211201181510.18784-4-vbabka@suse.cz>

There are no callers outside of mm/slub.c anymore.
Move freelist_corrupted(), which calls object_err(), to avoid the need for
a forward declaration.

Signed-off-by: Vlastimil Babka
---
 include/linux/slub_def.h |  3 ---
 mm/slub.c                | 30 +++++++++++++++---------------
 2 files changed, 15 insertions(+), 18 deletions(-)

diff --git a/include/linux/slub_def.h b/include/linux/slub_def.h
index 0fa751b946fa..1ef68d4de9c0 100644
--- a/include/linux/slub_def.h
+++ b/include/linux/slub_def.h
@@ -156,9 +156,6 @@ static inline void sysfs_slab_release(struct kmem_cache *s)
 }
 #endif
 
-void object_err(struct kmem_cache *s, struct page *page,
-		u8 *object, char *reason);
-
 void *fixup_red_left(struct kmem_cache *s, void *p);
 
 static inline void *nearest_obj(struct kmem_cache *cache, struct page *page,
diff --git a/mm/slub.c b/mm/slub.c
index a8626825a829..98bf50c1c145 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -822,20 +822,6 @@ static void slab_fix(struct kmem_cache *s, char *fmt, ...)
 	va_end(args);
 }
 
-static bool freelist_corrupted(struct kmem_cache *s, struct page *page,
-			       void **freelist, void *nextfree)
-{
-	if ((s->flags & SLAB_CONSISTENCY_CHECKS) &&
-	    !check_valid_pointer(s, page, nextfree) && freelist) {
-		object_err(s, page, *freelist, "Freechain corrupt");
-		*freelist = NULL;
-		slab_fix(s, "Isolate corrupted freechain");
-		return true;
-	}
-
-	return false;
-}
-
 static void print_trailer(struct kmem_cache *s, struct page *page, u8 *p)
 {
 	unsigned int off;	/* Offset of last byte */
@@ -875,7 +861,7 @@ static void print_trailer(struct kmem_cache *s, struct page *page, u8 *p)
 	dump_stack();
 }
 
-void object_err(struct kmem_cache *s, struct page *page,
+static void object_err(struct kmem_cache *s, struct page *page,
 		u8 *object, char *reason)
 {
 	if (slab_add_kunit_errors())
@@ -886,6 +872,20 @@ void object_err(struct kmem_cache *s, struct page *page,
 	add_taint(TAINT_BAD_PAGE, LOCKDEP_NOW_UNRELIABLE);
 }
 
+static bool freelist_corrupted(struct kmem_cache *s, struct page *page,
+			       void **freelist, void *nextfree)
+{
+	if ((s->flags & SLAB_CONSISTENCY_CHECKS) &&
+	    !check_valid_pointer(s, page, nextfree) && freelist) {
+		object_err(s, page, *freelist, "Freechain corrupt");
+		*freelist = NULL;
+		slab_fix(s, "Isolate corrupted freechain");
+		return true;
+	}
+
+	return false;
+}
+
 static __printf(3, 4) void slab_err(struct kmem_cache *s, struct page *page,
 			const char *fmt, ...)
 {

From patchwork Wed Dec 1 18:14:41 2021
From: Vlastimil Babka
Subject: [PATCH v2 04/33] mm: Split slab into its own type
Date: Wed, 1 Dec 2021 19:14:41 +0100
Message-Id: <20211201181510.18784-5-vbabka@suse.cz>
From: "Matthew Wilcox (Oracle)"

Make struct slab independent of struct page. It still uses the
underlying memory in struct page for storing slab-specific data, but
slab and slub can now be weaned off using struct page directly. Some of
the wrapper functions (slab_address() and slab_order()) still need to
cast to struct folio, but this is a significant disentanglement.

[ vbabka@suse.cz: Rebase on folios, use folio instead of page where
  possible.

  Do not duplicate flags field in struct slab, instead make the related
  accessors go through slab_folio(). For testing pfmemalloc use the
  folio_*_active flag accessors directly so the PageSlabPfmemalloc
  wrappers can be removed later.

  Make folio_slab() expect only folio_test_slab() == true folios and
  virt_to_slab() return NULL when folio_test_slab() == false.

  Move struct slab to mm/slab.h.

  Don't represent with struct slab pages that are not true slab pages,
  but just a compound page obtained directly from the page allocator
  (with large kmalloc() for SLUB and SLOB). ]

Signed-off-by: Matthew Wilcox (Oracle)
Signed-off-by: Vlastimil Babka
Acked-by: Johannes Weiner
---
 include/linux/mm_types.h |  10 +--
 mm/slab.h                | 167 +++++++++++++++++++++++++++++++++++++++
 mm/slub.c                |   8 +-
 3 files changed, 176 insertions(+), 9 deletions(-)

diff --git a/include/linux/mm_types.h b/include/linux/mm_types.h
index c3a6e6209600..1ae3537c7920 100644
--- a/include/linux/mm_types.h
+++ b/include/linux/mm_types.h
@@ -56,11 +56,11 @@ struct mem_cgroup;
  * in each subpage, but you may need to restore some of their values
  * afterwards.
  *
- * SLUB uses cmpxchg_double() to atomically update its freelist and
- * counters.  That requires that freelist & counters be adjacent and
- * double-word aligned. We align all struct pages to double-word
- * boundaries, and ensure that 'freelist' is aligned within the
- * struct.
+ * SLUB uses cmpxchg_double() to atomically update its freelist and counters.
+ * That requires that freelist & counters in struct slab be adjacent and
+ * double-word aligned. Because struct slab currently just reinterprets the
+ * bits of struct page, we align all struct pages to double-word boundaries,
+ * and ensure that 'freelist' is aligned within struct slab.
  */
 #ifdef CONFIG_HAVE_ALIGNED_STRUCT_PAGE
 #define _struct_page_alignment	__aligned(2 * sizeof(unsigned long))
diff --git a/mm/slab.h b/mm/slab.h
index 56ad7eea3ddf..0e67a8cb7f80 100644
--- a/mm/slab.h
+++ b/mm/slab.h
@@ -5,6 +5,173 @@
  * Internal slab definitions
  */
 
+/* Reuses the bits in struct page */
+struct slab {
+	unsigned long __page_flags;
+	union {
+		struct list_head slab_list;
+		struct {	/* Partial pages */
+			struct slab *next;
+#ifdef CONFIG_64BIT
+			int slabs;	/* Nr of slabs left */
+#else
+			short int slabs;
+#endif
+		};
+		struct rcu_head rcu_head;
+	};
+	struct kmem_cache *slab_cache; /* not slob */
+	/* Double-word boundary */
+	void *freelist;		/* first free object */
+	union {
+		void *s_mem;	/* slab: first object */
+		unsigned long counters;		/* SLUB */
+		struct {			/* SLUB */
+			unsigned inuse:16;
+			unsigned objects:15;
+			unsigned frozen:1;
+		};
+	};
+
+	union {
+		unsigned int active;		/* SLAB */
+		int units;			/* SLOB */
+	};
+	atomic_t __page_refcount;
+#ifdef CONFIG_MEMCG
+	unsigned long memcg_data;
+#endif
+};
+
+#define SLAB_MATCH(pg, sl)						\
+	static_assert(offsetof(struct page, pg) == offsetof(struct slab, sl))
+SLAB_MATCH(flags, __page_flags);
+SLAB_MATCH(compound_head, slab_list);	/* Ensure bit 0 is clear */
+SLAB_MATCH(slab_list, slab_list);
+SLAB_MATCH(rcu_head, rcu_head);
+SLAB_MATCH(slab_cache, slab_cache);
+SLAB_MATCH(s_mem, s_mem);
+SLAB_MATCH(active, active);
+SLAB_MATCH(_refcount, __page_refcount);
+#ifdef CONFIG_MEMCG
+SLAB_MATCH(memcg_data, memcg_data);
+#endif
+#undef SLAB_MATCH
+static_assert(sizeof(struct slab) <= sizeof(struct page));
+
+/**
+ * folio_slab - Converts from folio to slab.
+ * @folio: The folio.
+ *
+ * Currently struct slab is a different representation of a folio where
+ * folio_test_slab() is true.
+ *
+ * Return: The slab which contains this folio.
+ */
+#define folio_slab(folio)	(_Generic((folio),			\
+	const struct folio *:	(const struct slab *)(folio),		\
+	struct folio *:		(struct slab *)(folio)))
+
+/**
+ * slab_folio - The folio allocated for a slab
+ * @slab: The slab.
+ *
+ * Slabs are allocated as folios that contain the individual objects and are
+ * using some fields in the first struct page of the folio - those fields are
+ * now accessed by struct slab. It is occasionally necessary to convert back to
+ * a folio in order to communicate with the rest of the mm. Please use this
+ * helper function instead of casting yourself, as the implementation may change
+ * in the future.
+ */
+#define slab_folio(s)		(_Generic((s),				\
+	const struct slab *:	(const struct folio *)s,		\
+	struct slab *:		(struct folio *)s))
+
+/**
+ * page_slab - Converts from first struct page to slab.
+ * @p: The first (either head of compound or single) page of slab.
+ *
+ * A temporary wrapper to convert struct page to struct slab in situations where
+ * we know the page is the compound head, or single order-0 page.
+ *
+ * Long-term ideally everything would work with struct slab directly or go
+ * through folio to struct slab.
+ *
+ * Return: The slab which contains this page
+ */
+#define page_slab(p)		(_Generic((p),				\
+	const struct page *:	(const struct slab *)(p),		\
+	struct page *:		(struct slab *)(p)))
+
+/**
+ * slab_page - The first struct page allocated for a slab
+ * @slab: The slab.
+ *
+ * A convenience wrapper for converting slab to the first struct page of the
+ * underlying folio, to communicate with code not yet converted to folio or
+ * struct slab.
+ */
+#define slab_page(s) folio_page(slab_folio(s), 0)
+
+/*
+ * If network-based swap is enabled, sl*b must keep track of whether pages
+ * were allocated from pfmemalloc reserves.
+ */
+static inline bool slab_test_pfmemalloc(const struct slab *slab)
+{
+	return folio_test_active((struct folio *)slab_folio(slab));
+}
+
+static inline void slab_set_pfmemalloc(struct slab *slab)
+{
+	folio_set_active(slab_folio(slab));
+}
+
+static inline void slab_clear_pfmemalloc(struct slab *slab)
+{
+	folio_clear_active(slab_folio(slab));
+}
+
+static inline void __slab_clear_pfmemalloc(struct slab *slab)
+{
+	__folio_clear_active(slab_folio(slab));
+}
+
+static inline void *slab_address(const struct slab *slab)
+{
+	return folio_address(slab_folio(slab));
+}
+
+static inline int slab_nid(const struct slab *slab)
+{
+	return folio_nid(slab_folio(slab));
+}
+
+static inline pg_data_t *slab_pgdat(const struct slab *slab)
+{
+	return folio_pgdat(slab_folio(slab));
+}
+
+static inline struct slab *virt_to_slab(const void *addr)
+{
+	struct folio *folio = virt_to_folio(addr);
+
+	if (!folio_test_slab(folio))
+		return NULL;
+
+	return folio_slab(folio);
+}
+
+static inline int slab_order(const struct slab *slab)
+{
+	return folio_order((struct folio *)slab_folio(slab));
+}
+
+static inline size_t slab_size(const struct slab *slab)
+{
+	return PAGE_SIZE << slab_order(slab);
+}
+
 #ifdef CONFIG_SLOB
 /*
  * Common fields provided in kmem_cache by all slab allocators
diff --git a/mm/slub.c b/mm/slub.c
index 98bf50c1c145..4776bcc8c9e4 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -3787,7 +3787,7 @@ static unsigned int slub_min_objects;
  * requested a higher minimum order then we start with that one instead of
  * the smallest order which will fit the object.
  */
-static inline unsigned int slab_order(unsigned int size,
+static inline unsigned int calc_slab_order(unsigned int size,
 		unsigned int min_objects, unsigned int max_order,
 		unsigned int fract_leftover)
 {
@@ -3851,7 +3851,7 @@ static inline int calculate_order(unsigned int size)
 		fraction = 16;
 		while (fraction >= 4) {
-			order = slab_order(size, min_objects,
+			order = calc_slab_order(size, min_objects,
 					slub_max_order, fraction);
 			if (order <= slub_max_order)
 				return order;
@@ -3864,14 +3864,14 @@ static inline int calculate_order(unsigned int size)
 	 * We were unable to place multiple objects in a slab. Now
 	 * lets see if we can place a single object there.
 	 */
-	order = slab_order(size, 1, slub_max_order, 1);
+	order = calc_slab_order(size, 1, slub_max_order, 1);
 	if (order <= slub_max_order)
 		return order;
 
 	/*
 	 * Doh this slab cannot be placed using slub_max_order.
 	 */
-	order = slab_order(size, 1, MAX_ORDER, 1);
+	order = calc_slab_order(size, 1, MAX_ORDER, 1);
 	if (order < MAX_ORDER)
 		return order;
 	return -ENOSYS;
 }
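As an aside, a self-contained userspace sketch (assuming C11; struct base
and struct view are hypothetical stand-ins for struct page and struct slab)
of the layout-punning guard that SLAB_MATCH() provides: one type
reinterprets the bits of another, and static_assert() pins the shared
fields to identical offsets so the reinterpretation stays safe.

#include <assert.h>
#include <stddef.h>

struct base {				/* stand-in for struct page */
	unsigned long flags;
	void *payload;
};

struct view {				/* stand-in for struct slab */
	unsigned long __base_flags;
	void *freelist;
};

/* The same guards the patch uses: matching offsets, no size growth. */
#define MATCH(b, v) \
	static_assert(offsetof(struct base, b) == offsetof(struct view, v), #b)
MATCH(flags, __base_flags);
MATCH(payload, freelist);
#undef MATCH
static_assert(sizeof(struct view) <= sizeof(struct base), "view must not grow");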
From patchwork Wed Dec 1 18:14:42 2021
From: Vlastimil Babka
Subject: [PATCH v2 05/33] mm: Add account_slab() and unaccount_slab()
Date: Wed, 1 Dec 2021 19:14:42 +0100
Message-Id: <20211201181510.18784-6-vbabka@suse.cz>

From: "Matthew Wilcox (Oracle)"

These functions take struct slab instead of struct page and replace
(un)account_slab_page(). For now their callers just convert page to slab.
[ vbabka@suse.cz: replace existing functions instead of calling them ]

Signed-off-by: Matthew Wilcox (Oracle)
Signed-off-by: Vlastimil Babka
Acked-by: Johannes Weiner
---
 mm/slab.c |  4 ++--
 mm/slab.h | 17 ++++++++---------
 mm/slub.c |  4 ++--
 3 files changed, 12 insertions(+), 13 deletions(-)

diff --git a/mm/slab.c b/mm/slab.c
index 381875e23277..7f147805d0ab 100644
--- a/mm/slab.c
+++ b/mm/slab.c
@@ -1380,7 +1380,7 @@ static struct page *kmem_getpages(struct kmem_cache *cachep, gfp_t flags,
 		return NULL;
 	}
 
-	account_slab_page(page, cachep->gfporder, cachep, flags);
+	account_slab(page_slab(page), cachep->gfporder, cachep, flags);
 	__SetPageSlab(page);
 	/* Record if ALLOC_NO_WATERMARKS was set when allocating the slab */
 	if (sk_memalloc_socks() && page_is_pfmemalloc(page))
@@ -1405,7 +1405,7 @@ static void kmem_freepages(struct kmem_cache *cachep, struct page *page)
 
 	if (current->reclaim_state)
 		current->reclaim_state->reclaimed_slab += 1 << order;
-	unaccount_slab_page(page, order, cachep);
+	unaccount_slab(page_slab(page), order, cachep);
 	__free_pages(page, order);
 }
 
diff --git a/mm/slab.h b/mm/slab.h
index 0e67a8cb7f80..dd3f72fddff6 100644
--- a/mm/slab.h
+++ b/mm/slab.h
@@ -583,24 +583,23 @@ static inline struct kmem_cache *virt_to_cache(const void *obj)
 	return page->slab_cache;
 }
 
-static __always_inline void account_slab_page(struct page *page, int order,
-					      struct kmem_cache *s,
-					      gfp_t gfp)
+static __always_inline void account_slab(struct slab *slab, int order,
+					 struct kmem_cache *s, gfp_t gfp)
 {
 	if (memcg_kmem_enabled() && (s->flags & SLAB_ACCOUNT))
-		memcg_alloc_page_obj_cgroups(page, s, gfp, true);
+		memcg_alloc_page_obj_cgroups(slab_page(slab), s, gfp, true);
 
-	mod_node_page_state(page_pgdat(page), cache_vmstat_idx(s),
+	mod_node_page_state(slab_pgdat(slab), cache_vmstat_idx(s),
 			    PAGE_SIZE << order);
 }
 
-static __always_inline void unaccount_slab_page(struct page *page, int order,
-						struct kmem_cache *s)
+static __always_inline void unaccount_slab(struct slab *slab, int order,
+					   struct kmem_cache *s)
 {
 	if (memcg_kmem_enabled())
-		memcg_free_page_obj_cgroups(page);
+		memcg_free_page_obj_cgroups(slab_page(slab));
 
-	mod_node_page_state(page_pgdat(page), cache_vmstat_idx(s),
+	mod_node_page_state(slab_pgdat(slab), cache_vmstat_idx(s),
 			    -(PAGE_SIZE << order));
 }
 
diff --git a/mm/slub.c b/mm/slub.c
index 4776bcc8c9e4..8b172de26c67 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -1943,7 +1943,7 @@ static struct page *allocate_slab(struct kmem_cache *s, gfp_t flags, int node)
 
 	page->objects = oo_objects(oo);
 
-	account_slab_page(page, oo_order(oo), s, flags);
+	account_slab(page_slab(page), oo_order(oo), s, flags);
 
 	page->slab_cache = s;
 	__SetPageSlab(page);
@@ -2014,7 +2014,7 @@ static void __free_slab(struct kmem_cache *s, struct page *page)
 	page->slab_cache = NULL;
 	if (current->reclaim_state)
 		current->reclaim_state->reclaimed_slab += pages;
-	unaccount_slab_page(page, order, s);
+	unaccount_slab(page_slab(page), order, s);
 	__free_pages(page, order);
 }
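The transitional pattern is worth spelling out: the helpers' signatures are
narrowed to struct slab, while not-yet-converted callers bridge at the
boundary with page_slab(). A hypothetical call site (illustration only,
mirroring kmem_freepages() above):

static void example_free_path(struct kmem_cache *s, struct page *page, int order)
{
	/* page -> slab conversion happens at the edge of the old world */
	unaccount_slab(page_slab(page), order, s);
	__free_pages(page, order);
}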
From patchwork Wed Dec 1 18:14:43 2021
From: Vlastimil Babka
Subject: [PATCH v2 06/33] mm: Convert virt_to_cache() to use struct slab
Date: Wed, 1 Dec 2021 19:14:43 +0100
Message-Id: <20211201181510.18784-7-vbabka@suse.cz>
From: "Matthew Wilcox (Oracle)"

This function is entirely self-contained, so can be converted from page
to slab.

Signed-off-by: Matthew Wilcox (Oracle)
Signed-off-by: Vlastimil Babka
Acked-by: Johannes Weiner
---
 mm/slab.h | 8 ++++----
 1 file changed, 4 insertions(+), 4 deletions(-)

diff --git a/mm/slab.h b/mm/slab.h
index dd3f72fddff6..1408ada9ff72 100644
--- a/mm/slab.h
+++ b/mm/slab.h
@@ -574,13 +574,13 @@ static inline void memcg_slab_free_hook(struct kmem_cache *s,
 
 static inline struct kmem_cache *virt_to_cache(const void *obj)
 {
-	struct page *page;
+	struct slab *slab;
 
-	page = virt_to_head_page(obj);
-	if (WARN_ONCE(!PageSlab(page), "%s: Object is not a Slab page!\n",
+	slab = virt_to_slab(obj);
+	if (WARN_ONCE(!slab, "%s: Object is not a Slab page!\n",
 					__func__))
 		return NULL;
-	return page->slab_cache;
+	return slab->slab_cache;
 }
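A sketch of what this buys a consumer (cache_of() is a hypothetical helper,
not in the patch): virt_to_slab() returning NULL now doubles as the "not a
slab page" check, so recovering the cache from a bare object pointer needs
no separate page-flag test.

static struct kmem_cache *cache_of(const void *obj)
{
	struct slab *slab = virt_to_slab(obj);	/* NULL if not a slab folio */

	return slab ? slab->slab_cache : NULL;
}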
From patchwork Wed Dec 1 18:14:44 2021
From: Vlastimil Babka
Subject: [PATCH v2 07/33] mm: Convert __ksize() to struct slab
Date: Wed, 1 Dec 2021 19:14:44 +0100
Message-Id: <20211201181510.18784-8-vbabka@suse.cz>
From: "Matthew Wilcox (Oracle)"

In SLUB, use folios, and struct slab to access slab_cache field.
In SLOB, use folios to properly resolve pointers beyond PAGE_SIZE
offset of the object.

[ vbabka@suse.cz: use folios, and only convert folio_test_slab() == true
  folios to struct slab ]

Signed-off-by: Matthew Wilcox (Oracle)
Signed-off-by: Vlastimil Babka
Acked-by: Johannes Weiner
---
 mm/slob.c |  8 ++++----
 mm/slub.c | 12 +++++-------
 2 files changed, 9 insertions(+), 11 deletions(-)

diff --git a/mm/slob.c b/mm/slob.c
index 03deee1e6a94..c8a4290012a6 100644
--- a/mm/slob.c
+++ b/mm/slob.c
@@ -570,7 +570,7 @@ EXPORT_SYMBOL(kfree);
 /* can't use ksize for kmem_cache_alloc memory, only kmalloc */
 size_t __ksize(const void *block)
 {
-	struct page *sp;
+	struct folio *folio;
 	int align;
 	unsigned int *m;
 
@@ -578,9 +578,9 @@ size_t __ksize(const void *block)
 	if (unlikely(block == ZERO_SIZE_PTR))
 		return 0;
 
-	sp = virt_to_page(block);
-	if (unlikely(!PageSlab(sp)))
-		return page_size(sp);
+	folio = virt_to_folio(block);
+	if (unlikely(!folio_test_slab(folio)))
+		return folio_size(folio);
 
 	align = max_t(size_t, ARCH_KMALLOC_MINALIGN, ARCH_SLAB_MINALIGN);
 	m = (unsigned int *)(block - align);
diff --git a/mm/slub.c b/mm/slub.c
index 8b172de26c67..6ae7c63cedc0 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -4527,19 +4527,17 @@ void __check_heap_object(const void *ptr, unsigned long n, struct page *page,
 
 size_t __ksize(const void *object)
 {
-	struct page *page;
+	struct folio *folio;
 
 	if (unlikely(object == ZERO_SIZE_PTR))
 		return 0;
 
-	page = virt_to_head_page(object);
+	folio = virt_to_folio(object);
 
-	if (unlikely(!PageSlab(page))) {
-		WARN_ON(!PageCompound(page));
-		return page_size(page);
-	}
+	if (unlikely(!folio_test_slab(folio)))
+		return folio_size(folio);
 
-	return slab_ksize(page->slab_cache);
+	return slab_ksize(folio_slab(folio)->slab_cache);
 }
 EXPORT_SYMBOL(__ksize);
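The SLOB half fixes a subtle case, sketched here (size_behind_offset() is a
hypothetical helper, illustration only): for a multi-page kmalloc()
allocation, a pointer more than PAGE_SIZE past the start lands on a tail
page, which plain virt_to_page() would hand back; virt_to_folio() resolves
to the head instead, so folio_size() reports the full allocation.

static size_t size_behind_offset(const void *buf, size_t offset)
{
	/* correct even when buf + offset is beyond the first page */
	struct folio *folio = virt_to_folio(buf + offset);

	return folio_size(folio);
}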
From patchwork Wed Dec 1 18:14:45 2021
From: Vlastimil Babka
Subject: [PATCH v2 08/33] mm: Use struct slab in kmem_obj_info()
Date: Wed, 1 Dec 2021 19:14:45 +0100
Message-Id: <20211201181510.18784-9-vbabka@suse.cz>

From: "Matthew Wilcox (Oracle)"
slab support kmem_obj_info() which reports details of an object allocated from the slab allocator. By using the slab type instead of the page type, we make it obvious that this can only be called for slabs. Signed-off-by: Matthew Wilcox (Oracle) Signed-off-by: Vlastimil Babka --- mm/slab.c | 12 ++++++------ mm/slab.h | 4 ++-- mm/slab_common.c | 8 ++++---- mm/slob.c | 4 ++-- mm/slub.c | 12 ++++++------ 5 files changed, 20 insertions(+), 20 deletions(-) diff --git a/mm/slab.c b/mm/slab.c index 7f147805d0ab..44bc1fcd1393 100644 --- a/mm/slab.c +++ b/mm/slab.c @@ -3646,21 +3646,21 @@ EXPORT_SYMBOL(__kmalloc_node_track_caller); #endif /* CONFIG_NUMA */ #ifdef CONFIG_PRINTK -void kmem_obj_info(struct kmem_obj_info *kpp, void *object, struct page *page) +void kmem_obj_info(struct kmem_obj_info *kpp, void *object, struct slab *slab) { struct kmem_cache *cachep; unsigned int objnr; void *objp; kpp->kp_ptr = object; - kpp->kp_page = page; - cachep = page->slab_cache; + kpp->kp_slab = slab; + cachep = slab->slab_cache; kpp->kp_slab_cache = cachep; objp = object - obj_offset(cachep); kpp->kp_data_offset = obj_offset(cachep); - page = virt_to_head_page(objp); - objnr = obj_to_index(cachep, page, objp); - objp = index_to_obj(cachep, page, objnr); + slab = virt_to_slab(objp); + objnr = obj_to_index(cachep, slab_page(slab), objp); + objp = index_to_obj(cachep, slab_page(slab), objnr); kpp->kp_objp = objp; if (DEBUG && cachep->flags & SLAB_STORE_USER) kpp->kp_ret = *dbg_userword(cachep, objp); diff --git a/mm/slab.h b/mm/slab.h index 1408ada9ff72..9ae9f6c3d1cb 100644 --- a/mm/slab.h +++ b/mm/slab.h @@ -801,7 +801,7 @@ static inline void debugfs_slab_release(struct kmem_cache *s) { } #define KS_ADDRS_COUNT 16 struct kmem_obj_info { void *kp_ptr; - struct page *kp_page; + struct slab *kp_slab; void *kp_objp; unsigned long kp_data_offset; struct kmem_cache *kp_slab_cache; @@ -809,7 +809,7 @@ struct kmem_obj_info { void *kp_stack[KS_ADDRS_COUNT]; void *kp_free_stack[KS_ADDRS_COUNT]; }; -void kmem_obj_info(struct kmem_obj_info *kpp, void *object, struct page *page); +void kmem_obj_info(struct kmem_obj_info *kpp, void *object, struct slab *slab); #endif #endif /* MM_SLAB_H */ diff --git a/mm/slab_common.c b/mm/slab_common.c index e5d080a93009..bddefca7d584 100644 --- a/mm/slab_common.c +++ b/mm/slab_common.c @@ -579,18 +579,18 @@ void kmem_dump_obj(void *object) { char *cp = IS_ENABLED(CONFIG_MMU) ? 
"" : "/vmalloc"; int i; - struct page *page; + struct slab *slab; unsigned long ptroffset; struct kmem_obj_info kp = { }; if (WARN_ON_ONCE(!virt_addr_valid(object))) return; - page = virt_to_head_page(object); - if (WARN_ON_ONCE(!PageSlab(page))) { + slab = virt_to_slab(object); + if (WARN_ON_ONCE(!slab)) { pr_cont(" non-slab memory.\n"); return; } - kmem_obj_info(&kp, object, page); + kmem_obj_info(&kp, object, slab); if (kp.kp_slab_cache) pr_cont(" slab%s %s", cp, kp.kp_slab_cache->name); else diff --git a/mm/slob.c b/mm/slob.c index c8a4290012a6..d2d15e7f191c 100644 --- a/mm/slob.c +++ b/mm/slob.c @@ -462,10 +462,10 @@ static void slob_free(void *block, int size) } #ifdef CONFIG_PRINTK -void kmem_obj_info(struct kmem_obj_info *kpp, void *object, struct page *page) +void kmem_obj_info(struct kmem_obj_info *kpp, void *object, struct slab *slab) { kpp->kp_ptr = object; - kpp->kp_page = page; + kpp->kp_slab = slab; } #endif diff --git a/mm/slub.c b/mm/slub.c index 6ae7c63cedc0..bc8a1fa146a5 100644 --- a/mm/slub.c +++ b/mm/slub.c @@ -4322,31 +4322,31 @@ int __kmem_cache_shutdown(struct kmem_cache *s) } #ifdef CONFIG_PRINTK -void kmem_obj_info(struct kmem_obj_info *kpp, void *object, struct page *page) +void kmem_obj_info(struct kmem_obj_info *kpp, void *object, struct slab *slab) { void *base; int __maybe_unused i; unsigned int objnr; void *objp; void *objp0; - struct kmem_cache *s = page->slab_cache; + struct kmem_cache *s = slab->slab_cache; struct track __maybe_unused *trackp; kpp->kp_ptr = object; - kpp->kp_page = page; + kpp->kp_slab = slab; kpp->kp_slab_cache = s; - base = page_address(page); + base = slab_address(slab); objp0 = kasan_reset_tag(object); #ifdef CONFIG_SLUB_DEBUG objp = restore_red_left(s, objp0); #else objp = objp0; #endif - objnr = obj_to_index(s, page, objp); + objnr = obj_to_index(s, slab_page(slab), objp); kpp->kp_data_offset = (unsigned long)((char *)objp0 - (char *)objp); objp = base + s->size * objnr; kpp->kp_objp = objp; - if (WARN_ON_ONCE(objp < base || objp >= base + page->objects * s->size || (objp - base) % s->size) || + if (WARN_ON_ONCE(objp < base || objp >= base + slab->objects * s->size || (objp - base) % s->size) || !(s->flags & SLAB_STORE_USER)) return; #ifdef CONFIG_SLUB_DEBUG From patchwork Wed Dec 1 18:14:46 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Vlastimil Babka X-Patchwork-Id: 12650707 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by smtp.lore.kernel.org (Postfix) with ESMTP id 38816C433F5 for ; Wed, 1 Dec 2021 18:22:39 +0000 (UTC) Received: by kanga.kvack.org (Postfix) id 8AAF86B008C; Wed, 1 Dec 2021 13:15:30 -0500 (EST) Received: by kanga.kvack.org (Postfix, from userid 40) id 27C8B6B0087; Wed, 1 Dec 2021 13:15:30 -0500 (EST) X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id E623C6B008A; Wed, 1 Dec 2021 13:15:29 -0500 (EST) X-Delivered-To: linux-mm@kvack.org Received: from forelay.hostedemail.com (smtprelay0246.hostedemail.com [216.40.44.246]) by kanga.kvack.org (Postfix) with ESMTP id 788136B008C for ; Wed, 1 Dec 2021 13:15:29 -0500 (EST) Received: from smtpin30.hostedemail.com (10.5.19.251.rfc1918.com [10.5.19.251]) by forelay03.hostedemail.com (Postfix) with ESMTP id 3928B8248D7C for ; Wed, 1 Dec 2021 18:15:19 +0000 (UTC) X-FDA: 78870027558.30.0AF1EEC Received: from 
From patchwork Wed Dec 1 18:14:46 2021
X-Patchwork-Submitter: Vlastimil Babka
X-Patchwork-Id: 12650707
From: Vlastimil Babka
Subject: [PATCH v2 09/33] mm: Convert check_heap_object() to use struct slab
Date: Wed, 1 Dec 2021 19:14:46 +0100
Message-Id: <20211201181510.18784-10-vbabka@suse.cz>
In-Reply-To: <20211201181510.18784-1-vbabka@suse.cz>

From: "Matthew Wilcox (Oracle)"

Ensure that we're not seeing a tail page inside __check_heap_object() by
converting to a slab instead of a page. Take the opportunity to mark the
slab as const since we're not modifying it. Also move the declaration of
__check_heap_object() to mm/slab.h so it's not available to the wider
kernel.

[ vbabka@suse.cz: in check_heap_object() only convert to struct slab for
  actual PageSlab pages; use folio as intermediate step instead of page ]

Signed-off-by: Matthew Wilcox (Oracle)
Signed-off-by: Vlastimil Babka
---
 include/linux/slab.h |  8 --------
 mm/slab.c            | 14 +++++++-------
 mm/slab.h            |  9 +++++++++
 mm/slub.c            | 10 +++++-----
 mm/usercopy.c        | 13 +++++++------
 5 files changed, 28 insertions(+), 26 deletions(-)

diff --git a/include/linux/slab.h b/include/linux/slab.h
index 181045148b06..367366f1d1ff 100644
--- a/include/linux/slab.h
+++ b/include/linux/slab.h
@@ -189,14 +189,6 @@ bool kmem_valid_obj(void *object);
 void kmem_dump_obj(void *object);
 #endif
 
-#ifdef CONFIG_HAVE_HARDENED_USERCOPY_ALLOCATOR
-void __check_heap_object(const void *ptr, unsigned long n, struct page *page,
-			 bool to_user);
-#else
-static inline void __check_heap_object(const void *ptr, unsigned long n,
-				       struct page *page, bool to_user) { }
-#endif
-
 /*
  * Some archs want to perform DMA into kmalloc caches and need a guaranteed
  * alignment larger than the alignment of a 64-bit integer.
diff --git a/mm/slab.c b/mm/slab.c
index 44bc1fcd1393..38fcd3f496df 100644
--- a/mm/slab.c
+++ b/mm/slab.c
@@ -372,8 +372,8 @@ static void **dbg_userword(struct kmem_cache *cachep, void *objp)
 static int slab_max_order = SLAB_MAX_ORDER_LO;
 static bool slab_max_order_set __initdata;
 
-static inline void *index_to_obj(struct kmem_cache *cache, struct page *page,
-				 unsigned int idx)
+static inline void *index_to_obj(struct kmem_cache *cache,
+				 const struct page *page, unsigned int idx)
 {
 	return page->s_mem + cache->size * idx;
 }
@@ -4166,8 +4166,8 @@ ssize_t slabinfo_write(struct file *file, const char __user *buffer,
  * Returns NULL if check passes, otherwise const char * to name of cache
  * to indicate an error.
  */
-void __check_heap_object(const void *ptr, unsigned long n, struct page *page,
-			 bool to_user)
+void __check_heap_object(const void *ptr, unsigned long n,
+			 const struct slab *slab, bool to_user)
 {
 	struct kmem_cache *cachep;
 	unsigned int objnr;
@@ -4176,15 +4176,15 @@ void __check_heap_object(const void *ptr, unsigned long n, struct page *page,
 	ptr = kasan_reset_tag(ptr);
 
 	/* Find and validate object. */
-	cachep = page->slab_cache;
-	objnr = obj_to_index(cachep, page, (void *)ptr);
+	cachep = slab->slab_cache;
+	objnr = obj_to_index(cachep, slab_page(slab), (void *)ptr);
 	BUG_ON(objnr >= cachep->num);
 
 	/* Find offset within object. */
 	if (is_kfence_address(ptr))
 		offset = ptr - kfence_object_start(ptr);
 	else
-		offset = ptr - index_to_obj(cachep, page, objnr) - obj_offset(cachep);
+		offset = ptr - index_to_obj(cachep, slab_page(slab), objnr) - obj_offset(cachep);
 
 	/* Allow address range falling entirely within usercopy region. */
 	if (offset >= cachep->useroffset &&
diff --git a/mm/slab.h b/mm/slab.h
index 9ae9f6c3d1cb..7376c9d8aa2b 100644
--- a/mm/slab.h
+++ b/mm/slab.h
@@ -812,4 +812,13 @@ struct kmem_obj_info {
 void kmem_obj_info(struct kmem_obj_info *kpp, void *object, struct slab *slab);
 #endif
 
+#ifdef CONFIG_HAVE_HARDENED_USERCOPY_ALLOCATOR
+void __check_heap_object(const void *ptr, unsigned long n,
+			 const struct slab *slab, bool to_user);
+#else
+static inline
+void __check_heap_object(const void *ptr, unsigned long n,
+			 const struct slab *slab, bool to_user) { }
+#endif
+
 #endif /* MM_SLAB_H */
diff --git a/mm/slub.c b/mm/slub.c
index bc8a1fa146a5..0d31274743d9 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -4484,8 +4484,8 @@ EXPORT_SYMBOL(__kmalloc_node);
  * Returns NULL if check passes, otherwise const char * to name of cache
  * to indicate an error.
  */
-void __check_heap_object(const void *ptr, unsigned long n, struct page *page,
-			 bool to_user)
+void __check_heap_object(const void *ptr, unsigned long n,
+			 const struct slab *slab, bool to_user)
 {
 	struct kmem_cache *s;
 	unsigned int offset;
@@ -4494,10 +4494,10 @@ void __check_heap_object(const void *ptr, unsigned long n, struct page *page,
 	ptr = kasan_reset_tag(ptr);
 
 	/* Find object and usable object size. */
-	s = page->slab_cache;
+	s = slab->slab_cache;
 
 	/* Reject impossible pointers. */
-	if (ptr < page_address(page))
+	if (ptr < slab_address(slab))
 		usercopy_abort("SLUB object not in SLUB page?!", NULL,
 			       to_user, 0, n);
 
@@ -4505,7 +4505,7 @@ void __check_heap_object(const void *ptr, unsigned long n, struct page *page,
 	if (is_kfence)
 		offset = ptr - kfence_object_start(ptr);
 	else
-		offset = (ptr - page_address(page)) % s->size;
+		offset = (ptr - slab_address(slab)) % s->size;
 
 	/* Adjust for redzone and reject if within the redzone. */
 	if (!is_kfence && kmem_cache_debug_flags(s, SLAB_RED_ZONE)) {
diff --git a/mm/usercopy.c b/mm/usercopy.c
index b3de3c4eefba..d0d268135d96 100644
--- a/mm/usercopy.c
+++ b/mm/usercopy.c
@@ -20,6 +20,7 @@
 #include
 #include
 #include
+#include "slab.h"
 
 /*
  * Checks if a given pointer and length is contained by the current
@@ -223,7 +224,7 @@ static inline void check_page_span(const void *ptr, unsigned long n,
 static inline void check_heap_object(const void *ptr, unsigned long n,
 				     bool to_user)
 {
-	struct page *page;
+	struct folio *folio;
 
 	if (!virt_addr_valid(ptr))
 		return;
@@ -231,16 +232,16 @@ static inline void check_heap_object(const void *ptr, unsigned long n,
 	/*
	 * When CONFIG_HIGHMEM=y, kmap_to_page() will give either the
	 * highmem page or fallback to virt_to_page(). The following
-	 * is effectively a highmem-aware virt_to_head_page().
+	 * is effectively a highmem-aware virt_to_slab().
	 */
-	page = compound_head(kmap_to_page((void *)ptr));
+	folio = page_folio(kmap_to_page((void *)ptr));
 
-	if (PageSlab(page)) {
+	if (folio_test_slab(folio)) {
 		/* Check slab allocator for flags and size. */
-		__check_heap_object(ptr, n, page, to_user);
+		__check_heap_object(ptr, n, folio_slab(folio), to_user);
 	} else {
 		/* Verify object does not incorrectly span multiple pages. */
-		check_page_span(ptr, n, page, to_user);
+		check_page_span(ptr, n, folio_page(folio, 0), to_user);
 	}
 }
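
[ Illustration, not part of the patch: the folio-as-intermediate-step
  pattern used in check_heap_object() above, condensed. The helper name
  classify_pointer() is hypothetical. ]

	/* Classify the backing memory first; only hand a struct slab
	 * (never a raw page) to slab-specific code. */
	static void classify_pointer(const void *ptr, unsigned long n)
	{
		struct folio *folio = virt_to_folio(ptr);

		if (folio_test_slab(folio))
			__check_heap_object(ptr, n, folio_slab(folio), false);
		else
			pr_info("%p is not backed by a slab\n", ptr);
	}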
From patchwork Wed Dec 1 18:14:47 2021
X-Patchwork-Submitter: Vlastimil Babka
X-Patchwork-Id: 12650703
From: Vlastimil Babka
Subject: [PATCH v2 10/33] mm/slub: Convert detached_freelist to use a struct slab
Date: Wed, 1 Dec 2021 19:14:47 +0100
Message-Id: <20211201181510.18784-11-vbabka@suse.cz>
In-Reply-To: <20211201181510.18784-1-vbabka@suse.cz>

From: "Matthew Wilcox (Oracle)"

This gives us a little bit of extra typesafety as we know that nobody
called virt_to_page() instead of virt_to_head_page().

[ vbabka@suse.cz: Use folio as intermediate step when filtering out
  large kmalloc pages ]

Signed-off-by: Matthew Wilcox (Oracle)
Signed-off-by: Vlastimil Babka
---
 mm/slub.c | 31 +++++++++++++++++--------------
 1 file changed, 17 insertions(+), 14 deletions(-)

diff --git a/mm/slub.c b/mm/slub.c
index 0d31274743d9..9bdb0a5d6ab2 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -3532,7 +3532,7 @@ void kmem_cache_free(struct kmem_cache *s, void *x)
 EXPORT_SYMBOL(kmem_cache_free);
 
 struct detached_freelist {
-	struct page *page;
+	struct slab *slab;
 	void *tail;
 	void *freelist;
 	int cnt;
@@ -3554,8 +3554,8 @@ static inline void free_nonslab_page(struct page *page, void *object)
 /*
  * This function progressively scans the array with free objects (with
  * a limited look ahead) and extract objects belonging to the same
- * page.  It builds a detached freelist directly within the given
- * page/objects.  This can happen without any need for
+ * slab.  It builds a detached freelist directly within the given
+ * slab/objects.  This can happen without any need for
  * synchronization, because the objects are owned by running process.
  * The freelist is build up as a single linked list in the objects.
  * The idea is, that this detached freelist can then be bulk
@@ -3570,10 +3570,11 @@ int build_detached_freelist(struct kmem_cache *s, size_t size,
 	size_t first_skipped_index = 0;
 	int lookahead = 3;
 	void *object;
-	struct page *page;
+	struct folio *folio;
+	struct slab *slab;
 
 	/* Always re-init detached_freelist */
-	df->page = NULL;
+	df->slab = NULL;
 
 	do {
 		object = p[--size];
@@ -3583,17 +3584,19 @@ int build_detached_freelist(struct kmem_cache *s, size_t size,
 	if (!object)
 		return 0;
 
-	page = virt_to_head_page(object);
+	folio = virt_to_folio(object);
 	if (!s) {
 		/* Handle kalloc'ed objects */
-		if (unlikely(!PageSlab(page))) {
-			free_nonslab_page(page, object);
+		if (unlikely(!folio_test_slab(folio))) {
+			free_nonslab_page(folio_page(folio, 0), object);
 			p[size] = NULL; /* mark object processed */
 			return size;
 		}
 		/* Derive kmem_cache from object */
-		df->s = page->slab_cache;
+		slab = folio_slab(folio);
+		df->s = slab->slab_cache;
 	} else {
+		slab = folio_slab(folio);
 		df->s = cache_from_obj(s, object); /* Support for memcg */
 	}
@@ -3605,7 +3608,7 @@ int build_detached_freelist(struct kmem_cache *s, size_t size,
 	}
 
 	/* Start new detached freelist */
-	df->page = page;
+	df->slab = slab;
 	set_freepointer(df->s, object, NULL);
 	df->tail = object;
 	df->freelist = object;
@@ -3617,8 +3620,8 @@ int build_detached_freelist(struct kmem_cache *s, size_t size,
 		if (!object)
 			continue; /* Skip processed objects */
 
-		/* df->page is always set at this point */
-		if (df->page == virt_to_head_page(object)) {
+		/* df->slab is always set at this point */
+		if (df->slab == virt_to_slab(object)) {
 			/* Opportunity build freelist */
 			set_freepointer(df->s, object, df->freelist);
 			df->freelist = object;
@@ -3650,10 +3653,10 @@ void kmem_cache_free_bulk(struct kmem_cache *s, size_t size, void **p)
 		struct detached_freelist df;
 
 		size = build_detached_freelist(s, size, p, &df);
-		if (!df.page)
+		if (!df.slab)
 			continue;
 
-		slab_free(df.s, df.page, df.freelist, df.tail, df.cnt, _RET_IP_);
+		slab_free(df.s, slab_page(df.slab), df.freelist, df.tail, df.cnt, _RET_IP_);
 	} while (likely(size));
 }
 EXPORT_SYMBOL(kmem_cache_free_bulk);
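
[ Illustration, not part of the patch: caller-side view of the bulk
  free path that build_detached_freelist() serves; free_batch() is a
  hypothetical wrapper. ]

	/* All objects in the array are freed; those sharing a slab are
	 * chained into one detached freelist and released with a single
	 * slab_free() call per slab. */
	static void free_batch(struct kmem_cache *s, void **objs, size_t nr)
	{
		kmem_cache_free_bulk(s, nr, objs);
	}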
From patchwork Wed Dec 1 18:14:48 2021
X-Patchwork-Submitter: Vlastimil Babka
X-Patchwork-Id: 12650701
From: Vlastimil Babka
Subject: [PATCH v2 11/33] mm/slub: Convert kfree() to use a struct slab
Date: Wed, 1 Dec 2021 19:14:48 +0100
Message-Id: <20211201181510.18784-12-vbabka@suse.cz>
In-Reply-To: <20211201181510.18784-1-vbabka@suse.cz>

From: "Matthew Wilcox (Oracle)"

Convert kfree(), kmem_cache_free() and ___cache_free() to resolve object
addresses to struct slab, using folio as intermediate step where needed.
Keep passing the result as struct page for now in preparation for mass
conversion of internal functions.

[ vbabka@suse.cz: Use folio as intermediate step when checking for
  large kmalloc pages ]

Signed-off-by: Matthew Wilcox (Oracle)
Signed-off-by: Vlastimil Babka
---
 mm/slub.c | 16 +++++++++-------
 1 file changed, 9 insertions(+), 7 deletions(-)

diff --git a/mm/slub.c b/mm/slub.c
index 9bdb0a5d6ab2..2d9c1b3c9f37 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -3517,7 +3517,7 @@ static __always_inline void slab_free(struct kmem_cache *s, struct page *page,
 #ifdef CONFIG_KASAN_GENERIC
 void ___cache_free(struct kmem_cache *cache, void *x, unsigned long addr)
 {
-	do_slab_free(cache, virt_to_head_page(x), x, NULL, 1, addr);
+	do_slab_free(cache, slab_page(virt_to_slab(x)), x, NULL, 1, addr);
 }
 #endif
 
@@ -3527,7 +3527,7 @@ void kmem_cache_free(struct kmem_cache *s, void *x)
 	if (!s)
 		return;
 	trace_kmem_cache_free(_RET_IP_, x, s->name);
-	slab_free(s, virt_to_head_page(x), x, NULL, 1, _RET_IP_);
+	slab_free(s, slab_page(virt_to_slab(x)), x, NULL, 1, _RET_IP_);
 }
 EXPORT_SYMBOL(kmem_cache_free);
 
@@ -4546,7 +4546,8 @@ EXPORT_SYMBOL(__ksize);
 
 void kfree(const void *x)
 {
-	struct page *page;
+	struct folio *folio;
+	struct slab *slab;
 	void *object = (void *)x;
 
 	trace_kfree(_RET_IP_, x);
@@ -4554,12 +4555,13 @@ void kfree(const void *x)
 	if (unlikely(ZERO_OR_NULL_PTR(x)))
 		return;
 
-	page = virt_to_head_page(x);
-	if (unlikely(!PageSlab(page))) {
-		free_nonslab_page(page, object);
+	folio = virt_to_folio(x);
+	if (unlikely(!folio_test_slab(folio))) {
+		free_nonslab_page(folio_page(folio, 0), object);
 		return;
 	}
-	slab_free(page->slab_cache, page, object, NULL, 1, _RET_IP_);
+	slab = folio_slab(folio);
+	slab_free(slab->slab_cache, slab_page(slab), object, NULL, 1, _RET_IP_);
 }
 EXPORT_SYMBOL(kfree);
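
[ Illustration, not part of the patch: the two classes of pointer
  kfree() now distinguishes via the folio test. Whether the second
  allocation is page-allocator-backed depends on the configured
  KMALLOC_MAX_CACHE_SIZE; a typical SLUB config is assumed. ]

	static void kfree_demo(void)
	{
		void *small = kmalloc(64, GFP_KERNEL);        /* slab-backed */
		void *large = kmalloc(64 * 1024, GFP_KERNEL); /* page allocator */

		kfree(small); /* folio_test_slab() true  -> slab_free()         */
		kfree(large); /* folio_test_slab() false -> free_nonslab_page() */
	}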
From patchwork Wed Dec 1 18:14:49 2021
X-Patchwork-Submitter: Vlastimil Babka
X-Patchwork-Id: 12650709
From: Vlastimil Babka
Subject: [PATCH v2 12/33] mm/slub: Convert __slab_lock() and __slab_unlock() to struct slab
Date: Wed, 1 Dec 2021 19:14:49 +0100
Message-Id: <20211201181510.18784-13-vbabka@suse.cz>
In-Reply-To: <20211201181510.18784-1-vbabka@suse.cz>

These functions operate on the PG_locked page flag, but make them accept
struct slab to encapsulate this implementation detail.

Signed-off-by: Vlastimil Babka
---
 mm/slub.c | 18 +++++++++++-------
 1 file changed, 11 insertions(+), 7 deletions(-)

diff --git a/mm/slub.c b/mm/slub.c
index 2d9c1b3c9f37..2cbd2403005f 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -440,14 +440,18 @@ slub_set_cpu_partial(struct kmem_cache *s, unsigned int nr_objects)
 /*
  * Per slab locking using the pagelock
  */
-static __always_inline void __slab_lock(struct page *page)
+static __always_inline void __slab_lock(struct slab *slab)
 {
+	struct page *page = slab_page(slab);
+
 	VM_BUG_ON_PAGE(PageTail(page), page);
 	bit_spin_lock(PG_locked, &page->flags);
 }
 
-static __always_inline void __slab_unlock(struct page *page)
+static __always_inline void __slab_unlock(struct slab *slab)
 {
+	struct page *page = slab_page(slab);
+
 	VM_BUG_ON_PAGE(PageTail(page), page);
 	__bit_spin_unlock(PG_locked, &page->flags);
 }
@@ -456,12 +460,12 @@ static __always_inline void slab_lock(struct page *page, unsigned long *flags)
 {
 	if (IS_ENABLED(CONFIG_PREEMPT_RT))
 		local_irq_save(*flags);
-	__slab_lock(page);
+	__slab_lock(page_slab(page));
 }
 
 static __always_inline void slab_unlock(struct page *page, unsigned long *flags)
 {
-	__slab_unlock(page);
+	__slab_unlock(page_slab(page));
 	if (IS_ENABLED(CONFIG_PREEMPT_RT))
 		local_irq_restore(*flags);
 }
@@ -530,16 +534,16 @@ static inline bool cmpxchg_double_slab(struct kmem_cache *s, struct page *page,
 		unsigned long flags;
 
 		local_irq_save(flags);
-		__slab_lock(page);
+		__slab_lock(page_slab(page));
 		if (page->freelist == freelist_old &&
 		    page->counters == counters_old) {
 			page->freelist = freelist_new;
 			page->counters = counters_new;
-			__slab_unlock(page);
+			__slab_unlock(page_slab(page));
 			local_irq_restore(flags);
 			return true;
 		}
-		__slab_unlock(page);
+		__slab_unlock(page_slab(page));
 		local_irq_restore(flags);
 	}
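
[ Illustration, not part of the patch: what the slab lock amounts to
  after this change. Hypothetical helper, usable only inside mm/slub.c
  where __slab_lock()/__slab_unlock() are defined. ]

	static void with_slab_locked(struct slab *slab)
	{
		/* a bit spinlock on PG_locked of the slab's (head) page,
		 * reached via slab_page() inside the lock helpers */
		__slab_lock(slab);
		/* ... freelist/counters can be updated consistently here ... */
		__slab_unlock(slab);
	}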
From patchwork Wed Dec 1 18:14:50 2021
X-Patchwork-Submitter: Vlastimil Babka
X-Patchwork-Id: 12650699
From: Vlastimil Babka
Subject: [PATCH v2 13/33] mm/slub: Convert print_page_info() to print_slab_info()
Date: Wed, 1 Dec 2021 19:14:50 +0100
Message-Id: <20211201181510.18784-14-vbabka@suse.cz>
In-Reply-To: <20211201181510.18784-1-vbabka@suse.cz>

From: "Matthew Wilcox (Oracle)"

Improve the type safety and prepare for further conversion. For flags
access, convert to folio internally.

[ vbabka@suse.cz: access flags via folio_flags() ]

Signed-off-by: Matthew Wilcox (Oracle)
Signed-off-by: Vlastimil Babka
---
 mm/slub.c | 13 +++++++------
 1 file changed, 7 insertions(+), 6 deletions(-)

diff --git a/mm/slub.c b/mm/slub.c
index 2cbd2403005f..a4f7dafe0610 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -788,12 +788,13 @@ void print_tracking(struct kmem_cache *s, void *object)
 	print_track("Freed", get_track(s, object, TRACK_FREE), pr_time);
 }
 
-static void print_page_info(struct page *page)
+static void print_slab_info(const struct slab *slab)
 {
-	pr_err("Slab 0x%p objects=%u used=%u fp=0x%p flags=%pGp\n",
-	       page, page->objects, page->inuse, page->freelist,
-	       &page->flags);
+	struct folio *folio = (struct folio *)slab_folio(slab);
+	pr_err("Slab 0x%p objects=%u used=%u fp=0x%p flags=%pGp\n",
+	       slab, slab->objects, slab->inuse, slab->freelist,
+	       folio_flags(folio, 0));
 }
 
 static void slab_bug(struct kmem_cache *s, char *fmt, ...)
@@ -833,7 +834,7 @@ static void print_trailer(struct kmem_cache *s, struct page *page, u8 *p)
 
 	print_tracking(s, p);
 
-	print_page_info(page);
+	print_slab_info(page_slab(page));
 
 	pr_err("Object 0x%p @offset=%tu fp=0x%p\n\n",
 	       p, p - addr, get_freepointer(s, p));
@@ -903,7 +904,7 @@ static __printf(3, 4) void slab_err(struct kmem_cache *s, struct page *page,
 	vsnprintf(buf, sizeof(buf), fmt, args);
 	va_end(args);
 	slab_bug(s, "%s", buf);
-	print_page_info(page);
+	print_slab_info(page_slab(page));
 	dump_stack();
 	add_taint(TAINT_BAD_PAGE, LOCKDEP_NOW_UNRELIABLE);
 }
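
[ Illustration, not part of the patch: the cast in print_slab_info()
  above exists because folio_flags() wants a non-const folio while the
  slab argument stays const; dump_slab_flags() is a hypothetical
  standalone equivalent. ]

	static void dump_slab_flags(const struct slab *slab)
	{
		/* cast away const only for the flags accessor */
		struct folio *folio = (struct folio *)slab_folio(slab);

		pr_err("slab %p flags=%pGp\n", slab, folio_flags(folio, 0));
	}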
From patchwork Wed Dec 1 18:14:51 2021
X-Patchwork-Submitter: Vlastimil Babka
X-Patchwork-Id: 12650705
From: Vlastimil Babka
Subject: [PATCH v2 14/33] mm/slub: Convert alloc_slab_page() to return a struct slab
Date: Wed, 1 Dec 2021 19:14:51 +0100
Message-Id: <20211201181510.18784-15-vbabka@suse.cz>
In-Reply-To: <20211201181510.18784-1-vbabka@suse.cz>

Preparatory, callers convert back to struct page for now. Also move
setting page flags to alloc_slab_page() where we still operate on a
struct page. This means the page->slab_cache pointer is now set later
than the PageSlab flag, which could theoretically confuse some pfn
walker assuming PageSlab means there would be a valid cache pointer. But
as the code had no barriers and used __set_bit() anyway, it could have
happened already, so there shouldn't be such a walker.

Signed-off-by: Vlastimil Babka
---
 mm/slub.c | 26 ++++++++++++++++----------
 1 file changed, 16 insertions(+), 10 deletions(-)

diff --git a/mm/slub.c b/mm/slub.c
index a4f7dafe0610..3d82e8e0fc2b 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -1788,18 +1788,27 @@ static void *setup_object(struct kmem_cache *s, struct page *page,
 /*
  * Slab allocation and freeing
  */
-static inline struct page *alloc_slab_page(struct kmem_cache *s,
+static inline struct slab *alloc_slab_page(struct kmem_cache *s,
 		gfp_t flags, int node, struct kmem_cache_order_objects oo)
 {
-	struct page *page;
+	struct folio *folio;
+	struct slab *slab;
 	unsigned int order = oo_order(oo);
 
 	if (node == NUMA_NO_NODE)
-		page = alloc_pages(flags, order);
+		folio = (struct folio *)alloc_pages(flags, order);
 	else
-		page = __alloc_pages_node(node, flags, order);
+		folio = (struct folio *)__alloc_pages_node(node, flags, order);
 
-	return page;
+	if (!folio)
+		return NULL;
+
+	slab = folio_slab(folio);
+	__folio_set_slab(folio);
+	if (page_is_pfmemalloc(folio_page(folio, 0)))
+		slab_set_pfmemalloc(slab);
+
+	return slab;
 }
 
 #ifdef CONFIG_SLAB_FREELIST_RANDOM
@@ -1932,7 +1941,7 @@ static struct page *allocate_slab(struct kmem_cache *s, gfp_t flags, int node)
 	if ((alloc_gfp & __GFP_DIRECT_RECLAIM) && oo_order(oo) > oo_order(s->min))
 		alloc_gfp = (alloc_gfp | __GFP_NOMEMALLOC) & ~(__GFP_RECLAIM|__GFP_NOFAIL);
 
-	page = alloc_slab_page(s, alloc_gfp, node, oo);
+	page = slab_page(alloc_slab_page(s, alloc_gfp, node, oo));
 	if (unlikely(!page)) {
 		oo = s->min;
 		alloc_gfp = flags;
@@ -1940,7 +1949,7 @@ static struct page *allocate_slab(struct kmem_cache *s, gfp_t flags, int node)
 		 * Allocation may have failed due to fragmentation.
 		 * Try a lower order alloc if possible
 		 */
-		page = alloc_slab_page(s, alloc_gfp, node, oo);
+		page = slab_page(alloc_slab_page(s, alloc_gfp, node, oo));
 		if (unlikely(!page))
 			goto out;
 		stat(s, ORDER_FALLBACK);
@@ -1951,9 +1960,6 @@ static struct page *allocate_slab(struct kmem_cache *s, gfp_t flags, int node)
 	account_slab(page_slab(page), oo_order(oo), s, flags);
 
 	page->slab_cache = s;
-	__SetPageSlab(page);
-	if (page_is_pfmemalloc(page))
-		SetPageSlabPfmemalloc(page);
 
 	kasan_poison_slab(page);
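
[ Illustration, not part of the patch: the resulting allocation order,
  condensed from alloc_slab_page()/allocate_slab() above. The wrapper
  name grab_slab() is hypothetical. ]

	static struct slab *grab_slab(struct kmem_cache *s, gfp_t flags, int node)
	{
		struct slab *slab = alloc_slab_page(s, flags, node, s->oo);

		if (!slab)
			return NULL;
		/* PageSlab and pfmemalloc were already set inside
		 * alloc_slab_page(); only the cache pointer is filled
		 * in afterwards by the caller. */
		slab_page(slab)->slab_cache = s;
		return slab;
	}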
935D414050; Wed, 1 Dec 2021 18:15:17 +0000 (UTC) Received: from dovecot-director2.suse.de ([192.168.254.65]) by imap2.suse-dmz.suse.de with ESMTPSA id UIZHI7W7p2HPSAAAMHmgww (envelope-from ); Wed, 01 Dec 2021 18:15:17 +0000 From: Vlastimil Babka To: Matthew Wilcox , Christoph Lameter , David Rientjes , Joonsoo Kim , Pekka Enberg Cc: linux-mm@kvack.org, Andrew Morton , patches@lists.linux.dev, Vlastimil Babka Subject: [PATCH v2 15/33] mm/slub: Convert __free_slab() to use struct slab Date: Wed, 1 Dec 2021 19:14:52 +0100 Message-Id: <20211201181510.18784-16-vbabka@suse.cz> X-Mailer: git-send-email 2.33.1 In-Reply-To: <20211201181510.18784-1-vbabka@suse.cz> References: <20211201181510.18784-1-vbabka@suse.cz> MIME-Version: 1.0 X-Developer-Signature: v=1; a=openpgp-sha256; l=2460; h=from:subject; bh=geJJNEx28YZrRr5Wvloi9v4Zgqxi1u8ijlEq0D6QbzY=; b=owEBbQGS/pANAwAIAeAhynPxiakQAcsmYgBhp7t/teQXdQhfId8DCGJ7rBvvdQonM8sqbgwhAHYm 4c3x8kiJATMEAAEIAB0WIQSNS5MBqTXjGL5IXszgIcpz8YmpEAUCYae7fwAKCRDgIcpz8YmpEKBHB/ 4gbJ4kyIKLr3x+NRnCHh4NvLBA2K9DqB1KHDmZ2OJehAEhMQxCmRDDD9eclrKO0HDT0z01ccb1K2nj M8fVT308mcc2GXkA76VuNYUiDBCjLrwHQFmuyLNOjQwY7b/0d6mC4IgilahAwncqrj49m7lPXA6Azq KIV2JNXXS5AeAGmiomFDDBjXVPEL9tB3R8GiMxEMheaglvBMzdfg+Y+UWS8qZwmck8yzP3389JA9dd J0lPpchJkTla2pOgjVyHnbEsM6KZJIZrv9e9a8SdZj0Aid1ezZ+SOaC4dOJQ8h6AV/Zcy2nMsQSBGT GT6W+QTEF8PFCMDxuF9+2DOr1BjIsH X-Developer-Key: i=vbabka@suse.cz; a=openpgp; fpr=A940D434992C2E8E99103D50224FA7E7CC82A664 X-Rspamd-Server: rspam11 X-Rspamd-Queue-Id: 13CCD7000091 X-Stat-Signature: hshdzj15dw7iithzjtfrigapz7t67h54 Authentication-Results: imf27.hostedemail.com; dkim=pass header.d=suse.cz header.s=susede2_rsa header.b=hde6vCSO; dkim=pass header.d=suse.cz header.s=susede2_ed25519 header.b=HxoWw1nm; spf=pass (imf27.hostedemail.com: domain of vbabka@suse.cz designates 195.135.220.28 as permitted sender) smtp.mailfrom=vbabka@suse.cz; dmarc=none X-HE-Tag: 1638382918-122354 X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: __free_slab() is on the boundary of distinguishing struct slab and struct page so start with struct slab but convert to folio for working with flags and folio_page() to call functions that require struct page. 
Signed-off-by: Vlastimil Babka --- mm/slub.c | 26 +++++++++++++------------- 1 file changed, 13 insertions(+), 13 deletions(-) diff --git a/mm/slub.c b/mm/slub.c index 3d82e8e0fc2b..daa5c5b605d6 100644 --- a/mm/slub.c +++ b/mm/slub.c @@ -2005,35 +2005,35 @@ static struct page *new_slab(struct kmem_cache *s, gfp_t flags, int node) flags & (GFP_RECLAIM_MASK | GFP_CONSTRAINT_MASK), node); } -static void __free_slab(struct kmem_cache *s, struct page *page) +static void __free_slab(struct kmem_cache *s, struct slab *slab) { - int order = compound_order(page); + struct folio *folio = slab_folio(slab); + int order = folio_order(folio); int pages = 1 << order; if (kmem_cache_debug_flags(s, SLAB_CONSISTENCY_CHECKS)) { void *p; - slab_pad_check(s, page); - for_each_object(p, s, page_address(page), - page->objects) - check_object(s, page, p, SLUB_RED_INACTIVE); + slab_pad_check(s, folio_page(folio, 0)); + for_each_object(p, s, slab_address(slab), slab->objects) + check_object(s, folio_page(folio, 0), p, SLUB_RED_INACTIVE); } - __ClearPageSlabPfmemalloc(page); - __ClearPageSlab(page); + __slab_clear_pfmemalloc(slab); + __folio_clear_slab(folio); /* In union with page->mapping where page allocator expects NULL */ - page->slab_cache = NULL; + slab->slab_cache = NULL; if (current->reclaim_state) current->reclaim_state->reclaimed_slab += pages; - unaccount_slab(page_slab(page), order, s); - __free_pages(page, order); + unaccount_slab(slab, order, s); + __free_pages(folio_page(folio, 0), order); } static void rcu_free_slab(struct rcu_head *h) { struct page *page = container_of(h, struct page, rcu_head); - __free_slab(page->slab_cache, page); + __free_slab(page->slab_cache, page_slab(page)); } static void free_slab(struct kmem_cache *s, struct page *page) @@ -2041,7 +2041,7 @@ static void free_slab(struct kmem_cache *s, struct page *page) if (unlikely(s->flags & SLAB_TYPESAFE_BY_RCU)) { call_rcu(&page->rcu_head, rcu_free_slab); } else - __free_slab(s, page); + __free_slab(s, page_slab(page)); } static void discard_slab(struct kmem_cache *s, struct page *page) From patchwork Wed Dec 1 18:14:53 2021
From: Vlastimil Babka To: Matthew Wilcox, Christoph Lameter, David Rientjes, Joonsoo Kim, Pekka Enberg Cc: linux-mm@kvack.org, Andrew Morton, patches@lists.linux.dev, Vlastimil Babka Subject: [PATCH v2 16/33] mm/slub: Convert pfmemalloc_match() to take a struct slab Date: Wed, 1 Dec 2021 19:14:53 +0100 Message-Id: <20211201181510.18784-17-vbabka@suse.cz> In-Reply-To: <20211201181510.18784-1-vbabka@suse.cz>
From: "Matthew Wilcox (Oracle)" Preparatory for mass conversion. Use the new slab_test_pfmemalloc() helper. As it doesn't do VM_BUG_ON(!PageSlab()) we no longer need the pfmemalloc_match_unsafe() variant. Signed-off-by: Matthew Wilcox (Oracle) Signed-off-by: Vlastimil Babka --- mm/slub.c | 25 ++++++------------------- 1 file changed, 6 insertions(+), 19 deletions(-) diff --git a/mm/slub.c b/mm/slub.c index daa5c5b605d6..8224962c81aa 100644 --- a/mm/slub.c +++ b/mm/slub.c @@ -2129,7 +2129,7 @@ static void put_cpu_partial(struct kmem_cache *s, struct page *page, int drain); static inline void put_cpu_partial(struct kmem_cache *s, struct page *page, int drain) { } #endif -static inline bool pfmemalloc_match(struct page *page, gfp_t gfpflags); +static inline bool pfmemalloc_match(struct slab *slab, gfp_t gfpflags); /* * Try to allocate a partial slab from a specific node. @@ -2155,7 +2155,7 @@ static void *get_partial_node(struct kmem_cache *s, struct kmem_cache_node *n, list_for_each_entry_safe(page, page2, &n->partial, slab_list) { void *t; - if (!pfmemalloc_match(page, gfpflags)) + if (!pfmemalloc_match(page_slab(page), gfpflags)) continue; t = acquire_slab(s, n, page, object == NULL); @@ -2833,22 +2833,9 @@ slab_out_of_memory(struct kmem_cache *s, gfp_t gfpflags, int nid) #endif } -static inline bool pfmemalloc_match(struct page *page, gfp_t gfpflags) +static inline bool pfmemalloc_match(struct slab *slab, gfp_t gfpflags) { - if (unlikely(PageSlabPfmemalloc(page))) - return gfp_pfmemalloc_allowed(gfpflags); - - return true; -} - -/* - * A variant of pfmemalloc_match() that tests page flags without asserting - * PageSlab. Intended for opportunistic checks before taking a lock and - * rechecking that nobody else freed the page under us. - */ -static inline bool pfmemalloc_match_unsafe(struct page *page, gfp_t gfpflags) -{ - if (unlikely(__PageSlabPfmemalloc(page))) + if (unlikely(slab_test_pfmemalloc(slab))) return gfp_pfmemalloc_allowed(gfpflags); return true; @@ -2950,7 +2937,7 @@ static void *___slab_alloc(struct kmem_cache *s, gfp_t gfpflags, int node, * PFMEMALLOC but right now, we are losing the pfmemalloc * information when the page leaves the per-cpu allocator */ - if (unlikely(!pfmemalloc_match_unsafe(page, gfpflags))) + if (unlikely(!pfmemalloc_match(page_slab(page), gfpflags))) goto deactivate_slab; /* must check again c->page in case we got preempted and it changed */ @@ -3062,7 +3049,7 @@ static void *___slab_alloc(struct kmem_cache *s, gfp_t gfpflags, int node, } } - if (unlikely(!pfmemalloc_match(page, gfpflags))) + if (unlikely(!pfmemalloc_match(page_slab(page), gfpflags))) /* * For !pfmemalloc_match() case we don't load freelist so that * we don't make further mismatched allocations easier.
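For context, a plausible sketch of the slab_test_pfmemalloc() helper this patch switches to (added earlier in the series; shown simplified here, under the assumption that the pfmemalloc state is carried on a folio flag while the page is a slab):

/* Sketch, not the exact helper: test the flag on the slab's folio. */
static inline bool slab_test_pfmemalloc(const struct slab *slab)
{
	return folio_test_active(slab_folio(slab));
}

Because the helper takes a struct slab, which is a slab by construction, the old VM_BUG_ON(!PageSlab()) assertion has nothing left to check, and the lock-free pfmemalloc_match_unsafe() variant collapses into pfmemalloc_match().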
From patchwork Wed Dec 1 18:14:54 2021 From: Vlastimil Babka To: Matthew Wilcox, Christoph Lameter, David Rientjes, Joonsoo Kim, Pekka Enberg Cc: linux-mm@kvack.org, Andrew Morton, patches@lists.linux.dev, Vlastimil Babka, Julia Lawall, Luis Chamberlain Subject: [PATCH v2 17/33] mm/slub: Convert most struct page to
struct slab by spatch Date: Wed, 1 Dec 2021 19:14:54 +0100 Message-Id: <20211201181510.18784-18-vbabka@suse.cz> In-Reply-To: <20211201181510.18784-1-vbabka@suse.cz> The majority of the conversion from struct page to struct slab in SLUB internals can be delegated to a coccinelle semantic patch. This includes renaming of variables with 'page' in their name to 'slab', and similar. Big thanks to Julia Lawall and Luis Chamberlain for help with coccinelle. // Options: --include-headers --no-includes --smpl-spacing include/linux/slub_def.h mm/slub.c // Note: needs coccinelle 1.1.1 to avoid breaking whitespace, and ocaml for the // embedded script // build list of functions to exclude from applying the next rule @initialize:ocaml@ @@ let ok_function p = not (List.mem (List.hd p).current_element ["nearest_obj";"obj_to_index";"objs_per_slab_page";"__slab_lock";"__slab_unlock";"free_nonslab_page";"kmalloc_large_node"]) // convert the type from struct page to struct slab in all functions except the // list from previous rule // this also affects struct kmem_cache_cpu, but that's ok @@ position p : script:ocaml() { ok_function p }; @@ - struct page@p + struct slab // in struct kmem_cache_cpu, change the name from page to slab // the type was already converted by the previous rule @@ @@ struct kmem_cache_cpu { ... -struct slab *page; +struct slab *slab; ... } // there are many places that use c->page which is now c->slab after the // previous rule @@ struct kmem_cache_cpu *c; @@ -c->page +c->slab @@ @@ struct kmem_cache { ... - unsigned int cpu_partial_pages; + unsigned int cpu_partial_slabs; ... } @@ struct kmem_cache *s; @@ - s->cpu_partial_pages + s->cpu_partial_slabs @@ @@ static void - setup_page_debug( + setup_slab_debug( ...) {...} @@ @@ - setup_page_debug( + setup_slab_debug( ...); // for all functions (with exceptions), change any "struct slab *page" // parameter to "struct slab *slab" in the signature, and generally all // occurrences of "page" to "slab" in the body - with some special cases.
@@ identifier fn !~ "free_nonslab_page|obj_to_index|objs_per_slab_page|nearest_obj"; @@ fn(..., - struct slab *page + struct slab *slab ,...) { <... - page + slab ...> } // similar to previous but the param is called partial_page @@ identifier fn; @@ fn(..., - struct slab *partial_page + struct slab *partial_slab ,...) { <... - partial_page + partial_slab ...> } // similar to previous but for functions that take a pointer to a struct page pointer @@ identifier fn; @@ fn(..., - struct slab **ret_page + struct slab **ret_slab ,...) { <... - ret_page + ret_slab ...> } // functions converted by previous rules that were temporarily called using // slab_page(E) so we want to remove the wrapper now that they accept struct // slab ptr directly @@ identifier fn =~ "slab_free|do_slab_free"; expression E; @@ fn(..., - slab_page(E) + E ,...) // similar to previous but for another pattern @@ identifier fn =~ "slab_pad_check|check_object"; @@ fn(..., - folio_page(folio, 0) + slab ,...) // functions that were returning struct page ptr and now will return struct // slab ptr, including slab_page() wrapper removal @@ identifier fn =~ "allocate_slab|new_slab"; expression E; @@ static -struct slab * +struct slab * fn(...) { <... - slab_page(E) + E ...> } // rename any former struct page * declarations @@ @@ struct slab * ( - page + slab | - partial_page + partial_slab | - oldpage + oldslab ) ; // this has to be separate from previous rule as page and page2 appear at the // same line @@ @@ struct slab * -page2 +slab2 ; // similar but with initial assignment @@ expression E; @@ struct slab * ( - page + slab | - flush_page + flush_slab | - discard_page + slab_to_discard | - page_to_unfreeze + slab_to_unfreeze ) = E; // convert most of struct page to struct slab usage inside functions (with // exceptions), including specific variable renames @@ identifier fn !~ "nearest_obj|obj_to_index|objs_per_slab_page|__slab_(un)*lock|__free_slab|free_nonslab_page|kmalloc_large_node"; expression E; @@ fn(...) { <... ( - int pages; + int slabs; | - int pages = E; + int slabs = E; | - page + slab | - flush_page + flush_slab | - partial_page + partial_slab | - oldpage->pages + oldslab->slabs | - oldpage + oldslab | - unsigned int nr_pages; + unsigned int nr_slabs; | - nr_pages + nr_slabs | - unsigned int partial_pages = E; + unsigned int partial_slabs = E; | - partial_pages + partial_slabs ) ...> } // this has to be split out from the previous rule so that lines containing // multiple matching changes will be fully converted @@ identifier fn !~ "nearest_obj|obj_to_index|objs_per_slab_page|__slab_(un)*lock|__free_slab|free_nonslab_page|kmalloc_large_node"; @@ fn(...) { <... ( - slab->pages + slab->slabs | - pages + slabs | - page2 + slab2 | - discard_page + slab_to_discard | - page_to_unfreeze + slab_to_unfreeze ) ...> } // after we simply changed all occurrences of page to slab, some usages need // adjustment for slab-specific functions, or use slab_page() wrapper @@ identifier fn !~ "nearest_obj|obj_to_index|objs_per_slab_page|__slab_(un)*lock|__free_slab|free_nonslab_page|kmalloc_large_node"; @@ fn(...) { <...
( - page_slab(slab) + slab | - kasan_poison_slab(slab) + kasan_poison_slab(slab_page(slab)) | - page_address(slab) + slab_address(slab) | - page_size(slab) + slab_size(slab) | - PageSlab(slab) + folio_test_slab(slab_folio(slab)) | - page_to_nid(slab) + slab_nid(slab) | - compound_order(slab) + slab_order(slab) ) ...> } Signed-off-by: Vlastimil Babka Cc: Julia Lawall Cc: Luis Chamberlain --- include/linux/slub_def.h | 6 +- mm/slub.c | 872 +++++++++++++++++++-------------------- 2 files changed, 439 insertions(+), 439 deletions(-) diff --git a/include/linux/slub_def.h b/include/linux/slub_def.h index 1ef68d4de9c0..00d99afe1c0e 100644 --- a/include/linux/slub_def.h +++ b/include/linux/slub_def.h @@ -48,9 +48,9 @@ enum stat_item { struct kmem_cache_cpu { void **freelist; /* Pointer to next available object */ unsigned long tid; /* Globally unique transaction id */ - struct page *page; /* The slab from which we are allocating */ + struct slab *slab; /* The slab from which we are allocating */ #ifdef CONFIG_SLUB_CPU_PARTIAL - struct page *partial; /* Partially allocated frozen slabs */ + struct slab *partial; /* Partially allocated frozen slabs */ #endif local_lock_t lock; /* Protects the fields above */ #ifdef CONFIG_SLUB_STATS @@ -100,7 +100,7 @@ struct kmem_cache { /* Number of per cpu partial objects to keep around */ unsigned int cpu_partial; /* Number of per cpu partial pages to keep around */ - unsigned int cpu_partial_pages; + unsigned int cpu_partial_slabs; #endif struct kmem_cache_order_objects oo; diff --git a/mm/slub.c b/mm/slub.c index 8224962c81aa..be46a7151e8c 100644 --- a/mm/slub.c +++ b/mm/slub.c @@ -417,7 +417,7 @@ static inline unsigned int oo_objects(struct kmem_cache_order_objects x) #ifdef CONFIG_SLUB_CPU_PARTIAL static void slub_set_cpu_partial(struct kmem_cache *s, unsigned int nr_objects) { - unsigned int nr_pages; + unsigned int nr_slabs; s->cpu_partial = nr_objects; @@ -427,8 +427,8 @@ static void slub_set_cpu_partial(struct kmem_cache *s, unsigned int nr_objects) * growth of the list. For simplicity we assume that the pages will * be half-full. */ - nr_pages = DIV_ROUND_UP(nr_objects * 2, oo_objects(s->oo)); - s->cpu_partial_pages = nr_pages; + nr_slabs = DIV_ROUND_UP(nr_objects * 2, oo_objects(s->oo)); + s->cpu_partial_slabs = nr_slabs; } #else static inline void @@ -456,16 +456,16 @@ static __always_inline void __slab_unlock(struct slab *slab) __bit_spin_unlock(PG_locked, &page->flags); } -static __always_inline void slab_lock(struct page *page, unsigned long *flags) +static __always_inline void slab_lock(struct slab *slab, unsigned long *flags) { if (IS_ENABLED(CONFIG_PREEMPT_RT)) local_irq_save(*flags); - __slab_lock(page_slab(page)); + __slab_lock(slab); } -static __always_inline void slab_unlock(struct page *page, unsigned long *flags) +static __always_inline void slab_unlock(struct slab *slab, unsigned long *flags) { - __slab_unlock(page_slab(page)); + __slab_unlock(slab); if (IS_ENABLED(CONFIG_PREEMPT_RT)) local_irq_restore(*flags); } @@ -475,7 +475,7 @@ static __always_inline void slab_unlock(struct page *page, unsigned long *flags) * by an _irqsave() lock variant. Except on PREEMPT_RT where locks are different * so we disable interrupts as part of slab_[un]lock(). 
*/ -static inline bool __cmpxchg_double_slab(struct kmem_cache *s, struct page *page, +static inline bool __cmpxchg_double_slab(struct kmem_cache *s, struct slab *slab, void *freelist_old, unsigned long counters_old, void *freelist_new, unsigned long counters_new, const char *n) @@ -485,7 +485,7 @@ static inline bool __cmpxchg_double_slab(struct kmem_cache *s, struct page *page #if defined(CONFIG_HAVE_CMPXCHG_DOUBLE) && \ defined(CONFIG_HAVE_ALIGNED_STRUCT_PAGE) if (s->flags & __CMPXCHG_DOUBLE) { - if (cmpxchg_double(&page->freelist, &page->counters, + if (cmpxchg_double(&slab->freelist, &slab->counters, freelist_old, counters_old, freelist_new, counters_new)) return true; @@ -495,15 +495,15 @@ static inline bool __cmpxchg_double_slab(struct kmem_cache *s, struct page *page /* init to 0 to prevent spurious warnings */ unsigned long flags = 0; - slab_lock(page, &flags); - if (page->freelist == freelist_old && - page->counters == counters_old) { - page->freelist = freelist_new; - page->counters = counters_new; - slab_unlock(page, &flags); + slab_lock(slab, &flags); + if (slab->freelist == freelist_old && + slab->counters == counters_old) { + slab->freelist = freelist_new; + slab->counters = counters_new; + slab_unlock(slab, &flags); return true; } - slab_unlock(page, &flags); + slab_unlock(slab, &flags); } cpu_relax(); @@ -516,7 +516,7 @@ static inline bool __cmpxchg_double_slab(struct kmem_cache *s, struct page *page return false; } -static inline bool cmpxchg_double_slab(struct kmem_cache *s, struct page *page, +static inline bool cmpxchg_double_slab(struct kmem_cache *s, struct slab *slab, void *freelist_old, unsigned long counters_old, void *freelist_new, unsigned long counters_new, const char *n) @@ -524,7 +524,7 @@ static inline bool cmpxchg_double_slab(struct kmem_cache *s, struct page *page, #if defined(CONFIG_HAVE_CMPXCHG_DOUBLE) && \ defined(CONFIG_HAVE_ALIGNED_STRUCT_PAGE) if (s->flags & __CMPXCHG_DOUBLE) { - if (cmpxchg_double(&page->freelist, &page->counters, + if (cmpxchg_double(&slab->freelist, &slab->counters, freelist_old, counters_old, freelist_new, counters_new)) return true; @@ -534,16 +534,16 @@ static inline bool cmpxchg_double_slab(struct kmem_cache *s, struct page *page, unsigned long flags; local_irq_save(flags); - __slab_lock(page_slab(page)); - if (page->freelist == freelist_old && - page->counters == counters_old) { - page->freelist = freelist_new; - page->counters = counters_new; - __slab_unlock(page_slab(page)); + __slab_lock(slab); + if (slab->freelist == freelist_old && + slab->counters == counters_old) { + slab->freelist = freelist_new; + slab->counters = counters_new; + __slab_unlock(slab); local_irq_restore(flags); return true; } - __slab_unlock(page_slab(page)); + __slab_unlock(slab); local_irq_restore(flags); } @@ -562,14 +562,14 @@ static unsigned long object_map[BITS_TO_LONGS(MAX_OBJS_PER_PAGE)]; static DEFINE_RAW_SPINLOCK(object_map_lock); static void __fill_map(unsigned long *obj_map, struct kmem_cache *s, - struct page *page) + struct slab *slab) { - void *addr = page_address(page); + void *addr = slab_address(slab); void *p; - bitmap_zero(obj_map, page->objects); + bitmap_zero(obj_map, slab->objects); - for (p = page->freelist; p; p = get_freepointer(s, p)) + for (p = slab->freelist; p; p = get_freepointer(s, p)) set_bit(__obj_to_index(s, addr, p), obj_map); } @@ -599,14 +599,14 @@ static inline bool slab_add_kunit_errors(void) { return false; } * Node listlock must be held to guarantee that the page does * not vanish from under us. 
*/ -static unsigned long *get_map(struct kmem_cache *s, struct page *page) +static unsigned long *get_map(struct kmem_cache *s, struct slab *slab) __acquires(&object_map_lock) { VM_BUG_ON(!irqs_disabled()); raw_spin_lock(&object_map_lock); - __fill_map(object_map, s, page); + __fill_map(object_map, s, slab); return object_map; } @@ -667,17 +667,17 @@ static inline void metadata_access_disable(void) /* Verify that a pointer has an address that is valid within a slab page */ static inline int check_valid_pointer(struct kmem_cache *s, - struct page *page, void *object) + struct slab *slab, void *object) { void *base; if (!object) return 1; - base = page_address(page); + base = slab_address(slab); object = kasan_reset_tag(object); object = restore_red_left(s, object); - if (object < base || object >= base + page->objects * s->size || + if (object < base || object >= base + slab->objects * s->size || (object - base) % s->size) { return 0; } @@ -827,14 +827,14 @@ static void slab_fix(struct kmem_cache *s, char *fmt, ...) va_end(args); } -static void print_trailer(struct kmem_cache *s, struct page *page, u8 *p) +static void print_trailer(struct kmem_cache *s, struct slab *slab, u8 *p) { unsigned int off; /* Offset of last byte */ - u8 *addr = page_address(page); + u8 *addr = slab_address(slab); print_tracking(s, p); - print_slab_info(page_slab(page)); + print_slab_info(slab); pr_err("Object 0x%p @offset=%tu fp=0x%p\n\n", p, p - addr, get_freepointer(s, p)); @@ -866,23 +866,23 @@ static void print_trailer(struct kmem_cache *s, struct page *page, u8 *p) dump_stack(); } -static void object_err(struct kmem_cache *s, struct page *page, +static void object_err(struct kmem_cache *s, struct slab *slab, u8 *object, char *reason) { if (slab_add_kunit_errors()) return; slab_bug(s, "%s", reason); - print_trailer(s, page, object); + print_trailer(s, slab, object); add_taint(TAINT_BAD_PAGE, LOCKDEP_NOW_UNRELIABLE); } -static bool freelist_corrupted(struct kmem_cache *s, struct page *page, +static bool freelist_corrupted(struct kmem_cache *s, struct slab *slab, void **freelist, void *nextfree) { if ((s->flags & SLAB_CONSISTENCY_CHECKS) && - !check_valid_pointer(s, page, nextfree) && freelist) { - object_err(s, page, *freelist, "Freechain corrupt"); + !check_valid_pointer(s, slab, nextfree) && freelist) { + object_err(s, slab, *freelist, "Freechain corrupt"); *freelist = NULL; slab_fix(s, "Isolate corrupted freechain"); return true; @@ -891,7 +891,7 @@ static bool freelist_corrupted(struct kmem_cache *s, struct page *page, return false; } -static __printf(3, 4) void slab_err(struct kmem_cache *s, struct page *page, +static __printf(3, 4) void slab_err(struct kmem_cache *s, struct slab *slab, const char *fmt, ...) 
{ va_list args; @@ -904,7 +904,7 @@ static __printf(3, 4) void slab_err(struct kmem_cache *s, struct page *page, vsnprintf(buf, sizeof(buf), fmt, args); va_end(args); slab_bug(s, "%s", buf); - print_slab_info(page_slab(page)); + print_slab_info(slab); dump_stack(); add_taint(TAINT_BAD_PAGE, LOCKDEP_NOW_UNRELIABLE); } @@ -932,13 +932,13 @@ static void restore_bytes(struct kmem_cache *s, char *message, u8 data, memset(from, data, to - from); } -static int check_bytes_and_report(struct kmem_cache *s, struct page *page, +static int check_bytes_and_report(struct kmem_cache *s, struct slab *slab, u8 *object, char *what, u8 *start, unsigned int value, unsigned int bytes) { u8 *fault; u8 *end; - u8 *addr = page_address(page); + u8 *addr = slab_address(slab); metadata_access_enable(); fault = memchr_inv(kasan_reset_tag(start), value, bytes); @@ -957,7 +957,7 @@ static int check_bytes_and_report(struct kmem_cache *s, struct page *page, pr_err("0x%p-0x%p @offset=%tu. First byte 0x%x instead of 0x%x\n", fault, end - 1, fault - addr, fault[0], value); - print_trailer(s, page, object); + print_trailer(s, slab, object); add_taint(TAINT_BAD_PAGE, LOCKDEP_NOW_UNRELIABLE); skip_bug_print: @@ -1003,7 +1003,7 @@ static int check_bytes_and_report(struct kmem_cache *s, struct page *page, * may be used with merged slabcaches. */ -static int check_pad_bytes(struct kmem_cache *s, struct page *page, u8 *p) +static int check_pad_bytes(struct kmem_cache *s, struct slab *slab, u8 *p) { unsigned long off = get_info_end(s); /* The end of info */ @@ -1016,12 +1016,12 @@ static int check_pad_bytes(struct kmem_cache *s, struct page *page, u8 *p) if (size_from_object(s) == off) return 1; - return check_bytes_and_report(s, page, p, "Object padding", + return check_bytes_and_report(s, slab, p, "Object padding", p + off, POISON_INUSE, size_from_object(s) - off); } /* Check the pad bytes at the end of a slab page */ -static int slab_pad_check(struct kmem_cache *s, struct page *page) +static int slab_pad_check(struct kmem_cache *s, struct slab *slab) { u8 *start; u8 *fault; @@ -1033,8 +1033,8 @@ static int slab_pad_check(struct kmem_cache *s, struct page *page) if (!(s->flags & SLAB_POISON)) return 1; - start = page_address(page); - length = page_size(page); + start = slab_address(slab); + length = slab_size(slab); end = start + length; remainder = length % s->size; if (!remainder) @@ -1049,7 +1049,7 @@ static int slab_pad_check(struct kmem_cache *s, struct page *page) while (end > fault && end[-1] == POISON_INUSE) end--; - slab_err(s, page, "Padding overwritten. 0x%p-0x%p @offset=%tu", + slab_err(s, slab, "Padding overwritten. 
0x%p-0x%p @offset=%tu", fault, end - 1, fault - start); print_section(KERN_ERR, "Padding ", pad, remainder); @@ -1057,23 +1057,23 @@ static int slab_pad_check(struct kmem_cache *s, struct page *page) return 0; } -static int check_object(struct kmem_cache *s, struct page *page, +static int check_object(struct kmem_cache *s, struct slab *slab, void *object, u8 val) { u8 *p = object; u8 *endobject = object + s->object_size; if (s->flags & SLAB_RED_ZONE) { - if (!check_bytes_and_report(s, page, object, "Left Redzone", + if (!check_bytes_and_report(s, slab, object, "Left Redzone", object - s->red_left_pad, val, s->red_left_pad)) return 0; - if (!check_bytes_and_report(s, page, object, "Right Redzone", + if (!check_bytes_and_report(s, slab, object, "Right Redzone", endobject, val, s->inuse - s->object_size)) return 0; } else { if ((s->flags & SLAB_POISON) && s->object_size < s->inuse) { - check_bytes_and_report(s, page, p, "Alignment padding", + check_bytes_and_report(s, slab, p, "Alignment padding", endobject, POISON_INUSE, s->inuse - s->object_size); } @@ -1081,15 +1081,15 @@ static int check_object(struct kmem_cache *s, struct page *page, if (s->flags & SLAB_POISON) { if (val != SLUB_RED_ACTIVE && (s->flags & __OBJECT_POISON) && - (!check_bytes_and_report(s, page, p, "Poison", p, + (!check_bytes_and_report(s, slab, p, "Poison", p, POISON_FREE, s->object_size - 1) || - !check_bytes_and_report(s, page, p, "End Poison", + !check_bytes_and_report(s, slab, p, "End Poison", p + s->object_size - 1, POISON_END, 1))) return 0; /* * check_pad_bytes cleans up on its own. */ - check_pad_bytes(s, page, p); + check_pad_bytes(s, slab, p); } if (!freeptr_outside_object(s) && val == SLUB_RED_ACTIVE) @@ -1100,8 +1100,8 @@ static int check_object(struct kmem_cache *s, struct page *page, return 1; /* Check free pointer validity */ - if (!check_valid_pointer(s, page, get_freepointer(s, p))) { - object_err(s, page, p, "Freepointer corrupt"); + if (!check_valid_pointer(s, slab, get_freepointer(s, p))) { + object_err(s, slab, p, "Freepointer corrupt"); /* * No choice but to zap it and thus lose the remainder * of the free objects in this slab. May cause @@ -1113,28 +1113,28 @@ static int check_object(struct kmem_cache *s, struct page *page, return 1; } -static int check_slab(struct kmem_cache *s, struct page *page) +static int check_slab(struct kmem_cache *s, struct slab *slab) { int maxobj; - if (!PageSlab(page)) { - slab_err(s, page, "Not a valid slab page"); + if (!folio_test_slab(slab_folio(slab))) { + slab_err(s, slab, "Not a valid slab page"); return 0; } - maxobj = order_objects(compound_order(page), s->size); - if (page->objects > maxobj) { - slab_err(s, page, "objects %u > max %u", - page->objects, maxobj); + maxobj = order_objects(slab_order(slab), s->size); + if (slab->objects > maxobj) { + slab_err(s, slab, "objects %u > max %u", + slab->objects, maxobj); return 0; } - if (page->inuse > page->objects) { - slab_err(s, page, "inuse %u > max %u", - page->inuse, page->objects); + if (slab->inuse > slab->objects) { + slab_err(s, slab, "inuse %u > max %u", + slab->inuse, slab->objects); return 0; } /* Slab_pad_check fixes things up after itself */ - slab_pad_check(s, page); + slab_pad_check(s, slab); return 1; } @@ -1142,26 +1142,26 @@ static int check_slab(struct kmem_cache *s, struct page *page) * Determine if a certain object on a page is on the freelist. Must hold the * slab lock to guarantee that the chains are in a consistent state. 
*/ -static int on_freelist(struct kmem_cache *s, struct page *page, void *search) +static int on_freelist(struct kmem_cache *s, struct slab *slab, void *search) { int nr = 0; void *fp; void *object = NULL; int max_objects; - fp = page->freelist; - while (fp && nr <= page->objects) { + fp = slab->freelist; + while (fp && nr <= slab->objects) { if (fp == search) return 1; - if (!check_valid_pointer(s, page, fp)) { + if (!check_valid_pointer(s, slab, fp)) { if (object) { - object_err(s, page, object, + object_err(s, slab, object, "Freechain corrupt"); set_freepointer(s, object, NULL); } else { - slab_err(s, page, "Freepointer corrupt"); - page->freelist = NULL; - page->inuse = page->objects; + slab_err(s, slab, "Freepointer corrupt"); + slab->freelist = NULL; + slab->inuse = slab->objects; slab_fix(s, "Freelist cleared"); return 0; } @@ -1172,34 +1172,34 @@ static int on_freelist(struct kmem_cache *s, struct page *page, void *search) nr++; } - max_objects = order_objects(compound_order(page), s->size); + max_objects = order_objects(slab_order(slab), s->size); if (max_objects > MAX_OBJS_PER_PAGE) max_objects = MAX_OBJS_PER_PAGE; - if (page->objects != max_objects) { - slab_err(s, page, "Wrong number of objects. Found %d but should be %d", - page->objects, max_objects); - page->objects = max_objects; + if (slab->objects != max_objects) { + slab_err(s, slab, "Wrong number of objects. Found %d but should be %d", + slab->objects, max_objects); + slab->objects = max_objects; slab_fix(s, "Number of objects adjusted"); } - if (page->inuse != page->objects - nr) { - slab_err(s, page, "Wrong object count. Counter is %d but counted were %d", - page->inuse, page->objects - nr); - page->inuse = page->objects - nr; + if (slab->inuse != slab->objects - nr) { + slab_err(s, slab, "Wrong object count. Counter is %d but counted were %d", + slab->inuse, slab->objects - nr); + slab->inuse = slab->objects - nr; slab_fix(s, "Object count adjusted"); } return search == NULL; } -static void trace(struct kmem_cache *s, struct page *page, void *object, +static void trace(struct kmem_cache *s, struct slab *slab, void *object, int alloc) { if (s->flags & SLAB_TRACE) { pr_info("TRACE %s %s 0x%p inuse=%d fp=0x%p\n", s->name, alloc ? "alloc" : "free", - object, page->inuse, - page->freelist); + object, slab->inuse, + slab->freelist); if (!alloc) print_section(KERN_INFO, "Object ", (void *)object, @@ -1213,22 +1213,22 @@ static void trace(struct kmem_cache *s, struct page *page, void *object, * Tracking of fully allocated slabs for debugging purposes. 
*/ static void add_full(struct kmem_cache *s, - struct kmem_cache_node *n, struct page *page) + struct kmem_cache_node *n, struct slab *slab) { if (!(s->flags & SLAB_STORE_USER)) return; lockdep_assert_held(&n->list_lock); - list_add(&page->slab_list, &n->full); + list_add(&slab->slab_list, &n->full); } -static void remove_full(struct kmem_cache *s, struct kmem_cache_node *n, struct page *page) +static void remove_full(struct kmem_cache *s, struct kmem_cache_node *n, struct slab *slab) { if (!(s->flags & SLAB_STORE_USER)) return; lockdep_assert_held(&n->list_lock); - list_del(&page->slab_list); + list_del(&slab->slab_list); } /* Tracking of the number of slabs for debugging purposes */ @@ -1268,7 +1268,7 @@ static inline void dec_slabs_node(struct kmem_cache *s, int node, int objects) } /* Object debug checks for alloc/free paths */ -static void setup_object_debug(struct kmem_cache *s, struct page *page, +static void setup_object_debug(struct kmem_cache *s, struct slab *slab, void *object) { if (!kmem_cache_debug_flags(s, SLAB_STORE_USER|SLAB_RED_ZONE|__OBJECT_POISON)) @@ -1279,89 +1279,89 @@ static void setup_object_debug(struct kmem_cache *s, struct page *page, } static -void setup_page_debug(struct kmem_cache *s, struct page *page, void *addr) +void setup_slab_debug(struct kmem_cache *s, struct slab *slab, void *addr) { if (!kmem_cache_debug_flags(s, SLAB_POISON)) return; metadata_access_enable(); - memset(kasan_reset_tag(addr), POISON_INUSE, page_size(page)); + memset(kasan_reset_tag(addr), POISON_INUSE, slab_size(slab)); metadata_access_disable(); } static inline int alloc_consistency_checks(struct kmem_cache *s, - struct page *page, void *object) + struct slab *slab, void *object) { - if (!check_slab(s, page)) + if (!check_slab(s, slab)) return 0; - if (!check_valid_pointer(s, page, object)) { - object_err(s, page, object, "Freelist Pointer check fails"); + if (!check_valid_pointer(s, slab, object)) { + object_err(s, slab, object, "Freelist Pointer check fails"); return 0; } - if (!check_object(s, page, object, SLUB_RED_INACTIVE)) + if (!check_object(s, slab, object, SLUB_RED_INACTIVE)) return 0; return 1; } static noinline int alloc_debug_processing(struct kmem_cache *s, - struct page *page, + struct slab *slab, void *object, unsigned long addr) { if (s->flags & SLAB_CONSISTENCY_CHECKS) { - if (!alloc_consistency_checks(s, page, object)) + if (!alloc_consistency_checks(s, slab, object)) goto bad; } /* Success perform special debug activities for allocs */ if (s->flags & SLAB_STORE_USER) set_track(s, object, TRACK_ALLOC, addr); - trace(s, page, object, 1); + trace(s, slab, object, 1); init_object(s, object, SLUB_RED_ACTIVE); return 1; bad: - if (PageSlab(page)) { + if (folio_test_slab(slab_folio(slab))) { /* * If this is a slab page then lets do the best we can * to avoid issues in the future. Marking all objects * as used avoids touching the remaining objects. 
*/ slab_fix(s, "Marking all objects used"); - page->inuse = page->objects; - page->freelist = NULL; + slab->inuse = slab->objects; + slab->freelist = NULL; } return 0; } static inline int free_consistency_checks(struct kmem_cache *s, - struct page *page, void *object, unsigned long addr) + struct slab *slab, void *object, unsigned long addr) { - if (!check_valid_pointer(s, page, object)) { - slab_err(s, page, "Invalid object pointer 0x%p", object); + if (!check_valid_pointer(s, slab, object)) { + slab_err(s, slab, "Invalid object pointer 0x%p", object); return 0; } - if (on_freelist(s, page, object)) { - object_err(s, page, object, "Object already free"); + if (on_freelist(s, slab, object)) { + object_err(s, slab, object, "Object already free"); return 0; } - if (!check_object(s, page, object, SLUB_RED_ACTIVE)) + if (!check_object(s, slab, object, SLUB_RED_ACTIVE)) return 0; - if (unlikely(s != page->slab_cache)) { - if (!PageSlab(page)) { - slab_err(s, page, "Attempt to free object(0x%p) outside of slab", + if (unlikely(s != slab->slab_cache)) { + if (!folio_test_slab(slab_folio(slab))) { + slab_err(s, slab, "Attempt to free object(0x%p) outside of slab", object); - } else if (!page->slab_cache) { + } else if (!slab->slab_cache) { pr_err("SLUB : no slab for object 0x%p.\n", object); dump_stack(); } else - object_err(s, page, object, + object_err(s, slab, object, "page slab pointer corrupt."); return 0; } @@ -1370,21 +1370,21 @@ static inline int free_consistency_checks(struct kmem_cache *s, /* Supports checking bulk free of a constructed freelist */ static noinline int free_debug_processing( - struct kmem_cache *s, struct page *page, + struct kmem_cache *s, struct slab *slab, void *head, void *tail, int bulk_cnt, unsigned long addr) { - struct kmem_cache_node *n = get_node(s, page_to_nid(page)); + struct kmem_cache_node *n = get_node(s, slab_nid(slab)); void *object = head; int cnt = 0; unsigned long flags, flags2; int ret = 0; spin_lock_irqsave(&n->list_lock, flags); - slab_lock(page, &flags2); + slab_lock(slab, &flags2); if (s->flags & SLAB_CONSISTENCY_CHECKS) { - if (!check_slab(s, page)) + if (!check_slab(s, slab)) goto out; } @@ -1392,13 +1392,13 @@ static noinline int free_debug_processing( cnt++; if (s->flags & SLAB_CONSISTENCY_CHECKS) { - if (!free_consistency_checks(s, page, object, addr)) + if (!free_consistency_checks(s, slab, object, addr)) goto out; } if (s->flags & SLAB_STORE_USER) set_track(s, object, TRACK_FREE, addr); - trace(s, page, object, 0); + trace(s, slab, object, 0); /* Freepointer not overwritten by init_object(), SLAB_POISON moved it */ init_object(s, object, SLUB_RED_INACTIVE); @@ -1411,10 +1411,10 @@ static noinline int free_debug_processing( out: if (cnt != bulk_cnt) - slab_err(s, page, "Bulk freelist count(%d) invalid(%d)\n", + slab_err(s, slab, "Bulk freelist count(%d) invalid(%d)\n", bulk_cnt, cnt); - slab_unlock(page, &flags2); + slab_unlock(slab, &flags2); spin_unlock_irqrestore(&n->list_lock, flags); if (!ret) slab_fix(s, "Object at 0x%p not freed", object); @@ -1629,26 +1629,26 @@ slab_flags_t kmem_cache_flags(unsigned int object_size, } #else /* !CONFIG_SLUB_DEBUG */ static inline void setup_object_debug(struct kmem_cache *s, - struct page *page, void *object) {} + struct slab *slab, void *object) {} static inline -void setup_page_debug(struct kmem_cache *s, struct page *page, void *addr) {} +void setup_slab_debug(struct kmem_cache *s, struct slab *slab, void *addr) {} static inline int alloc_debug_processing(struct kmem_cache *s, - struct page *page, 
void *object, unsigned long addr) { return 0; } + struct slab *slab, void *object, unsigned long addr) { return 0; } static inline int free_debug_processing( - struct kmem_cache *s, struct page *page, + struct kmem_cache *s, struct slab *slab, void *head, void *tail, int bulk_cnt, unsigned long addr) { return 0; } -static inline int slab_pad_check(struct kmem_cache *s, struct page *page) +static inline int slab_pad_check(struct kmem_cache *s, struct slab *slab) { return 1; } -static inline int check_object(struct kmem_cache *s, struct page *page, +static inline int check_object(struct kmem_cache *s, struct slab *slab, void *object, u8 val) { return 1; } static inline void add_full(struct kmem_cache *s, struct kmem_cache_node *n, - struct page *page) {} + struct slab *slab) {} static inline void remove_full(struct kmem_cache *s, struct kmem_cache_node *n, - struct page *page) {} + struct slab *slab) {} slab_flags_t kmem_cache_flags(unsigned int object_size, slab_flags_t flags, const char *name) { @@ -1667,7 +1667,7 @@ static inline void inc_slabs_node(struct kmem_cache *s, int node, static inline void dec_slabs_node(struct kmem_cache *s, int node, int objects) {} -static bool freelist_corrupted(struct kmem_cache *s, struct page *page, +static bool freelist_corrupted(struct kmem_cache *s, struct slab *slab, void **freelist, void *nextfree) { return false; @@ -1772,10 +1772,10 @@ static inline bool slab_free_freelist_hook(struct kmem_cache *s, return *head != NULL; } -static void *setup_object(struct kmem_cache *s, struct page *page, +static void *setup_object(struct kmem_cache *s, struct slab *slab, void *object) { - setup_object_debug(s, page, object); + setup_object_debug(s, slab, object); object = kasan_init_slab_obj(s, object); if (unlikely(s->ctor)) { kasan_unpoison_object_data(s, object); @@ -1853,7 +1853,7 @@ static void __init init_freelist_randomization(void) } /* Get the next entry on the pre-computed freelist randomized */ -static void *next_freelist_entry(struct kmem_cache *s, struct page *page, +static void *next_freelist_entry(struct kmem_cache *s, struct slab *slab, unsigned long *pos, void *start, unsigned long page_limit, unsigned long freelist_count) @@ -1875,32 +1875,32 @@ static void *next_freelist_entry(struct kmem_cache *s, struct page *page, } /* Shuffle the single linked freelist based on a random pre-computed sequence */ -static bool shuffle_freelist(struct kmem_cache *s, struct page *page) +static bool shuffle_freelist(struct kmem_cache *s, struct slab *slab) { void *start; void *cur; void *next; unsigned long idx, pos, page_limit, freelist_count; - if (page->objects < 2 || !s->random_seq) + if (slab->objects < 2 || !s->random_seq) return false; freelist_count = oo_objects(s->oo); pos = get_random_int() % freelist_count; - page_limit = page->objects * s->size; - start = fixup_red_left(s, page_address(page)); + page_limit = slab->objects * s->size; + start = fixup_red_left(s, slab_address(slab)); /* First entry is used as the base of the freelist */ - cur = next_freelist_entry(s, page, &pos, start, page_limit, + cur = next_freelist_entry(s, slab, &pos, start, page_limit, freelist_count); - cur = setup_object(s, page, cur); - page->freelist = cur; + cur = setup_object(s, slab, cur); + slab->freelist = cur; - for (idx = 1; idx < page->objects; idx++) { - next = next_freelist_entry(s, page, &pos, start, page_limit, + for (idx = 1; idx < slab->objects; idx++) { + next = next_freelist_entry(s, slab, &pos, start, page_limit, freelist_count); - next = setup_object(s, page, 
next); + next = setup_object(s, slab, next); set_freepointer(s, cur, next); cur = next; } @@ -1914,15 +1914,15 @@ static inline int init_cache_random_seq(struct kmem_cache *s) return 0; } static inline void init_freelist_randomization(void) { } -static inline bool shuffle_freelist(struct kmem_cache *s, struct page *page) +static inline bool shuffle_freelist(struct kmem_cache *s, struct slab *slab) { return false; } #endif /* CONFIG_SLAB_FREELIST_RANDOM */ -static struct page *allocate_slab(struct kmem_cache *s, gfp_t flags, int node) +static struct slab *allocate_slab(struct kmem_cache *s, gfp_t flags, int node) { - struct page *page; + struct slab *slab; struct kmem_cache_order_objects oo = s->oo; gfp_t alloc_gfp; void *start, *p, *next; @@ -1941,60 +1941,60 @@ static struct page *allocate_slab(struct kmem_cache *s, gfp_t flags, int node) if ((alloc_gfp & __GFP_DIRECT_RECLAIM) && oo_order(oo) > oo_order(s->min)) alloc_gfp = (alloc_gfp | __GFP_NOMEMALLOC) & ~(__GFP_RECLAIM|__GFP_NOFAIL); - page = slab_page(alloc_slab_page(s, alloc_gfp, node, oo)); - if (unlikely(!page)) { + slab = alloc_slab_page(s, alloc_gfp, node, oo); + if (unlikely(!slab)) { oo = s->min; alloc_gfp = flags; /* * Allocation may have failed due to fragmentation. * Try a lower order alloc if possible */ - page = slab_page(alloc_slab_page(s, alloc_gfp, node, oo)); - if (unlikely(!page)) + slab = alloc_slab_page(s, alloc_gfp, node, oo); + if (unlikely(!slab)) goto out; stat(s, ORDER_FALLBACK); } - page->objects = oo_objects(oo); + slab->objects = oo_objects(oo); - account_slab(page_slab(page), oo_order(oo), s, flags); + account_slab(slab, oo_order(oo), s, flags); - page->slab_cache = s; + slab->slab_cache = s; - kasan_poison_slab(page); + kasan_poison_slab(slab_page(slab)); - start = page_address(page); + start = slab_address(slab); - setup_page_debug(s, page, start); + setup_slab_debug(s, slab, start); - shuffle = shuffle_freelist(s, page); + shuffle = shuffle_freelist(s, slab); if (!shuffle) { start = fixup_red_left(s, start); - start = setup_object(s, page, start); - page->freelist = start; - for (idx = 0, p = start; idx < page->objects - 1; idx++) { + start = setup_object(s, slab, start); + slab->freelist = start; + for (idx = 0, p = start; idx < slab->objects - 1; idx++) { next = p + s->size; - next = setup_object(s, page, next); + next = setup_object(s, slab, next); set_freepointer(s, p, next); p = next; } set_freepointer(s, p, NULL); } - page->inuse = page->objects; - page->frozen = 1; + slab->inuse = slab->objects; + slab->frozen = 1; out: - if (!page) + if (!slab) return NULL; - inc_slabs_node(s, page_to_nid(page), page->objects); + inc_slabs_node(s, slab_nid(slab), slab->objects); - return page; + return slab; } -static struct page *new_slab(struct kmem_cache *s, gfp_t flags, int node) +static struct slab *new_slab(struct kmem_cache *s, gfp_t flags, int node) { if (unlikely(flags & GFP_SLAB_BUG_MASK)) flags = kmalloc_fix_flags(flags); @@ -2014,9 +2014,9 @@ static void __free_slab(struct kmem_cache *s, struct slab *slab) if (kmem_cache_debug_flags(s, SLAB_CONSISTENCY_CHECKS)) { void *p; - slab_pad_check(s, folio_page(folio, 0)); + slab_pad_check(s, slab); for_each_object(p, s, slab_address(slab), slab->objects) - check_object(s, folio_page(folio, 0), p, SLUB_RED_INACTIVE); + check_object(s, slab, p, SLUB_RED_INACTIVE); } __slab_clear_pfmemalloc(slab); @@ -2031,50 +2031,50 @@ static void __free_slab(struct kmem_cache *s, struct slab *slab) static void rcu_free_slab(struct rcu_head *h) { - struct page *page = 
container_of(h, struct page, rcu_head); + struct slab *slab = container_of(h, struct slab, rcu_head); - __free_slab(page->slab_cache, page_slab(page)); + __free_slab(slab->slab_cache, slab); } -static void free_slab(struct kmem_cache *s, struct page *page) +static void free_slab(struct kmem_cache *s, struct slab *slab) { if (unlikely(s->flags & SLAB_TYPESAFE_BY_RCU)) { - call_rcu(&page->rcu_head, rcu_free_slab); + call_rcu(&slab->rcu_head, rcu_free_slab); } else - __free_slab(s, page_slab(page)); + __free_slab(s, slab); } -static void discard_slab(struct kmem_cache *s, struct page *page) +static void discard_slab(struct kmem_cache *s, struct slab *slab) { - dec_slabs_node(s, page_to_nid(page), page->objects); - free_slab(s, page); + dec_slabs_node(s, slab_nid(slab), slab->objects); + free_slab(s, slab); } /* * Management of partially allocated slabs. */ static inline void -__add_partial(struct kmem_cache_node *n, struct page *page, int tail) +__add_partial(struct kmem_cache_node *n, struct slab *slab, int tail) { n->nr_partial++; if (tail == DEACTIVATE_TO_TAIL) - list_add_tail(&page->slab_list, &n->partial); + list_add_tail(&slab->slab_list, &n->partial); else - list_add(&page->slab_list, &n->partial); + list_add(&slab->slab_list, &n->partial); } static inline void add_partial(struct kmem_cache_node *n, - struct page *page, int tail) + struct slab *slab, int tail) { lockdep_assert_held(&n->list_lock); - __add_partial(n, page, tail); + __add_partial(n, slab, tail); } static inline void remove_partial(struct kmem_cache_node *n, - struct page *page) + struct slab *slab) { lockdep_assert_held(&n->list_lock); - list_del(&page->slab_list); + list_del(&slab->slab_list); n->nr_partial--; } @@ -2085,12 +2085,12 @@ static inline void remove_partial(struct kmem_cache_node *n, * Returns a list of objects or NULL if it fails. */ static inline void *acquire_slab(struct kmem_cache *s, - struct kmem_cache_node *n, struct page *page, + struct kmem_cache_node *n, struct slab *slab, int mode) { void *freelist; unsigned long counters; - struct page new; + struct slab new; lockdep_assert_held(&n->list_lock); @@ -2099,11 +2099,11 @@ static inline void *acquire_slab(struct kmem_cache *s, * The old freelist is the list of objects for the * per cpu allocation list. */ - freelist = page->freelist; - counters = page->counters; + freelist = slab->freelist; + counters = slab->counters; new.counters = counters; if (mode) { - new.inuse = page->objects; + new.inuse = slab->objects; new.freelist = NULL; } else { new.freelist = freelist; @@ -2112,21 +2112,21 @@ static inline void *acquire_slab(struct kmem_cache *s, VM_BUG_ON(new.frozen); new.frozen = 1; - if (!__cmpxchg_double_slab(s, page, + if (!__cmpxchg_double_slab(s, slab, freelist, counters, new.freelist, new.counters, "acquire_slab")) return NULL; - remove_partial(n, page); + remove_partial(n, slab); WARN_ON(!freelist); return freelist; } #ifdef CONFIG_SLUB_CPU_PARTIAL -static void put_cpu_partial(struct kmem_cache *s, struct page *page, int drain); +static void put_cpu_partial(struct kmem_cache *s, struct slab *slab, int drain); #else -static inline void put_cpu_partial(struct kmem_cache *s, struct page *page, +static inline void put_cpu_partial(struct kmem_cache *s, struct slab *slab, int drain) { } #endif static inline bool pfmemalloc_match(struct slab *slab, gfp_t gfpflags); @@ -2135,12 +2135,12 @@ static inline bool pfmemalloc_match(struct slab *slab, gfp_t gfpflags); * Try to allocate a partial slab from a specific node. 
*/ static void *get_partial_node(struct kmem_cache *s, struct kmem_cache_node *n, - struct page **ret_page, gfp_t gfpflags) + struct slab **ret_slab, gfp_t gfpflags) { - struct page *page, *page2; + struct slab *slab, *slab2; void *object = NULL; unsigned long flags; - unsigned int partial_pages = 0; + unsigned int partial_slabs = 0; /* * Racy check. If we mistakenly see no partial slabs then we @@ -2152,28 +2152,28 @@ static void *get_partial_node(struct kmem_cache *s, struct kmem_cache_node *n, return NULL; spin_lock_irqsave(&n->list_lock, flags); - list_for_each_entry_safe(page, page2, &n->partial, slab_list) { + list_for_each_entry_safe(slab, slab2, &n->partial, slab_list) { void *t; - if (!pfmemalloc_match(page_slab(page), gfpflags)) + if (!pfmemalloc_match(slab, gfpflags)) continue; - t = acquire_slab(s, n, page, object == NULL); + t = acquire_slab(s, n, slab, object == NULL); if (!t) break; if (!object) { - *ret_page = page; + *ret_slab = slab; stat(s, ALLOC_FROM_PARTIAL); object = t; } else { - put_cpu_partial(s, page, 0); + put_cpu_partial(s, slab, 0); stat(s, CPU_PARTIAL_NODE); - partial_pages++; + partial_slabs++; } #ifdef CONFIG_SLUB_CPU_PARTIAL if (!kmem_cache_has_cpu_partial(s) - || partial_pages > s->cpu_partial_pages / 2) + || partial_slabs > s->cpu_partial_slabs / 2) break; #else break; @@ -2188,7 +2188,7 @@ static void *get_partial_node(struct kmem_cache *s, struct kmem_cache_node *n, * Get a page from somewhere. Search in increasing NUMA distances. */ static void *get_any_partial(struct kmem_cache *s, gfp_t flags, - struct page **ret_page) + struct slab **ret_slab) { #ifdef CONFIG_NUMA struct zonelist *zonelist; @@ -2230,7 +2230,7 @@ static void *get_any_partial(struct kmem_cache *s, gfp_t flags, if (n && cpuset_zone_allowed(zone, flags) && n->nr_partial > s->min_partial) { - object = get_partial_node(s, n, ret_page, flags); + object = get_partial_node(s, n, ret_slab, flags); if (object) { /* * Don't check read_mems_allowed_retry() @@ -2252,7 +2252,7 @@ static void *get_any_partial(struct kmem_cache *s, gfp_t flags, * Get a partial page, lock it and return it. */ static void *get_partial(struct kmem_cache *s, gfp_t flags, int node, - struct page **ret_page) + struct slab **ret_slab) { void *object; int searchnode = node; @@ -2260,11 +2260,11 @@ static void *get_partial(struct kmem_cache *s, gfp_t flags, int node, if (node == NUMA_NO_NODE) searchnode = numa_mem_id(); - object = get_partial_node(s, get_node(s, searchnode), ret_page, flags); + object = get_partial_node(s, get_node(s, searchnode), ret_slab, flags); if (object || node != NUMA_NO_NODE) return object; - return get_any_partial(s, flags, ret_page); + return get_any_partial(s, flags, ret_slab); } #ifdef CONFIG_PREEMPTION @@ -2346,20 +2346,20 @@ static void init_kmem_cache_cpus(struct kmem_cache *s) * Assumes the slab has been already safely taken away from kmem_cache_cpu * by the caller. 
*/ -static void deactivate_slab(struct kmem_cache *s, struct page *page, +static void deactivate_slab(struct kmem_cache *s, struct slab *slab, void *freelist) { enum slab_modes { M_NONE, M_PARTIAL, M_FULL, M_FREE }; - struct kmem_cache_node *n = get_node(s, page_to_nid(page)); + struct kmem_cache_node *n = get_node(s, slab_nid(slab)); int lock = 0, free_delta = 0; enum slab_modes l = M_NONE, m = M_NONE; void *nextfree, *freelist_iter, *freelist_tail; int tail = DEACTIVATE_TO_HEAD; unsigned long flags = 0; - struct page new; - struct page old; + struct slab new; + struct slab old; - if (page->freelist) { + if (slab->freelist) { stat(s, DEACTIVATE_REMOTE_FREES); tail = DEACTIVATE_TO_TAIL; } @@ -2378,7 +2378,7 @@ static void deactivate_slab(struct kmem_cache *s, struct page *page, * 'freelist_iter' is already corrupted. So isolate all objects * starting at 'freelist_iter' by skipping them. */ - if (freelist_corrupted(s, page, &freelist_iter, nextfree)) + if (freelist_corrupted(s, slab, &freelist_iter, nextfree)) break; freelist_tail = freelist_iter; @@ -2405,8 +2405,8 @@ static void deactivate_slab(struct kmem_cache *s, struct page *page, */ redo: - old.freelist = READ_ONCE(page->freelist); - old.counters = READ_ONCE(page->counters); + old.freelist = READ_ONCE(slab->freelist); + old.counters = READ_ONCE(slab->counters); VM_BUG_ON(!old.frozen); /* Determine target state of the slab */ @@ -2448,18 +2448,18 @@ static void deactivate_slab(struct kmem_cache *s, struct page *page, if (l != m) { if (l == M_PARTIAL) - remove_partial(n, page); + remove_partial(n, slab); else if (l == M_FULL) - remove_full(s, n, page); + remove_full(s, n, slab); if (m == M_PARTIAL) - add_partial(n, page, tail); + add_partial(n, slab, tail); else if (m == M_FULL) - add_full(s, n, page); + add_full(s, n, slab); } l = m; - if (!cmpxchg_double_slab(s, page, + if (!cmpxchg_double_slab(s, slab, old.freelist, old.counters, new.freelist, new.counters, "unfreezing slab")) @@ -2474,26 +2474,26 @@ static void deactivate_slab(struct kmem_cache *s, struct page *page, stat(s, DEACTIVATE_FULL); else if (m == M_FREE) { stat(s, DEACTIVATE_EMPTY); - discard_slab(s, page); + discard_slab(s, slab); stat(s, FREE_SLAB); } } #ifdef CONFIG_SLUB_CPU_PARTIAL -static void __unfreeze_partials(struct kmem_cache *s, struct page *partial_page) +static void __unfreeze_partials(struct kmem_cache *s, struct slab *partial_slab) { struct kmem_cache_node *n = NULL, *n2 = NULL; - struct page *page, *discard_page = NULL; + struct slab *slab, *slab_to_discard = NULL; unsigned long flags = 0; - while (partial_page) { - struct page new; - struct page old; + while (partial_slab) { + struct slab new; + struct slab old; - page = partial_page; - partial_page = page->next; + slab = partial_slab; + partial_slab = slab->next; - n2 = get_node(s, page_to_nid(page)); + n2 = get_node(s, slab_nid(slab)); if (n != n2) { if (n) spin_unlock_irqrestore(&n->list_lock, flags); @@ -2504,8 +2504,8 @@ static void __unfreeze_partials(struct kmem_cache *s, struct page *partial_page) do { - old.freelist = page->freelist; - old.counters = page->counters; + old.freelist = slab->freelist; + old.counters = slab->counters; VM_BUG_ON(!old.frozen); new.counters = old.counters; @@ -2513,16 +2513,16 @@ static void __unfreeze_partials(struct kmem_cache *s, struct page *partial_page) new.frozen = 0; - } while (!__cmpxchg_double_slab(s, page, + } while (!__cmpxchg_double_slab(s, slab, old.freelist, old.counters, new.freelist, new.counters, "unfreezing slab")); if (unlikely(!new.inuse && 
n->nr_partial >= s->min_partial)) { - page->next = discard_page; - discard_page = page; + slab->next = slab_to_discard; + slab_to_discard = slab; } else { - add_partial(n, page, DEACTIVATE_TO_TAIL); + add_partial(n, slab, DEACTIVATE_TO_TAIL); stat(s, FREE_ADD_PARTIAL); } } @@ -2530,12 +2530,12 @@ static void __unfreeze_partials(struct kmem_cache *s, struct page *partial_page) if (n) spin_unlock_irqrestore(&n->list_lock, flags); - while (discard_page) { - page = discard_page; - discard_page = discard_page->next; + while (slab_to_discard) { + slab = slab_to_discard; + slab_to_discard = slab_to_discard->next; stat(s, DEACTIVATE_EMPTY); - discard_slab(s, page); + discard_slab(s, slab); stat(s, FREE_SLAB); } } @@ -2545,28 +2545,28 @@ static void __unfreeze_partials(struct kmem_cache *s, struct page *partial_page) */ static void unfreeze_partials(struct kmem_cache *s) { - struct page *partial_page; + struct slab *partial_slab; unsigned long flags; local_lock_irqsave(&s->cpu_slab->lock, flags); - partial_page = this_cpu_read(s->cpu_slab->partial); + partial_slab = this_cpu_read(s->cpu_slab->partial); this_cpu_write(s->cpu_slab->partial, NULL); local_unlock_irqrestore(&s->cpu_slab->lock, flags); - if (partial_page) - __unfreeze_partials(s, partial_page); + if (partial_slab) + __unfreeze_partials(s, partial_slab); } static void unfreeze_partials_cpu(struct kmem_cache *s, struct kmem_cache_cpu *c) { - struct page *partial_page; + struct slab *partial_slab; - partial_page = slub_percpu_partial(c); + partial_slab = slub_percpu_partial(c); c->partial = NULL; - if (partial_page) - __unfreeze_partials(s, partial_page); + if (partial_slab) + __unfreeze_partials(s, partial_slab); } /* @@ -2576,42 +2576,42 @@ static void unfreeze_partials_cpu(struct kmem_cache *s, * If we did not find a slot then simply move all the partials to the * per node partial list. */ -static void put_cpu_partial(struct kmem_cache *s, struct page *page, int drain) +static void put_cpu_partial(struct kmem_cache *s, struct slab *slab, int drain) { - struct page *oldpage; - struct page *page_to_unfreeze = NULL; + struct slab *oldslab; + struct slab *slab_to_unfreeze = NULL; unsigned long flags; - int pages = 0; + int slabs = 0; local_lock_irqsave(&s->cpu_slab->lock, flags); - oldpage = this_cpu_read(s->cpu_slab->partial); + oldslab = this_cpu_read(s->cpu_slab->partial); - if (oldpage) { - if (drain && oldpage->pages >= s->cpu_partial_pages) { + if (oldslab) { + if (drain && oldslab->slabs >= s->cpu_partial_slabs) { /* * Partial array is full. Move the existing set to the * per node partial list. Postpone the actual unfreezing * outside of the critical section. 
*/ - page_to_unfreeze = oldpage; - oldpage = NULL; + slab_to_unfreeze = oldslab; + oldslab = NULL; } else { - pages = oldpage->pages; + slabs = oldslab->slabs; } } - pages++; + slabs++; - page->pages = pages; - page->next = oldpage; + slab->slabs = slabs; + slab->next = oldslab; - this_cpu_write(s->cpu_slab->partial, page); + this_cpu_write(s->cpu_slab->partial, slab); local_unlock_irqrestore(&s->cpu_slab->lock, flags); - if (page_to_unfreeze) { - __unfreeze_partials(s, page_to_unfreeze); + if (slab_to_unfreeze) { + __unfreeze_partials(s, slab_to_unfreeze); stat(s, CPU_PARTIAL_DRAIN); } } @@ -2627,22 +2627,22 @@ static inline void unfreeze_partials_cpu(struct kmem_cache *s, static inline void flush_slab(struct kmem_cache *s, struct kmem_cache_cpu *c) { unsigned long flags; - struct page *page; + struct slab *slab; void *freelist; local_lock_irqsave(&s->cpu_slab->lock, flags); - page = c->page; + slab = c->slab; freelist = c->freelist; - c->page = NULL; + c->slab = NULL; c->freelist = NULL; c->tid = next_tid(c->tid); local_unlock_irqrestore(&s->cpu_slab->lock, flags); - if (page) { - deactivate_slab(s, page, freelist); + if (slab) { + deactivate_slab(s, slab, freelist); stat(s, CPUSLAB_FLUSH); } } @@ -2651,14 +2651,14 @@ static inline void __flush_cpu_slab(struct kmem_cache *s, int cpu) { struct kmem_cache_cpu *c = per_cpu_ptr(s->cpu_slab, cpu); void *freelist = c->freelist; - struct page *page = c->page; + struct slab *slab = c->slab; - c->page = NULL; + c->slab = NULL; c->freelist = NULL; c->tid = next_tid(c->tid); - if (page) { - deactivate_slab(s, page, freelist); + if (slab) { + deactivate_slab(s, slab, freelist); stat(s, CPUSLAB_FLUSH); } @@ -2687,7 +2687,7 @@ static void flush_cpu_slab(struct work_struct *w) s = sfw->s; c = this_cpu_ptr(s->cpu_slab); - if (c->page) + if (c->slab) flush_slab(s, c); unfreeze_partials(s); @@ -2697,7 +2697,7 @@ static bool has_cpu_slab(int cpu, struct kmem_cache *s) { struct kmem_cache_cpu *c = per_cpu_ptr(s->cpu_slab, cpu); - return c->page || slub_percpu_partial(c); + return c->slab || slub_percpu_partial(c); } static DEFINE_MUTEX(flush_lock); @@ -2759,19 +2759,19 @@ static int slub_cpu_dead(unsigned int cpu) * Check if the objects in a per cpu structure fit numa * locality expectations. 
*/ -static inline int node_match(struct page *page, int node) +static inline int node_match(struct slab *slab, int node) { #ifdef CONFIG_NUMA - if (node != NUMA_NO_NODE && page_to_nid(page) != node) + if (node != NUMA_NO_NODE && slab_nid(slab) != node) return 0; #endif return 1; } #ifdef CONFIG_SLUB_DEBUG -static int count_free(struct page *page) +static int count_free(struct slab *slab) { - return page->objects - page->inuse; + return slab->objects - slab->inuse; } static inline unsigned long node_nr_objs(struct kmem_cache_node *n) @@ -2782,15 +2782,15 @@ static inline unsigned long node_nr_objs(struct kmem_cache_node *n) #if defined(CONFIG_SLUB_DEBUG) || defined(CONFIG_SYSFS) static unsigned long count_partial(struct kmem_cache_node *n, - int (*get_count)(struct page *)) + int (*get_count)(struct slab *)) { unsigned long flags; unsigned long x = 0; - struct page *page; + struct slab *slab; spin_lock_irqsave(&n->list_lock, flags); - list_for_each_entry(page, &n->partial, slab_list) - x += get_count(page); + list_for_each_entry(slab, &n->partial, slab_list) + x += get_count(slab); spin_unlock_irqrestore(&n->list_lock, flags); return x; } @@ -2849,25 +2849,25 @@ static inline bool pfmemalloc_match(struct slab *slab, gfp_t gfpflags) * * If this function returns NULL then the page has been unfrozen. */ -static inline void *get_freelist(struct kmem_cache *s, struct page *page) +static inline void *get_freelist(struct kmem_cache *s, struct slab *slab) { - struct page new; + struct slab new; unsigned long counters; void *freelist; lockdep_assert_held(this_cpu_ptr(&s->cpu_slab->lock)); do { - freelist = page->freelist; - counters = page->counters; + freelist = slab->freelist; + counters = slab->counters; new.counters = counters; VM_BUG_ON(!new.frozen); - new.inuse = page->objects; + new.inuse = slab->objects; new.frozen = freelist != NULL; - } while (!__cmpxchg_double_slab(s, page, + } while (!__cmpxchg_double_slab(s, slab, freelist, counters, NULL, new.counters, "get_freelist")); @@ -2898,15 +2898,15 @@ static void *___slab_alloc(struct kmem_cache *s, gfp_t gfpflags, int node, unsigned long addr, struct kmem_cache_cpu *c) { void *freelist; - struct page *page; + struct slab *slab; unsigned long flags; stat(s, ALLOC_SLOWPATH); reread_page: - page = READ_ONCE(c->page); - if (!page) { + slab = READ_ONCE(c->slab); + if (!slab) { /* * if the node is not online or has no normal memory, just * ignore the node constraint @@ -2918,7 +2918,7 @@ static void *___slab_alloc(struct kmem_cache *s, gfp_t gfpflags, int node, } redo: - if (unlikely(!node_match(page, node))) { + if (unlikely(!node_match(slab, node))) { /* * same as above but node_match() being false already * implies node != NUMA_NO_NODE @@ -2937,12 +2937,12 @@ static void *___slab_alloc(struct kmem_cache *s, gfp_t gfpflags, int node, * PFMEMALLOC but right now, we are losing the pfmemalloc * information when the page leaves the per-cpu allocator */ - if (unlikely(!pfmemalloc_match(page_slab(page), gfpflags))) + if (unlikely(!pfmemalloc_match(slab, gfpflags))) goto deactivate_slab; /* must check again c->page in case we got preempted and it changed */ local_lock_irqsave(&s->cpu_slab->lock, flags); - if (unlikely(page != c->page)) { + if (unlikely(slab != c->slab)) { local_unlock_irqrestore(&s->cpu_slab->lock, flags); goto reread_page; } @@ -2950,10 +2950,10 @@ static void *___slab_alloc(struct kmem_cache *s, gfp_t gfpflags, int node, if (freelist) goto load_freelist; - freelist = get_freelist(s, page); + freelist = get_freelist(s, slab); if 
(!freelist) { - c->page = NULL; + c->slab = NULL; local_unlock_irqrestore(&s->cpu_slab->lock, flags); stat(s, DEACTIVATE_BYPASS); goto new_slab; @@ -2970,7 +2970,7 @@ static void *___slab_alloc(struct kmem_cache *s, gfp_t gfpflags, int node, * page is pointing to the page from which the objects are obtained. * That page must be frozen for per cpu allocations to work. */ - VM_BUG_ON(!c->page->frozen); + VM_BUG_ON(!c->slab->frozen); c->freelist = get_freepointer(s, freelist); c->tid = next_tid(c->tid); local_unlock_irqrestore(&s->cpu_slab->lock, flags); @@ -2979,21 +2979,21 @@ static void *___slab_alloc(struct kmem_cache *s, gfp_t gfpflags, int node, deactivate_slab: local_lock_irqsave(&s->cpu_slab->lock, flags); - if (page != c->page) { + if (slab != c->slab) { local_unlock_irqrestore(&s->cpu_slab->lock, flags); goto reread_page; } freelist = c->freelist; - c->page = NULL; + c->slab = NULL; c->freelist = NULL; local_unlock_irqrestore(&s->cpu_slab->lock, flags); - deactivate_slab(s, page, freelist); + deactivate_slab(s, slab, freelist); new_slab: if (slub_percpu_partial(c)) { local_lock_irqsave(&s->cpu_slab->lock, flags); - if (unlikely(c->page)) { + if (unlikely(c->slab)) { local_unlock_irqrestore(&s->cpu_slab->lock, flags); goto reread_page; } @@ -3003,8 +3003,8 @@ static void *___slab_alloc(struct kmem_cache *s, gfp_t gfpflags, int node, goto new_objects; } - page = c->page = slub_percpu_partial(c); - slub_set_percpu_partial(c, page); + slab = c->slab = slub_percpu_partial(c); + slub_set_percpu_partial(c, slab); local_unlock_irqrestore(&s->cpu_slab->lock, flags); stat(s, CPU_PARTIAL_ALLOC); goto redo; @@ -3012,15 +3012,15 @@ static void *___slab_alloc(struct kmem_cache *s, gfp_t gfpflags, int node, new_objects: - freelist = get_partial(s, gfpflags, node, &page); + freelist = get_partial(s, gfpflags, node, &slab); if (freelist) goto check_new_page; slub_put_cpu_ptr(s->cpu_slab); - page = new_slab(s, gfpflags, node); + slab = new_slab(s, gfpflags, node); c = slub_get_cpu_ptr(s->cpu_slab); - if (unlikely(!page)) { + if (unlikely(!slab)) { slab_out_of_memory(s, gfpflags, node); return NULL; } @@ -3029,15 +3029,15 @@ static void *___slab_alloc(struct kmem_cache *s, gfp_t gfpflags, int node, * No other reference to the page yet so we can * muck around with it freely without cmpxchg */ - freelist = page->freelist; - page->freelist = NULL; + freelist = slab->freelist; + slab->freelist = NULL; stat(s, ALLOC_SLAB); check_new_page: if (kmem_cache_debug(s)) { - if (!alloc_debug_processing(s, page, freelist, addr)) { + if (!alloc_debug_processing(s, slab, freelist, addr)) { /* Slab failed checks. Next slab needed */ goto new_slab; } else { @@ -3049,7 +3049,7 @@ static void *___slab_alloc(struct kmem_cache *s, gfp_t gfpflags, int node, } } - if (unlikely(!pfmemalloc_match(page_slab(page), gfpflags))) + if (unlikely(!pfmemalloc_match(slab, gfpflags))) /* * For !pfmemalloc_match() case we don't load freelist so that * we don't make further mismatched allocations easier. 
@@ -3059,29 +3059,29 @@ static void *___slab_alloc(struct kmem_cache *s, gfp_t gfpflags, int node, retry_load_page: local_lock_irqsave(&s->cpu_slab->lock, flags); - if (unlikely(c->page)) { + if (unlikely(c->slab)) { void *flush_freelist = c->freelist; - struct page *flush_page = c->page; + struct slab *flush_slab = c->slab; - c->page = NULL; + c->slab = NULL; c->freelist = NULL; c->tid = next_tid(c->tid); local_unlock_irqrestore(&s->cpu_slab->lock, flags); - deactivate_slab(s, flush_page, flush_freelist); + deactivate_slab(s, flush_slab, flush_freelist); stat(s, CPUSLAB_FLUSH); goto retry_load_page; } - c->page = page; + c->slab = slab; goto load_freelist; return_single: - deactivate_slab(s, page, get_freepointer(s, freelist)); + deactivate_slab(s, slab, get_freepointer(s, freelist)); return freelist; } @@ -3138,7 +3138,7 @@ static __always_inline void *slab_alloc_node(struct kmem_cache *s, { void *object; struct kmem_cache_cpu *c; - struct page *page; + struct slab *slab; unsigned long tid; struct obj_cgroup *objcg = NULL; bool init = false; @@ -3185,7 +3185,7 @@ static __always_inline void *slab_alloc_node(struct kmem_cache *s, */ object = c->freelist; - page = c->page; + slab = c->slab; /* * We cannot use the lockless fastpath on PREEMPT_RT because if a * slowpath has taken the local_lock_irqsave(), it is not protected @@ -3194,7 +3194,7 @@ static __always_inline void *slab_alloc_node(struct kmem_cache *s, * there is a suitable cpu freelist. */ if (IS_ENABLED(CONFIG_PREEMPT_RT) || - unlikely(!object || !page || !node_match(page, node))) { + unlikely(!object || !slab || !node_match(slab, node))) { object = __slab_alloc(s, gfpflags, node, addr, c); } else { void *next_object = get_freepointer_safe(s, object); @@ -3299,14 +3299,14 @@ EXPORT_SYMBOL(kmem_cache_alloc_node_trace); * lock and free the item. If there is no additional partial page * handling required then we can return immediately. */ -static void __slab_free(struct kmem_cache *s, struct page *page, +static void __slab_free(struct kmem_cache *s, struct slab *slab, void *head, void *tail, int cnt, unsigned long addr) { void *prior; int was_frozen; - struct page new; + struct slab new; unsigned long counters; struct kmem_cache_node *n = NULL; unsigned long flags; @@ -3317,7 +3317,7 @@ static void __slab_free(struct kmem_cache *s, struct page *page, return; if (kmem_cache_debug(s) && - !free_debug_processing(s, page, head, tail, cnt, addr)) + !free_debug_processing(s, slab, head, tail, cnt, addr)) return; do { @@ -3325,8 +3325,8 @@ static void __slab_free(struct kmem_cache *s, struct page *page, spin_unlock_irqrestore(&n->list_lock, flags); n = NULL; } - prior = page->freelist; - counters = page->counters; + prior = slab->freelist; + counters = slab->counters; set_freepointer(s, tail, prior); new.counters = counters; was_frozen = new.frozen; @@ -3345,7 +3345,7 @@ static void __slab_free(struct kmem_cache *s, struct page *page, } else { /* Needs to be taken off a list */ - n = get_node(s, page_to_nid(page)); + n = get_node(s, slab_nid(slab)); /* * Speculatively acquire the list_lock. * If the cmpxchg does not succeed then we may @@ -3359,7 +3359,7 @@ static void __slab_free(struct kmem_cache *s, struct page *page, } } - } while (!cmpxchg_double_slab(s, page, + } while (!cmpxchg_double_slab(s, slab, prior, counters, head, new.counters, "__slab_free")); @@ -3377,7 +3377,7 @@ static void __slab_free(struct kmem_cache *s, struct page *page, * If we just froze the page then put it onto the * per cpu partial list. 
*/ - put_cpu_partial(s, page, 1); + put_cpu_partial(s, slab, 1); stat(s, CPU_PARTIAL_FREE); } @@ -3392,8 +3392,8 @@ static void __slab_free(struct kmem_cache *s, struct page *page, * then add it. */ if (!kmem_cache_has_cpu_partial(s) && unlikely(!prior)) { - remove_full(s, n, page); - add_partial(n, page, DEACTIVATE_TO_TAIL); + remove_full(s, n, slab); + add_partial(n, slab, DEACTIVATE_TO_TAIL); stat(s, FREE_ADD_PARTIAL); } spin_unlock_irqrestore(&n->list_lock, flags); @@ -3404,16 +3404,16 @@ static void __slab_free(struct kmem_cache *s, struct page *page, /* * Slab on the partial list. */ - remove_partial(n, page); + remove_partial(n, slab); stat(s, FREE_REMOVE_PARTIAL); } else { /* Slab must be on the full list */ - remove_full(s, n, page); + remove_full(s, n, slab); } spin_unlock_irqrestore(&n->list_lock, flags); stat(s, FREE_SLAB); - discard_slab(s, page); + discard_slab(s, slab); } /* @@ -3432,7 +3432,7 @@ static void __slab_free(struct kmem_cache *s, struct page *page, * count (cnt). Bulk free indicated by tail pointer being set. */ static __always_inline void do_slab_free(struct kmem_cache *s, - struct page *page, void *head, void *tail, + struct slab *slab, void *head, void *tail, int cnt, unsigned long addr) { void *tail_obj = tail ? : head; @@ -3455,7 +3455,7 @@ static __always_inline void do_slab_free(struct kmem_cache *s, /* Same with comment on barrier() in slab_alloc_node() */ barrier(); - if (likely(page == c->page)) { + if (likely(slab == c->slab)) { #ifndef CONFIG_PREEMPT_RT void **freelist = READ_ONCE(c->freelist); @@ -3481,7 +3481,7 @@ static __always_inline void do_slab_free(struct kmem_cache *s, local_lock(&s->cpu_slab->lock); c = this_cpu_ptr(s->cpu_slab); - if (unlikely(page != c->page)) { + if (unlikely(slab != c->slab)) { local_unlock(&s->cpu_slab->lock); goto redo; } @@ -3496,11 +3496,11 @@ static __always_inline void do_slab_free(struct kmem_cache *s, #endif stat(s, FREE_FASTPATH); } else - __slab_free(s, page, head, tail_obj, cnt, addr); + __slab_free(s, slab, head, tail_obj, cnt, addr); } -static __always_inline void slab_free(struct kmem_cache *s, struct page *page, +static __always_inline void slab_free(struct kmem_cache *s, struct slab *slab, void *head, void *tail, int cnt, unsigned long addr) { @@ -3509,13 +3509,13 @@ static __always_inline void slab_free(struct kmem_cache *s, struct page *page, * to remove objects, whose reuse must be delayed. 
*/ if (slab_free_freelist_hook(s, &head, &tail, &cnt)) - do_slab_free(s, page, head, tail, cnt, addr); + do_slab_free(s, slab, head, tail, cnt, addr); } #ifdef CONFIG_KASAN_GENERIC void ___cache_free(struct kmem_cache *cache, void *x, unsigned long addr) { - do_slab_free(cache, slab_page(virt_to_slab(x)), x, NULL, 1, addr); + do_slab_free(cache, virt_to_slab(x), x, NULL, 1, addr); } #endif @@ -3525,7 +3525,7 @@ void kmem_cache_free(struct kmem_cache *s, void *x) if (!s) return; trace_kmem_cache_free(_RET_IP_, x, s->name); - slab_free(s, slab_page(virt_to_slab(x)), x, NULL, 1, _RET_IP_); + slab_free(s, virt_to_slab(x), x, NULL, 1, _RET_IP_); } EXPORT_SYMBOL(kmem_cache_free); @@ -3654,7 +3654,7 @@ void kmem_cache_free_bulk(struct kmem_cache *s, size_t size, void **p) if (!df.slab) continue; - slab_free(df.s, slab_page(df.slab), df.freelist, df.tail, df.cnt, _RET_IP_); + slab_free(df.s, df.slab, df.freelist, df.tail, df.cnt, _RET_IP_); } while (likely(size)); } EXPORT_SYMBOL(kmem_cache_free_bulk); @@ -3924,38 +3924,38 @@ static struct kmem_cache *kmem_cache_node; */ static void early_kmem_cache_node_alloc(int node) { - struct page *page; + struct slab *slab; struct kmem_cache_node *n; BUG_ON(kmem_cache_node->size < sizeof(struct kmem_cache_node)); - page = new_slab(kmem_cache_node, GFP_NOWAIT, node); + slab = new_slab(kmem_cache_node, GFP_NOWAIT, node); - BUG_ON(!page); - if (page_to_nid(page) != node) { + BUG_ON(!slab); + if (slab_nid(slab) != node) { pr_err("SLUB: Unable to allocate memory from node %d\n", node); pr_err("SLUB: Allocating a useless per node structure in order to be able to continue\n"); } - n = page->freelist; + n = slab->freelist; BUG_ON(!n); #ifdef CONFIG_SLUB_DEBUG init_object(kmem_cache_node, n, SLUB_RED_ACTIVE); init_tracking(kmem_cache_node, n); #endif n = kasan_slab_alloc(kmem_cache_node, n, GFP_KERNEL, false); - page->freelist = get_freepointer(kmem_cache_node, n); - page->inuse = 1; - page->frozen = 0; + slab->freelist = get_freepointer(kmem_cache_node, n); + slab->inuse = 1; + slab->frozen = 0; kmem_cache_node->node[node] = n; init_kmem_cache_node(n); - inc_slabs_node(kmem_cache_node, node, page->objects); + inc_slabs_node(kmem_cache_node, node, slab->objects); /* * No locks need to be taken here as it has just been * initialized and there is no concurrent access. 
*/ - __add_partial(n, page, DEACTIVATE_TO_HEAD); + __add_partial(n, slab, DEACTIVATE_TO_HEAD); } static void free_kmem_cache_nodes(struct kmem_cache *s) @@ -4241,20 +4241,20 @@ static int kmem_cache_open(struct kmem_cache *s, slab_flags_t flags) return -EINVAL; } -static void list_slab_objects(struct kmem_cache *s, struct page *page, +static void list_slab_objects(struct kmem_cache *s, struct slab *slab, const char *text) { #ifdef CONFIG_SLUB_DEBUG - void *addr = page_address(page); + void *addr = slab_address(slab); unsigned long flags; unsigned long *map; void *p; - slab_err(s, page, text, s->name); - slab_lock(page, &flags); + slab_err(s, slab, text, s->name); + slab_lock(slab, &flags); - map = get_map(s, page); - for_each_object(p, s, addr, page->objects) { + map = get_map(s, slab); + for_each_object(p, s, addr, slab->objects) { if (!test_bit(__obj_to_index(s, addr, p), map)) { pr_err("Object 0x%p @offset=%tu\n", p, p - addr); @@ -4262,7 +4262,7 @@ static void list_slab_objects(struct kmem_cache *s, struct page *page, } } put_map(map); - slab_unlock(page, &flags); + slab_unlock(slab, &flags); #endif } @@ -4274,23 +4274,23 @@ static void list_slab_objects(struct kmem_cache *s, struct page *page, static void free_partial(struct kmem_cache *s, struct kmem_cache_node *n) { LIST_HEAD(discard); - struct page *page, *h; + struct slab *slab, *h; BUG_ON(irqs_disabled()); spin_lock_irq(&n->list_lock); - list_for_each_entry_safe(page, h, &n->partial, slab_list) { - if (!page->inuse) { - remove_partial(n, page); - list_add(&page->slab_list, &discard); + list_for_each_entry_safe(slab, h, &n->partial, slab_list) { + if (!slab->inuse) { + remove_partial(n, slab); + list_add(&slab->slab_list, &discard); } else { - list_slab_objects(s, page, + list_slab_objects(s, slab, "Objects remaining in %s on __kmem_cache_shutdown()"); } } spin_unlock_irq(&n->list_lock); - list_for_each_entry_safe(page, h, &discard, slab_list) - discard_slab(s, page); + list_for_each_entry_safe(slab, h, &discard, slab_list) + discard_slab(s, slab); } bool __kmem_cache_empty(struct kmem_cache *s) @@ -4559,7 +4559,7 @@ void kfree(const void *x) return; } slab = folio_slab(folio); - slab_free(slab->slab_cache, slab_page(slab), object, NULL, 1, _RET_IP_); + slab_free(slab->slab_cache, slab, object, NULL, 1, _RET_IP_); } EXPORT_SYMBOL(kfree); @@ -4579,8 +4579,8 @@ static int __kmem_cache_do_shrink(struct kmem_cache *s) int node; int i; struct kmem_cache_node *n; - struct page *page; - struct page *t; + struct slab *slab; + struct slab *t; struct list_head discard; struct list_head promote[SHRINK_PROMOTE_MAX]; unsigned long flags; @@ -4599,8 +4599,8 @@ static int __kmem_cache_do_shrink(struct kmem_cache *s) * Note that concurrent frees may occur while we hold the * list_lock. page->inuse here is the upper limit. 
*/ - list_for_each_entry_safe(page, t, &n->partial, slab_list) { - int free = page->objects - page->inuse; + list_for_each_entry_safe(slab, t, &n->partial, slab_list) { + int free = slab->objects - slab->inuse; /* Do not reread page->inuse */ barrier(); @@ -4608,11 +4608,11 @@ static int __kmem_cache_do_shrink(struct kmem_cache *s) /* We do not keep full slabs on the list */ BUG_ON(free <= 0); - if (free == page->objects) { - list_move(&page->slab_list, &discard); + if (free == slab->objects) { + list_move(&slab->slab_list, &discard); n->nr_partial--; } else if (free <= SHRINK_PROMOTE_MAX) - list_move(&page->slab_list, promote + free - 1); + list_move(&slab->slab_list, promote + free - 1); } /* @@ -4625,8 +4625,8 @@ static int __kmem_cache_do_shrink(struct kmem_cache *s) spin_unlock_irqrestore(&n->list_lock, flags); /* Release empty slabs */ - list_for_each_entry_safe(page, t, &discard, slab_list) - discard_slab(s, page); + list_for_each_entry_safe(slab, t, &discard, slab_list) + discard_slab(s, slab); if (slabs_node(s, node)) ret = 1; @@ -4787,7 +4787,7 @@ static struct kmem_cache * __init bootstrap(struct kmem_cache *static_cache) */ __flush_cpu_slab(s, smp_processor_id()); for_each_kmem_cache_node(s, node, n) { - struct page *p; + struct slab *p; list_for_each_entry(p, &n->partial, slab_list) p->slab_cache = s; @@ -4965,54 +4965,54 @@ EXPORT_SYMBOL(__kmalloc_node_track_caller); #endif #ifdef CONFIG_SYSFS -static int count_inuse(struct page *page) +static int count_inuse(struct slab *slab) { - return page->inuse; + return slab->inuse; } -static int count_total(struct page *page) +static int count_total(struct slab *slab) { - return page->objects; + return slab->objects; } #endif #ifdef CONFIG_SLUB_DEBUG -static void validate_slab(struct kmem_cache *s, struct page *page, +static void validate_slab(struct kmem_cache *s, struct slab *slab, unsigned long *obj_map) { void *p; - void *addr = page_address(page); + void *addr = slab_address(slab); unsigned long flags; - slab_lock(page, &flags); + slab_lock(slab, &flags); - if (!check_slab(s, page) || !on_freelist(s, page, NULL)) + if (!check_slab(s, slab) || !on_freelist(s, slab, NULL)) goto unlock; /* Now we know that a valid freelist exists */ - __fill_map(obj_map, s, page); - for_each_object(p, s, addr, page->objects) { + __fill_map(obj_map, s, slab); + for_each_object(p, s, addr, slab->objects) { u8 val = test_bit(__obj_to_index(s, addr, p), obj_map) ? 
SLUB_RED_INACTIVE : SLUB_RED_ACTIVE; - if (!check_object(s, page, p, val)) + if (!check_object(s, slab, p, val)) break; } unlock: - slab_unlock(page, &flags); + slab_unlock(slab, &flags); } static int validate_slab_node(struct kmem_cache *s, struct kmem_cache_node *n, unsigned long *obj_map) { unsigned long count = 0; - struct page *page; + struct slab *slab; unsigned long flags; spin_lock_irqsave(&n->list_lock, flags); - list_for_each_entry(page, &n->partial, slab_list) { - validate_slab(s, page, obj_map); + list_for_each_entry(slab, &n->partial, slab_list) { + validate_slab(s, slab, obj_map); count++; } if (count != n->nr_partial) { @@ -5024,8 +5024,8 @@ static int validate_slab_node(struct kmem_cache *s, if (!(s->flags & SLAB_STORE_USER)) goto out; - list_for_each_entry(page, &n->full, slab_list) { - validate_slab(s, page, obj_map); + list_for_each_entry(slab, &n->full, slab_list) { + validate_slab(s, slab, obj_map); count++; } if (count != atomic_long_read(&n->nr_slabs)) { @@ -5190,15 +5190,15 @@ static int add_location(struct loc_track *t, struct kmem_cache *s, } static void process_slab(struct loc_track *t, struct kmem_cache *s, - struct page *page, enum track_item alloc, + struct slab *slab, enum track_item alloc, unsigned long *obj_map) { - void *addr = page_address(page); + void *addr = slab_address(slab); void *p; - __fill_map(obj_map, s, page); + __fill_map(obj_map, s, slab); - for_each_object(p, s, addr, page->objects) + for_each_object(p, s, addr, slab->objects) if (!test_bit(__obj_to_index(s, addr, p), obj_map)) add_location(t, s, get_track(s, p, alloc)); } @@ -5240,32 +5240,32 @@ static ssize_t show_slab_objects(struct kmem_cache *s, struct kmem_cache_cpu *c = per_cpu_ptr(s->cpu_slab, cpu); int node; - struct page *page; + struct slab *slab; - page = READ_ONCE(c->page); - if (!page) + slab = READ_ONCE(c->slab); + if (!slab) continue; - node = page_to_nid(page); + node = slab_nid(slab); if (flags & SO_TOTAL) - x = page->objects; + x = slab->objects; else if (flags & SO_OBJECTS) - x = page->inuse; + x = slab->inuse; else x = 1; total += x; nodes[node] += x; - page = slub_percpu_partial_read_once(c); - if (page) { - node = page_to_nid(page); + slab = slub_percpu_partial_read_once(c); + if (slab) { + node = slab_nid(slab); if (flags & SO_TOTAL) WARN_ON_ONCE(1); else if (flags & SO_OBJECTS) WARN_ON_ONCE(1); else - x = page->pages; + x = slab->slabs; total += x; nodes[node] += x; } @@ -5467,33 +5467,33 @@ SLAB_ATTR_RO(objects_partial); static ssize_t slabs_cpu_partial_show(struct kmem_cache *s, char *buf) { int objects = 0; - int pages = 0; + int slabs = 0; int cpu; int len = 0; for_each_online_cpu(cpu) { - struct page *page; + struct slab *slab; - page = slub_percpu_partial(per_cpu_ptr(s->cpu_slab, cpu)); + slab = slub_percpu_partial(per_cpu_ptr(s->cpu_slab, cpu)); - if (page) - pages += page->pages; + if (slab) + slabs += slab->slabs; } /* Approximate half-full pages , see slub_set_cpu_partial() */ - objects = (pages * oo_objects(s->oo)) / 2; - len += sysfs_emit_at(buf, len, "%d(%d)", objects, pages); + objects = (slabs * oo_objects(s->oo)) / 2; + len += sysfs_emit_at(buf, len, "%d(%d)", objects, slabs); #ifdef CONFIG_SMP for_each_online_cpu(cpu) { - struct page *page; + struct slab *slab; - page = slub_percpu_partial(per_cpu_ptr(s->cpu_slab, cpu)); - if (page) { - pages = READ_ONCE(page->pages); - objects = (pages * oo_objects(s->oo)) / 2; + slab = slub_percpu_partial(per_cpu_ptr(s->cpu_slab, cpu)); + if (slab) { + slabs = READ_ONCE(slab->slabs); + objects = (slabs * 
oo_objects(s->oo)) / 2; len += sysfs_emit_at(buf, len, " C%d=%d(%d)", - cpu, objects, pages); + cpu, objects, slabs); } } #endif @@ -6159,16 +6159,16 @@ static int slab_debug_trace_open(struct inode *inode, struct file *filep) for_each_kmem_cache_node(s, node, n) { unsigned long flags; - struct page *page; + struct slab *slab; if (!atomic_long_read(&n->nr_slabs)) continue; spin_lock_irqsave(&n->list_lock, flags); - list_for_each_entry(page, &n->partial, slab_list) - process_slab(t, s, page, alloc, obj_map); - list_for_each_entry(page, &n->full, slab_list) - process_slab(t, s, page, alloc, obj_map); + list_for_each_entry(slab, &n->partial, slab_list) + process_slab(t, s, slab, alloc, obj_map); + list_for_each_entry(slab, &n->full, slab_list) + process_slab(t, s, slab, alloc, obj_map); spin_unlock_irqrestore(&n->list_lock, flags); }
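For readers mapping the renamed accessors back to the old struct page API: slab_nid(), slab_address() and virt_to_slab() used throughout the diff above are thin wrappers over the folio layer, relying on struct slab overlaying the same memory as struct page. The following is a minimal sketch of that layering, not the series' exact mm/slab.h code; in particular, the cast-based slab_folio()/folio_slab() macros below are a simplification of the kernel's const-preserving _Generic versions.

/*
 * Sketch only: struct slab aliases the memory of struct page/folio,
 * so the accessors reduce to a cast plus the existing folio helpers.
 */
#define slab_folio(s)	((struct folio *)(s))
#define folio_slab(f)	((struct slab *)(f))

static inline void *slab_address(const struct slab *slab)
{
	/* first byte of the memory this slab manages */
	return folio_address(slab_folio(slab));
}

static inline int slab_nid(const struct slab *slab)
{
	/* NUMA node the slab's pages were allocated on */
	return folio_nid(slab_folio(slab));
}

static inline struct slab *virt_to_slab(const void *addr)
{
	struct folio *folio = virt_to_folio(addr);

	/* NULL when the address does not belong to a slab */
	if (!folio_test_slab(folio))
		return NULL;
	return folio_slab(folio);
}

Because the overlay cast is the whole trick, substitutions like page_to_nid(page) -> slab_nid(slab) and page_address(page) -> slab_address(slab) in the hunks above are behaviour-preserving.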
From patchwork Wed Dec 1 18:14:55 2021 X-Patchwork-Submitter: Vlastimil Babka X-Patchwork-Id: 12650735 From: Vlastimil Babka To: Matthew Wilcox , Christoph Lameter , David Rientjes , Joonsoo Kim , Pekka Enberg Cc: linux-mm@kvack.org, Andrew Morton , patches@lists.linux.dev, Vlastimil Babka Subject: [PATCH v2 18/33] mm/slub: Finish struct page to struct slab conversion Date: Wed, 1 Dec 2021 19:14:55 +0100 Message-Id: <20211201181510.18784-19-vbabka@suse.cz> In-Reply-To: <20211201181510.18784-1-vbabka@suse.cz> References: <20211201181510.18784-1-vbabka@suse.cz> Update comments mentioning pages to mention slabs where appropriate. Also rename some goto labels accordingly. Signed-off-by: Vlastimil Babka --- include/linux/slub_def.h | 2 +- mm/slub.c | 105 +++++++++++++++++++-------------------- 2 files changed, 53 insertions(+), 54 deletions(-) diff --git a/include/linux/slub_def.h b/include/linux/slub_def.h index 00d99afe1c0e..8a9c2876ca89 100644 --- a/include/linux/slub_def.h +++ b/include/linux/slub_def.h @@ -99,7 +99,7 @@ struct kmem_cache { #ifdef CONFIG_SLUB_CPU_PARTIAL /* Number of per cpu partial objects to keep around */ unsigned int cpu_partial; - /* Number of per cpu partial pages to keep around */ + /* Number of per cpu partial slabs to keep around */ unsigned int cpu_partial_slabs; #endif struct kmem_cache_order_objects oo; diff --git a/mm/slub.c b/mm/slub.c index be46a7151e8c..f5344211d8cc 100644 --- a/mm/slub.c +++ b/mm/slub.c @@ -48,7 +48,7 @@ * 1. slab_mutex (Global Mutex) * 2. node->list_lock (Spinlock) * 3. kmem_cache->cpu_slab->lock (Local lock) - * 4. slab_lock(page) (Only on some arches or for debugging) + * 4.
slab_lock(slab) (Only on some arches or for debugging) * 5. object_map_lock (Only for debugging) * * slab_mutex @@ -64,19 +64,19 @@ * * The slab_lock is only used for debugging and on arches that do not * have the ability to do a cmpxchg_double. It only protects: - * A. page->freelist -> List of object free in a page - * B. page->inuse -> Number of objects in use - * C. page->objects -> Number of objects in page - * D. page->frozen -> frozen state + * A. slab->freelist -> List of free objects in a slab + * B. slab->inuse -> Number of objects in use + * C. slab->objects -> Number of objects in slab + * D. slab->frozen -> frozen state * * Frozen slabs * * If a slab is frozen then it is exempt from list management. It is not * on any list except per cpu partial list. The processor that froze the - * slab is the one who can perform list operations on the page. Other + * slab is the one who can perform list operations on the slab. Other * processors may put objects onto the freelist but the processor that * froze the slab is the only one that can retrieve the objects from the - * page's freelist. + * slab's freelist. * * list_lock * @@ -135,7 +135,7 @@ * minimal so we rely on the page allocators per cpu caches for * fast frees and allocs. * - * page->frozen The slab is frozen and exempt from list processing. + * slab->frozen The slab is frozen and exempt from list processing. * This means that the slab is dedicated to a purpose * such as satisfying allocations for a specific * processor. Objects may be freed in the slab while @@ -250,7 +250,7 @@ static inline bool kmem_cache_has_cpu_partial(struct kmem_cache *s) #define OO_SHIFT 16 #define OO_MASK ((1 << OO_SHIFT) - 1) -#define MAX_OBJS_PER_PAGE 32767 /* since page.objects is u15 */ +#define MAX_OBJS_PER_PAGE 32767 /* since slab.objects is u15 */ /* Internal SLUB flags */ /* Poison object */ @@ -423,8 +423,8 @@ static void slub_set_cpu_partial(struct kmem_cache *s, unsigned int nr_objects) /* * We take the number of objects but actually limit the number of - * pages on the per cpu partial list, in order to limit excessive - * growth of the list. For simplicity we assume that the pages will + * slabs on the per cpu partial list, in order to limit excessive + * growth of the list. For simplicity we assume that the slabs will * be half-full. */ nr_slabs = DIV_ROUND_UP(nr_objects * 2, oo_objects(s->oo)); @@ -594,9 +594,9 @@ static inline bool slab_add_kunit_errors(void) { return false; } #endif /* - * Determine a map of object in use on a page. + * Determine a map of objects in use in a slab. * - * Node listlock must be held to guarantee that the page does + * Node listlock must be held to guarantee that the slab does * not vanish from under us. */ static unsigned long *get_map(struct kmem_cache *s, struct slab *slab) @@ -1139,7 +1139,7 @@ static int check_slab(struct kmem_cache *s, struct slab *slab) } /* - * Determine if a certain object on a page is on the freelist. Must hold the + * Determine if a certain object in a slab is on the freelist. Must hold the * slab lock to guarantee that the chains are in a consistent state. */ static int on_freelist(struct kmem_cache *s, struct slab *slab, void *search) @@ -2185,7 +2185,7 @@ static void *get_partial_node(struct kmem_cache *s, struct kmem_cache_node *n, } /* - * Get a page from somewhere. Search in increasing NUMA distances. + * Get a slab from somewhere. Search in increasing NUMA distances. 
*/ static void *get_any_partial(struct kmem_cache *s, gfp_t flags, struct slab **ret_slab) @@ -2249,7 +2249,7 @@ static void *get_any_partial(struct kmem_cache *s, gfp_t flags, } /* - * Get a partial page, lock it and return it. + * Get a partial slab, lock it and return it. */ static void *get_partial(struct kmem_cache *s, gfp_t flags, int node, struct slab **ret_slab) @@ -2341,7 +2341,7 @@ static void init_kmem_cache_cpus(struct kmem_cache *s) } /* - * Finishes removing the cpu slab. Merges cpu's freelist with page's freelist, + * Finishes removing the cpu slab. Merges cpu's freelist with slab's freelist, * unfreezes the slabs and puts it on the proper list. * Assumes the slab has been already safely taken away from kmem_cache_cpu * by the caller. @@ -2388,18 +2388,18 @@ static void deactivate_slab(struct kmem_cache *s, struct slab *slab, } /* - * Stage two: Unfreeze the page while splicing the per-cpu - * freelist to the head of page's freelist. + * Stage two: Unfreeze the slab while splicing the per-cpu + * freelist to the head of slab's freelist. * - * Ensure that the page is unfrozen while the list presence + * Ensure that the slab is unfrozen while the list presence * reflects the actual number of objects during unfreeze. * * We setup the list membership and then perform a cmpxchg - * with the count. If there is a mismatch then the page - * is not unfrozen but the page is on the wrong list. + * with the count. If there is a mismatch then the slab + * is not unfrozen but the slab is on the wrong list. * * Then we restart the process which may have to remove - * the page from the list that we just put it on again + * the slab from the list that we just put it on again * because the number of objects in the slab may have * changed. */ @@ -2427,9 +2427,8 @@ static void deactivate_slab(struct kmem_cache *s, struct slab *slab, if (!lock) { lock = 1; /* - * Taking the spinlock removes the possibility - * that acquire_slab() will see a slab page that - * is frozen + * Taking the spinlock removes the possibility that + * acquire_slab() will see a slab that is frozen */ spin_lock_irqsave(&n->list_lock, flags); } @@ -2570,8 +2569,8 @@ static void unfreeze_partials_cpu(struct kmem_cache *s, } /* - * Put a page that was just frozen (in __slab_free|get_partial_node) into a - * partial page slot if available. + * Put a slab that was just frozen (in __slab_free|get_partial_node) into a + * partial slab slot if available. * * If we did not find a slot then simply move all the partials to the * per node partial list. @@ -2842,12 +2841,12 @@ static inline bool pfmemalloc_match(struct slab *slab, gfp_t gfpflags) } /* - * Check the page->freelist of a page and either transfer the freelist to the - * per cpu freelist or deactivate the page. + * Check the slab->freelist and either transfer the freelist to the + * per cpu freelist or deactivate the slab. * - * The page is still frozen if the return value is not NULL. + * The slab is still frozen if the return value is not NULL. * - * If this function returns NULL then the page has been unfrozen. + * If this function returns NULL then the slab has been unfrozen. 
*/ static inline void *get_freelist(struct kmem_cache *s, struct slab *slab) { @@ -2903,7 +2902,7 @@ static void *___slab_alloc(struct kmem_cache *s, gfp_t gfpflags, int node, stat(s, ALLOC_SLOWPATH); -reread_page: +reread_slab: slab = READ_ONCE(c->slab); if (!slab) { @@ -2940,11 +2939,11 @@ static void *___slab_alloc(struct kmem_cache *s, gfp_t gfpflags, int node, if (unlikely(!pfmemalloc_match(slab, gfpflags))) goto deactivate_slab; - /* must check again c->page in case we got preempted and it changed */ + /* must check again c->slab in case we got preempted and it changed */ local_lock_irqsave(&s->cpu_slab->lock, flags); if (unlikely(slab != c->slab)) { local_unlock_irqrestore(&s->cpu_slab->lock, flags); - goto reread_page; + goto reread_slab; } freelist = c->freelist; if (freelist) @@ -2967,8 +2966,8 @@ static void *___slab_alloc(struct kmem_cache *s, gfp_t gfpflags, int node, /* * freelist is pointing to the list of objects to be used. - * page is pointing to the page from which the objects are obtained. - * That page must be frozen for per cpu allocations to work. + * slab is pointing to the slab from which the objects are obtained. + * That slab must be frozen for per cpu allocations to work. */ VM_BUG_ON(!c->slab->frozen); c->freelist = get_freepointer(s, freelist); @@ -2981,7 +2980,7 @@ static void *___slab_alloc(struct kmem_cache *s, gfp_t gfpflags, int node, local_lock_irqsave(&s->cpu_slab->lock, flags); if (slab != c->slab) { local_unlock_irqrestore(&s->cpu_slab->lock, flags); - goto reread_page; + goto reread_slab; } freelist = c->freelist; c->slab = NULL; @@ -2995,7 +2994,7 @@ static void *___slab_alloc(struct kmem_cache *s, gfp_t gfpflags, int node, local_lock_irqsave(&s->cpu_slab->lock, flags); if (unlikely(c->slab)) { local_unlock_irqrestore(&s->cpu_slab->lock, flags); - goto reread_page; + goto reread_slab; } if (unlikely(!slub_percpu_partial(c))) { local_unlock_irqrestore(&s->cpu_slab->lock, flags); @@ -3014,7 +3013,7 @@ static void *___slab_alloc(struct kmem_cache *s, gfp_t gfpflags, int node, freelist = get_partial(s, gfpflags, node, &slab); if (freelist) - goto check_new_page; + goto check_new_slab; slub_put_cpu_ptr(s->cpu_slab); slab = new_slab(s, gfpflags, node); @@ -3026,7 +3025,7 @@ static void *___slab_alloc(struct kmem_cache *s, gfp_t gfpflags, int node, } /* - * No other reference to the page yet so we can + * No other reference to the slab yet so we can * muck around with it freely without cmpxchg */ freelist = slab->freelist; @@ -3034,7 +3033,7 @@ static void *___slab_alloc(struct kmem_cache *s, gfp_t gfpflags, int node, stat(s, ALLOC_SLAB); -check_new_page: +check_new_slab: if (kmem_cache_debug(s)) { if (!alloc_debug_processing(s, slab, freelist, addr)) { @@ -3056,7 +3055,7 @@ static void *___slab_alloc(struct kmem_cache *s, gfp_t gfpflags, int node, */ goto return_single; -retry_load_page: +retry_load_slab: local_lock_irqsave(&s->cpu_slab->lock, flags); if (unlikely(c->slab)) { @@ -3073,7 +3072,7 @@ static void *___slab_alloc(struct kmem_cache *s, gfp_t gfpflags, int node, stat(s, CPUSLAB_FLUSH); - goto retry_load_page; + goto retry_load_slab; } c->slab = slab; @@ -3170,9 +3169,9 @@ static __always_inline void *slab_alloc_node(struct kmem_cache *s, /* * Irqless object alloc/free algorithm used here depends on sequence * of fetching cpu_slab's data. 
tid should be fetched before anything - * on c to guarantee that object and page associated with previous tid + * on c to guarantee that object and slab associated with previous tid * won't be used with current tid. If we fetch tid first, object and - * page could be one associated with next tid and our alloc/free + * slab could be one associated with next tid and our alloc/free * request will be failed. In this case, we will retry. So, no problem. */ barrier(); @@ -3296,7 +3295,7 @@ EXPORT_SYMBOL(kmem_cache_alloc_node_trace); * have a longer lifetime than the cpu slabs in most processing loads. * * So we still attempt to reduce cache line usage. Just take the slab - * lock and free the item. If there is no additional partial page + * lock and free the item. If there is no additional partial slab * handling required then we can return immediately. */ static void __slab_free(struct kmem_cache *s, struct slab *slab, @@ -3374,7 +3373,7 @@ static void __slab_free(struct kmem_cache *s, struct slab *slab, stat(s, FREE_FROZEN); } else if (new.frozen) { /* - * If we just froze the page then put it onto the + * If we just froze the slab then put it onto the * per cpu partial list. */ put_cpu_partial(s, slab, 1); @@ -3428,7 +3427,7 @@ static void __slab_free(struct kmem_cache *s, struct slab *slab, * with all sorts of special processing. * * Bulk free of a freelist with several objects (all pointing to the - * same page) possible by specifying head and tail ptr, plus objects + * same slab) possible by specifying head and tail ptr, plus objects * count (cnt). Bulk free indicated by tail pointer being set. */ static __always_inline void do_slab_free(struct kmem_cache *s, @@ -4213,7 +4212,7 @@ static int kmem_cache_open(struct kmem_cache *s, slab_flags_t flags) #endif /* - * The larger the object size is, the more pages we want on the partial + * The larger the object size is, the more slabs we want on the partial * list to avoid pounding the page allocator excessively. */ set_min_partial(s, ilog2(s->size) / 2); @@ -4597,12 +4596,12 @@ static int __kmem_cache_do_shrink(struct kmem_cache *s) * Build lists of slabs to discard or promote. * * Note that concurrent frees may occur while we hold the - * list_lock. page->inuse here is the upper limit. + * list_lock. slab->inuse here is the upper limit. 
*/ list_for_each_entry_safe(slab, t, &n->partial, slab_list) { int free = slab->objects - slab->inuse; - /* Do not reread page->inuse */ + /* Do not reread slab->inuse */ barrier(); /* We do not keep full slabs on the list */ @@ -5480,7 +5479,7 @@ static ssize_t slabs_cpu_partial_show(struct kmem_cache *s, char *buf) slabs += slab->slabs; } - /* Approximate half-full pages , see slub_set_cpu_partial() */ + /* Approximate half-full slabs, see slub_set_cpu_partial() */ objects = (slabs * oo_objects(s->oo)) / 2; len += sysfs_emit_at(buf, len, "%d(%d)", objects, slabs);
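This patch is purely cosmetic, but the comments it touches document a subtle protocol: slab->freelist and slab->counters (the packed inuse/objects/frozen bitfields) must always change together, via a double-word cmpxchg retried on contention. Below is a distilled sketch of that loop, modelled on the get_freelist() hunk shown earlier in the series; take_freelist() is an illustrative name, not a function in the tree, and the lockdep assertion of the real code is omitted.

/*
 * Double-word cmpxchg pattern the updated comments describe: re-read
 * the freelist pointer and the counters word, build the desired new
 * state, and retry if another CPU raced with us. 'new' is a struct
 * slab used purely as scratch space for the counters bitfields; it is
 * never linked into any list.
 */
static void *take_freelist(struct kmem_cache *s, struct slab *slab)
{
	struct slab new;
	unsigned long counters;
	void *freelist;

	do {
		freelist = slab->freelist;
		counters = slab->counters;
		new.counters = counters;
		/* all remaining objects move to the per-cpu freelist */
		new.inuse = slab->objects;
		/* the slab stays frozen only while objects remain */
		new.frozen = freelist != NULL;
	} while (!__cmpxchg_double_slab(s, slab,
					freelist, counters,
					NULL, new.counters,
					"take_freelist"));

	return freelist;
}

If the cmpxchg succeeds, the caller owns every object on the returned freelist without ever having taken the node's list_lock.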
From patchwork Wed Dec 1 18:14:56 2021 X-Patchwork-Submitter: Vlastimil Babka X-Patchwork-Id: 12650723 From: Vlastimil Babka To: Matthew Wilcox , Christoph Lameter , David Rientjes , Joonsoo Kim , Pekka Enberg Cc: linux-mm@kvack.org, Andrew Morton , patches@lists.linux.dev, Vlastimil Babka Subject: [PATCH v2 19/33] mm/slab: Convert kmem_getpages() and kmem_freepages() to struct slab Date: Wed, 1 Dec 2021 19:14:56 +0100 Message-Id: <20211201181510.18784-20-vbabka@suse.cz> In-Reply-To: <20211201181510.18784-1-vbabka@suse.cz> References: <20211201181510.18784-1-vbabka@suse.cz> These functions sit at the boundary to the page allocator. Also use a folio internally to avoid an extra compound_head() when dealing with page flags. Signed-off-by: Vlastimil Babka --- mm/slab.c | 52 ++++++++++++++++++++++++++++------------------------ 1 file changed, 28 insertions(+), 24 deletions(-) diff --git a/mm/slab.c b/mm/slab.c index 38fcd3f496df..892f7042b3b9 100644 --- a/mm/slab.c +++ b/mm/slab.c @@ -1367,57 +1367,61 @@ slab_out_of_memory(struct kmem_cache *cachep, gfp_t gfpflags, int nodeid) * did not request dmaable memory, we might get it, but that * would be relatively rare and ignorable.
*/ -static struct page *kmem_getpages(struct kmem_cache *cachep, gfp_t flags, +static struct slab *kmem_getpages(struct kmem_cache *cachep, gfp_t flags, int nodeid) { - struct page *page; + struct folio *folio; + struct slab *slab; flags |= cachep->allocflags; - page = __alloc_pages_node(nodeid, flags, cachep->gfporder); - if (!page) { + folio = (struct folio *) __alloc_pages_node(nodeid, flags, cachep->gfporder); + if (!folio) { slab_out_of_memory(cachep, flags, nodeid); return NULL; } - account_slab(page_slab(page), cachep->gfporder, cachep, flags); - __SetPageSlab(page); + slab = folio_slab(folio); + + account_slab(slab, cachep->gfporder, cachep, flags); + __folio_set_slab(folio); /* Record if ALLOC_NO_WATERMARKS was set when allocating the slab */ - if (sk_memalloc_socks() && page_is_pfmemalloc(page)) - SetPageSlabPfmemalloc(page); + if (sk_memalloc_socks() && page_is_pfmemalloc(folio_page(folio, 0))) + slab_set_pfmemalloc(slab); - return page; + return slab; } /* * Interface to system's page release. */ -static void kmem_freepages(struct kmem_cache *cachep, struct page *page) +static void kmem_freepages(struct kmem_cache *cachep, struct slab *slab) { int order = cachep->gfporder; + struct folio *folio = slab_folio(slab); - BUG_ON(!PageSlab(page)); - __ClearPageSlabPfmemalloc(page); - __ClearPageSlab(page); - page_mapcount_reset(page); + BUG_ON(!folio_test_slab(folio)); + __slab_clear_pfmemalloc(slab); + __folio_clear_slab(folio); + page_mapcount_reset(folio_page(folio, 0)); /* In union with page->mapping where page allocator expects NULL */ - page->slab_cache = NULL; + slab->slab_cache = NULL; if (current->reclaim_state) current->reclaim_state->reclaimed_slab += 1 << order; - unaccount_slab(page_slab(page), order, cachep); - __free_pages(page, order); + unaccount_slab(slab, order, cachep); + __free_pages(folio_page(folio, 0), order); } static void kmem_rcu_free(struct rcu_head *head) { struct kmem_cache *cachep; - struct page *page; + struct slab *slab; - page = container_of(head, struct page, rcu_head); - cachep = page->slab_cache; + slab = container_of(head, struct slab, rcu_head); + cachep = slab->slab_cache; - kmem_freepages(cachep, page); + kmem_freepages(cachep, slab); } #if DEBUG @@ -1624,7 +1628,7 @@ static void slab_destroy(struct kmem_cache *cachep, struct page *page) if (unlikely(cachep->flags & SLAB_TYPESAFE_BY_RCU)) call_rcu(&page->rcu_head, kmem_rcu_free); else - kmem_freepages(cachep, page); + kmem_freepages(cachep, page_slab(page)); /* * From now on, we don't use freelist @@ -2578,7 +2582,7 @@ static struct page *cache_grow_begin(struct kmem_cache *cachep, * Get mem for the objs. Attempt to allocate a physical page from * 'nodeid'. 
*/ - page = kmem_getpages(cachep, local_flags, nodeid); + page = slab_page(kmem_getpages(cachep, local_flags, nodeid)); if (!page) goto failed; @@ -2620,7 +2624,7 @@ static struct page *cache_grow_begin(struct kmem_cache *cachep, return page; opps1: - kmem_freepages(cachep, page); + kmem_freepages(cachep, page_slab(page)); failed: if (gfpflags_allow_blocking(local_flags)) local_irq_disable();
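A minimal userspace sketch may help here; it models (with hypothetical stand-in structs, not the real kernel definitions) how folio_slab() and slab_folio() are pure type casts over the same underlying struct page, which is what lets kmem_getpages() allocate via the folio API and still hand back a struct slab:

#include <assert.h>
#include <stdio.h>

struct page  { unsigned long flags; void *slab_cache; };
/* Stand-ins: in the kernel, struct folio and struct slab are distinct
 * views that alias the memory of the first struct page of a compound
 * allocation; here they are reduced to a wrapped struct page. */
struct folio { struct page page; };
struct slab  { struct page page; };

/* Modeled after the kernel's cast-only helpers (simplified: no
 * compound-head handling, no build-time layout asserts). */
static struct slab *folio_slab(struct folio *folio)
{
        return (struct slab *)folio;
}

static struct folio *slab_folio(struct slab *slab)
{
        return (struct folio *)slab;
}

int main(void)
{
        struct page p = { .flags = 0 };
        struct folio *folio = (struct folio *)&p;
        struct slab *slab = folio_slab(folio);

        /* All three names refer to the same storage. */
        assert((void *)slab == (void *)folio && (void *)folio == (void *)&p);
        assert(slab_folio(slab) == folio);
        printf("page=%p folio=%p slab=%p\n", (void *)&p, (void *)folio,
               (void *)slab);
        return 0;
}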
From patchwork Wed Dec 1 18:14:57 2021
From: Vlastimil Babka
To: Matthew Wilcox, Christoph Lameter, David Rientjes, Joonsoo Kim, Pekka Enberg
Cc: linux-mm@kvack.org, Andrew Morton, patches@lists.linux.dev, Vlastimil Babka, Julia Lawall, Luis Chamberlain
Subject: [PATCH v2 20/33] mm/slab: Convert most struct page to struct slab by spatch
Date: Wed, 1 Dec 2021 19:14:57 +0100
Message-Id: <20211201181510.18784-21-vbabka@suse.cz>
In-Reply-To: <20211201181510.18784-1-vbabka@suse.cz>
References: <20211201181510.18784-1-vbabka@suse.cz>

The majority of the conversion from struct page to struct slab in SLAB internals can be delegated to a coccinelle semantic patch. This includes renaming variables with 'page' in their name to 'slab', and similar changes. Big thanks to Julia Lawall and Luis Chamberlain for help with coccinelle.

// Options: --include-headers --no-includes --smpl-spacing mm/slab.c
// Note: needs coccinelle 1.1.1 to avoid breaking whitespace, and ocaml for the
// embedded script

// build list of functions for applying the next rule @initialize:ocaml@ @@ let ok_function p = not (List.mem (List.hd p).current_element ["kmem_getpages";"kmem_freepages"]) // convert the type in selected functions @@ position p : script:ocaml() { ok_function p }; @@ - struct page@p + struct slab @@ @@ -PageSlabPfmemalloc(page) +slab_test_pfmemalloc(slab) @@ @@ -ClearPageSlabPfmemalloc(page) +slab_clear_pfmemalloc(slab) @@ @@ obj_to_index( ..., - page + slab_page(slab) ,...) // for all functions, change any "struct slab *page" parameter to "struct slab *slab" in the signature, and generally all occurrences of "page" to "slab" in the body - with some special cases. @@ identifier fn; expression E; @@ fn(..., - struct slab *page + struct slab *slab ,...) { <...
( - int page_node; + int slab_node; | - page_node + slab_node | - page_slab(page) + slab | - page_address(page) + slab_address(slab) | - page_size(page) + slab_size(slab) | - page_to_nid(page) + slab_nid(slab) | - virt_to_head_page(E) + virt_to_slab(E) | - page + slab ) ...> } // rename a function parameter @@ identifier fn; expression E; @@ fn(..., - int page_node + int slab_node ,...) { <... - page_node + slab_node ...> } // functions converted by previous rules that were temporarily called using // slab_page(E) so we want to remove the wrapper now that they accept struct // slab ptr directly @@ identifier fn =~ "index_to_obj"; expression E; @@ fn(..., - slab_page(E) + E ,...) // functions that were returning struct page ptr and now will return struct // slab ptr, including slab_page() wrapper removal @@ identifier fn =~ "cache_grow_begin|get_valid_first_slab|get_first_slab"; expression E; @@ fn(...) { <... - slab_page(E) + E ...> } // rename any former struct page * declarations @@ @@ struct slab * -page +slab ; // all functions (with exceptions) with a local "struct slab *page" variable // that will be renamed to "struct slab *slab" @@ identifier fn !~ "kmem_getpages|kmem_freepages"; expression E; @@ fn(...) { <... ( - page_slab(page) + slab | - page_to_nid(page) + slab_nid(slab) | - kasan_poison_slab(page) + kasan_poison_slab(slab_page(slab)) | - page_address(page) + slab_address(slab) | - page_size(page) + slab_size(slab) | - page->pages + slab->slabs | - page = virt_to_head_page(E) + slab = virt_to_slab(E) | - virt_to_head_page(E) + virt_to_slab(E) | - page + slab ) ...> } Signed-off-by: Vlastimil Babka Cc: Julia Lawall Cc: Luis Chamberlain --- mm/slab.c | 360 +++++++++++++++++++++++++++--------------------------- 1 file changed, 180 insertions(+), 180 deletions(-) diff --git a/mm/slab.c b/mm/slab.c index 892f7042b3b9..970b3e13b4c1 100644 --- a/mm/slab.c +++ b/mm/slab.c @@ -218,7 +218,7 @@ static void cache_reap(struct work_struct *unused); static inline void fixup_objfreelist_debug(struct kmem_cache *cachep, void **list); static inline void fixup_slab_list(struct kmem_cache *cachep, - struct kmem_cache_node *n, struct page *page, + struct kmem_cache_node *n, struct slab *slab, void **list); static int slab_early_init = 1; @@ -373,9 +373,9 @@ static int slab_max_order = SLAB_MAX_ORDER_LO; static bool slab_max_order_set __initdata; static inline void *index_to_obj(struct kmem_cache *cache, - const struct page *page, unsigned int idx) + const struct slab *slab, unsigned int idx) { - return page->s_mem + cache->size * idx; + return slab->s_mem + cache->size * idx; } #define BOOT_CPUCACHE_ENTRIES 1 @@ -550,17 +550,17 @@ static struct array_cache *alloc_arraycache(int node, int entries, } static noinline void cache_free_pfmemalloc(struct kmem_cache *cachep, - struct page *page, void *objp) + struct slab *slab, void *objp) { struct kmem_cache_node *n; - int page_node; + int slab_node; LIST_HEAD(list); - page_node = page_to_nid(page); - n = get_node(cachep, page_node); + slab_node = slab_nid(slab); + n = get_node(cachep, slab_node); spin_lock(&n->list_lock); - free_block(cachep, &objp, 1, page_node, &list); + free_block(cachep, &objp, 1, slab_node, &list); spin_unlock(&n->list_lock); slabs_destroy(cachep, &list); @@ -761,7 +761,7 @@ static void drain_alien_cache(struct kmem_cache *cachep, } static int __cache_free_alien(struct kmem_cache *cachep, void *objp, - int node, int page_node) + int node, int slab_node) { struct kmem_cache_node *n; struct alien_cache *alien = NULL; @@ -770,21 
+770,21 @@ static int __cache_free_alien(struct kmem_cache *cachep, void *objp, n = get_node(cachep, node); STATS_INC_NODEFREES(cachep); - if (n->alien && n->alien[page_node]) { - alien = n->alien[page_node]; + if (n->alien && n->alien[slab_node]) { + alien = n->alien[slab_node]; ac = &alien->ac; spin_lock(&alien->lock); if (unlikely(ac->avail == ac->limit)) { STATS_INC_ACOVERFLOW(cachep); - __drain_alien_cache(cachep, ac, page_node, &list); + __drain_alien_cache(cachep, ac, slab_node, &list); } __free_one(ac, objp); spin_unlock(&alien->lock); slabs_destroy(cachep, &list); } else { - n = get_node(cachep, page_node); + n = get_node(cachep, slab_node); spin_lock(&n->list_lock); - free_block(cachep, &objp, 1, page_node, &list); + free_block(cachep, &objp, 1, slab_node, &list); spin_unlock(&n->list_lock); slabs_destroy(cachep, &list); } @@ -1557,18 +1557,18 @@ static void check_poison_obj(struct kmem_cache *cachep, void *objp) /* Print some data about the neighboring objects, if they * exist: */ - struct page *page = virt_to_head_page(objp); + struct slab *slab = virt_to_slab(objp); unsigned int objnr; - objnr = obj_to_index(cachep, page, objp); + objnr = obj_to_index(cachep, slab_page(slab), objp); if (objnr) { - objp = index_to_obj(cachep, page, objnr - 1); + objp = index_to_obj(cachep, slab, objnr - 1); realobj = (char *)objp + obj_offset(cachep); pr_err("Prev obj: start=%px, len=%d\n", realobj, size); print_objinfo(cachep, objp, 2); } if (objnr + 1 < cachep->num) { - objp = index_to_obj(cachep, page, objnr + 1); + objp = index_to_obj(cachep, slab, objnr + 1); realobj = (char *)objp + obj_offset(cachep); pr_err("Next obj: start=%px, len=%d\n", realobj, size); print_objinfo(cachep, objp, 2); @@ -1579,17 +1579,17 @@ static void check_poison_obj(struct kmem_cache *cachep, void *objp) #if DEBUG static void slab_destroy_debugcheck(struct kmem_cache *cachep, - struct page *page) + struct slab *slab) { int i; if (OBJFREELIST_SLAB(cachep) && cachep->flags & SLAB_POISON) { - poison_obj(cachep, page->freelist - obj_offset(cachep), + poison_obj(cachep, slab->freelist - obj_offset(cachep), POISON_FREE); } for (i = 0; i < cachep->num; i++) { - void *objp = index_to_obj(cachep, page, i); + void *objp = index_to_obj(cachep, slab, i); if (cachep->flags & SLAB_POISON) { check_poison_obj(cachep, objp); @@ -1605,7 +1605,7 @@ static void slab_destroy_debugcheck(struct kmem_cache *cachep, } #else static void slab_destroy_debugcheck(struct kmem_cache *cachep, - struct page *page) + struct slab *slab) { } #endif @@ -1619,16 +1619,16 @@ static void slab_destroy_debugcheck(struct kmem_cache *cachep, * Before calling the slab page must have been unlinked from the cache. The * kmem_cache_node ->list_lock is not held/needed. 
*/ -static void slab_destroy(struct kmem_cache *cachep, struct page *page) +static void slab_destroy(struct kmem_cache *cachep, struct slab *slab) { void *freelist; - freelist = page->freelist; - slab_destroy_debugcheck(cachep, page); + freelist = slab->freelist; + slab_destroy_debugcheck(cachep, slab); if (unlikely(cachep->flags & SLAB_TYPESAFE_BY_RCU)) - call_rcu(&page->rcu_head, kmem_rcu_free); + call_rcu(&slab->rcu_head, kmem_rcu_free); else - kmem_freepages(cachep, page_slab(page)); + kmem_freepages(cachep, slab); /* * From now on, we don't use freelist @@ -1644,11 +1644,11 @@ static void slab_destroy(struct kmem_cache *cachep, struct page *page) */ static void slabs_destroy(struct kmem_cache *cachep, struct list_head *list) { - struct page *page, *n; + struct slab *slab, *n; - list_for_each_entry_safe(page, n, list, slab_list) { - list_del(&page->slab_list); - slab_destroy(cachep, page); + list_for_each_entry_safe(slab, n, list, slab_list) { + list_del(&slab->slab_list); + slab_destroy(cachep, slab); } } @@ -2198,7 +2198,7 @@ static int drain_freelist(struct kmem_cache *cache, { struct list_head *p; int nr_freed; - struct page *page; + struct slab *slab; nr_freed = 0; while (nr_freed < tofree && !list_empty(&n->slabs_free)) { @@ -2210,8 +2210,8 @@ static int drain_freelist(struct kmem_cache *cache, goto out; } - page = list_entry(p, struct page, slab_list); - list_del(&page->slab_list); + slab = list_entry(p, struct slab, slab_list); + list_del(&slab->slab_list); n->free_slabs--; n->total_slabs--; /* @@ -2220,7 +2220,7 @@ static int drain_freelist(struct kmem_cache *cache, */ n->free_objects -= cache->num; spin_unlock_irq(&n->list_lock); - slab_destroy(cache, page); + slab_destroy(cache, slab); nr_freed++; } out: @@ -2295,14 +2295,14 @@ void __kmem_cache_release(struct kmem_cache *cachep) * which are all initialized during kmem_cache_init(). 
*/ static void *alloc_slabmgmt(struct kmem_cache *cachep, - struct page *page, int colour_off, + struct slab *slab, int colour_off, gfp_t local_flags, int nodeid) { void *freelist; - void *addr = page_address(page); + void *addr = slab_address(slab); - page->s_mem = addr + colour_off; - page->active = 0; + slab->s_mem = addr + colour_off; + slab->active = 0; if (OBJFREELIST_SLAB(cachep)) freelist = NULL; @@ -2319,24 +2319,24 @@ static void *alloc_slabmgmt(struct kmem_cache *cachep, return freelist; } -static inline freelist_idx_t get_free_obj(struct page *page, unsigned int idx) +static inline freelist_idx_t get_free_obj(struct slab *slab, unsigned int idx) { - return ((freelist_idx_t *)page->freelist)[idx]; + return ((freelist_idx_t *) slab->freelist)[idx]; } -static inline void set_free_obj(struct page *page, +static inline void set_free_obj(struct slab *slab, unsigned int idx, freelist_idx_t val) { - ((freelist_idx_t *)(page->freelist))[idx] = val; + ((freelist_idx_t *)(slab->freelist))[idx] = val; } -static void cache_init_objs_debug(struct kmem_cache *cachep, struct page *page) +static void cache_init_objs_debug(struct kmem_cache *cachep, struct slab *slab) { #if DEBUG int i; for (i = 0; i < cachep->num; i++) { - void *objp = index_to_obj(cachep, page, i); + void *objp = index_to_obj(cachep, slab, i); if (cachep->flags & SLAB_STORE_USER) *dbg_userword(cachep, objp) = NULL; @@ -2420,17 +2420,17 @@ static freelist_idx_t next_random_slot(union freelist_init_state *state) } /* Swap two freelist entries */ -static void swap_free_obj(struct page *page, unsigned int a, unsigned int b) +static void swap_free_obj(struct slab *slab, unsigned int a, unsigned int b) { - swap(((freelist_idx_t *)page->freelist)[a], - ((freelist_idx_t *)page->freelist)[b]); + swap(((freelist_idx_t *) slab->freelist)[a], + ((freelist_idx_t *) slab->freelist)[b]); } /* * Shuffle the freelist initialization state based on pre-computed lists. * return true if the list was successfully shuffled, false otherwise. 
*/ -static bool shuffle_freelist(struct kmem_cache *cachep, struct page *page) +static bool shuffle_freelist(struct kmem_cache *cachep, struct slab *slab) { unsigned int objfreelist = 0, i, rand, count = cachep->num; union freelist_init_state state; @@ -2447,7 +2447,7 @@ static bool shuffle_freelist(struct kmem_cache *cachep, struct page *page) objfreelist = count - 1; else objfreelist = next_random_slot(&state); - page->freelist = index_to_obj(cachep, page, objfreelist) + + slab->freelist = index_to_obj(cachep, slab, objfreelist) + obj_offset(cachep); count--; } @@ -2458,51 +2458,51 @@ static bool shuffle_freelist(struct kmem_cache *cachep, struct page *page) */ if (!precomputed) { for (i = 0; i < count; i++) - set_free_obj(page, i, i); + set_free_obj(slab, i, i); /* Fisher-Yates shuffle */ for (i = count - 1; i > 0; i--) { rand = prandom_u32_state(&state.rnd_state); rand %= (i + 1); - swap_free_obj(page, i, rand); + swap_free_obj(slab, i, rand); } } else { for (i = 0; i < count; i++) - set_free_obj(page, i, next_random_slot(&state)); + set_free_obj(slab, i, next_random_slot(&state)); } if (OBJFREELIST_SLAB(cachep)) - set_free_obj(page, cachep->num - 1, objfreelist); + set_free_obj(slab, cachep->num - 1, objfreelist); return true; } #else static inline bool shuffle_freelist(struct kmem_cache *cachep, - struct page *page) + struct slab *slab) { return false; } #endif /* CONFIG_SLAB_FREELIST_RANDOM */ static void cache_init_objs(struct kmem_cache *cachep, - struct page *page) + struct slab *slab) { int i; void *objp; bool shuffled; - cache_init_objs_debug(cachep, page); + cache_init_objs_debug(cachep, slab); /* Try to randomize the freelist if enabled */ - shuffled = shuffle_freelist(cachep, page); + shuffled = shuffle_freelist(cachep, slab); if (!shuffled && OBJFREELIST_SLAB(cachep)) { - page->freelist = index_to_obj(cachep, page, cachep->num - 1) + + slab->freelist = index_to_obj(cachep, slab, cachep->num - 1) + obj_offset(cachep); } for (i = 0; i < cachep->num; i++) { - objp = index_to_obj(cachep, page, i); + objp = index_to_obj(cachep, slab, i); objp = kasan_init_slab_obj(cachep, objp); /* constructor could break poison info */ @@ -2513,48 +2513,48 @@ static void cache_init_objs(struct kmem_cache *cachep, } if (!shuffled) - set_free_obj(page, i, i); + set_free_obj(slab, i, i); } } -static void *slab_get_obj(struct kmem_cache *cachep, struct page *page) +static void *slab_get_obj(struct kmem_cache *cachep, struct slab *slab) { void *objp; - objp = index_to_obj(cachep, page, get_free_obj(page, page->active)); - page->active++; + objp = index_to_obj(cachep, slab, get_free_obj(slab, slab->active)); + slab->active++; return objp; } static void slab_put_obj(struct kmem_cache *cachep, - struct page *page, void *objp) + struct slab *slab, void *objp) { - unsigned int objnr = obj_to_index(cachep, page, objp); + unsigned int objnr = obj_to_index(cachep, slab_page(slab), objp); #if DEBUG unsigned int i; /* Verify double free bug */ - for (i = page->active; i < cachep->num; i++) { - if (get_free_obj(page, i) == objnr) { + for (i = slab->active; i < cachep->num; i++) { + if (get_free_obj(slab, i) == objnr) { pr_err("slab: double free detected in cache '%s', objp %px\n", cachep->name, objp); BUG(); } } #endif - page->active--; - if (!page->freelist) - page->freelist = objp + obj_offset(cachep); + slab->active--; + if (!slab->freelist) + slab->freelist = objp + obj_offset(cachep); - set_free_obj(page, page->active, objnr); + set_free_obj(slab, slab->active, objnr); } /* * Grow (by 1) the number of 
slabs within a cache. This is called by * kmem_cache_alloc() when there are no active objs left in a cache. */ -static struct page *cache_grow_begin(struct kmem_cache *cachep, +static struct slab *cache_grow_begin(struct kmem_cache *cachep, gfp_t flags, int nodeid) { void *freelist; @@ -2562,7 +2562,7 @@ static struct page *cache_grow_begin(struct kmem_cache *cachep, gfp_t local_flags; int page_node; struct kmem_cache_node *n; - struct page *page; + struct slab *slab; /* * Be lazy and only check for valid flags here, keeping it out of the @@ -2582,11 +2582,11 @@ static struct page *cache_grow_begin(struct kmem_cache *cachep, * Get mem for the objs. Attempt to allocate a physical page from * 'nodeid'. */ - page = slab_page(kmem_getpages(cachep, local_flags, nodeid)); - if (!page) + slab = kmem_getpages(cachep, local_flags, nodeid); + if (!slab) goto failed; - page_node = page_to_nid(page); + page_node = slab_nid(slab); n = get_node(cachep, page_node); /* Get colour for the slab, and cal the next value. */ @@ -2605,55 +2605,55 @@ static struct page *cache_grow_begin(struct kmem_cache *cachep, * page_address() in the latter returns a non-tagged pointer, * as it should be for slab pages. */ - kasan_poison_slab(page); + kasan_poison_slab(slab_page(slab)); /* Get slab management. */ - freelist = alloc_slabmgmt(cachep, page, offset, + freelist = alloc_slabmgmt(cachep, slab, offset, local_flags & ~GFP_CONSTRAINT_MASK, page_node); if (OFF_SLAB(cachep) && !freelist) goto opps1; - page->slab_cache = cachep; - page->freelist = freelist; + slab->slab_cache = cachep; + slab->freelist = freelist; - cache_init_objs(cachep, page); + cache_init_objs(cachep, slab); if (gfpflags_allow_blocking(local_flags)) local_irq_disable(); - return page; + return slab; opps1: - kmem_freepages(cachep, page_slab(page)); + kmem_freepages(cachep, slab); failed: if (gfpflags_allow_blocking(local_flags)) local_irq_disable(); return NULL; } -static void cache_grow_end(struct kmem_cache *cachep, struct page *page) +static void cache_grow_end(struct kmem_cache *cachep, struct slab *slab) { struct kmem_cache_node *n; void *list = NULL; check_irq_off(); - if (!page) + if (!slab) return; - INIT_LIST_HEAD(&page->slab_list); - n = get_node(cachep, page_to_nid(page)); + INIT_LIST_HEAD(&slab->slab_list); + n = get_node(cachep, slab_nid(slab)); spin_lock(&n->list_lock); n->total_slabs++; - if (!page->active) { - list_add_tail(&page->slab_list, &n->slabs_free); + if (!slab->active) { + list_add_tail(&slab->slab_list, &n->slabs_free); n->free_slabs++; } else - fixup_slab_list(cachep, n, page, &list); + fixup_slab_list(cachep, n, slab, &list); STATS_INC_GROWN(cachep); - n->free_objects += cachep->num - page->active; + n->free_objects += cachep->num - slab->active; spin_unlock(&n->list_lock); fixup_objfreelist_debug(cachep, &list); @@ -2701,13 +2701,13 @@ static void *cache_free_debugcheck(struct kmem_cache *cachep, void *objp, unsigned long caller) { unsigned int objnr; - struct page *page; + struct slab *slab; BUG_ON(virt_to_cache(objp) != cachep); objp -= obj_offset(cachep); kfree_debugcheck(objp); - page = virt_to_head_page(objp); + slab = virt_to_slab(objp); if (cachep->flags & SLAB_RED_ZONE) { verify_redzone_free(cachep, objp); @@ -2717,10 +2717,10 @@ static void *cache_free_debugcheck(struct kmem_cache *cachep, void *objp, if (cachep->flags & SLAB_STORE_USER) *dbg_userword(cachep, objp) = (void *)caller; - objnr = obj_to_index(cachep, page, objp); + objnr = obj_to_index(cachep, slab_page(slab), objp); BUG_ON(objnr >= cachep->num); - 
BUG_ON(objp != index_to_obj(cachep, page, objnr)); + BUG_ON(objp != index_to_obj(cachep, slab, objnr)); if (cachep->flags & SLAB_POISON) { poison_obj(cachep, objp, POISON_FREE); @@ -2750,97 +2750,97 @@ static inline void fixup_objfreelist_debug(struct kmem_cache *cachep, } static inline void fixup_slab_list(struct kmem_cache *cachep, - struct kmem_cache_node *n, struct page *page, + struct kmem_cache_node *n, struct slab *slab, void **list) { /* move slabp to correct slabp list: */ - list_del(&page->slab_list); - if (page->active == cachep->num) { - list_add(&page->slab_list, &n->slabs_full); + list_del(&slab->slab_list); + if (slab->active == cachep->num) { + list_add(&slab->slab_list, &n->slabs_full); if (OBJFREELIST_SLAB(cachep)) { #if DEBUG /* Poisoning will be done without holding the lock */ if (cachep->flags & SLAB_POISON) { - void **objp = page->freelist; + void **objp = slab->freelist; *objp = *list; *list = objp; } #endif - page->freelist = NULL; + slab->freelist = NULL; } } else - list_add(&page->slab_list, &n->slabs_partial); + list_add(&slab->slab_list, &n->slabs_partial); } /* Try to find non-pfmemalloc slab if needed */ -static noinline struct page *get_valid_first_slab(struct kmem_cache_node *n, - struct page *page, bool pfmemalloc) +static noinline struct slab *get_valid_first_slab(struct kmem_cache_node *n, + struct slab *slab, bool pfmemalloc) { - if (!page) + if (!slab) return NULL; if (pfmemalloc) - return page; + return slab; - if (!PageSlabPfmemalloc(page)) - return page; + if (!slab_test_pfmemalloc(slab)) + return slab; /* No need to keep pfmemalloc slab if we have enough free objects */ if (n->free_objects > n->free_limit) { - ClearPageSlabPfmemalloc(page); - return page; + slab_clear_pfmemalloc(slab); + return slab; } /* Move pfmemalloc slab to the end of list to speed up next search */ - list_del(&page->slab_list); - if (!page->active) { - list_add_tail(&page->slab_list, &n->slabs_free); + list_del(&slab->slab_list); + if (!slab->active) { + list_add_tail(&slab->slab_list, &n->slabs_free); n->free_slabs++; } else - list_add_tail(&page->slab_list, &n->slabs_partial); + list_add_tail(&slab->slab_list, &n->slabs_partial); - list_for_each_entry(page, &n->slabs_partial, slab_list) { - if (!PageSlabPfmemalloc(page)) - return page; + list_for_each_entry(slab, &n->slabs_partial, slab_list) { + if (!slab_test_pfmemalloc(slab)) + return slab; } n->free_touched = 1; - list_for_each_entry(page, &n->slabs_free, slab_list) { - if (!PageSlabPfmemalloc(page)) { + list_for_each_entry(slab, &n->slabs_free, slab_list) { + if (!slab_test_pfmemalloc(slab)) { n->free_slabs--; - return page; + return slab; } } return NULL; } -static struct page *get_first_slab(struct kmem_cache_node *n, bool pfmemalloc) +static struct slab *get_first_slab(struct kmem_cache_node *n, bool pfmemalloc) { - struct page *page; + struct slab *slab; assert_spin_locked(&n->list_lock); - page = list_first_entry_or_null(&n->slabs_partial, struct page, + slab = list_first_entry_or_null(&n->slabs_partial, struct slab, slab_list); - if (!page) { + if (!slab) { n->free_touched = 1; - page = list_first_entry_or_null(&n->slabs_free, struct page, + slab = list_first_entry_or_null(&n->slabs_free, struct slab, slab_list); - if (page) + if (slab) n->free_slabs--; } if (sk_memalloc_socks()) - page = get_valid_first_slab(n, page, pfmemalloc); + slab = get_valid_first_slab(n, slab, pfmemalloc); - return page; + return slab; } static noinline void *cache_alloc_pfmemalloc(struct kmem_cache *cachep, struct kmem_cache_node *n, 
gfp_t flags) { - struct page *page; + struct slab *slab; void *obj; void *list = NULL; @@ -2848,16 +2848,16 @@ static noinline void *cache_alloc_pfmemalloc(struct kmem_cache *cachep, return NULL; spin_lock(&n->list_lock); - page = get_first_slab(n, true); - if (!page) { + slab = get_first_slab(n, true); + if (!slab) { spin_unlock(&n->list_lock); return NULL; } - obj = slab_get_obj(cachep, page); + obj = slab_get_obj(cachep, slab); n->free_objects--; - fixup_slab_list(cachep, n, page, &list); + fixup_slab_list(cachep, n, slab, &list); spin_unlock(&n->list_lock); fixup_objfreelist_debug(cachep, &list); @@ -2870,20 +2870,20 @@ static noinline void *cache_alloc_pfmemalloc(struct kmem_cache *cachep, * or cache_grow_end() for new slab */ static __always_inline int alloc_block(struct kmem_cache *cachep, - struct array_cache *ac, struct page *page, int batchcount) + struct array_cache *ac, struct slab *slab, int batchcount) { /* * There must be at least one object available for * allocation. */ - BUG_ON(page->active >= cachep->num); + BUG_ON(slab->active >= cachep->num); - while (page->active < cachep->num && batchcount--) { + while (slab->active < cachep->num && batchcount--) { STATS_INC_ALLOCED(cachep); STATS_INC_ACTIVE(cachep); STATS_SET_HIGH(cachep); - ac->entry[ac->avail++] = slab_get_obj(cachep, page); + ac->entry[ac->avail++] = slab_get_obj(cachep, slab); } return batchcount; @@ -2896,7 +2896,7 @@ static void *cache_alloc_refill(struct kmem_cache *cachep, gfp_t flags) struct array_cache *ac, *shared; int node; void *list = NULL; - struct page *page; + struct slab *slab; check_irq_off(); node = numa_mem_id(); @@ -2929,14 +2929,14 @@ static void *cache_alloc_refill(struct kmem_cache *cachep, gfp_t flags) while (batchcount > 0) { /* Get slab alloc is to come from. */ - page = get_first_slab(n, false); - if (!page) + slab = get_first_slab(n, false); + if (!slab) goto must_grow; check_spinlock_acquired(cachep); - batchcount = alloc_block(cachep, ac, page, batchcount); - fixup_slab_list(cachep, n, page, &list); + batchcount = alloc_block(cachep, ac, slab, batchcount); + fixup_slab_list(cachep, n, slab, &list); } must_grow: @@ -2955,16 +2955,16 @@ static void *cache_alloc_refill(struct kmem_cache *cachep, gfp_t flags) return obj; } - page = cache_grow_begin(cachep, gfp_exact_node(flags), node); + slab = cache_grow_begin(cachep, gfp_exact_node(flags), node); /* * cache_grow_begin() can reenable interrupts, * then ac could change. */ ac = cpu_cache_get(cachep); - if (!ac->avail && page) - alloc_block(cachep, ac, page, batchcount); - cache_grow_end(cachep, page); + if (!ac->avail && slab) + alloc_block(cachep, ac, slab, batchcount); + cache_grow_end(cachep, slab); if (!ac->avail) return NULL; @@ -3094,7 +3094,7 @@ static void *fallback_alloc(struct kmem_cache *cache, gfp_t flags) struct zone *zone; enum zone_type highest_zoneidx = gfp_zone(flags); void *obj = NULL; - struct page *page; + struct slab *slab; int nid; unsigned int cpuset_mems_cookie; @@ -3130,10 +3130,10 @@ static void *fallback_alloc(struct kmem_cache *cache, gfp_t flags) * We may trigger various forms of reclaim on the allowed * set and go into memory reserves if necessary. 
*/ - page = cache_grow_begin(cache, flags, numa_mem_id()); - cache_grow_end(cache, page); - if (page) { - nid = page_to_nid(page); + slab = cache_grow_begin(cache, flags, numa_mem_id()); + cache_grow_end(cache, slab); + if (slab) { + nid = slab_nid(slab); obj = ____cache_alloc_node(cache, gfp_exact_node(flags), nid); @@ -3157,7 +3157,7 @@ static void *fallback_alloc(struct kmem_cache *cache, gfp_t flags) static void *____cache_alloc_node(struct kmem_cache *cachep, gfp_t flags, int nodeid) { - struct page *page; + struct slab *slab; struct kmem_cache_node *n; void *obj = NULL; void *list = NULL; @@ -3168,8 +3168,8 @@ static void *____cache_alloc_node(struct kmem_cache *cachep, gfp_t flags, check_irq_off(); spin_lock(&n->list_lock); - page = get_first_slab(n, false); - if (!page) + slab = get_first_slab(n, false); + if (!slab) goto must_grow; check_spinlock_acquired_node(cachep, nodeid); @@ -3178,12 +3178,12 @@ static void *____cache_alloc_node(struct kmem_cache *cachep, gfp_t flags, STATS_INC_ACTIVE(cachep); STATS_SET_HIGH(cachep); - BUG_ON(page->active == cachep->num); + BUG_ON(slab->active == cachep->num); - obj = slab_get_obj(cachep, page); + obj = slab_get_obj(cachep, slab); n->free_objects--; - fixup_slab_list(cachep, n, page, &list); + fixup_slab_list(cachep, n, slab, &list); spin_unlock(&n->list_lock); fixup_objfreelist_debug(cachep, &list); @@ -3191,12 +3191,12 @@ static void *____cache_alloc_node(struct kmem_cache *cachep, gfp_t flags, must_grow: spin_unlock(&n->list_lock); - page = cache_grow_begin(cachep, gfp_exact_node(flags), nodeid); - if (page) { + slab = cache_grow_begin(cachep, gfp_exact_node(flags), nodeid); + if (slab) { /* This slab isn't counted yet so don't update free_objects */ - obj = slab_get_obj(cachep, page); + obj = slab_get_obj(cachep, slab); } - cache_grow_end(cachep, page); + cache_grow_end(cachep, slab); return obj ? obj : fallback_alloc(cachep, flags); } @@ -3326,40 +3326,40 @@ static void free_block(struct kmem_cache *cachep, void **objpp, { int i; struct kmem_cache_node *n = get_node(cachep, node); - struct page *page; + struct slab *slab; n->free_objects += nr_objects; for (i = 0; i < nr_objects; i++) { void *objp; - struct page *page; + struct slab *slab; objp = objpp[i]; - page = virt_to_head_page(objp); - list_del(&page->slab_list); + slab = virt_to_slab(objp); + list_del(&slab->slab_list); check_spinlock_acquired_node(cachep, node); - slab_put_obj(cachep, page, objp); + slab_put_obj(cachep, slab, objp); STATS_DEC_ACTIVE(cachep); /* fixup slab chains */ - if (page->active == 0) { - list_add(&page->slab_list, &n->slabs_free); + if (slab->active == 0) { + list_add(&slab->slab_list, &n->slabs_free); n->free_slabs++; } else { /* Unconditionally move a slab to the end of the * partial list on free - maximum time for the * other objects to be freed, too. 
*/ - list_add_tail(&page->slab_list, &n->slabs_partial); + list_add_tail(&slab->slab_list, &n->slabs_partial); } } while (n->free_objects > n->free_limit && !list_empty(&n->slabs_free)) { n->free_objects -= cachep->num; - page = list_last_entry(&n->slabs_free, struct page, slab_list); - list_move(&page->slab_list, list); + slab = list_last_entry(&n->slabs_free, struct slab, slab_list); + list_move(&slab->slab_list, list); n->free_slabs--; n->total_slabs--; } @@ -3395,10 +3395,10 @@ static void cache_flusharray(struct kmem_cache *cachep, struct array_cache *ac) #if STATS { int i = 0; - struct page *page; + struct slab *slab; - list_for_each_entry(page, &n->slabs_free, slab_list) { - BUG_ON(page->active); + list_for_each_entry(slab, &n->slabs_free, slab_list) { + BUG_ON(slab->active); i++; } @@ -3474,10 +3474,10 @@ void ___cache_free(struct kmem_cache *cachep, void *objp, } if (sk_memalloc_socks()) { - struct page *page = virt_to_head_page(objp); + struct slab *slab = virt_to_slab(objp); - if (unlikely(PageSlabPfmemalloc(page))) { - cache_free_pfmemalloc(cachep, page, objp); + if (unlikely(slab_test_pfmemalloc(slab))) { + cache_free_pfmemalloc(cachep, slab, objp); return; } } @@ -3664,7 +3664,7 @@ void kmem_obj_info(struct kmem_obj_info *kpp, void *object, struct slab *slab) kpp->kp_data_offset = obj_offset(cachep); slab = virt_to_slab(objp); objnr = obj_to_index(cachep, slab_page(slab), objp); - objp = index_to_obj(cachep, slab_page(slab), objnr); + objp = index_to_obj(cachep, slab, objnr); kpp->kp_objp = objp; if (DEBUG && cachep->flags & SLAB_STORE_USER) kpp->kp_ret = *dbg_userword(cachep, objp); @@ -4188,7 +4188,7 @@ void __check_heap_object(const void *ptr, unsigned long n, if (is_kfence_address(ptr)) offset = ptr - kfence_object_start(ptr); else - offset = ptr - index_to_obj(cachep, slab_page(slab), objnr) - obj_offset(cachep); + offset = ptr - index_to_obj(cachep, slab, objnr) - obj_offset(cachep); /* Allow address range falling entirely within usercopy region. 
*/ if (offset >= cachep->useroffset &&
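To illustrate what the semantic patch above does mechanically, here is a hand-reduced before/after of one converted helper as a self-contained userspace sketch. The stand-in types are hypothetical (the real definitions live in mm/slab.h and friends); only the shape of the rewrite follows mm/slab.c:

#include <stddef.h>

/* Minimal stand-in types so the sketch compiles outside the kernel. */
struct kmem_cache { size_t size; };
struct page { char *s_mem; };
struct slab { char *s_mem; };

/* Before the semantic patch: management data reached via struct page. */
static void *index_to_obj_old(struct kmem_cache *cache,
                              const struct page *page, unsigned int idx)
{
        return page->s_mem + cache->size * idx;
}

/* After: identical body, with the parameter type and every use of
 * "page" renamed to "slab" by the rename rules in the commit message. */
static void *index_to_obj_new(struct kmem_cache *cache,
                              const struct slab *slab, unsigned int idx)
{
        return slab->s_mem + cache->size * idx;
}

int main(void)
{
        char mem[256];
        struct kmem_cache cache = { .size = 32 };
        struct page page = { .s_mem = mem };
        struct slab slab = { .s_mem = mem };

        /* The transformation is purely mechanical: both views compute
         * the same object address, so this returns 0. */
        return index_to_obj_old(&cache, &page, 3) !=
               index_to_obj_new(&cache, &slab, 3);
}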
From patchwork Wed Dec 1 18:14:58 2021
From: Vlastimil Babka
To: Matthew Wilcox, Christoph Lameter, David Rientjes, Joonsoo Kim, Pekka Enberg
Cc: linux-mm@kvack.org, Andrew Morton, patches@lists.linux.dev, Vlastimil Babka
Subject: [PATCH v2 21/33] mm/slab: Finish struct page to struct slab conversion
Date: Wed, 1 Dec 2021 19:14:58 +0100
Message-Id: <20211201181510.18784-22-vbabka@suse.cz>
In-Reply-To: <20211201181510.18784-1-vbabka@suse.cz>
References: <20211201181510.18784-1-vbabka@suse.cz>

Change cache_free_alien() to use slab_nid(virt_to_slab()). Otherwise this is just an update of comments and of some remaining variable names.

Signed-off-by: Vlastimil Babka
---
 mm/slab.c | 20 ++++++++++----------
 1 file changed, 10 insertions(+), 10 deletions(-)

diff --git a/mm/slab.c b/mm/slab.c
index 970b3e13b4c1..f0447b087d02 100644
--- a/mm/slab.c
+++ b/mm/slab.c
@@ -793,16 +793,16 @@ static int __cache_free_alien(struct kmem_cache *cachep, void *objp, static inline int cache_free_alien(struct kmem_cache *cachep, void *objp) { - int page_node = page_to_nid(virt_to_page(objp)); + int slab_node = slab_nid(virt_to_slab(objp)); int node = numa_mem_id(); /* * Make sure we are not freeing a object from another node to the array * cache on this cpu. */ - if (likely(node == page_node)) + if (likely(node == slab_node)) return 0; - return __cache_free_alien(cachep, objp, node, page_node); + return __cache_free_alien(cachep, objp, node, slab_node); } /* @@ -1613,10 +1613,10 @@ static void slab_destroy_debugcheck(struct kmem_cache *cachep, /** * slab_destroy - destroy and release all objects in a slab * @cachep: cache pointer being destroyed - * @page: page pointer being destroyed + * @slab: slab being destroyed * - * Destroy all the objs in a slab page, and release the mem back to the system. - * Before calling the slab page must have been unlinked from the cache. The + * Destroy all the objs in a slab, and release the mem back to the system. + * Before calling the slab must have been unlinked from the cache. The * kmem_cache_node ->list_lock is not held/needed.
*/ static void slab_destroy(struct kmem_cache *cachep, struct slab *slab) @@ -2560,7 +2560,7 @@ static struct slab *cache_grow_begin(struct kmem_cache *cachep, void *freelist; size_t offset; gfp_t local_flags; - int page_node; + int slab_node; struct kmem_cache_node *n; struct slab *slab; @@ -2586,8 +2586,8 @@ static struct slab *cache_grow_begin(struct kmem_cache *cachep, if (!slab) goto failed; - page_node = slab_nid(slab); - n = get_node(cachep, page_node); + slab_node = slab_nid(slab); + n = get_node(cachep, slab_node); /* Get colour for the slab, and cal the next value. */ n->colour_next++; @@ -2609,7 +2609,7 @@ static struct slab *cache_grow_begin(struct kmem_cache *cachep, /* Get slab management. */ freelist = alloc_slabmgmt(cachep, slab, offset, - local_flags & ~GFP_CONSTRAINT_MASK, page_node); + local_flags & ~GFP_CONSTRAINT_MASK, slab_node); if (OFF_SLAB(cachep) && !freelist) goto opps1;
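The slab_nid(virt_to_slab()) change is easiest to see in isolation. The following userspace sketch models the decision cache_free_alien() makes; slab_nid() and the node check are stand-ins, not the kernel implementations:

#include <stdio.h>

/* Hypothetical stand-in: only the NUMA node id matters here. */
struct slab { int nid; };

static int slab_nid(const struct slab *slab) { return slab->nid; }

/* Freeing is "alien" when the object's slab lives on a different NUMA
 * node than the CPU doing the free; returns 0 for a local free, 1 when
 * the alien path must be taken. */
static int cache_free_is_alien(const struct slab *slab, int this_node)
{
        int slab_node = slab_nid(slab);

        return this_node != slab_node;
}

int main(void)
{
        struct slab s = { .nid = 1 };

        printf("free on node 1: alien=%d\n", cache_free_is_alien(&s, 1));
        printf("free on node 0: alien=%d\n", cache_free_is_alien(&s, 0));
        return 0;
}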
From patchwork Wed Dec 1 18:14:59 2021
From: Vlastimil Babka
To: Matthew Wilcox, Christoph Lameter, David Rientjes, Joonsoo Kim, Pekka Enberg
Cc: linux-mm@kvack.org, Andrew Morton, patches@lists.linux.dev, Vlastimil Babka, Julia Lawall, Luis Chamberlain, Andrey Ryabinin, Alexander Potapenko, Andrey Konovalov, Dmitry Vyukov, Marco Elver, Johannes Weiner, Michal Hocko, Vladimir Davydov, kasan-dev@googlegroups.com, cgroups@vger.kernel.org
Subject: [PATCH v2 22/33] mm: Convert struct page to struct slab in functions used by other subsystems
Date: Wed, 1 Dec 2021 19:14:59 +0100
Message-Id: <20211201181510.18784-23-vbabka@suse.cz>
In-Reply-To: <20211201181510.18784-1-vbabka@suse.cz>
References: <20211201181510.18784-1-vbabka@suse.cz>

KASAN, KFENCE and memcg interact with SLAB or SLUB internals through the functions nearest_obj(), obj_to_index() and objs_per_slab(), which take struct page as a parameter. This patch converts them, including all callers, to struct slab, through a coccinelle semantic patch.

// Options: --include-headers --no-includes --smpl-spacing include/linux/slab_def.h include/linux/slub_def.h mm/slab.h mm/kasan/*.c mm/kfence/kfence_test.c mm/memcontrol.c mm/slab.c mm/slub.c
// Note: needs coccinelle 1.1.1 to avoid breaking whitespace

@@ @@ -objs_per_slab_page( +objs_per_slab( ... ) { ... } @@ @@ -objs_per_slab_page( +objs_per_slab( ... ) @@ identifier fn =~ "obj_to_index|objs_per_slab"; @@ fn(..., - const struct page *page + const struct slab *slab ,...) { <...
( - page_address(page) + slab_address(slab) | - page + slab ) ...> } @@ identifier fn =~ "nearest_obj"; @@ fn(..., - struct page *page + const struct slab *slab ,...) { <... ( - page_address(page) + slab_address(slab) | - page + slab ) ...> } @@ identifier fn =~ "nearest_obj|obj_to_index|objs_per_slab"; expression E; @@ fn(..., ( - slab_page(E) + E | - virt_to_page(E) + virt_to_slab(E) | - virt_to_head_page(E) + virt_to_slab(E) | - page + page_slab(page) ) ,...) Signed-off-by: Vlastimil Babka Cc: Julia Lawall Cc: Luis Chamberlain Cc: Andrey Ryabinin Cc: Alexander Potapenko Cc: Andrey Konovalov Cc: Dmitry Vyukov Cc: Marco Elver Cc: Johannes Weiner Cc: Michal Hocko Cc: Vladimir Davydov Cc: Cc: Reviewed-by: Andrey Konovalov Acked-by: Johannes Weiner --- include/linux/slab_def.h | 16 ++++++++-------- include/linux/slub_def.h | 18 +++++++++--------- mm/kasan/common.c | 4 ++-- mm/kasan/generic.c | 2 +- mm/kasan/report.c | 2 +- mm/kasan/report_tags.c | 2 +- mm/kfence/kfence_test.c | 4 ++-- mm/memcontrol.c | 4 ++-- mm/slab.c | 10 +++++----- mm/slab.h | 4 ++-- mm/slub.c | 2 +- 11 files changed, 34 insertions(+), 34 deletions(-) diff --git a/include/linux/slab_def.h b/include/linux/slab_def.h index 3aa5e1e73ab6..e24c9aff6fed 100644 --- a/include/linux/slab_def.h +++ b/include/linux/slab_def.h @@ -87,11 +87,11 @@ struct kmem_cache { struct kmem_cache_node *node[MAX_NUMNODES]; }; -static inline void *nearest_obj(struct kmem_cache *cache, struct page *page, +static inline void *nearest_obj(struct kmem_cache *cache, const struct slab *slab, void *x) { - void *object = x - (x - page->s_mem) % cache->size; - void *last_object = page->s_mem + (cache->num - 1) * cache->size; + void *object = x - (x - slab->s_mem) % cache->size; + void *last_object = slab->s_mem + (cache->num - 1) * cache->size; if (unlikely(object > last_object)) return last_object; @@ -106,16 +106,16 @@ static inline void *nearest_obj(struct kmem_cache *cache, struct page *page, * reciprocal_divide(offset, cache->reciprocal_buffer_size) */ static inline unsigned int obj_to_index(const struct kmem_cache *cache, - const struct page *page, void *obj) + const struct slab *slab, void *obj) { - u32 offset = (obj - page->s_mem); + u32 offset = (obj - slab->s_mem); return reciprocal_divide(offset, cache->reciprocal_buffer_size); } -static inline int objs_per_slab_page(const struct kmem_cache *cache, - const struct page *page) +static inline int objs_per_slab(const struct kmem_cache *cache, + const struct slab *slab) { - if (is_kfence_address(page_address(page))) + if (is_kfence_address(slab_address(slab))) return 1; return cache->num; } diff --git a/include/linux/slub_def.h b/include/linux/slub_def.h index 8a9c2876ca89..33c5c0e3bd8d 100644 --- a/include/linux/slub_def.h +++ b/include/linux/slub_def.h @@ -158,11 +158,11 @@ static inline void sysfs_slab_release(struct kmem_cache *s) void *fixup_red_left(struct kmem_cache *s, void *p); -static inline void *nearest_obj(struct kmem_cache *cache, struct page *page, +static inline void *nearest_obj(struct kmem_cache *cache, const struct slab *slab, void *x) { - void *object = x - (x - page_address(page)) % cache->size; - void *last_object = page_address(page) + - (page->objects - 1) * cache->size; + void *object = x - (x - slab_address(slab)) % cache->size; + void *last_object = slab_address(slab) + + (slab->objects - 1) * cache->size; void *result = (unlikely(object > last_object)) ? 
last_object : object; result = fixup_red_left(cache, result); @@ -178,16 +178,16 @@ static inline unsigned int __obj_to_index(const struct kmem_cache *cache, } static inline unsigned int obj_to_index(const struct kmem_cache *cache, - const struct page *page, void *obj) + const struct slab *slab, void *obj) { if (is_kfence_address(obj)) return 0; - return __obj_to_index(cache, page_address(page), obj); + return __obj_to_index(cache, slab_address(slab), obj); } -static inline int objs_per_slab_page(const struct kmem_cache *cache, - const struct page *page) +static inline int objs_per_slab(const struct kmem_cache *cache, + const struct slab *slab) { - return page->objects; + return slab->objects; } #endif /* _LINUX_SLUB_DEF_H */ diff --git a/mm/kasan/common.c b/mm/kasan/common.c index 8428da2aaf17..6a1cd2d38bff 100644 --- a/mm/kasan/common.c +++ b/mm/kasan/common.c @@ -298,7 +298,7 @@ static inline u8 assign_tag(struct kmem_cache *cache, /* For caches that either have a constructor or SLAB_TYPESAFE_BY_RCU: */ #ifdef CONFIG_SLAB /* For SLAB assign tags based on the object index in the freelist. */ - return (u8)obj_to_index(cache, virt_to_head_page(object), (void *)object); + return (u8)obj_to_index(cache, virt_to_slab(object), (void *)object); #else /* * For SLUB assign a random tag during slab creation, otherwise reuse @@ -341,7 +341,7 @@ static inline bool ____kasan_slab_free(struct kmem_cache *cache, void *object, if (is_kfence_address(object)) return false; - if (unlikely(nearest_obj(cache, virt_to_head_page(object), object) != + if (unlikely(nearest_obj(cache, virt_to_slab(object), object) != object)) { kasan_report_invalid_free(tagged_object, ip); return true; diff --git a/mm/kasan/generic.c b/mm/kasan/generic.c index 84a038b07c6f..5d0b79416c4e 100644 --- a/mm/kasan/generic.c +++ b/mm/kasan/generic.c @@ -339,7 +339,7 @@ static void __kasan_record_aux_stack(void *addr, bool can_alloc) return; cache = page->slab_cache; - object = nearest_obj(cache, page, addr); + object = nearest_obj(cache, page_slab(page), addr); alloc_meta = kasan_get_alloc_meta(cache, object); if (!alloc_meta) return; diff --git a/mm/kasan/report.c b/mm/kasan/report.c index 0bc10f452f7e..e00999dc6499 100644 --- a/mm/kasan/report.c +++ b/mm/kasan/report.c @@ -249,7 +249,7 @@ static void print_address_description(void *addr, u8 tag) if (page && PageSlab(page)) { struct kmem_cache *cache = page->slab_cache; - void *object = nearest_obj(cache, page, addr); + void *object = nearest_obj(cache, page_slab(page), addr); describe_object(cache, object, addr, tag); } diff --git a/mm/kasan/report_tags.c b/mm/kasan/report_tags.c index 8a319fc16dab..06c21dd77493 100644 --- a/mm/kasan/report_tags.c +++ b/mm/kasan/report_tags.c @@ -23,7 +23,7 @@ const char *kasan_get_bug_type(struct kasan_access_info *info) page = kasan_addr_to_page(addr); if (page && PageSlab(page)) { cache = page->slab_cache; - object = nearest_obj(cache, page, (void *)addr); + object = nearest_obj(cache, page_slab(page), (void *)addr); alloc_meta = kasan_get_alloc_meta(cache, object); if (alloc_meta) { diff --git a/mm/kfence/kfence_test.c b/mm/kfence/kfence_test.c index 695030c1fff8..f7276711d7b9 100644 --- a/mm/kfence/kfence_test.c +++ b/mm/kfence/kfence_test.c @@ -291,8 +291,8 @@ static void *test_alloc(struct kunit *test, size_t size, gfp_t gfp, enum allocat * even for KFENCE objects; these are required so that * memcg accounting works correctly. 
*/ - KUNIT_EXPECT_EQ(test, obj_to_index(s, page, alloc), 0U); - KUNIT_EXPECT_EQ(test, objs_per_slab_page(s, page), 1); + KUNIT_EXPECT_EQ(test, obj_to_index(s, page_slab(page), alloc), 0U); + KUNIT_EXPECT_EQ(test, objs_per_slab(s, page_slab(page)), 1); if (policy == ALLOCATE_ANY) return alloc; diff --git a/mm/memcontrol.c b/mm/memcontrol.c index 6863a834ed42..906edbd92436 100644 --- a/mm/memcontrol.c +++ b/mm/memcontrol.c @@ -2819,7 +2819,7 @@ static struct mem_cgroup *get_mem_cgroup_from_objcg(struct obj_cgroup *objcg) int memcg_alloc_page_obj_cgroups(struct page *page, struct kmem_cache *s, gfp_t gfp, bool new_page) { - unsigned int objects = objs_per_slab_page(s, page); + unsigned int objects = objs_per_slab(s, page_slab(page)); unsigned long memcg_data; void *vec; @@ -2881,7 +2881,7 @@ struct mem_cgroup *mem_cgroup_from_obj(void *p) struct obj_cgroup *objcg; unsigned int off; - off = obj_to_index(page->slab_cache, page, p); + off = obj_to_index(page->slab_cache, page_slab(page), p); objcg = page_objcgs(page)[off]; if (objcg) return obj_cgroup_memcg(objcg); diff --git a/mm/slab.c b/mm/slab.c index f0447b087d02..785fffd527fe 100644 --- a/mm/slab.c +++ b/mm/slab.c @@ -1560,7 +1560,7 @@ static void check_poison_obj(struct kmem_cache *cachep, void *objp) struct slab *slab = virt_to_slab(objp); unsigned int objnr; - objnr = obj_to_index(cachep, slab_page(slab), objp); + objnr = obj_to_index(cachep, slab, objp); if (objnr) { objp = index_to_obj(cachep, slab, objnr - 1); realobj = (char *)objp + obj_offset(cachep); @@ -2530,7 +2530,7 @@ static void *slab_get_obj(struct kmem_cache *cachep, struct slab *slab) static void slab_put_obj(struct kmem_cache *cachep, struct slab *slab, void *objp) { - unsigned int objnr = obj_to_index(cachep, slab_page(slab), objp); + unsigned int objnr = obj_to_index(cachep, slab, objp); #if DEBUG unsigned int i; @@ -2717,7 +2717,7 @@ static void *cache_free_debugcheck(struct kmem_cache *cachep, void *objp, if (cachep->flags & SLAB_STORE_USER) *dbg_userword(cachep, objp) = (void *)caller; - objnr = obj_to_index(cachep, slab_page(slab), objp); + objnr = obj_to_index(cachep, slab, objp); BUG_ON(objnr >= cachep->num); BUG_ON(objp != index_to_obj(cachep, slab, objnr)); @@ -3663,7 +3663,7 @@ void kmem_obj_info(struct kmem_obj_info *kpp, void *object, struct slab *slab) objp = object - obj_offset(cachep); kpp->kp_data_offset = obj_offset(cachep); slab = virt_to_slab(objp); - objnr = obj_to_index(cachep, slab_page(slab), objp); + objnr = obj_to_index(cachep, slab, objp); objp = index_to_obj(cachep, slab, objnr); kpp->kp_objp = objp; if (DEBUG && cachep->flags & SLAB_STORE_USER) @@ -4181,7 +4181,7 @@ void __check_heap_object(const void *ptr, unsigned long n, /* Find and validate object. */ cachep = slab->slab_cache; - objnr = obj_to_index(cachep, slab_page(slab), (void *)ptr); + objnr = obj_to_index(cachep, slab, (void *)ptr); BUG_ON(objnr >= cachep->num); /* Find offset within object. 
*/ diff --git a/mm/slab.h b/mm/slab.h index 7376c9d8aa2b..15d109d8ec89 100644 --- a/mm/slab.h +++ b/mm/slab.h @@ -483,7 +483,7 @@ static inline void memcg_slab_post_alloc_hook(struct kmem_cache *s, continue; } - off = obj_to_index(s, page, p[i]); + off = obj_to_index(s, page_slab(page), p[i]); obj_cgroup_get(objcg); page_objcgs(page)[off] = objcg; mod_objcg_state(objcg, page_pgdat(page), @@ -522,7 +522,7 @@ static inline void memcg_slab_free_hook(struct kmem_cache *s_orig, else s = s_orig; - off = obj_to_index(s, page, p[i]); + off = obj_to_index(s, page_slab(page), p[i]); objcg = objcgs[off]; if (!objcg) continue; diff --git a/mm/slub.c b/mm/slub.c index f5344211d8cc..61aaaa662c5e 100644 --- a/mm/slub.c +++ b/mm/slub.c @@ -4342,7 +4342,7 @@ void kmem_obj_info(struct kmem_obj_info *kpp, void *object, struct slab *slab) #else objp = objp0; #endif - objnr = obj_to_index(s, slab_page(slab), objp); + objnr = obj_to_index(s, slab, objp); kpp->kp_data_offset = (unsigned long)((char *)objp0 - (char *)objp); objp = base + s->size * objnr; kpp->kp_objp = objp; From patchwork Wed Dec 1 18:15:00 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Vlastimil Babka X-Patchwork-Id: 12650733 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by smtp.lore.kernel.org (Postfix) with ESMTP id EA1EAC433FE for ; Wed, 1 Dec 2021 18:27:09 +0000 (UTC) Received: by kanga.kvack.org (Postfix) id 170E76B0095; Wed, 1 Dec 2021 13:15:32 -0500 (EST) Received: by kanga.kvack.org (Postfix, from userid 40) id EAAF96B009C; Wed, 1 Dec 2021 13:15:31 -0500 (EST) X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id B27AC6B009A; Wed, 1 Dec 2021 13:15:31 -0500 (EST) X-Delivered-To: linux-mm@kvack.org Received: from forelay.hostedemail.com (smtprelay0146.hostedemail.com [216.40.44.146]) by kanga.kvack.org (Postfix) with ESMTP id 8AF026B0098 for ; Wed, 1 Dec 2021 13:15:31 -0500 (EST) Received: from smtpin05.hostedemail.com (10.5.19.251.rfc1918.com [10.5.19.251]) by forelay03.hostedemail.com (Postfix) with ESMTP id 4B5568248076 for ; Wed, 1 Dec 2021 18:15:21 +0000 (UTC) X-FDA: 78870027642.05.C9AB896 Received: from smtp-out2.suse.de (smtp-out2.suse.de [195.135.220.29]) by imf27.hostedemail.com (Postfix) with ESMTP id 89BD670000AD for ; Wed, 1 Dec 2021 18:15:20 +0000 (UTC) Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74]) (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits) key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512) (No client certificate requested) by smtp-out2.suse.de (Postfix) with ESMTPS id 8AA941FDFD; Wed, 1 Dec 2021 18:15:19 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.cz; s=susede2_rsa; t=1638382519; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc: mime-version:mime-version: content-transfer-encoding:content-transfer-encoding: in-reply-to:in-reply-to:references:references; bh=R0/rBeFeucpyB5eR+yI6wcQARAoucV5Vh3arlrutk1o=; b=P/uWa8pvMldBBBpARhPMDG3dBFh2v2Dph/QymG0bhj9spZT+IycvMJZI2kZzCKKt54eD6y JxE0blxeSD1AJwTRvvClpMiqJi9CA02jj0vhe9dd+xDiDj1xO/+qoX0iXx8KRXd50ZZ7yf mTprlFdCsoiU48HLnmSX9gNBgGU5P3o= DKIM-Signature: v=1; a=ed25519-sha256; c=relaxed/relaxed; d=suse.cz; s=susede2_ed25519; t=1638382519; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc: 
mime-version:mime-version: content-transfer-encoding:content-transfer-encoding: in-reply-to:in-reply-to:references:references; bh=R0/rBeFeucpyB5eR+yI6wcQARAoucV5Vh3arlrutk1o=; b=6pZ79VbmmNRCvP0QgeKeWTD9X6UdQl4Vr6DouJZ8MkCiq1mi3FjctM8Vr89FneiIpnlplA /SOHPW4C8/DbwACw== Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74]) (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits) key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512) (No client certificate requested) by imap2.suse-dmz.suse.de (Postfix) with ESMTPS id 4991114050; Wed, 1 Dec 2021 18:15:19 +0000 (UTC) Received: from dovecot-director2.suse.de ([192.168.254.65]) by imap2.suse-dmz.suse.de with ESMTPSA id oLo5Ebe7p2HPSAAAMHmgww (envelope-from ); Wed, 01 Dec 2021 18:15:19 +0000 From: Vlastimil Babka To: Matthew Wilcox , Christoph Lameter , David Rientjes , Joonsoo Kim , Pekka Enberg Cc: linux-mm@kvack.org, Andrew Morton , patches@lists.linux.dev, Vlastimil Babka , Johannes Weiner , Michal Hocko , Vladimir Davydov , cgroups@vger.kernel.org Subject: [PATCH v2 23/33] mm/memcg: Convert slab objcgs from struct page to struct slab Date: Wed, 1 Dec 2021 19:15:00 +0100 Message-Id: <20211201181510.18784-24-vbabka@suse.cz> X-Mailer: git-send-email 2.33.1 In-Reply-To: <20211201181510.18784-1-vbabka@suse.cz> References: <20211201181510.18784-1-vbabka@suse.cz> MIME-Version: 1.0 X-Developer-Signature: v=1; a=openpgp-sha256; l=12729; h=from:subject; bh=2gFwaSUFu5+TO8dizzXad+fz0D9VNIeN/5XUKZMSrXU=; b=owEBbQGS/pANAwAIAeAhynPxiakQAcsmYgBhp7uSU3f/a01ekx7yjssTe9je59PedL4begohchKJ GqCoWZGJATMEAAEIAB0WIQSNS5MBqTXjGL5IXszgIcpz8YmpEAUCYae7kgAKCRDgIcpz8YmpEIZoCA Cuo91LO4fXmvgL4Yl2bGr+XEUJr76fTt9lmH36HhwGYm2zxMb32tA5A5cWVcLvLydT02w01MAcGKTk CJDZyqLDA8eozr29DLFv75qPNcTi9YLnlbQOETwdhM6qj28VBQ7ba9qg0AQH/OETOzs97VvWx/E4tP /xwWWJEc3tTiGCYkFVc0/dKvF28ONhGy2ZvgULhmO/Ty7myOQN8x2gjxpNaf0ZEEQK06n1p0ZMHDV9 UD7uasdvohcq1kwNPJnpVoPNUBDzWyjQEMdUYrRHMXbFN+scvCPrcz1hKoxsxDb0y0P6ITW9W2ukKf +u4QJwW3Tijbnl9Fjps6vBSnmdojuD X-Developer-Key: i=vbabka@suse.cz; a=openpgp; fpr=A940D434992C2E8E99103D50224FA7E7CC82A664 X-Rspamd-Server: rspam11 X-Rspamd-Queue-Id: 89BD670000AD X-Stat-Signature: qzehrrwx4nex7xjpfzgir3658qs73hqg Authentication-Results: imf27.hostedemail.com; dkim=pass header.d=suse.cz header.s=susede2_rsa header.b="P/uWa8pv"; dkim=pass header.d=suse.cz header.s=susede2_ed25519 header.b=6pZ79Vbm; spf=pass (imf27.hostedemail.com: domain of vbabka@suse.cz designates 195.135.220.29 as permitted sender) smtp.mailfrom=vbabka@suse.cz; dmarc=none X-HE-Tag: 1638382520-577993 X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: page->memcg_data is used with MEMCG_DATA_OBJCGS flag only for slab pages so convert all the related infrastructure to struct slab. To avoid include cycles, move the inline definitions of slab_objcgs() and slab_objcgs_check() from memcontrol.h to mm/slab.h. This is not just mechanistic changing of types and names. Now in mem_cgroup_from_obj() we use PageSlab flag to decide if we interpret the page as slab, instead of relying on MEMCG_DATA_OBJCGS bit checked in page_objcgs_check() (now slab_objcgs_check()). Similarly in memcg_slab_free_hook() where we can encounter kmalloc_large() pages (here the PageSlab flag check is implied by virt_to_slab()). 
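For readers who find the interleaved hunks below hard to follow, the reworked mem_cgroup_from_obj() lookup reads, in one piece, roughly as follows. This is a condensed sketch assembled from this patch's diff, with the surrounding comments shortened; it is not a verbatim copy of the final function:

struct mem_cgroup *mem_cgroup_from_obj(void *p)
{
	struct folio *folio;

	if (mem_cgroup_disabled())
		return NULL;

	folio = virt_to_folio(p);

	/*
	 * Slab objects are accounted individually, not per-page, so the
	 * slab/non-slab decision is now made on the folio's slab flag
	 * rather than on the MEMCG_DATA_OBJCGS bit.
	 */
	if (folio_test_slab(folio)) {
		struct obj_cgroup **objcgs;
		struct obj_cgroup *objcg;
		struct slab *slab;
		unsigned int off;

		slab = folio_slab(folio);
		objcgs = slab_objcgs_check(slab);
		if (!objcgs)
			return NULL;

		off = obj_to_index(slab->slab_cache, slab, p);
		objcg = objcgs[off];
		if (objcg)
			return obj_cgroup_memcg(objcg);

		return NULL;
	}

	/* Non-slab page: fall back to the per-page memcg data. */
	return page_memcg_check(folio_page(folio, 0));
}
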
Signed-off-by: Vlastimil Babka Cc: Johannes Weiner Cc: Michal Hocko Cc: Vladimir Davydov Cc: --- include/linux/memcontrol.h | 48 ------------------ mm/memcontrol.c | 43 +++++++++------- mm/slab.h | 101 ++++++++++++++++++++++++++++--------- 3 files changed, 103 insertions(+), 89 deletions(-) diff --git a/include/linux/memcontrol.h b/include/linux/memcontrol.h index 0c5c403f4be6..e34112f6a369 100644 --- a/include/linux/memcontrol.h +++ b/include/linux/memcontrol.h @@ -536,45 +536,6 @@ static inline bool folio_memcg_kmem(struct folio *folio) return folio->memcg_data & MEMCG_DATA_KMEM; } -/* - * page_objcgs - get the object cgroups vector associated with a page - * @page: a pointer to the page struct - * - * Returns a pointer to the object cgroups vector associated with the page, - * or NULL. This function assumes that the page is known to have an - * associated object cgroups vector. It's not safe to call this function - * against pages, which might have an associated memory cgroup: e.g. - * kernel stack pages. - */ -static inline struct obj_cgroup **page_objcgs(struct page *page) -{ - unsigned long memcg_data = READ_ONCE(page->memcg_data); - - VM_BUG_ON_PAGE(memcg_data && !(memcg_data & MEMCG_DATA_OBJCGS), page); - VM_BUG_ON_PAGE(memcg_data & MEMCG_DATA_KMEM, page); - - return (struct obj_cgroup **)(memcg_data & ~MEMCG_DATA_FLAGS_MASK); -} - -/* - * page_objcgs_check - get the object cgroups vector associated with a page - * @page: a pointer to the page struct - * - * Returns a pointer to the object cgroups vector associated with the page, - * or NULL. This function is safe to use if the page can be directly associated - * with a memory cgroup. - */ -static inline struct obj_cgroup **page_objcgs_check(struct page *page) -{ - unsigned long memcg_data = READ_ONCE(page->memcg_data); - - if (!memcg_data || !(memcg_data & MEMCG_DATA_OBJCGS)) - return NULL; - - VM_BUG_ON_PAGE(memcg_data & MEMCG_DATA_KMEM, page); - - return (struct obj_cgroup **)(memcg_data & ~MEMCG_DATA_FLAGS_MASK); -} #else static inline bool folio_memcg_kmem(struct folio *folio) @@ -582,15 +543,6 @@ static inline bool folio_memcg_kmem(struct folio *folio) return false; } -static inline struct obj_cgroup **page_objcgs(struct page *page) -{ - return NULL; -} - -static inline struct obj_cgroup **page_objcgs_check(struct page *page) -{ - return NULL; -} #endif static inline bool PageMemcgKmem(struct page *page) diff --git a/mm/memcontrol.c b/mm/memcontrol.c index 906edbd92436..522fff11d6d1 100644 --- a/mm/memcontrol.c +++ b/mm/memcontrol.c @@ -2816,31 +2816,31 @@ static struct mem_cgroup *get_mem_cgroup_from_objcg(struct obj_cgroup *objcg) */ #define OBJCGS_CLEAR_MASK (__GFP_DMA | __GFP_RECLAIMABLE | __GFP_ACCOUNT) -int memcg_alloc_page_obj_cgroups(struct page *page, struct kmem_cache *s, - gfp_t gfp, bool new_page) +int memcg_alloc_slab_cgroups(struct slab *slab, struct kmem_cache *s, + gfp_t gfp, bool new_slab) { - unsigned int objects = objs_per_slab(s, page_slab(page)); + unsigned int objects = objs_per_slab(s, slab); unsigned long memcg_data; void *vec; gfp &= ~OBJCGS_CLEAR_MASK; vec = kcalloc_node(objects, sizeof(struct obj_cgroup *), gfp, - page_to_nid(page)); + slab_nid(slab)); if (!vec) return -ENOMEM; memcg_data = (unsigned long) vec | MEMCG_DATA_OBJCGS; - if (new_page) { + if (new_slab) { /* - * If the slab page is brand new and nobody can yet access - * it's memcg_data, no synchronization is required and - * memcg_data can be simply assigned. 
+ * If the slab is brand new and nobody can yet access its + * memcg_data, no synchronization is required and memcg_data can + * be simply assigned. */ - page->memcg_data = memcg_data; - } else if (cmpxchg(&page->memcg_data, 0, memcg_data)) { + slab->memcg_data = memcg_data; + } else if (cmpxchg(&slab->memcg_data, 0, memcg_data)) { /* - * If the slab page is already in use, somebody can allocate - * and assign obj_cgroups in parallel. In this case the existing + * If the slab is already in use, somebody can allocate and + * assign obj_cgroups in parallel. In this case the existing * objcg vector should be reused. */ kfree(vec); @@ -2865,24 +2865,31 @@ int memcg_alloc_page_obj_cgroups(struct page *page, struct kmem_cache *s, */ struct mem_cgroup *mem_cgroup_from_obj(void *p) { - struct page *page; + struct folio *folio; if (mem_cgroup_disabled()) return NULL; - page = virt_to_head_page(p); + folio = virt_to_folio(p); /* * Slab objects are accounted individually, not per-page. * Memcg membership data for each individual object is saved in * the page->obj_cgroups. */ - if (page_objcgs_check(page)) { + if (folio_test_slab(folio)) { + struct obj_cgroup **objcgs; struct obj_cgroup *objcg; + struct slab *slab; unsigned int off; - off = obj_to_index(page->slab_cache, page_slab(page), p); - objcg = page_objcgs(page)[off]; + slab = folio_slab(folio); + objcgs = slab_objcgs_check(slab); + if (!objcgs) + return NULL; + + off = obj_to_index(slab->slab_cache, slab, p); + objcg = objcgs[off]; if (objcg) return obj_cgroup_memcg(objcg); @@ -2896,7 +2903,7 @@ struct mem_cgroup *mem_cgroup_from_obj(void *p) * page_memcg_check(page) will guarantee that a proper memory * cgroup pointer or NULL will be returned. */ - return page_memcg_check(page); + return page_memcg_check(folio_page(folio, 0)); } __always_inline struct obj_cgroup *get_obj_cgroup_from_current(void) diff --git a/mm/slab.h b/mm/slab.h index 15d109d8ec89..0760f20686a7 100644 --- a/mm/slab.h +++ b/mm/slab.h @@ -412,15 +412,56 @@ static inline bool kmem_cache_debug_flags(struct kmem_cache *s, slab_flags_t fla } #ifdef CONFIG_MEMCG_KMEM -int memcg_alloc_page_obj_cgroups(struct page *page, struct kmem_cache *s, - gfp_t gfp, bool new_page); +/* + * slab_objcgs - get the object cgroups vector associated with a slab + * @slab: a pointer to the slab struct + * + * Returns a pointer to the object cgroups vector associated with the slab, + * or NULL. This function assumes that the slab is known to have an + * associated object cgroups vector. It's not safe to call this function + * against slabs with underlying pages, which might have an associated memory + * cgroup: e.g. kernel stack pages. + */ +static inline struct obj_cgroup **slab_objcgs(struct slab *slab) +{ + unsigned long memcg_data = READ_ONCE(slab->memcg_data); + + VM_BUG_ON_PAGE(memcg_data && !(memcg_data & MEMCG_DATA_OBJCGS), + slab_page(slab)); + VM_BUG_ON_PAGE(memcg_data & MEMCG_DATA_KMEM, slab_page(slab)); + + return (struct obj_cgroup **)(memcg_data & ~MEMCG_DATA_FLAGS_MASK); +} + +/* + * slab_objcgs_check - get the object cgroups vector associated with a slab + * @slab: a pointer to the slab struct + * + * Returns a pointer to the object cgroups vector associated with the slab, or + * NULL. This function is safe to use if the underlying page can be directly + * associated with a memory cgroup. 
+ */ +static inline struct obj_cgroup **slab_objcgs_check(struct slab *slab) +{ + unsigned long memcg_data = READ_ONCE(slab->memcg_data); + + if (!memcg_data || !(memcg_data & MEMCG_DATA_OBJCGS)) + return NULL; + + VM_BUG_ON_PAGE(memcg_data & MEMCG_DATA_KMEM, slab_page(slab)); + + return (struct obj_cgroup **)(memcg_data & ~MEMCG_DATA_FLAGS_MASK); +} + +int memcg_alloc_slab_cgroups(struct slab *slab, struct kmem_cache *s, + gfp_t gfp, bool new_slab); void mod_objcg_state(struct obj_cgroup *objcg, struct pglist_data *pgdat, enum node_stat_item idx, int nr); -static inline void memcg_free_page_obj_cgroups(struct page *page) +static inline void memcg_free_slab_cgroups(struct slab *slab) { - kfree(page_objcgs(page)); - page->memcg_data = 0; + kfree(slab_objcgs(slab)); + slab->memcg_data = 0; } static inline size_t obj_full_size(struct kmem_cache *s) @@ -465,7 +506,7 @@ static inline void memcg_slab_post_alloc_hook(struct kmem_cache *s, gfp_t flags, size_t size, void **p) { - struct page *page; + struct slab *slab; unsigned long off; size_t i; @@ -474,19 +515,19 @@ static inline void memcg_slab_post_alloc_hook(struct kmem_cache *s, for (i = 0; i < size; i++) { if (likely(p[i])) { - page = virt_to_head_page(p[i]); + slab = virt_to_slab(p[i]); - if (!page_objcgs(page) && - memcg_alloc_page_obj_cgroups(page, s, flags, + if (!slab_objcgs(slab) && + memcg_alloc_slab_cgroups(slab, s, flags, false)) { obj_cgroup_uncharge(objcg, obj_full_size(s)); continue; } - off = obj_to_index(s, page_slab(page), p[i]); + off = obj_to_index(s, slab, p[i]); obj_cgroup_get(objcg); - page_objcgs(page)[off] = objcg; - mod_objcg_state(objcg, page_pgdat(page), + slab_objcgs(slab)[off] = objcg; + mod_objcg_state(objcg, slab_pgdat(slab), cache_vmstat_idx(s), obj_full_size(s)); } else { obj_cgroup_uncharge(objcg, obj_full_size(s)); @@ -501,7 +542,7 @@ static inline void memcg_slab_free_hook(struct kmem_cache *s_orig, struct kmem_cache *s; struct obj_cgroup **objcgs; struct obj_cgroup *objcg; - struct page *page; + struct slab *slab; unsigned int off; int i; @@ -512,43 +553,57 @@ static inline void memcg_slab_free_hook(struct kmem_cache *s_orig, if (unlikely(!p[i])) continue; - page = virt_to_head_page(p[i]); - objcgs = page_objcgs_check(page); + slab = virt_to_slab(p[i]); + /* we could be given a kmalloc_large() object, skip those */ + if (!slab) + continue; + + objcgs = slab_objcgs_check(slab); if (!objcgs) continue; if (!s_orig) - s = page->slab_cache; + s = slab->slab_cache; else s = s_orig; - off = obj_to_index(s, page_slab(page), p[i]); + off = obj_to_index(s, slab, p[i]); objcg = objcgs[off]; if (!objcg) continue; objcgs[off] = NULL; obj_cgroup_uncharge(objcg, obj_full_size(s)); - mod_objcg_state(objcg, page_pgdat(page), cache_vmstat_idx(s), + mod_objcg_state(objcg, slab_pgdat(slab), cache_vmstat_idx(s), -obj_full_size(s)); obj_cgroup_put(objcg); } } #else /* CONFIG_MEMCG_KMEM */ +static inline struct obj_cgroup **slab_objcgs(struct slab *slab) +{ + return NULL; +} + +static inline struct obj_cgroup **slab_objcgs_check(struct slab *slab) +{ + return NULL; +} + static inline struct mem_cgroup *memcg_from_slab_obj(void *ptr) { return NULL; } -static inline int memcg_alloc_page_obj_cgroups(struct page *page, +static inline int memcg_alloc_slab_cgroups(struct slab *slab, struct kmem_cache *s, gfp_t gfp, - bool new_page) + bool new_slab) { return 0; } -static inline void memcg_free_page_obj_cgroups(struct page *page) +static inline void memcg_free_slab_cgroups(struct slab *slab) { } @@ -587,7 +642,7 @@ static 
__always_inline void account_slab(struct slab *slab, int order, struct kmem_cache *s, gfp_t gfp) { if (memcg_kmem_enabled() && (s->flags & SLAB_ACCOUNT)) - memcg_alloc_page_obj_cgroups(slab_page(slab), s, gfp, true); + memcg_alloc_slab_cgroups(slab, s, gfp, true); mod_node_page_state(slab_pgdat(slab), cache_vmstat_idx(s), PAGE_SIZE << order); @@ -597,7 +652,7 @@ static __always_inline void unaccount_slab(struct slab *slab, int order, struct kmem_cache *s) { if (memcg_kmem_enabled()) - memcg_free_page_obj_cgroups(slab_page(slab)); + memcg_free_slab_cgroups(slab); mod_node_page_state(slab_pgdat(slab), cache_vmstat_idx(s), -(PAGE_SIZE << order)); From patchwork Wed Dec 1 18:15:01 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Vlastimil Babka X-Patchwork-Id: 12650731 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by smtp.lore.kernel.org (Postfix) with ESMTP id 1AE1AC433EF for ; Wed, 1 Dec 2021 18:26:36 +0000 (UTC) Received: by kanga.kvack.org (Postfix) id D4CD86B0096; Wed, 1 Dec 2021 13:15:31 -0500 (EST) Received: by kanga.kvack.org (Postfix, from userid 40) id D24726B0095; Wed, 1 Dec 2021 13:15:31 -0500 (EST) X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id A36C56B009C; Wed, 1 Dec 2021 13:15:31 -0500 (EST) X-Delivered-To: linux-mm@kvack.org Received: from forelay.hostedemail.com (smtprelay0236.hostedemail.com [216.40.44.236]) by kanga.kvack.org (Postfix) with ESMTP id 7158F6B0096 for ; Wed, 1 Dec 2021 13:15:31 -0500 (EST) Received: from smtpin28.hostedemail.com (10.5.19.251.rfc1918.com [10.5.19.251]) by forelay01.hostedemail.com (Postfix) with ESMTP id 30F3A181616A5 for ; Wed, 1 Dec 2021 18:15:21 +0000 (UTC) X-FDA: 78870027642.28.F6C3EE0 Received: from smtp-out1.suse.de (smtp-out1.suse.de [195.135.220.28]) by imf03.hostedemail.com (Postfix) with ESMTP id 3C49E30000A5 for ; Wed, 1 Dec 2021 18:15:21 +0000 (UTC) Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74]) (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits) key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512) (No client certificate requested) by smtp-out1.suse.de (Postfix) with ESMTPS id A30DF218D9; Wed, 1 Dec 2021 18:15:19 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.cz; s=susede2_rsa; t=1638382519; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc: mime-version:mime-version: content-transfer-encoding:content-transfer-encoding: in-reply-to:in-reply-to:references:references; bh=/8AMLYvdpfTtBdZFcpvfDY+08zMPxrszJpq3L8Y4IkI=; b=A5ZDyy0DwEtkKOY46fXuMpzX2K2K0tBwv5pfKgjPDfS5uJQBYE6peA00kbLIymsJaUaiyc qpTUe2h4+IHgKzRbNvn4bYS7H7S2zXgfTpywD9tdHoPq/LY76Z9hibYtzPigIfp/MTvXJC IjRq0pl4uwJJiT8dLplok49BDtwsIX0= DKIM-Signature: v=1; a=ed25519-sha256; c=relaxed/relaxed; d=suse.cz; s=susede2_ed25519; t=1638382519; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc: mime-version:mime-version: content-transfer-encoding:content-transfer-encoding: in-reply-to:in-reply-to:references:references; bh=/8AMLYvdpfTtBdZFcpvfDY+08zMPxrszJpq3L8Y4IkI=; b=y5yHBIwhkjmyPET4XTB/DaXSQ4QvlltquSEF2FiOUsqCDpWZXpQDodCvcXdG5oaIgvKamy S27kuahzkhAvKDDQ== Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74]) (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits) 
key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512) (No client certificate requested) by imap2.suse-dmz.suse.de (Postfix) with ESMTPS id 804C613D9D; Wed, 1 Dec 2021 18:15:19 +0000 (UTC) Received: from dovecot-director2.suse.de ([192.168.254.65]) by imap2.suse-dmz.suse.de with ESMTPSA id aBO2Hre7p2HPSAAAMHmgww (envelope-from ); Wed, 01 Dec 2021 18:15:19 +0000 From: Vlastimil Babka To: Matthew Wilcox , Christoph Lameter , David Rientjes , Joonsoo Kim , Pekka Enberg Cc: linux-mm@kvack.org, Andrew Morton , patches@lists.linux.dev, Vlastimil Babka Subject: [PATCH v2 24/33] mm/slob: Convert SLOB to use struct slab Date: Wed, 1 Dec 2021 19:15:01 +0100 Message-Id: <20211201181510.18784-25-vbabka@suse.cz> X-Mailer: git-send-email 2.33.1 In-Reply-To: <20211201181510.18784-1-vbabka@suse.cz> References: <20211201181510.18784-1-vbabka@suse.cz> MIME-Version: 1.0 X-Developer-Signature: v=1; a=openpgp-sha256; l=3794; i=vbabka@suse.cz; h=from:subject; bh=xa214YLDtbNUR9XtnN+78HMPqwWCI0KceeNn5J2llgk=; b=owEBbQGS/pANAwAIAeAhynPxiakQAcsmYgBhp7uUM5ZvyXBhOt83ZAlKuo/T27s4yLaQth/3ox1G pQRJuYuJATMEAAEIAB0WIQSNS5MBqTXjGL5IXszgIcpz8YmpEAUCYae7lAAKCRDgIcpz8YmpEHXrB/ 9PQLY2NBX4I6k6ddxtLuVK8Ut9r4hMykmlMZKoNRWJWoKNvjulpU1s/2JwOOVxy0iDzArEVL83pR9R pjZ9RSAm5eXXodJnAlRzKF8/dn3MWM74eJNhZqjx4oGDtZvdm9c7ajg8RuIxthzFyM3TxB/eDHdpJu uf8zfEjWvwr8boxkXXtgcHJjdPD9Vefis6K2v+s91T+MNu1H9tNZG+NPNLEUQyuaJUGI8/+crTPQyj o3elbbiPYjjZcC0P/eLiqTKzH3banYY/IoZVU6/kR+C6SqjMJNxsAQ3ploBm/nLWiLOBBI1WFAMPZb vSVk6tF1XVmLfSlPSDcCOn/15pI6ux X-Developer-Key: i=vbabka@suse.cz; a=openpgp; fpr=A940D434992C2E8E99103D50224FA7E7CC82A664 X-Rspamd-Queue-Id: 3C49E30000A5 X-Stat-Signature: mgwm6n9aiqoz3uwdor5tgbz5aahnbzeb Authentication-Results: imf03.hostedemail.com; dkim=pass header.d=suse.cz header.s=susede2_rsa header.b=A5ZDyy0D; dkim=pass header.d=suse.cz header.s=susede2_ed25519 header.b=y5yHBIwh; dmarc=none; spf=pass (imf03.hostedemail.com: domain of vbabka@suse.cz designates 195.135.220.28 as permitted sender) smtp.mailfrom=vbabka@suse.cz X-Rspamd-Server: rspam02 X-HE-Tag: 1638382521-700057 X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: From: "Matthew Wilcox (Oracle)" Use struct slab throughout the slob allocator. [ vbabka@suse.cz: don't introduce wrappers for PageSlobFree in mm/slab.h just for the single callers being wrappers in mm/slob.c ] Signed-off-by: Matthew Wilcox (Oracle) Signed-off-by: Vlastimil Babka Reviewed-by: Hyeonggon Yoo <42.hyeyoo@gmail.com> Tested-by: Hyeonggon Yoo <42.hyeyoo@gmail.com> --- mm/slob.c | 34 +++++++++++++++++----------------- 1 file changed, 17 insertions(+), 17 deletions(-) diff --git a/mm/slob.c b/mm/slob.c index d2d15e7f191c..d3512bcc3141 100644 --- a/mm/slob.c +++ b/mm/slob.c @@ -105,21 +105,21 @@ static LIST_HEAD(free_slob_large); /* * slob_page_free: true for pages on free_slob_pages list. 
*/ -static inline int slob_page_free(struct page *sp) +static inline int slob_page_free(struct slab *slab) { - return PageSlobFree(sp); + return PageSlobFree(slab_page(slab)); } -static void set_slob_page_free(struct page *sp, struct list_head *list) +static void set_slob_page_free(struct slab *slab, struct list_head *list) { - list_add(&sp->slab_list, list); - __SetPageSlobFree(sp); + list_add(&slab->slab_list, list); + __SetPageSlobFree(slab_page(slab)); } -static inline void clear_slob_page_free(struct page *sp) +static inline void clear_slob_page_free(struct slab *slab) { - list_del(&sp->slab_list); - __ClearPageSlobFree(sp); + list_del(&slab->slab_list); + __ClearPageSlobFree(slab_page(slab)); } #define SLOB_UNIT sizeof(slob_t) @@ -234,7 +234,7 @@ static void slob_free_pages(void *b, int order) * freelist, in this case @page_removed_from_list will be set to * true (set to false otherwise). */ -static void *slob_page_alloc(struct page *sp, size_t size, int align, +static void *slob_page_alloc(struct slab *sp, size_t size, int align, int align_offset, bool *page_removed_from_list) { slob_t *prev, *cur, *aligned = NULL; @@ -301,7 +301,7 @@ static void *slob_page_alloc(struct page *sp, size_t size, int align, static void *slob_alloc(size_t size, gfp_t gfp, int align, int node, int align_offset) { - struct page *sp; + struct slab *sp; struct list_head *slob_list; slob_t *b = NULL; unsigned long flags; @@ -323,7 +323,7 @@ static void *slob_alloc(size_t size, gfp_t gfp, int align, int node, * If there's a node specification, search for a partial * page with a matching node id in the freelist. */ - if (node != NUMA_NO_NODE && page_to_nid(sp) != node) + if (node != NUMA_NO_NODE && slab_nid(sp) != node) continue; #endif /* Enough room on this page? */ @@ -358,8 +358,8 @@ static void *slob_alloc(size_t size, gfp_t gfp, int align, int node, b = slob_new_pages(gfp & ~__GFP_ZERO, 0, node); if (!b) return NULL; - sp = virt_to_page(b); - __SetPageSlab(sp); + sp = virt_to_slab(b); + __SetPageSlab(slab_page(sp)); spin_lock_irqsave(&slob_lock, flags); sp->units = SLOB_UNITS(PAGE_SIZE); @@ -381,7 +381,7 @@ static void *slob_alloc(size_t size, gfp_t gfp, int align, int node, */ static void slob_free(void *block, int size) { - struct page *sp; + struct slab *sp; slob_t *prev, *next, *b = (slob_t *)block; slobidx_t units; unsigned long flags; @@ -391,7 +391,7 @@ static void slob_free(void *block, int size) return; BUG_ON(!size); - sp = virt_to_page(block); + sp = virt_to_slab(block); units = SLOB_UNITS(size); spin_lock_irqsave(&slob_lock, flags); @@ -401,8 +401,8 @@ static void slob_free(void *block, int size) if (slob_page_free(sp)) clear_slob_page_free(sp); spin_unlock_irqrestore(&slob_lock, flags); - __ClearPageSlab(sp); - page_mapcount_reset(sp); + __ClearPageSlab(slab_page(sp)); + page_mapcount_reset(slab_page(sp)); slob_free_pages(b, 0); return; } From patchwork Wed Dec 1 18:15:02 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Vlastimil Babka X-Patchwork-Id: 12650769 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by smtp.lore.kernel.org (Postfix) with ESMTP id 6F9C5C433EF for ; Wed, 1 Dec 2021 18:29:27 +0000 (UTC) Received: by kanga.kvack.org (Postfix) id 3B57D6B00A0; Wed, 1 Dec 2021 13:15:38 -0500 (EST) Received: by kanga.kvack.org (Postfix, from userid 40) id 2ECD16B009E; Wed, 1 Dec 2021 
13:15:38 -0500 (EST) X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id 13EDD6B009F; Wed, 1 Dec 2021 13:15:38 -0500 (EST) X-Delivered-To: linux-mm@kvack.org Received: from forelay.hostedemail.com (smtprelay0096.hostedemail.com [216.40.44.96]) by kanga.kvack.org (Postfix) with ESMTP id EE5F76B009C for ; Wed, 1 Dec 2021 13:15:37 -0500 (EST) Received: from smtpin07.hostedemail.com (10.5.19.251.rfc1918.com [10.5.19.251]) by forelay04.hostedemail.com (Postfix) with ESMTP id 967C289084 for ; Wed, 1 Dec 2021 18:15:21 +0000 (UTC) X-FDA: 78870027642.07.EABCA95 Received: from smtp-out1.suse.de (smtp-out1.suse.de [195.135.220.28]) by imf17.hostedemail.com (Postfix) with ESMTP id 1ECF2F0001D5 for ; Wed, 1 Dec 2021 18:15:20 +0000 (UTC) Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74]) (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits) key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512) (No client certificate requested) by smtp-out1.suse.de (Postfix) with ESMTPS id D05C0218DF; Wed, 1 Dec 2021 18:15:19 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.cz; s=susede2_rsa; t=1638382519; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc: mime-version:mime-version: content-transfer-encoding:content-transfer-encoding: in-reply-to:in-reply-to:references:references; bh=p8tv2+xq9sBsttSOv4xOUAPywnT1KNNx/eDzXd9OiSg=; b=DQAFhYvruW2e8HkFx+rcGzjlb3OHlyWjcvO/JX3ym16PP5bEXOLe8wKg61hldeLE2SK7SJ wLrQOaZyjS/kAx/1ugZXgEGKtcwuNHpjutJi2iILFuOBWRiKR3AqgAF8L7FbK++Ai3U4Bz J1sBe5UHwsiCVwFeTEc5DUx6JyJZYl8= DKIM-Signature: v=1; a=ed25519-sha256; c=relaxed/relaxed; d=suse.cz; s=susede2_ed25519; t=1638382519; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc: mime-version:mime-version: content-transfer-encoding:content-transfer-encoding: in-reply-to:in-reply-to:references:references; bh=p8tv2+xq9sBsttSOv4xOUAPywnT1KNNx/eDzXd9OiSg=; b=M2nU5i8mo5pSFBdpdVrFQaujntGoO4ts0mohQQPKmujXworIK79U1aooucQoyVIwTZ0v8t La3Y+yzDhFFt2rAw== Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74]) (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits) key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512) (No client certificate requested) by imap2.suse-dmz.suse.de (Postfix) with ESMTPS id A467B14050; Wed, 1 Dec 2021 18:15:19 +0000 (UTC) Received: from dovecot-director2.suse.de ([192.168.254.65]) by imap2.suse-dmz.suse.de with ESMTPSA id SKCaJ7e7p2HPSAAAMHmgww (envelope-from ); Wed, 01 Dec 2021 18:15:19 +0000 From: Vlastimil Babka To: Matthew Wilcox , Christoph Lameter , David Rientjes , Joonsoo Kim , Pekka Enberg Cc: linux-mm@kvack.org, Andrew Morton , patches@lists.linux.dev, Vlastimil Babka , Andrey Ryabinin , Alexander Potapenko , Andrey Konovalov , Dmitry Vyukov , kasan-dev@googlegroups.com Subject: [PATCH v2 25/33] mm/kasan: Convert to struct folio and struct slab Date: Wed, 1 Dec 2021 19:15:02 +0100 Message-Id: <20211201181510.18784-26-vbabka@suse.cz> X-Mailer: git-send-email 2.33.1 In-Reply-To: <20211201181510.18784-1-vbabka@suse.cz> References: <20211201181510.18784-1-vbabka@suse.cz> MIME-Version: 1.0 X-Developer-Signature: v=1; a=openpgp-sha256; l=9591; i=vbabka@suse.cz; h=from:subject; bh=wdLlaDab2DMicugm8HZ3tbv9bFA1JUWGOF5KHqZutpA=; b=owEBbQGS/pANAwAIAeAhynPxiakQAcsmYgBhp7uZnahBhsq3DyszeNzgSBLe4X6TtyGjdQLSRoBm 752quM+JATMEAAEIAB0WIQSNS5MBqTXjGL5IXszgIcpz8YmpEAUCYae7mQAKCRDgIcpz8YmpEKAGB/ 
9R6s987FhRQpwo6Cbom6KV3SQaHUre1R7fzA0tLie6bWZNHjQ0gdLlamw0fOvSZoMI51dOl2xJLVrH 9bnGWRDjjpc7cSSaYnfWdkZZPTMzI+hCGDtk8MikGfOPfXUmqNNtHKzn2nFJK81HuwIURsOHbPOS18 Qe4zwHNpu/+zMKxN61OREUXQsqUgmYv940e6sNR2hUqE7RJ8p+cLnSDBZpp4gemuRUM3hTr10BmzV4 8kKZjH8GL1Lx1T+sfLxe4anm4BRt6+7QMVMGPPmMrGbOlFyagiIc2Knk6T9tJdIPdxriSRvvZUpimB Z0lgl260tmirQWxrepr0AKJc8MfD+7 X-Developer-Key: i=vbabka@suse.cz; a=openpgp; fpr=A940D434992C2E8E99103D50224FA7E7CC82A664 X-Stat-Signature: w1s5k7ed5ehomd57t8hyfdyuwhrero9t X-Rspamd-Queue-Id: 1ECF2F0001D5 X-Rspamd-Server: rspam07 Authentication-Results: imf17.hostedemail.com; dkim=pass header.d=suse.cz header.s=susede2_rsa header.b=DQAFhYvr; dkim=pass header.d=suse.cz header.s=susede2_ed25519 header.b=M2nU5i8m; spf=pass (imf17.hostedemail.com: domain of vbabka@suse.cz designates 195.135.220.28 as permitted sender) smtp.mailfrom=vbabka@suse.cz; dmarc=none X-HE-Tag: 1638382520-900430 X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: From: "Matthew Wilcox (Oracle)" KASAN accesses some slab related struct page fields so we need to convert it to struct slab. Some places are a bit simplified thanks to kasan_addr_to_slab() encapsulating the PageSlab flag check through virt_to_slab(). When resolving object address to either a real slab or a large kmalloc, use struct folio as the intermediate type for testing the slab flag to avoid unnecessary implicit compound_head(). [ vbabka@suse.cz: use struct folio, adjust to differences in previous patches ] Signed-off-by: Matthew Wilcox (Oracle) Signed-off-by: Vlastimil Babka Cc: Andrey Ryabinin Cc: Alexander Potapenko Cc: Andrey Konovalov Cc: Dmitry Vyukov Cc: Reviewed-by: Andrey Konovalov --- include/linux/kasan.h | 9 +++++---- mm/kasan/common.c | 23 +++++++++++++---------- mm/kasan/generic.c | 8 ++++---- mm/kasan/kasan.h | 1 + mm/kasan/quarantine.c | 2 +- mm/kasan/report.c | 13 +++++++++++-- mm/kasan/report_tags.c | 10 +++++----- mm/slab.c | 2 +- mm/slub.c | 2 +- 9 files changed, 42 insertions(+), 28 deletions(-) diff --git a/include/linux/kasan.h b/include/linux/kasan.h index d8783b682669..fb78108d694e 100644 --- a/include/linux/kasan.h +++ b/include/linux/kasan.h @@ -9,6 +9,7 @@ struct kmem_cache; struct page; +struct slab; struct vm_struct; struct task_struct; @@ -193,11 +194,11 @@ static __always_inline size_t kasan_metadata_size(struct kmem_cache *cache) return 0; } -void __kasan_poison_slab(struct page *page); -static __always_inline void kasan_poison_slab(struct page *page) +void __kasan_poison_slab(struct slab *slab); +static __always_inline void kasan_poison_slab(struct slab *slab) { if (kasan_enabled()) - __kasan_poison_slab(page); + __kasan_poison_slab(slab); } void __kasan_unpoison_object_data(struct kmem_cache *cache, void *object); @@ -322,7 +323,7 @@ static inline void kasan_cache_create(struct kmem_cache *cache, slab_flags_t *flags) {} static inline void kasan_cache_create_kmalloc(struct kmem_cache *cache) {} static inline size_t kasan_metadata_size(struct kmem_cache *cache) { return 0; } -static inline void kasan_poison_slab(struct page *page) {} +static inline void kasan_poison_slab(struct slab *slab) {} static inline void kasan_unpoison_object_data(struct kmem_cache *cache, void *object) {} static inline void kasan_poison_object_data(struct kmem_cache *cache, diff --git a/mm/kasan/common.c b/mm/kasan/common.c index 6a1cd2d38bff..7c06db78a76c 100644 --- a/mm/kasan/common.c +++ b/mm/kasan/common.c @@ -247,8 +247,9 @@ 
struct kasan_free_meta *kasan_get_free_meta(struct kmem_cache *cache, } #endif -void __kasan_poison_slab(struct page *page) +void __kasan_poison_slab(struct slab *slab) { + struct page *page = slab_page(slab); unsigned long i; for (i = 0; i < compound_nr(page); i++) @@ -401,9 +402,9 @@ void __kasan_kfree_large(void *ptr, unsigned long ip) void __kasan_slab_free_mempool(void *ptr, unsigned long ip) { - struct page *page; + struct folio *folio; - page = virt_to_head_page(ptr); + folio = virt_to_folio(ptr); /* * Even though this function is only called for kmem_cache_alloc and @@ -411,12 +412,14 @@ void __kasan_slab_free_mempool(void *ptr, unsigned long ip) * !PageSlab() when the size provided to kmalloc is larger than * KMALLOC_MAX_SIZE, and kmalloc falls back onto page_alloc. */ - if (unlikely(!PageSlab(page))) { + if (unlikely(!folio_test_slab(folio))) { if (____kasan_kfree_large(ptr, ip)) return; - kasan_poison(ptr, page_size(page), KASAN_FREE_PAGE, false); + kasan_poison(ptr, folio_size(folio), KASAN_FREE_PAGE, false); } else { - ____kasan_slab_free(page->slab_cache, ptr, ip, false, false); + struct slab *slab = folio_slab(folio); + + ____kasan_slab_free(slab->slab_cache, ptr, ip, false, false); } } @@ -560,7 +563,7 @@ void * __must_check __kasan_kmalloc_large(const void *ptr, size_t size, void * __must_check __kasan_krealloc(const void *object, size_t size, gfp_t flags) { - struct page *page; + struct slab *slab; if (unlikely(object == ZERO_SIZE_PTR)) return (void *)object; @@ -572,13 +575,13 @@ void * __must_check __kasan_krealloc(const void *object, size_t size, gfp_t flag */ kasan_unpoison(object, size, false); - page = virt_to_head_page(object); + slab = virt_to_slab(object); /* Piggy-back on kmalloc() instrumentation to poison the redzone. */ - if (unlikely(!PageSlab(page))) + if (unlikely(!slab)) return __kasan_kmalloc_large(object, size, flags); else - return ____kasan_kmalloc(page->slab_cache, object, size, flags); + return ____kasan_kmalloc(slab->slab_cache, object, size, flags); } bool __kasan_check_byte(const void *address, unsigned long ip) diff --git a/mm/kasan/generic.c b/mm/kasan/generic.c index 5d0b79416c4e..a25ad4090615 100644 --- a/mm/kasan/generic.c +++ b/mm/kasan/generic.c @@ -330,16 +330,16 @@ DEFINE_ASAN_SET_SHADOW(f8); static void __kasan_record_aux_stack(void *addr, bool can_alloc) { - struct page *page = kasan_addr_to_page(addr); + struct slab *slab = kasan_addr_to_slab(addr); struct kmem_cache *cache; struct kasan_alloc_meta *alloc_meta; void *object; - if (is_kfence_address(addr) || !(page && PageSlab(page))) + if (is_kfence_address(addr) || !slab) return; - cache = page->slab_cache; - object = nearest_obj(cache, page_slab(page), addr); + cache = slab->slab_cache; + object = nearest_obj(cache, slab, addr); alloc_meta = kasan_get_alloc_meta(cache, object); if (!alloc_meta) return; diff --git a/mm/kasan/kasan.h b/mm/kasan/kasan.h index aebd8df86a1f..c17fa8d26ffe 100644 --- a/mm/kasan/kasan.h +++ b/mm/kasan/kasan.h @@ -265,6 +265,7 @@ bool kasan_report(unsigned long addr, size_t size, void kasan_report_invalid_free(void *object, unsigned long ip); struct page *kasan_addr_to_page(const void *addr); +struct slab *kasan_addr_to_slab(const void *addr); depot_stack_handle_t kasan_save_stack(gfp_t flags, bool can_alloc); void kasan_set_track(struct kasan_track *track, gfp_t flags); diff --git a/mm/kasan/quarantine.c b/mm/kasan/quarantine.c index d8ccff4c1275..587da8995f2d 100644 --- a/mm/kasan/quarantine.c +++ b/mm/kasan/quarantine.c @@ -117,7 +117,7 @@ static unsigned 
long quarantine_batch_size; static struct kmem_cache *qlink_to_cache(struct qlist_node *qlink) { - return virt_to_head_page(qlink)->slab_cache; + return virt_to_slab(qlink)->slab_cache; } static void *qlink_to_object(struct qlist_node *qlink, struct kmem_cache *cache) diff --git a/mm/kasan/report.c b/mm/kasan/report.c index e00999dc6499..3ad9624dcc56 100644 --- a/mm/kasan/report.c +++ b/mm/kasan/report.c @@ -150,6 +150,14 @@ struct page *kasan_addr_to_page(const void *addr) return NULL; } +struct slab *kasan_addr_to_slab(const void *addr) +{ + if ((addr >= (void *)PAGE_OFFSET) && + (addr < high_memory)) + return virt_to_slab(addr); + return NULL; +} + static void describe_object_addr(struct kmem_cache *cache, void *object, const void *addr) { @@ -248,8 +256,9 @@ static void print_address_description(void *addr, u8 tag) pr_err("\n"); if (page && PageSlab(page)) { - struct kmem_cache *cache = page->slab_cache; - void *object = nearest_obj(cache, page_slab(page), addr); + struct slab *slab = page_slab(page); + struct kmem_cache *cache = slab->slab_cache; + void *object = nearest_obj(cache, slab, addr); describe_object(cache, object, addr, tag); } diff --git a/mm/kasan/report_tags.c b/mm/kasan/report_tags.c index 06c21dd77493..1b41de88c53e 100644 --- a/mm/kasan/report_tags.c +++ b/mm/kasan/report_tags.c @@ -12,7 +12,7 @@ const char *kasan_get_bug_type(struct kasan_access_info *info) #ifdef CONFIG_KASAN_TAGS_IDENTIFY struct kasan_alloc_meta *alloc_meta; struct kmem_cache *cache; - struct page *page; + struct slab *slab; const void *addr; void *object; u8 tag; @@ -20,10 +20,10 @@ const char *kasan_get_bug_type(struct kasan_access_info *info) tag = get_tag(info->access_addr); addr = kasan_reset_tag(info->access_addr); - page = kasan_addr_to_page(addr); - if (page && PageSlab(page)) { - cache = page->slab_cache; - object = nearest_obj(cache, page_slab(page), (void *)addr); + slab = kasan_addr_to_slab(addr); + if (slab) { + cache = slab->slab_cache; + object = nearest_obj(cache, slab, (void *)addr); alloc_meta = kasan_get_alloc_meta(cache, object); if (alloc_meta) { diff --git a/mm/slab.c b/mm/slab.c index 785fffd527fe..fed55fa1b7d0 100644 --- a/mm/slab.c +++ b/mm/slab.c @@ -2605,7 +2605,7 @@ static struct slab *cache_grow_begin(struct kmem_cache *cachep, * page_address() in the latter returns a non-tagged pointer, * as it should be for slab pages. */ - kasan_poison_slab(slab_page(slab)); + kasan_poison_slab(slab); /* Get slab management. 
*/ freelist = alloc_slabmgmt(cachep, slab, offset, diff --git a/mm/slub.c b/mm/slub.c index 61aaaa662c5e..58f0d499a293 100644 --- a/mm/slub.c +++ b/mm/slub.c @@ -1961,7 +1961,7 @@ static struct slab *allocate_slab(struct kmem_cache *s, gfp_t flags, int node) slab->slab_cache = s; - kasan_poison_slab(slab_page(slab)); + kasan_poison_slab(slab); start = slab_address(slab); From patchwork Wed Dec 1 18:15:03 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Vlastimil Babka X-Patchwork-Id: 12650795 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by smtp.lore.kernel.org (Postfix) with ESMTP id A7FCDC433F5 for ; Wed, 1 Dec 2021 18:34:37 +0000 (UTC) Received: by kanga.kvack.org (Postfix) id 7293F6B0075; Wed, 1 Dec 2021 13:22:10 -0500 (EST) Received: by kanga.kvack.org (Postfix, from userid 40) id 694716B007B; Wed, 1 Dec 2021 13:22:10 -0500 (EST) X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id 1AAFD6B007E; Wed, 1 Dec 2021 13:22:10 -0500 (EST) X-Delivered-To: linux-mm@kvack.org Received: from forelay.hostedemail.com (smtprelay0205.hostedemail.com [216.40.44.205]) by kanga.kvack.org (Postfix) with ESMTP id EF2856B007B for ; Wed, 1 Dec 2021 13:22:09 -0500 (EST) Received: from smtpin05.hostedemail.com (10.5.19.251.rfc1918.com [10.5.19.251]) by forelay05.hostedemail.com (Postfix) with ESMTP id B3C0A181327C9 for ; Wed, 1 Dec 2021 18:21:59 +0000 (UTC) X-FDA: 78870044358.05.2DD5B95 Received: from smtp-out1.suse.de (smtp-out1.suse.de [195.135.220.28]) by imf08.hostedemail.com (Postfix) with ESMTP id EF66530000AE for ; Wed, 1 Dec 2021 18:21:59 +0000 (UTC) Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74]) (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits) key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512) (No client certificate requested) by smtp-out1.suse.de (Postfix) with ESMTPS id 0A010218E0; Wed, 1 Dec 2021 18:15:20 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.cz; s=susede2_rsa; t=1638382520; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc: mime-version:mime-version: content-transfer-encoding:content-transfer-encoding: in-reply-to:in-reply-to:references:references; bh=pRNn3FUWmIkB+AIBt8k4Hc3p8VqTdjyb4KkYPCXkybg=; b=ELzro/zIxjVcRcN04EgCAo22xNe9OBJSJDb3yhxSZGuQ9259suTkCJSXQVIbMMJsY8Ow+W Mr5zTKf5qsTwbjwbDYGphhlaSIxjrs8g1Vn+DKuhueggUyaL9PfXfbE/PrIMOpo2wAc4dD s1vO6dRNQ0U4iqKykj8uElhVMmrYY20= DKIM-Signature: v=1; a=ed25519-sha256; c=relaxed/relaxed; d=suse.cz; s=susede2_ed25519; t=1638382520; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc: mime-version:mime-version: content-transfer-encoding:content-transfer-encoding: in-reply-to:in-reply-to:references:references; bh=pRNn3FUWmIkB+AIBt8k4Hc3p8VqTdjyb4KkYPCXkybg=; b=kmr6F2+S1arvtzhRZO9wnXSmRb0Fqlw/CTQvGuUnDXUW/CBZ3QhyJk6fs+tQ0K2s+5NtDN YDF27BS9XOA/cbBA== Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74]) (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits) key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512) (No client certificate requested) by imap2.suse-dmz.suse.de (Postfix) with ESMTPS id D33C213D9D; Wed, 1 Dec 2021 18:15:19 +0000 (UTC) Received: from dovecot-director2.suse.de ([192.168.254.65]) by 
imap2.suse-dmz.suse.de with ESMTPSA id yAUOM7e7p2HPSAAAMHmgww (envelope-from ); Wed, 01 Dec 2021 18:15:19 +0000 From: Vlastimil Babka To: Matthew Wilcox , Christoph Lameter , David Rientjes , Joonsoo Kim , Pekka Enberg Cc: linux-mm@kvack.org, Andrew Morton , patches@lists.linux.dev, Vlastimil Babka , Marco Elver , Alexander Potapenko , Dmitry Vyukov , kasan-dev@googlegroups.com Subject: [PATCH v2 26/33] mm/kfence: Convert kfence_guarded_alloc() to struct slab Date: Wed, 1 Dec 2021 19:15:03 +0100 Message-Id: <20211201181510.18784-27-vbabka@suse.cz> X-Mailer: git-send-email 2.33.1 In-Reply-To: <20211201181510.18784-1-vbabka@suse.cz> References: <20211201181510.18784-1-vbabka@suse.cz> MIME-Version: 1.0 X-Developer-Signature: v=1; a=openpgp-sha256; l=2616; h=from:subject; bh=+CNHtzGum/tOmd/kW8GAUx888uquumIQ2d0wfPG8TF4=; b=owEBbQGS/pANAwAIAeAhynPxiakQAcsmYgBhp7ubr9qa0wQH93+qc0CSNF9NLizxF8+su0OTZzLH lGacLvSJATMEAAEIAB0WIQSNS5MBqTXjGL5IXszgIcpz8YmpEAUCYae7mwAKCRDgIcpz8YmpEJDXCA CpJyrm/iqqE/v6T1tHy7bIGV5jHGrybBMM/EmUqQSj86aWhtP8Oi8CPAIFBtOu+CTFldmPk4Nh/0nq uw93aFHn4LHexWfSTIp53E0vyNSuHBYqtokdvrdbJN848c4mu37p5Ip8eLzmFavLDfrBdCoQYkGvpL IEcYpRa+yB8sVCIywEoZSrPi6OQwzslxQspXoZb7yM7Pm6AHdV/uPNfOui/2aRHVfFh0nYWuvz6ER2 WK77d7PonT7Mpg5p4Hwd82vtUvdGkz/R4PMOTbU3gIYV5zFvZ+2VOTvua7gTWrKL/iQG+0yxHKrhZf QchjOVxsNvaOcxzNOGhcIz4RzdVBRz X-Developer-Key: i=vbabka@suse.cz; a=openpgp; fpr=A940D434992C2E8E99103D50224FA7E7CC82A664 X-Rspamd-Server: rspam10 X-Rspamd-Queue-Id: EF66530000AE X-Stat-Signature: wejmw679qqthzseocds39pbg7ogqix31 Authentication-Results: imf08.hostedemail.com; dkim=pass header.d=suse.cz header.s=susede2_rsa header.b="ELzro/zI"; dkim=pass header.d=suse.cz header.s=susede2_ed25519 header.b=kmr6F2+S; dmarc=none; spf=pass (imf08.hostedemail.com: domain of vbabka@suse.cz designates 195.135.220.28 as permitted sender) smtp.mailfrom=vbabka@suse.cz X-HE-Tag: 1638382919-622387 X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: The function sets some fields that are being moved from struct page to struct slab so it needs to be converted. Signed-off-by: Vlastimil Babka Tested-by: Marco Elver Cc: Alexander Potapenko Cc: Marco Elver Cc: Dmitry Vyukov Cc: --- mm/kfence/core.c | 12 ++++++------ mm/kfence/kfence_test.c | 6 +++--- 2 files changed, 9 insertions(+), 9 deletions(-) diff --git a/mm/kfence/core.c b/mm/kfence/core.c index 09945784df9e..4eb60cf5ff8b 100644 --- a/mm/kfence/core.c +++ b/mm/kfence/core.c @@ -360,7 +360,7 @@ static void *kfence_guarded_alloc(struct kmem_cache *cache, size_t size, gfp_t g { struct kfence_metadata *meta = NULL; unsigned long flags; - struct page *page; + struct slab *slab; void *addr; /* Try to obtain a free object. */ @@ -424,13 +424,13 @@ static void *kfence_guarded_alloc(struct kmem_cache *cache, size_t size, gfp_t g alloc_covered_add(alloc_stack_hash, 1); - /* Set required struct page fields. */ - page = virt_to_page(meta->addr); - page->slab_cache = cache; + /* Set required slab fields. */ + slab = virt_to_slab((void *)meta->addr); + slab->slab_cache = cache; if (IS_ENABLED(CONFIG_SLUB)) - page->objects = 1; + slab->objects = 1; if (IS_ENABLED(CONFIG_SLAB)) - page->s_mem = addr; + slab->s_mem = addr; /* Memory initialization. 
*/ for_each_canary(meta, set_canary_byte); diff --git a/mm/kfence/kfence_test.c b/mm/kfence/kfence_test.c index f7276711d7b9..a22b1af85577 100644 --- a/mm/kfence/kfence_test.c +++ b/mm/kfence/kfence_test.c @@ -282,7 +282,7 @@ static void *test_alloc(struct kunit *test, size_t size, gfp_t gfp, enum allocat alloc = kmalloc(size, gfp); if (is_kfence_address(alloc)) { - struct page *page = virt_to_head_page(alloc); + struct slab *slab = virt_to_slab(alloc); struct kmem_cache *s = test_cache ?: kmalloc_caches[kmalloc_type(GFP_KERNEL)][__kmalloc_index(size, false)]; @@ -291,8 +291,8 @@ static void *test_alloc(struct kunit *test, size_t size, gfp_t gfp, enum allocat * even for KFENCE objects; these are required so that * memcg accounting works correctly. */ - KUNIT_EXPECT_EQ(test, obj_to_index(s, page_slab(page), alloc), 0U); - KUNIT_EXPECT_EQ(test, objs_per_slab(s, page_slab(page)), 1); + KUNIT_EXPECT_EQ(test, obj_to_index(s, slab, alloc), 0U); + KUNIT_EXPECT_EQ(test, objs_per_slab(s, slab), 1); if (policy == ALLOCATE_ANY) return alloc; From patchwork Wed Dec 1 18:15:04 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Vlastimil Babka X-Patchwork-Id: 12650793 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by smtp.lore.kernel.org (Postfix) with ESMTP id 96AEAC433F5 for ; Wed, 1 Dec 2021 18:34:03 +0000 (UTC) Received: by kanga.kvack.org (Postfix) id 4F0AE6B0073; Wed, 1 Dec 2021 13:22:10 -0500 (EST) Received: by kanga.kvack.org (Postfix, from userid 40) id 37F5F6B007D; Wed, 1 Dec 2021 13:22:10 -0500 (EST) X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id 074F16B0075; Wed, 1 Dec 2021 13:22:10 -0500 (EST) X-Delivered-To: linux-mm@kvack.org Received: from forelay.hostedemail.com (smtprelay0018.hostedemail.com [216.40.44.18]) by kanga.kvack.org (Postfix) with ESMTP id E4CCC6B0074 for ; Wed, 1 Dec 2021 13:22:09 -0500 (EST) Received: from smtpin15.hostedemail.com (10.5.19.251.rfc1918.com [10.5.19.251]) by forelay04.hostedemail.com (Postfix) with ESMTP id AAFE377C5F for ; Wed, 1 Dec 2021 18:21:59 +0000 (UTC) X-FDA: 78870044358.15.84E2323 Received: from smtp-out1.suse.de (smtp-out1.suse.de [195.135.220.28]) by imf16.hostedemail.com (Postfix) with ESMTP id 16B37F000090 for ; Wed, 1 Dec 2021 18:21:58 +0000 (UTC) Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74]) (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits) key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512) (No client certificate requested) by smtp-out1.suse.de (Postfix) with ESMTPS id 2EC4B218E1; Wed, 1 Dec 2021 18:15:20 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.cz; s=susede2_rsa; t=1638382520; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc: mime-version:mime-version: content-transfer-encoding:content-transfer-encoding: in-reply-to:in-reply-to:references:references; bh=k9ZixtfM35tzxNogC9Q/Zjb3jw/3HxLVaAR2HEzD94w=; b=pHJqcViLFus8H+B3R7sMcv77cqlWRn7nMcYPxJqcCsyckfABolUU1WRrwKPmh477x1exxY mB76xzl4qQDgtxHX9eHOKKLebzDdfHdKTckgpewWrjumHLrHJi+XmYaUCXq76k4JzDqOIf PJ9BOERnAOFI/ns5Cb3r0u6R0X1KjWY= DKIM-Signature: v=1; a=ed25519-sha256; c=relaxed/relaxed; d=suse.cz; s=susede2_ed25519; t=1638382520; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc: mime-version:mime-version: 
content-transfer-encoding:content-transfer-encoding: in-reply-to:in-reply-to:references:references; bh=k9ZixtfM35tzxNogC9Q/Zjb3jw/3HxLVaAR2HEzD94w=; b=Y0qngyg7344Njp7pDNolPu2UCxwHX43d5Gdhf1NOa0UWH1dsjRFONG6/FGRjV39T9wfnuQ VahLwZDyC7g/U5Dw== Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74]) (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits) key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512) (No client certificate requested) by imap2.suse-dmz.suse.de (Postfix) with ESMTPS id 09AF514050; Wed, 1 Dec 2021 18:15:20 +0000 (UTC) Received: from dovecot-director2.suse.de ([192.168.254.65]) by imap2.suse-dmz.suse.de with ESMTPSA id 4ADSAbi7p2HPSAAAMHmgww (envelope-from ); Wed, 01 Dec 2021 18:15:20 +0000 From: Vlastimil Babka To: Matthew Wilcox , Christoph Lameter , David Rientjes , Joonsoo Kim , Pekka Enberg Cc: linux-mm@kvack.org, Andrew Morton , patches@lists.linux.dev, Vlastimil Babka , Minchan Kim , Nitin Gupta , Sergey Senozhatsky Subject: [PATCH v2 27/33] zsmalloc: Stop using slab fields in struct page Date: Wed, 1 Dec 2021 19:15:04 +0100 Message-Id: <20211201181510.18784-28-vbabka@suse.cz> X-Mailer: git-send-email 2.33.1 In-Reply-To: <20211201181510.18784-1-vbabka@suse.cz> References: <20211201181510.18784-1-vbabka@suse.cz> MIME-Version: 1.0 X-Developer-Signature: v=1; a=openpgp-sha256; l=3314; i=vbabka@suse.cz; h=from:subject; bh=8SuDkeW+Y9PZVFez1G0gmyr5ZITWgrIDjjaoMCRZfSM=; b=owEBbQGS/pANAwAIAeAhynPxiakQAcsmYgBhp7udZNIGX2PL0DK79GP20slO6qqTJzMyvKUPAkqA Yu0ypNaJATMEAAEIAB0WIQSNS5MBqTXjGL5IXszgIcpz8YmpEAUCYae7nQAKCRDgIcpz8YmpEOl8B/ 0anHTWLi8ZxwIoiv+zcRRwEvyq62a6UOIDOMCeP7SLIIt5OB3dmi9SnRwYPvm+11a/1maE3PGGxKsX W3tqpUN9Fj9n2bbuJPpPmF1yUT8lxeE1Yfs4RBnolORJmAjYUF5MhGFdgOOsmCep/l0NHY5qsaU6OJ jYkSioyR3nGbzYv+5RL5jRz6RBER9V91uTo8INbQoqA7SfLw90DlDMS3pqOI3GOGYAlwHgvknFEeP1 8wiG6gJ/gQD5Q/Es0CPhfgbiu0vhUtDlpxc8OGzkGMoo6t1IlnCM3PPVSvRBogQJNGYLQisY9+L1Y0 nhydCso5J1wCFBikf5DG7PjOeoWaoS X-Developer-Key: i=vbabka@suse.cz; a=openpgp; fpr=A940D434992C2E8E99103D50224FA7E7CC82A664 X-Rspamd-Server: rspam01 X-Rspamd-Queue-Id: 16B37F000090 X-Stat-Signature: ynchgh87tb4m7jgxpdf7ax1wugp83n68 Authentication-Results: imf16.hostedemail.com; dkim=pass header.d=suse.cz header.s=susede2_rsa header.b=pHJqcViL; dkim=pass header.d=suse.cz header.s=susede2_ed25519 header.b=Y0qngyg7; spf=pass (imf16.hostedemail.com: domain of vbabka@suse.cz designates 195.135.220.28 as permitted sender) smtp.mailfrom=vbabka@suse.cz; dmarc=none X-HE-Tag: 1638382918-240969 X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: From: "Matthew Wilcox (Oracle)" The ->freelist and ->units members of struct page are for the use of slab only. I'm not particularly familiar with zsmalloc, so generate the same code by using page->index to store 'page' (page->index and page->freelist are at the same offset in struct page). This should be cleaned up properly at some point by somebody who is familiar with zsmalloc. 
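The substance of the conversion is easiest to see in zsmalloc's small accessors. Condensed from the diff below, the repurposed struct page fields end up being used like this (a sketch of three helpers assembled from this patch's hunks, not the complete file):

/* The first object offset now lives in page->page_type (was page->units). */
static inline int get_first_obj_offset(struct page *page)
{
	return page->page_type;
}

static inline void set_first_obj_offset(struct page *page, int offset)
{
	page->page_type = offset;
}

/*
 * The link to the next component page of a zspage now lives in
 * page->index (was page->freelist). page->index is an unsigned long,
 * so linking pages takes a cast in each direction; the chain is built
 * with prev_page->index = (unsigned long)page; and walked as below.
 */
static struct page *get_next_page(struct page *page)
{
	if (unlikely(PageHugeObject(page)))
		return NULL;

	return (struct page *)page->index;
}
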
Signed-off-by: Matthew Wilcox (Oracle) Signed-off-by: Vlastimil Babka Cc: Minchan Kim Cc: Nitin Gupta Cc: Sergey Senozhatsky Acked-by: Minchan Kim Acked-by: Johannes Weiner --- mm/zsmalloc.c | 18 +++++++++--------- 1 file changed, 9 insertions(+), 9 deletions(-) diff --git a/mm/zsmalloc.c b/mm/zsmalloc.c index b897ce3b399a..0d3b65939016 100644 --- a/mm/zsmalloc.c +++ b/mm/zsmalloc.c @@ -17,10 +17,10 @@ * * Usage of struct page fields: * page->private: points to zspage - * page->freelist(index): links together all component pages of a zspage + * page->index: links together all component pages of a zspage * For the huge page, this is always 0, so we use this field * to store handle. - * page->units: first object offset in a subpage of zspage + * page->page_type: first object offset in a subpage of zspage * * Usage of struct page flags: * PG_private: identifies the first component page @@ -489,12 +489,12 @@ static inline struct page *get_first_page(struct zspage *zspage) static inline int get_first_obj_offset(struct page *page) { - return page->units; + return page->page_type; } static inline void set_first_obj_offset(struct page *page, int offset) { - page->units = offset; + page->page_type = offset; } static inline unsigned int get_freeobj(struct zspage *zspage) @@ -827,7 +827,7 @@ static struct page *get_next_page(struct page *page) if (unlikely(PageHugeObject(page))) return NULL; - return page->freelist; + return (struct page *)page->index; } /** @@ -901,7 +901,7 @@ static void reset_page(struct page *page) set_page_private(page, 0); page_mapcount_reset(page); ClearPageHugeObject(page); - page->freelist = NULL; + page->index = 0; } static int trylock_zspage(struct zspage *zspage) @@ -1027,7 +1027,7 @@ static void create_page_chain(struct size_class *class, struct zspage *zspage, /* * Allocate individual pages and link them together as: - * 1. all pages are linked together using page->freelist + * 1. all pages are linked together using page->index * 2. each sub-page point to zspage using page->private * * we set PG_private to identify the first page (i.e. 
no other sub-page @@ -1036,7 +1036,7 @@ static void create_page_chain(struct size_class *class, struct zspage *zspage, for (i = 0; i < nr_pages; i++) { page = pages[i]; set_page_private(page, (unsigned long)zspage); - page->freelist = NULL; + page->index = 0; if (i == 0) { zspage->first_page = page; SetPagePrivate(page); @@ -1044,7 +1044,7 @@ static void create_page_chain(struct size_class *class, struct zspage *zspage, class->pages_per_zspage == 1)) SetPageHugeObject(page); } else { - prev_page->freelist = page; + prev_page->index = (unsigned long)page; } prev_page = page; }
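A quick sanity note on the trick above: it only generates identical code because page->index and page->freelist occupy the same word in struct page. A build-time check of that assumption could look like the sketch below -- hypothetical, and only meaningful while both fields still exist, i.e. before patch 30 of this series removes the slab fields from struct page:

#include <linux/build_bug.h>	/* static_assert() */
#include <linux/stddef.h>	/* offsetof() */
#include <linux/mm_types.h>	/* struct page */

/* The zsmalloc conversion relies on these two fields aliasing. */
static_assert(offsetof(struct page, index) ==
	      offsetof(struct page, freelist),
	      "page->index must overlay page->freelist");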
From patchwork Wed Dec 1 18:15:05 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Vlastimil Babka From: Vlastimil Babka To: Matthew Wilcox , Christoph Lameter , David Rientjes , Joonsoo Kim , Pekka Enberg Cc: linux-mm@kvack.org, Andrew Morton , patches@lists.linux.dev, Vlastimil Babka , Dave Hansen , Andy Lutomirski , Peter Zijlstra , Thomas Gleixner , Ingo Molnar , Borislav Petkov , x86@kernel.org, "H. Peter Anvin" Subject: [PATCH v2 28/33] bootmem: Use page->index instead of page->freelist Date: Wed, 1 Dec 2021 19:15:05 +0100 Message-Id: <20211201181510.18784-29-vbabka@suse.cz> In-Reply-To: <20211201181510.18784-1-vbabka@suse.cz> References: <20211201181510.18784-1-vbabka@suse.cz> From: "Matthew Wilcox (Oracle)" page->freelist is for the use of slab. Using page->index is the same set of bits as page->freelist, and by using an integer instead of a pointer, we can avoid casts. Signed-off-by: Matthew Wilcox (Oracle) Signed-off-by: Vlastimil Babka Cc: Dave Hansen Cc: Andy Lutomirski Cc: Peter Zijlstra Cc: Thomas Gleixner Cc: Ingo Molnar Cc: Borislav Petkov Cc: x86@kernel.org Cc: "H.
Peter Anvin" Acked-by: Johannes Weiner --- arch/x86/mm/init_64.c | 2 +- include/linux/bootmem_info.h | 2 +- mm/bootmem_info.c | 7 +++---- mm/sparse.c | 2 +- 4 files changed, 6 insertions(+), 7 deletions(-) diff --git a/arch/x86/mm/init_64.c b/arch/x86/mm/init_64.c index 36098226a957..96d34ebb20a9 100644 --- a/arch/x86/mm/init_64.c +++ b/arch/x86/mm/init_64.c @@ -981,7 +981,7 @@ static void __meminit free_pagetable(struct page *page, int order) if (PageReserved(page)) { __ClearPageReserved(page); - magic = (unsigned long)page->freelist; + magic = page->index; if (magic == SECTION_INFO || magic == MIX_SECTION_INFO) { while (nr_pages--) put_page_bootmem(page++); diff --git a/include/linux/bootmem_info.h b/include/linux/bootmem_info.h index 2bc8b1f69c93..cc35d010fa94 100644 --- a/include/linux/bootmem_info.h +++ b/include/linux/bootmem_info.h @@ -30,7 +30,7 @@ void put_page_bootmem(struct page *page); */ static inline void free_bootmem_page(struct page *page) { - unsigned long magic = (unsigned long)page->freelist; + unsigned long magic = page->index; /* * The reserve_bootmem_region sets the reserved flag on bootmem diff --git a/mm/bootmem_info.c b/mm/bootmem_info.c index f03f42f426f6..f18a631e7479 100644 --- a/mm/bootmem_info.c +++ b/mm/bootmem_info.c @@ -15,7 +15,7 @@ void get_page_bootmem(unsigned long info, struct page *page, unsigned long type) { - page->freelist = (void *)type; + page->index = type; SetPagePrivate(page); set_page_private(page, info); page_ref_inc(page); @@ -23,14 +23,13 @@ void get_page_bootmem(unsigned long info, struct page *page, unsigned long type) void put_page_bootmem(struct page *page) { - unsigned long type; + unsigned long type = page->index; - type = (unsigned long) page->freelist; BUG_ON(type < MEMORY_HOTPLUG_MIN_BOOTMEM_TYPE || type > MEMORY_HOTPLUG_MAX_BOOTMEM_TYPE); if (page_ref_dec_return(page) == 1) { - page->freelist = NULL; + page->index = 0; ClearPagePrivate(page); set_page_private(page, 0); INIT_LIST_HEAD(&page->lru); diff --git a/mm/sparse.c b/mm/sparse.c index e5c84b0cf0c9..d21c6e5910d0 100644 --- a/mm/sparse.c +++ b/mm/sparse.c @@ -722,7 +722,7 @@ static void free_map_bootmem(struct page *memmap) >> PAGE_SHIFT; for (i = 0; i < nr_pages; i++, page++) { - magic = (unsigned long) page->freelist; + magic = page->index; BUG_ON(magic == NODE_INFO); From patchwork Wed Dec 1 18:15:06 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Vlastimil Babka X-Patchwork-Id: 12650767 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by smtp.lore.kernel.org (Postfix) with ESMTP id 3243EC433F5 for ; Wed, 1 Dec 2021 18:28:53 +0000 (UTC) Received: by kanga.kvack.org (Postfix) id D26856B009B; Wed, 1 Dec 2021 13:15:32 -0500 (EST) Received: by kanga.kvack.org (Postfix, from userid 40) id CD5F76B009C; Wed, 1 Dec 2021 13:15:32 -0500 (EST) X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id B77876B009D; Wed, 1 Dec 2021 13:15:32 -0500 (EST) X-Delivered-To: linux-mm@kvack.org Received: from forelay.hostedemail.com (smtprelay0114.hostedemail.com [216.40.44.114]) by kanga.kvack.org (Postfix) with ESMTP id 9011B6B009C for ; Wed, 1 Dec 2021 13:15:32 -0500 (EST) Received: from smtpin09.hostedemail.com (10.5.19.251.rfc1918.com [10.5.19.251]) by forelay05.hostedemail.com (Postfix) with ESMTP id 515F318106139 for ; Wed, 1 Dec 2021 
From patchwork Wed Dec 1 18:15:06 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Vlastimil Babka From: Vlastimil Babka To: Matthew Wilcox , Christoph Lameter , David Rientjes , Joonsoo Kim , Pekka Enberg Cc: linux-mm@kvack.org, Andrew Morton , patches@lists.linux.dev, Vlastimil Babka , Joerg Roedel , Suravee Suthikulpanit , Will Deacon , David Woodhouse , Lu Baolu , iommu@lists.linux-foundation.org Subject: [PATCH v2 29/33] iommu: Use put_pages_list Date: Wed, 1 Dec 2021 19:15:06 +0100 Message-Id: <20211201181510.18784-30-vbabka@suse.cz> In-Reply-To: <20211201181510.18784-1-vbabka@suse.cz> References: <20211201181510.18784-1-vbabka@suse.cz>
From: "Matthew Wilcox (Oracle)" page->freelist is for the use of slab. We already have the ability to free a list of pages in the core mm, but it requires the use of a list_head and for the pages to be chained together through page->lru. Switch the iommu code over to using put_pages_list(). Signed-off-by: Matthew Wilcox (Oracle) Signed-off-by: Vlastimil Babka Cc: Joerg Roedel Cc: Suravee Suthikulpanit Cc: Will Deacon Cc: David Woodhouse Cc: Lu Baolu Cc: iommu@lists.linux-foundation.org --- drivers/iommu/amd/io_pgtable.c | 59 +++++++++------------- drivers/iommu/dma-iommu.c | 11 +---- drivers/iommu/intel/iommu.c | 89 ++++++++++++---------------------- include/linux/iommu.h | 3 +- 4 files changed, 57 insertions(+), 105 deletions(-) diff --git a/drivers/iommu/amd/io_pgtable.c b/drivers/iommu/amd/io_pgtable.c index 182c93a43efd..08ea6a02cda9 100644 --- a/drivers/iommu/amd/io_pgtable.c +++ b/drivers/iommu/amd/io_pgtable.c @@ -74,27 +74,15 @@ static u64 *first_pte_l7(u64 *pte, unsigned long *page_size, * ****************************************************************************/ -static void free_page_list(struct page *freelist) -{ - while (freelist != NULL) { - unsigned long p = (unsigned long)page_address(freelist); - - freelist = freelist->freelist; - free_page(p); - } -} - -static struct page *free_pt_page(unsigned long pt, struct page *freelist) +static void free_pt_page(unsigned long pt, struct list_head *list) { struct page *p = virt_to_page((void *)pt); - p->freelist = freelist; - - return p; + list_add(&p->lru, list); } #define DEFINE_FREE_PT_FN(LVL, FN) \ -static struct page *free_pt_##LVL (unsigned long __pt, struct page *freelist) \ +static void free_pt_##LVL (unsigned long __pt, struct list_head *list) \ { \ unsigned long p; \ u64 *pt; \ @@ -113,10 +101,10 @@ static struct page *free_pt_##LVL (unsigned long __pt, struct page *freelist) \ continue; \ \ p = (unsigned long)IOMMU_PTE_PAGE(pt[i]); \ - freelist = FN(p, freelist); \ + FN(p, list); \ } \ \ - return free_pt_page((unsigned long)pt, freelist); \ + free_pt_page((unsigned long)pt, list); \ } DEFINE_FREE_PT_FN(l2, free_pt_page) @@ -125,36 +113,33 @@ DEFINE_FREE_PT_FN(l4, free_pt_l3) DEFINE_FREE_PT_FN(l5, free_pt_l4) DEFINE_FREE_PT_FN(l6, free_pt_l5) -static struct page *free_sub_pt(unsigned long root, int mode, - struct page *freelist) +static void free_sub_pt(unsigned long root, int mode, struct list_head *list) { switch (mode) { case PAGE_MODE_NONE: case PAGE_MODE_7_LEVEL: break; case PAGE_MODE_1_LEVEL: - freelist = free_pt_page(root, freelist); + free_pt_page(root, list); break; case PAGE_MODE_2_LEVEL: - freelist = free_pt_l2(root, freelist); + free_pt_l2(root, list); break; case PAGE_MODE_3_LEVEL: - freelist = free_pt_l3(root, freelist); + free_pt_l3(root, list); break; case PAGE_MODE_4_LEVEL: - freelist = free_pt_l4(root, freelist); + free_pt_l4(root, list); break; case PAGE_MODE_5_LEVEL: - freelist = free_pt_l5(root, freelist); + free_pt_l5(root, list); break; case PAGE_MODE_6_LEVEL: - freelist = free_pt_l6(root, freelist); + free_pt_l6(root, list); break; default: BUG(); } - - return freelist; }
void amd_iommu_domain_set_pgtable(struct protection_domain *domain, @@ -362,7 +347,7 @@ static u64 *fetch_pte(struct amd_io_pgtable *pgtable, return pte; } -static struct page *free_clear_pte(u64 *pte, u64 pteval, struct page *freelist) +static void free_clear_pte(u64 *pte, u64 pteval, struct list_head *list) { unsigned long pt; int mode; @@ -373,12 +358,12 @@ static struct page *free_clear_pte(u64 *pte, u64 pteval, struct page *freelist) } if (!IOMMU_PTE_PRESENT(pteval)) - return freelist; + return; pt = (unsigned long)IOMMU_PTE_PAGE(pteval); mode = IOMMU_PTE_MODE(pteval); - return free_sub_pt(pt, mode, freelist); + free_sub_pt(pt, mode, list); } /* @@ -392,7 +377,7 @@ static int iommu_v1_map_page(struct io_pgtable_ops *ops, unsigned long iova, phys_addr_t paddr, size_t size, int prot, gfp_t gfp) { struct protection_domain *dom = io_pgtable_ops_to_domain(ops); - struct page *freelist = NULL; + LIST_HEAD(freelist); bool updated = false; u64 __pte, *pte; int ret, i, count; @@ -412,9 +397,9 @@ static int iommu_v1_map_page(struct io_pgtable_ops *ops, unsigned long iova, goto out; for (i = 0; i < count; ++i) - freelist = free_clear_pte(&pte[i], pte[i], freelist); + free_clear_pte(&pte[i], pte[i], &freelist); - if (freelist != NULL) + if (!list_empty(&freelist)) updated = true; if (count > 1) { @@ -449,7 +434,7 @@ static int iommu_v1_map_page(struct io_pgtable_ops *ops, unsigned long iova, } /* Everything flushed out, free pages now */ - free_page_list(freelist); + put_pages_list(&freelist); return ret; } @@ -511,7 +496,7 @@ static void v1_free_pgtable(struct io_pgtable *iop) { struct amd_io_pgtable *pgtable = container_of(iop, struct amd_io_pgtable, iop); struct protection_domain *dom; - struct page *freelist = NULL; + LIST_HEAD(freelist); unsigned long root; if (pgtable->mode == PAGE_MODE_NONE) @@ -530,9 +515,9 @@ static void v1_free_pgtable(struct io_pgtable *iop) pgtable->mode > PAGE_MODE_6_LEVEL); root = (unsigned long)pgtable->root; - freelist = free_sub_pt(root, pgtable->mode, freelist); + free_sub_pt(root, pgtable->mode, &freelist); - free_page_list(freelist); + put_pages_list(&freelist); } static struct io_pgtable *v1_alloc_pgtable(struct io_pgtable_cfg *cfg, void *cookie) diff --git a/drivers/iommu/dma-iommu.c b/drivers/iommu/dma-iommu.c index b42e38a0dbe2..e61881c2c258 100644 --- a/drivers/iommu/dma-iommu.c +++ b/drivers/iommu/dma-iommu.c @@ -66,14 +66,7 @@ early_param("iommu.forcedac", iommu_dma_forcedac_setup); static void iommu_dma_entry_dtor(unsigned long data) { - struct page *freelist = (struct page *)data; - - while (freelist) { - unsigned long p = (unsigned long)page_address(freelist); - - freelist = freelist->freelist; - free_page(p); - } + put_pages_list((struct list_head *)data); } static inline size_t cookie_msi_granule(struct iommu_dma_cookie *cookie) @@ -479,7 +472,7 @@ static void iommu_dma_free_iova(struct iommu_dma_cookie *cookie, else if (gather && gather->queued) queue_iova(iovad, iova_pfn(iovad, iova), size >> iova_shift(iovad), - (unsigned long)gather->freelist); + (unsigned long)&gather->freelist); else free_iova_fast(iovad, iova_pfn(iovad, iova), size >> iova_shift(iovad)); diff --git a/drivers/iommu/intel/iommu.c b/drivers/iommu/intel/iommu.c index b6a8f3282411..cd2ec6779cac 100644 --- a/drivers/iommu/intel/iommu.c +++ b/drivers/iommu/intel/iommu.c @@ -1303,35 +1303,30 @@ static void dma_pte_free_pagetable(struct dmar_domain *domain, know the hardware page-walk will no longer touch them. 
The 'pte' argument is the *parent* PTE, pointing to the page that is to be freed. */ -static struct page *dma_pte_list_pagetables(struct dmar_domain *domain, - int level, struct dma_pte *pte, - struct page *freelist) +static void dma_pte_list_pagetables(struct dmar_domain *domain, + int level, struct dma_pte *pte, + struct list_head *list) { struct page *pg; pg = pfn_to_page(dma_pte_addr(pte) >> PAGE_SHIFT); - pg->freelist = freelist; - freelist = pg; + list_add(&pg->lru, list); if (level == 1) - return freelist; + return; pte = page_address(pg); do { if (dma_pte_present(pte) && !dma_pte_superpage(pte)) - freelist = dma_pte_list_pagetables(domain, level - 1, - pte, freelist); + dma_pte_list_pagetables(domain, level - 1, pte, list); pte++; } while (!first_pte_in_page(pte)); - - return freelist; } -static struct page *dma_pte_clear_level(struct dmar_domain *domain, int level, - struct dma_pte *pte, unsigned long pfn, - unsigned long start_pfn, - unsigned long last_pfn, - struct page *freelist) +static void dma_pte_clear_level(struct dmar_domain *domain, int level, + struct dma_pte *pte, unsigned long pfn, + unsigned long start_pfn, unsigned long last_pfn, + struct list_head *list) { struct dma_pte *first_pte = NULL, *last_pte = NULL; @@ -1350,7 +1345,7 @@ static struct page *dma_pte_clear_level(struct dmar_domain *domain, int level, /* These suborbinate page tables are going away entirely. Don't bother to clear them; we're just going to *free* them. */ if (level > 1 && !dma_pte_superpage(pte)) - freelist = dma_pte_list_pagetables(domain, level - 1, pte, freelist); + dma_pte_list_pagetables(domain, level - 1, pte, list); dma_clear_pte(pte); if (!first_pte) @@ -1358,10 +1353,10 @@ static struct page *dma_pte_clear_level(struct dmar_domain *domain, int level, last_pte = pte; } else if (level > 1) { /* Recurse down into a level that isn't *entirely* obsolete */ - freelist = dma_pte_clear_level(domain, level - 1, - phys_to_virt(dma_pte_addr(pte)), - level_pfn, start_pfn, last_pfn, - freelist); + dma_pte_clear_level(domain, level - 1, + phys_to_virt(dma_pte_addr(pte)), + level_pfn, start_pfn, last_pfn, + list); } next: pfn = level_pfn + level_size(level); @@ -1370,47 +1365,28 @@ static struct page *dma_pte_clear_level(struct dmar_domain *domain, int level, if (first_pte) domain_flush_cache(domain, first_pte, (void *)++last_pte - (void *)first_pte); - - return freelist; } /* We can't just free the pages because the IOMMU may still be walking the page tables, and may have cached the intermediate levels. The pages can only be freed after the IOTLB flush has been done. 
*/ -static struct page *domain_unmap(struct dmar_domain *domain, - unsigned long start_pfn, - unsigned long last_pfn, - struct page *freelist) +static void domain_unmap(struct dmar_domain *domain, unsigned long start_pfn, + unsigned long last_pfn, struct list_head *list) { BUG_ON(!domain_pfn_supported(domain, start_pfn)); BUG_ON(!domain_pfn_supported(domain, last_pfn)); BUG_ON(start_pfn > last_pfn); /* we don't need lock here; nobody else touches the iova range */ - freelist = dma_pte_clear_level(domain, agaw_to_level(domain->agaw), - domain->pgd, 0, start_pfn, last_pfn, - freelist); + dma_pte_clear_level(domain, agaw_to_level(domain->agaw), + domain->pgd, 0, start_pfn, last_pfn, list); /* free pgd */ if (start_pfn == 0 && last_pfn == DOMAIN_MAX_PFN(domain->gaw)) { struct page *pgd_page = virt_to_page(domain->pgd); - pgd_page->freelist = freelist; - freelist = pgd_page; - + list_add(&pgd_page->lru, list); domain->pgd = NULL; } - - return freelist; -} - -static void dma_free_pagelist(struct page *freelist) -{ - struct page *pg; - - while ((pg = freelist)) { - freelist = pg->freelist; - free_pgtable_page(page_address(pg)); - } } /* iommu handling */ @@ -2095,11 +2071,10 @@ static void domain_exit(struct dmar_domain *domain) domain_remove_dev_info(domain); if (domain->pgd) { - struct page *freelist; + LIST_HEAD(pages); - freelist = domain_unmap(domain, 0, - DOMAIN_MAX_PFN(domain->gaw), NULL); - dma_free_pagelist(freelist); + domain_unmap(domain, 0, DOMAIN_MAX_PFN(domain->gaw), &pages); + put_pages_list(&pages); } free_domain_mem(domain); @@ -4192,19 +4167,17 @@ static int intel_iommu_memory_notifier(struct notifier_block *nb, { struct dmar_drhd_unit *drhd; struct intel_iommu *iommu; - struct page *freelist; + LIST_HEAD(pages); - freelist = domain_unmap(si_domain, - start_vpfn, last_vpfn, - NULL); + domain_unmap(si_domain, start_vpfn, last_vpfn, &pages); rcu_read_lock(); for_each_active_iommu(iommu, drhd) iommu_flush_iotlb_psi(iommu, si_domain, start_vpfn, mhp->nr_pages, - !freelist, 0); + list_empty(&pages), 0); rcu_read_unlock(); - dma_free_pagelist(freelist); + put_pages_list(&pages); } break; } @@ -5211,8 +5184,7 @@ static size_t intel_iommu_unmap(struct iommu_domain *domain, start_pfn = iova >> VTD_PAGE_SHIFT; last_pfn = (iova + size - 1) >> VTD_PAGE_SHIFT; - gather->freelist = domain_unmap(dmar_domain, start_pfn, - last_pfn, gather->freelist); + domain_unmap(dmar_domain, start_pfn, last_pfn, &gather->freelist); if (dmar_domain->max_addr == iova + size) dmar_domain->max_addr = iova; @@ -5248,9 +5220,10 @@ static void intel_iommu_tlb_sync(struct iommu_domain *domain, for_each_domain_iommu(iommu_id, dmar_domain) iommu_flush_iotlb_psi(g_iommus[iommu_id], dmar_domain, - start_pfn, nrpages, !gather->freelist, 0); + start_pfn, nrpages, + list_empty(&gather->freelist), 0); - dma_free_pagelist(gather->freelist); + put_pages_list(&gather->freelist); } static phys_addr_t intel_iommu_iova_to_phys(struct iommu_domain *domain, diff --git a/include/linux/iommu.h b/include/linux/iommu.h index d2f3435e7d17..de0c57a567c8 100644 --- a/include/linux/iommu.h +++ b/include/linux/iommu.h @@ -186,7 +186,7 @@ struct iommu_iotlb_gather { unsigned long start; unsigned long end; size_t pgsize; - struct page *freelist; + struct list_head freelist; bool queued; }; @@ -399,6 +399,7 @@ static inline void iommu_iotlb_gather_init(struct iommu_iotlb_gather *gather) { *gather = (struct iommu_iotlb_gather) { .start = ULONG_MAX, + .freelist = LIST_HEAD_INIT(gather->freelist), }; } From patchwork Wed Dec 1 18:15:07 2021 
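Stepping back from the individual drivers, the shape of the conversion is the same everywhere: replace an open-coded singly-linked chain through page->freelist with a regular list_head threaded through page->lru, and free everything with one call once the IOTLB flush guarantees the hardware can no longer walk the pages. A minimal sketch of the resulting pattern, using the same kernel APIs the patch uses (alloc_page() here merely stands in for whatever produced the page-table page):

#include <linux/gfp.h>
#include <linux/list.h>
#include <linux/mm.h>	/* put_pages_list() */

static void gather_and_free_example(void)
{
	LIST_HEAD(freelist);
	struct page *pg = alloc_page(GFP_KERNEL); /* a page being retired */

	if (pg)
		list_add(&pg->lru, &freelist);

	/* ... IOTLB flush here: hardware may no longer reference pg ... */

	put_pages_list(&freelist); /* drops the reference on every listed page */
}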
Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Vlastimil Babka From: Vlastimil Babka To: Matthew Wilcox , Christoph Lameter , David Rientjes , Joonsoo Kim , Pekka Enberg Cc: linux-mm@kvack.org, Andrew Morton , patches@lists.linux.dev, Vlastimil Babka Subject: [PATCH v2 30/33] mm: Remove slab from struct page Date: Wed, 1 Dec 2021 19:15:07 +0100 Message-Id:
<20211201181510.18784-31-vbabka@suse.cz> In-Reply-To: <20211201181510.18784-1-vbabka@suse.cz> References: <20211201181510.18784-1-vbabka@suse.cz> From: "Matthew Wilcox (Oracle)" All members of struct slab can now be removed from struct page. This shrinks the definition of struct page by 30 LOC, making it easier to understand. Signed-off-by: Matthew Wilcox (Oracle) Signed-off-by: Vlastimil Babka Acked-by: Johannes Weiner --- include/linux/mm_types.h | 28 ---------------------------- include/linux/page-flags.h | 37 ------------------------------------- mm/slab.h | 4 ---- 3 files changed, 69 deletions(-) diff --git a/include/linux/mm_types.h b/include/linux/mm_types.h index 1ae3537c7920..646f3ed4f6df 100644 --- a/include/linux/mm_types.h +++ b/include/linux/mm_types.h @@ -118,31 +118,6 @@ struct page { atomic_long_t pp_frag_count; }; }; - struct { /* slab, slob and slub */ - union { - struct list_head slab_list; - struct { /* Partial pages */ - struct page *next; -#ifdef CONFIG_64BIT - int pages; /* Nr of pages left */ -#else - short int pages; -#endif - }; - }; - struct kmem_cache *slab_cache; /* not slob */ - /* Double-word boundary */ - void *freelist; /* first free object */ - union { - void *s_mem; /* slab: first object */ - unsigned long counters; /* SLUB */ - struct { /* SLUB */ - unsigned inuse:16; - unsigned objects:15; - unsigned frozen:1; - }; - }; - }; struct { /* Tail pages of compound page */ unsigned long compound_head; /* Bit zero is set */ @@ -206,9 +181,6 @@ struct page { * which are currently stored here. */ unsigned int page_type; - - unsigned int active; /* SLAB */ - int units; /* SLOB */ }; /* Usage count. *DO NOT USE DIRECTLY*. See page_ref.h */ diff --git a/include/linux/page-flags.h b/include/linux/page-flags.h index b5f14d581113..1b08e33265fa 100644 --- a/include/linux/page-flags.h +++ b/include/linux/page-flags.h @@ -909,43 +909,6 @@ extern bool is_free_buddy_page(struct page *page); __PAGEFLAG(Isolated, isolated, PF_ANY); -/* - * If network-based swap is enabled, sl*b must keep track of whether pages - * were allocated from pfmemalloc reserves.
- */ -static inline int PageSlabPfmemalloc(struct page *page) -{ - VM_BUG_ON_PAGE(!PageSlab(page), page); - return PageActive(page); -} - -/* - * A version of PageSlabPfmemalloc() for opportunistic checks where the page - * might have been freed under us and not be a PageSlab anymore. - */ -static inline int __PageSlabPfmemalloc(struct page *page) -{ - return PageActive(page); -} - -static inline void SetPageSlabPfmemalloc(struct page *page) -{ - VM_BUG_ON_PAGE(!PageSlab(page), page); - SetPageActive(page); -} - -static inline void __ClearPageSlabPfmemalloc(struct page *page) -{ - VM_BUG_ON_PAGE(!PageSlab(page), page); - __ClearPageActive(page); -} - -static inline void ClearPageSlabPfmemalloc(struct page *page) -{ - VM_BUG_ON_PAGE(!PageSlab(page), page); - ClearPageActive(page); -} - #ifdef CONFIG_MMU #define __PG_MLOCKED (1UL << PG_mlocked) #else diff --git a/mm/slab.h b/mm/slab.h index 0760f20686a7..2d50c099a222 100644 --- a/mm/slab.h +++ b/mm/slab.h @@ -47,11 +47,7 @@ struct slab { static_assert(offsetof(struct page, pg) == offsetof(struct slab, sl)) SLAB_MATCH(flags, __page_flags); SLAB_MATCH(compound_head, slab_list); /* Ensure bit 0 is clear */ -SLAB_MATCH(slab_list, slab_list); SLAB_MATCH(rcu_head, rcu_head); -SLAB_MATCH(slab_cache, slab_cache); -SLAB_MATCH(s_mem, s_mem); -SLAB_MATCH(active, active); SLAB_MATCH(_refcount, __page_refcount); #ifdef CONFIG_MEMCG SLAB_MATCH(memcg_data, memcg_data);
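Those SLAB_MATCH() assertions carry the whole scheme: struct slab may only reinterpret struct page's memory if every shared field lands at the identical offset. Stripped of the kernel plumbing, the pattern is a compile-time offsetof comparison; a self-contained sketch with invented struct names:

#include <stddef.h>	/* offsetof() */
#include <assert.h>	/* static_assert, C11 */

struct outer   { unsigned long flags; void *head; };
struct overlay { unsigned long __flags; void *list; };

#define MATCH(f_outer, f_overlay) \
	static_assert(offsetof(struct outer, f_outer) == \
		      offsetof(struct overlay, f_overlay), \
		      #f_outer " and " #f_overlay " must overlay exactly")

MATCH(flags, __flags);
MATCH(head, list);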
From patchwork Wed Dec 1 18:15:08 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Vlastimil Babka From: Vlastimil Babka To: Matthew Wilcox , Christoph Lameter , David Rientjes , Joonsoo Kim , Pekka Enberg Cc: linux-mm@kvack.org, Andrew Morton , patches@lists.linux.dev, Vlastimil Babka , Marco Elver , Alexander Potapenko , Dmitry Vyukov , kasan-dev@googlegroups.com Subject: [PATCH v2 31/33] mm/sl*b: Differentiate struct slab fields by sl*b implementations Date: Wed, 1 Dec 2021 19:15:08 +0100 Message-Id: <20211201181510.18784-32-vbabka@suse.cz> In-Reply-To: <20211201181510.18784-1-vbabka@suse.cz> References: <20211201181510.18784-1-vbabka@suse.cz> With a struct slab definition separate from struct page, we can go further and define only fields that the chosen sl*b implementation uses. This means everything between __page_flags and __page_refcount placeholders now depends on the chosen CONFIG_SL*B.
Some fields exist in all implementations (slab_list) but can be part of a union in some, so it's simpler to repeat them than complicate the definition with ifdefs even more. The patch doesn't change physical offsets of the fields, although that could be done later - for example it's now clear that tighter packing in SLOB could be possible. This should also prevent accidental use of fields that don't exist in a given implementation. Before this patch virt_to_cache() and cache_from_obj() were visible for SLOB (albeit not used), although they rely on the slab_cache field that isn't set by SLOB. With this patch that would be a compile error, so these functions are now hidden behind #ifndef CONFIG_SLOB. Signed-off-by: Vlastimil Babka Tested-by: Marco Elver # kfence Cc: Alexander Potapenko Cc: Marco Elver Cc: Dmitry Vyukov Cc: kasan-dev@googlegroups.com --- mm/kfence/core.c | 9 +++++---- mm/slab.h | 46 ++++++++++++++++++++++++++++++++++++---------- 2 files changed, 41 insertions(+), 14 deletions(-) diff --git a/mm/kfence/core.c b/mm/kfence/core.c index 4eb60cf5ff8b..46103a7628a6 100644 --- a/mm/kfence/core.c +++ b/mm/kfence/core.c @@ -427,10 +427,11 @@ static void *kfence_guarded_alloc(struct kmem_cache *cache, size_t size, gfp_t g /* Set required slab fields. */ slab = virt_to_slab((void *)meta->addr); slab->slab_cache = cache; - if (IS_ENABLED(CONFIG_SLUB)) - slab->objects = 1; - if (IS_ENABLED(CONFIG_SLAB)) - slab->s_mem = addr; +#if defined(CONFIG_SLUB) + slab->objects = 1; +#elif defined (CONFIG_SLAB) + slab->s_mem = addr; +#endif /* Memory initialization. */ for_each_canary(meta, set_canary_byte); diff --git a/mm/slab.h b/mm/slab.h index 2d50c099a222..8c5a8c005896 100644 --- a/mm/slab.h +++ b/mm/slab.h @@ -8,9 +8,24 @@ /* Reuses the bits in struct page */ struct slab { unsigned long __page_flags; + +#if defined(CONFIG_SLAB) + + union { + struct list_head slab_list; + struct rcu_head rcu_head; + }; + struct kmem_cache *slab_cache; + void *freelist; /* array of free object indexes */ + void * s_mem; /* first object */ + unsigned int active; + +#elif defined(CONFIG_SLUB) + union { struct list_head slab_list; - struct { /* Partial pages */ + struct rcu_head rcu_head; + struct { struct slab *next; #ifdef CONFIG_64BIT int slabs; /* Nr of slabs left */ @@ -18,25 +33,32 @@ struct slab { short int slabs; #endif }; - struct rcu_head rcu_head; }; - struct kmem_cache *slab_cache; /* not slob */ + struct kmem_cache *slab_cache; /* Double-word boundary */ void *freelist; /* first free object */ union { - void *s_mem; /* slab: first object */ - unsigned long counters; /* SLUB */ - struct { /* SLUB */ + unsigned long counters; + struct { unsigned inuse:16; unsigned objects:15; unsigned frozen:1; }; }; + unsigned int __unused; + +#elif defined(CONFIG_SLOB) + + struct list_head slab_list; + void * __unused_1; + void *freelist; /* first free block */ + void * __unused_2; + int units; + +#else +#error "Unexpected slab allocator configured" +#endif - union { - unsigned int active; /* SLAB */ - int units; /* SLOB */ - }; atomic_t __page_refcount; #ifdef CONFIG_MEMCG unsigned long memcg_data; @@ -47,7 +69,9 @@ struct slab { static_assert(offsetof(struct page, pg) == offsetof(struct slab, sl)) SLAB_MATCH(flags, __page_flags); SLAB_MATCH(compound_head, slab_list); /* Ensure bit 0 is clear */ +#ifndef CONFIG_SLOB SLAB_MATCH(rcu_head, rcu_head); +#endif SLAB_MATCH(_refcount, __page_refcount); #ifdef CONFIG_MEMCG SLAB_MATCH(memcg_data, memcg_data); @@ -623,6 +647,7 @@ static inline void memcg_slab_free_hook(struct kmem_cache *s, } #endif /*
CONFIG_MEMCG_KMEM */ +#ifndef CONFIG_SLOB static inline struct kmem_cache *virt_to_cache(const void *obj) { struct slab *slab; @@ -669,6 +694,7 @@ static inline struct kmem_cache *cache_from_obj(struct kmem_cache *s, void *x) print_tracking(cachep, x); return cachep; } +#endif /* CONFIG_SLOB */ static inline size_t slab_ksize(const struct kmem_cache *s) {
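The structure of the new definition is worth seeing in miniature: one struct whose middle section is selected by exactly one of several mutually exclusive config options, with #error as a backstop for a misconfigured build, and the shared header/footer fields keeping their offsets in every variant. A stripped-down sketch with placeholder option names:

struct gadget {
	unsigned long common_header;	/* present in every variant */

#if defined(OPTION_A)
	int a_only;			/* fields only variant A uses */
#elif defined(OPTION_B)
	long b_only;			/* same storage, different meaning */
#else
#error "exactly one of OPTION_A / OPTION_B must be configured"
#endif

	int common_footer;		/* shared again, offset unchanged */
};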
From patchwork Wed Dec 1 18:15:09 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Vlastimil Babka From: Vlastimil Babka To: Matthew Wilcox , Christoph Lameter , David Rientjes , Joonsoo Kim , Pekka Enberg Cc: linux-mm@kvack.org, Andrew Morton , patches@lists.linux.dev, Vlastimil Babka Subject: [PATCH v2 32/33] mm/slub: Simplify struct slab slabs field definition Date: Wed, 1 Dec 2021 19:15:09 +0100 Message-Id: <20211201181510.18784-33-vbabka@suse.cz> In-Reply-To: <20211201181510.18784-1-vbabka@suse.cz> References: <20211201181510.18784-1-vbabka@suse.cz> Before commit b47291ef02b0 ("mm, slub: change percpu partial accounting from objects to pages") we had to fit two integer fields into a native word size, so we used short int on 32-bit and int on 64-bit via #ifdef. After that commit there is only one integer field, so we can simply define it as int everywhere.
Signed-off-by: Vlastimil Babka Acked-by: Johannes Weiner --- mm/slab.h | 4 ---- 1 file changed, 4 deletions(-) diff --git a/mm/slab.h b/mm/slab.h index 8c5a8c005896..ee4344a44987 100644 --- a/mm/slab.h +++ b/mm/slab.h @@ -27,11 +27,7 @@ struct slab { struct rcu_head rcu_head; struct { struct slab *next; -#ifdef CONFIG_64BIT int slabs; /* Nr of slabs left */ -#else - short int slabs; -#endif }; }; struct kmem_cache *slab_cache;
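The arithmetic behind the removed #ifdef, for the record: two counters once had to share a single native word, which on 32-bit (4-byte word) forced 16-bit short ints; a lone counter fits in an int on either word size. A sketch of that constraint, with invented field names and assuming the usual 2-byte short / 4-byte int of the ABIs the kernel supports:

#include <assert.h>

/* old 32-bit layout: two counters packed into one 4-byte word */
struct two_counters { short int a, b; };
/* new layout: one counter, same size on 32-bit and 64-bit */
struct one_counter { int slabs; };

static_assert(sizeof(struct two_counters) == 4, "pair fits one 32-bit word");
static_assert(sizeof(struct one_counter) == 4, "plain int fits everywhere");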
From patchwork Wed Dec 1 18:15:10 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Vlastimil Babka From: Vlastimil Babka To: Matthew Wilcox , Christoph Lameter , David Rientjes , Joonsoo Kim , Pekka Enberg Cc: linux-mm@kvack.org, Andrew Morton , patches@lists.linux.dev, Vlastimil Babka Subject: [PATCH v2 33/33] mm/slub: Define struct slab fields for CONFIG_SLUB_CPU_PARTIAL only when enabled Date: Wed, 1 Dec 2021 19:15:10 +0100 Message-Id: <20211201181510.18784-34-vbabka@suse.cz> In-Reply-To: <20211201181510.18784-1-vbabka@suse.cz> References: <20211201181510.18784-1-vbabka@suse.cz> The fields 'next' and 'slabs' are only used when CONFIG_SLUB_CPU_PARTIAL is enabled. We can put their definition under #ifdef to prevent accidental use when disabled. Currently show_slab_objects() and slabs_cpu_partial_show() contain code accessing the slabs field that's effectively dead with CONFIG_SLUB_CPU_PARTIAL=n through the wrappers slub_percpu_partial() and slub_percpu_partial_read_once(), but to prevent a compile error, we need to hide all this code behind #ifdef.
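The __maybe_unused in the slub.c hunk below is the idiomatic other half of this: a variable whose only uses sit inside the new #ifdef block must not trip -Wunused-variable when the option is off. In isolation (placeholder option name; __maybe_unused as provided by the kernel's compiler attribute headers):

static int count_items(void)
{
	int total = 0;
	int i __maybe_unused;	/* referenced only when the option is on */

#ifdef OPTION_ITEMIZE
	for (i = 0; i < 4; i++)
		total += i;
#endif
	return total;
}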
Signed-off-by: Vlastimil Babka --- mm/slab.h | 2 ++ mm/slub.c | 8 ++++++-- 2 files changed, 8 insertions(+), 2 deletions(-) diff --git a/mm/slab.h b/mm/slab.h index ee4344a44987..46ea468b29aa 100644 --- a/mm/slab.h +++ b/mm/slab.h @@ -25,10 +25,12 @@ struct slab { union { struct list_head slab_list; struct rcu_head rcu_head; +#ifdef CONFIG_SLUB_CPU_PARTIAL struct { struct slab *next; int slabs; /* Nr of slabs left */ }; +#endif }; struct kmem_cache *slab_cache; /* Double-word boundary */ diff --git a/mm/slub.c b/mm/slub.c index 58f0d499a293..978d754d0ada 100644 --- a/mm/slub.c +++ b/mm/slub.c @@ -5256,6 +5256,7 @@ static ssize_t show_slab_objects(struct kmem_cache *s, total += x; nodes[node] += x; +#ifdef CONFIG_SLUB_CPU_PARTIAL slab = slub_percpu_partial_read_once(c); if (slab) { node = slab_nid(slab); @@ -5268,6 +5269,7 @@ static ssize_t show_slab_objects(struct kmem_cache *s, total += x; nodes[node] += x; } +#endif } } @@ -5467,9 +5469,10 @@ static ssize_t slabs_cpu_partial_show(struct kmem_cache *s, char *buf) { int objects = 0; int slabs = 0; - int cpu; + int cpu __maybe_unused; int len = 0; +#ifdef CONFIG_SLUB_CPU_PARTIAL for_each_online_cpu(cpu) { struct slab *slab; @@ -5478,12 +5481,13 @@ static ssize_t slabs_cpu_partial_show(struct kmem_cache *s, char *buf) if (slab) slabs += slab->slabs; } +#endif /* Approximate half-full slabs, see slub_set_cpu_partial() */ objects = (slabs * oo_objects(s->oo)) / 2; len += sysfs_emit_at(buf, len, "%d(%d)", objects, slabs); -#ifdef CONFIG_SMP +#if defined(CONFIG_SLUB_CPU_PARTIAL) && defined(CONFIG_SMP) for_each_online_cpu(cpu) { struct slab *slab;