From patchwork Mon Dec 4 19:34:40 2023
X-Patchwork-Submitter: Vlastimil Babka
X-Patchwork-Id: 13479022
From: Vlastimil Babka
Date: Mon, 04 Dec 2023 20:34:40 +0100
Subject: [PATCH 1/4] mm/slub: fix bulk alloc and free stats
Message-Id: <20231204-slub-cleanup-hooks-v1-1-88b65f7cd9d5@suse.cz>
References: <20231204-slub-cleanup-hooks-v1-0-88b65f7cd9d5@suse.cz>
In-Reply-To: <20231204-slub-cleanup-hooks-v1-0-88b65f7cd9d5@suse.cz>
To: Christoph Lameter, Pekka Enberg, David Rientjes, Joonsoo Kim
Cc: Andrew Morton, Roman Gushchin, Hyeonggon Yoo <42.hyeyoo@gmail.com>,
    Alexander Potapenko, Marco Elver, Dmitry Vyukov, linux-mm@kvack.org,
    linux-kernel@vger.kernel.org, kasan-dev@googlegroups.com, Vlastimil Babka
X-Mailer: b4 0.12.4

The SLUB sysfs stats enabled by CONFIG_SLUB_STATS have two deficiencies
identified wrt bulk alloc/free operations:

- Bulk allocations from the cpu freelist are not counted. Add the
  ALLOC_FASTPATH counter there.

- Bulk fastpath freeing will count a list of multiple objects with a
  single FREE_FASTPATH inc. Add a stat_add() variant to count them all.

Signed-off-by: Vlastimil Babka
Reviewed-by: Chengming Zhou
---
 mm/slub.c | 11 ++++++++++-
 1 file changed, 10 insertions(+), 1 deletion(-)

diff --git a/mm/slub.c b/mm/slub.c
index 3f8b95757106..d7b0ca6012e0 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -396,6 +396,14 @@ static inline void stat(const struct kmem_cache *s, enum stat_item si)
 #endif
 }
 
+static inline
+void stat_add(const struct kmem_cache *s, enum stat_item si, int v)
+{
+#ifdef CONFIG_SLUB_STATS
+	raw_cpu_add(s->cpu_slab->stat[si], v);
+#endif
+}
+
 /*
  * The slab lists for all objects.
  */
@@ -4268,7 +4276,7 @@ static __always_inline void do_slab_free(struct kmem_cache *s,
 		local_unlock(&s->cpu_slab->lock);
 	}
 
-	stat(s, FREE_FASTPATH);
+	stat_add(s, FREE_FASTPATH, cnt);
 }
 #else /* CONFIG_SLUB_TINY */
 static void do_slab_free(struct kmem_cache *s,
@@ -4545,6 +4553,7 @@ static inline int __kmem_cache_alloc_bulk(struct kmem_cache *s, gfp_t flags,
 		c->freelist = get_freepointer(s, object);
 		p[i] = object;
 		maybe_wipe_obj_freeptr(s, p[i]);
+		stat(s, ALLOC_FASTPATH);
 	}
 	c->tid = next_tid(c->tid);
 	local_unlock_irqrestore(&s->cpu_slab->lock, irqflags);
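
As a usage illustration (not part of the patch): the two counters touched
here are only exercised through the bulk entry points. A minimal sketch of
such a caller follows; the cache pointer and batch size are hypothetical,
only the kmem_cache_alloc_bulk()/kmem_cache_free_bulk() signatures are taken
from the code above. With CONFIG_SLUB_STATS enabled, each object in the
batch should now be reflected in the per-cache alloc_fastpath/free_fastpath
sysfs counters.

	#include <linux/slab.h>

	/* 'cache' is assumed to have been created elsewhere, e.g. with
	 * kmem_cache_create(); 16 is an arbitrary batch size. */
	static void bulk_roundtrip(struct kmem_cache *cache)
	{
		void *objs[16];
		int allocated;

		/* either allocates all 16 objects or returns 0 */
		allocated = kmem_cache_alloc_bulk(cache, GFP_KERNEL, 16, objs);
		if (!allocated)
			return;

		/* ... use the objects ... */

		/* frees the whole batch; counted once per object after this patch */
		kmem_cache_free_bulk(cache, allocated, objs);
	}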

From patchwork Mon Dec 4 19:34:41 2023
X-Patchwork-Submitter: Vlastimil Babka
X-Patchwork-Id: 13479025
From: Vlastimil Babka
Date: Mon, 04 Dec 2023 20:34:41 +0100
Subject: [PATCH 2/4] mm/slub: introduce __kmem_cache_free_bulk() without free hooks
Message-Id: <20231204-slub-cleanup-hooks-v1-2-88b65f7cd9d5@suse.cz>
References: <20231204-slub-cleanup-hooks-v1-0-88b65f7cd9d5@suse.cz>
In-Reply-To: <20231204-slub-cleanup-hooks-v1-0-88b65f7cd9d5@suse.cz>
To: Christoph Lameter, Pekka Enberg, David Rientjes, Joonsoo Kim
Cc: Andrew Morton, Roman Gushchin, Hyeonggon Yoo <42.hyeyoo@gmail.com>,
    Alexander Potapenko, Marco Elver, Dmitry Vyukov, linux-mm@kvack.org,
    linux-kernel@vger.kernel.org, kasan-dev@googlegroups.com, Vlastimil Babka
X-Mailer: b4 0.12.4

Currently, when __kmem_cache_alloc_bulk() fails, it frees back the
objects that were allocated before the failure, using
kmem_cache_free_bulk(). Because kmem_cache_free_bulk() calls the free
hooks (KASAN etc.) and those expect objects that were processed by the
post alloc hooks, slab_post_alloc_hook() is called before
kmem_cache_free_bulk().

This is wasteful, although not a big concern in practice for the rare
error path. But in order to efficiently handle percpu array batch
refill and free in the near future, we will also need a variant of
kmem_cache_free_bulk() that avoids the free hooks. So introduce it now
and use it for the failure path.

As a consequence, __kmem_cache_alloc_bulk() no longer needs the objcg
parameter, so remove it.

Signed-off-by: Vlastimil Babka
Reviewed-by: Chengming Zhou
---
 mm/slub.c | 33 ++++++++++++++++++++++++++-------
 1 file changed, 26 insertions(+), 7 deletions(-)

diff --git a/mm/slub.c b/mm/slub.c
index d7b0ca6012e0..0742564c4538 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -4478,6 +4478,27 @@ int build_detached_freelist(struct kmem_cache *s, size_t size,
 	return same;
 }
 
+/*
+ * Internal bulk free of objects that were not initialised by the post alloc
+ * hooks and thus should not be processed by the free hooks
+ */
+static void __kmem_cache_free_bulk(struct kmem_cache *s, size_t size, void **p)
+{
+	if (!size)
+		return;
+
+	do {
+		struct detached_freelist df;
+
+		size = build_detached_freelist(s, size, p, &df);
+		if (!df.slab)
+			continue;
+
+		do_slab_free(df.s, df.slab, df.freelist, df.tail, df.cnt,
+			     _RET_IP_);
+	} while (likely(size));
+}
+
 /* Note that interrupts must be enabled when calling this function.
  */
 void kmem_cache_free_bulk(struct kmem_cache *s, size_t size, void **p)
 {
@@ -4499,7 +4520,7 @@ EXPORT_SYMBOL(kmem_cache_free_bulk);
 
 #ifndef CONFIG_SLUB_TINY
 static inline int __kmem_cache_alloc_bulk(struct kmem_cache *s, gfp_t flags,
-			size_t size, void **p, struct obj_cgroup *objcg)
+			size_t size, void **p)
 {
 	struct kmem_cache_cpu *c;
 	unsigned long irqflags;
@@ -4563,14 +4584,13 @@ static inline int __kmem_cache_alloc_bulk(struct kmem_cache *s, gfp_t flags,
 
 error:
 	slub_put_cpu_ptr(s->cpu_slab);
-	slab_post_alloc_hook(s, objcg, flags, i, p, false, s->object_size);
-	kmem_cache_free_bulk(s, i, p);
+	__kmem_cache_free_bulk(s, i, p);
 	return 0;
 
 }
 #else /* CONFIG_SLUB_TINY */
 static int __kmem_cache_alloc_bulk(struct kmem_cache *s, gfp_t flags,
-			size_t size, void **p, struct obj_cgroup *objcg)
+			size_t size, void **p)
 {
 	int i;
 
@@ -4593,8 +4613,7 @@ static int __kmem_cache_alloc_bulk(struct kmem_cache *s, gfp_t flags,
 	return i;
 
 error:
-	slab_post_alloc_hook(s, objcg, flags, i, p, false, s->object_size);
-	kmem_cache_free_bulk(s, i, p);
+	__kmem_cache_free_bulk(s, i, p);
 	return 0;
 }
 #endif /* CONFIG_SLUB_TINY */
@@ -4614,7 +4633,7 @@ int kmem_cache_alloc_bulk(struct kmem_cache *s, gfp_t flags, size_t size,
 	if (unlikely(!s))
 		return 0;
 
-	i = __kmem_cache_alloc_bulk(s, flags, size, p, objcg);
+	i = __kmem_cache_alloc_bulk(s, flags, size, p);
 
 	/*
 	 * memcg and kmem_cache debug support and memory initialization.
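
To summarize the effect on the failure path (a condensed before/after sketch,
not additional code in the patch): previously the not-yet-published objects
had to be pushed through the post alloc hooks purely so that the hook-running
bulk free would accept them; now they are released directly.

	/* before: run the post alloc hooks only to satisfy the free hooks */
	slab_post_alloc_hook(s, objcg, flags, i, p, false, s->object_size);
	kmem_cache_free_bulk(s, i, p);

	/* after: free the raw objects directly, bypassing both sets of hooks */
	__kmem_cache_free_bulk(s, i, p);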

From patchwork Mon Dec 4 19:34:42 2023
X-Patchwork-Submitter: Vlastimil Babka
X-Patchwork-Id: 13479021
From: Vlastimil Babka
Date: Mon, 04 Dec 2023 20:34:42 +0100
Subject: [PATCH 3/4] mm/slub: handle bulk and single object freeing separately
Message-Id: <20231204-slub-cleanup-hooks-v1-3-88b65f7cd9d5@suse.cz>
References: <20231204-slub-cleanup-hooks-v1-0-88b65f7cd9d5@suse.cz>
In-Reply-To: <20231204-slub-cleanup-hooks-v1-0-88b65f7cd9d5@suse.cz>
To: Christoph Lameter, Pekka Enberg, David Rientjes, Joonsoo Kim
Cc: Andrew Morton, Roman Gushchin, Hyeonggon Yoo <42.hyeyoo@gmail.com>,
    Alexander Potapenko, Marco Elver, Dmitry Vyukov, linux-mm@kvack.org,
    linux-kernel@vger.kernel.org, kasan-dev@googlegroups.com, Vlastimil Babka
X-Mailer: b4 0.12.4

Currently we have a single function slab_free() handling both single
object freeing and bulk freeing with the necessary hooks, the latter
case requiring slab_free_freelist_hook(). It would however be better to
distinguish the two use cases for the following reasons:

- code simpler to follow for the single object case

- better code generation - although inlining should eliminate the
  slab_free_freelist_hook() for single object freeing in case no
  debugging options are enabled, it seems it's not perfect. When e.g.
  KASAN is enabled, we're imposing additional unnecessary overhead for
  single object freeing.

- preparation to add percpu array caches in the near future

Therefore, simplify slab_free() for the single object case by dropping
the unnecessary parameters and calling only slab_free_hook() instead of
slab_free_freelist_hook(). Rename the bulk variant to slab_free_bulk()
and adjust the callers accordingly.

While at it, flip (and document) the slab_free_hook() return value so
that it returns true when the freeing can proceed, which matches the
logic of slab_free_freelist_hook() and is not confusingly the opposite.

Additionally we can simplify a bit by changing the tail parameter of
do_slab_free() when freeing a single object - instead of NULL we can
set it equal to head.
bloat-o-meter shows small code reduction with a .config that has KASAN
etc disabled:

add/remove: 0/0 grow/shrink: 0/4 up/down: 0/-118 (-118)
Function                                     old     new   delta
kmem_cache_alloc_bulk                       1203    1196      -7
kmem_cache_free                              861     835     -26
__kmem_cache_free                            741     704     -37
kmem_cache_free_bulk                         911     863     -48

Signed-off-by: Vlastimil Babka
Reviewed-by: Chengming Zhou
---
 mm/slub.c | 59 +++++++++++++++++++++++++++++++++++------------------------
 1 file changed, 35 insertions(+), 24 deletions(-)

diff --git a/mm/slub.c b/mm/slub.c
index 0742564c4538..ed2fa92e914c 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -2037,9 +2037,12 @@ static inline void memcg_slab_free_hook(struct kmem_cache *s, struct slab *slab,
 /*
  * Hooks for other subsystems that check memory allocations. In a typical
  * production configuration these hooks all should produce no code at all.
+ *
+ * Returns true if freeing of the object can proceed, false if its reuse
+ * was delayed by KASAN quarantine.
  */
-static __always_inline bool slab_free_hook(struct kmem_cache *s,
-						void *x, bool init)
+static __always_inline
+bool slab_free_hook(struct kmem_cache *s, void *x, bool init)
 {
 	kmemleak_free_recursive(x, s->flags);
 	kmsan_slab_free(s, x);
@@ -2072,7 +2075,7 @@ static __always_inline bool slab_free_hook(struct kmem_cache *s,
 			   s->size - s->inuse - rsize);
 	}
 	/* KASAN might put x into memory quarantine, delaying its reuse. */
-	return kasan_slab_free(s, x, init);
+	return !kasan_slab_free(s, x, init);
 }
 
 static inline bool slab_free_freelist_hook(struct kmem_cache *s,
@@ -2082,7 +2085,7 @@ static inline bool slab_free_freelist_hook(struct kmem_cache *s,
 
 	void *object;
 	void *next = *head;
-	void *old_tail = *tail ? *tail : *head;
+	void *old_tail = *tail;
 
 	if (is_kfence_address(next)) {
 		slab_free_hook(s, next, false);
@@ -2098,8 +2101,8 @@ static inline bool slab_free_freelist_hook(struct kmem_cache *s,
 		next = get_freepointer(s, object);
 
 		/* If object's reuse doesn't have to be delayed */
-		if (likely(!slab_free_hook(s, object,
-					   slab_want_init_on_free(s)))) {
+		if (likely(slab_free_hook(s, object,
+					  slab_want_init_on_free(s)))) {
 			/* Move object to the new freelist */
 			set_freepointer(s, object, *head);
 			*head = object;
@@ -2114,9 +2117,6 @@ static inline bool slab_free_freelist_hook(struct kmem_cache *s,
 		}
 	} while (object != old_tail);
 
-	if (*head == *tail)
-		*tail = NULL;
-
 	return *head != NULL;
 }
 
@@ -4227,7 +4227,6 @@ static __always_inline void do_slab_free(struct kmem_cache *s,
 				struct slab *slab, void *head, void *tail,
 				int cnt, unsigned long addr)
 {
-	void *tail_obj = tail ? : head;
 	struct kmem_cache_cpu *c;
 	unsigned long tid;
 	void **freelist;
@@ -4246,14 +4245,14 @@ static __always_inline void do_slab_free(struct kmem_cache *s,
 	barrier();
 
 	if (unlikely(slab != c->slab)) {
-		__slab_free(s, slab, head, tail_obj, cnt, addr);
+		__slab_free(s, slab, head, tail, cnt, addr);
 		return;
 	}
 
 	if (USE_LOCKLESS_FAST_PATH()) {
 		freelist = READ_ONCE(c->freelist);
 
-		set_freepointer(s, tail_obj, freelist);
+		set_freepointer(s, tail, freelist);
 
 		if (unlikely(!__update_cpu_freelist_fast(s, freelist, head, tid))) {
 			note_cmpxchg_failure("slab_free", s, tid);
@@ -4270,7 +4269,7 @@ static __always_inline void do_slab_free(struct kmem_cache *s,
 		tid = c->tid;
 		freelist = c->freelist;
 
-		set_freepointer(s, tail_obj, freelist);
+		set_freepointer(s, tail, freelist);
 		c->freelist = head;
 		c->tid = next_tid(tid);
 
@@ -4283,15 +4282,27 @@ static void do_slab_free(struct kmem_cache *s,
 				struct slab *slab, void *head, void *tail,
 				int cnt, unsigned long addr)
 {
-	void *tail_obj = tail ? : head;
-
-	__slab_free(s, slab, head, tail_obj, cnt, addr);
+	__slab_free(s, slab, head, tail, cnt, addr);
 }
 #endif /* CONFIG_SLUB_TINY */
 
-static __fastpath_inline void slab_free(struct kmem_cache *s, struct slab *slab,
-				void *head, void *tail, void **p, int cnt,
-				unsigned long addr)
+static __fastpath_inline
+void slab_free(struct kmem_cache *s, struct slab *slab, void *object,
+	       unsigned long addr)
+{
+	bool init;
+
+	memcg_slab_free_hook(s, slab, &object, 1);
+
+	init = !is_kfence_address(object) && slab_want_init_on_free(s);
+
+	if (likely(slab_free_hook(s, object, init)))
+		do_slab_free(s, slab, object, object, 1, addr);
+}
+
+static __fastpath_inline
+void slab_free_bulk(struct kmem_cache *s, struct slab *slab, void *head,
+		    void *tail, void **p, int cnt, unsigned long addr)
 {
 	memcg_slab_free_hook(s, slab, p, cnt);
 	/*
@@ -4305,7 +4316,7 @@ static __fastpath_inline void slab_free(struct kmem_cache *s, struct slab *slab,
 #ifdef CONFIG_KASAN_GENERIC
 void ___cache_free(struct kmem_cache *cache, void *x, unsigned long addr)
 {
-	do_slab_free(cache, virt_to_slab(x), x, NULL, 1, addr);
+	do_slab_free(cache, virt_to_slab(x), x, x, 1, addr);
 }
 #endif
 
@@ -4349,7 +4360,7 @@ void kmem_cache_free(struct kmem_cache *s, void *x)
 	if (!s)
 		return;
 	trace_kmem_cache_free(_RET_IP_, x, s);
-	slab_free(s, virt_to_slab(x), x, NULL, &x, 1, _RET_IP_);
+	slab_free(s, virt_to_slab(x), x, _RET_IP_);
 }
 EXPORT_SYMBOL(kmem_cache_free);
 
@@ -4395,7 +4406,7 @@ void kfree(const void *object)
 
 	slab = folio_slab(folio);
 	s = slab->slab_cache;
-	slab_free(s, slab, x, NULL, &x, 1, _RET_IP_);
+	slab_free(s, slab, x, _RET_IP_);
 }
 EXPORT_SYMBOL(kfree);
 
@@ -4512,8 +4523,8 @@ void kmem_cache_free_bulk(struct kmem_cache *s, size_t size, void **p)
 		if (!df.slab)
 			continue;
 
-		slab_free(df.s, df.slab, df.freelist, df.tail, &p[size], df.cnt,
-			  _RET_IP_);
+		slab_free_bulk(df.s, df.slab, df.freelist, df.tail, &p[size],
+			       df.cnt, _RET_IP_);
 	} while (likely(size));
 }
 EXPORT_SYMBOL(kmem_cache_free_bulk);
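
After this split, the free-side call flow looks roughly like the sketch below
(condensed from the hunks above, not literal code):

	/*
	 * single object:
	 *   kmem_cache_free() / kfree()
	 *     -> slab_free(s, slab, object, addr)
	 *          -> memcg_slab_free_hook(s, slab, &object, 1)
	 *          -> slab_free_hook()            (true means freeing may proceed)
	 *          -> do_slab_free(s, slab, object, object, 1, addr)
	 *
	 * bulk:
	 *   kmem_cache_free_bulk()
	 *     -> build_detached_freelist()
	 *     -> slab_free_bulk(s, slab, head, tail, p, cnt, addr)
	 *          -> memcg_slab_free_hook(s, slab, p, cnt)
	 *          -> slab_free_freelist_hook()   (filters the detached freelist)
	 *          -> do_slab_free(s, slab, head, tail, cnt, addr)
	 */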

From patchwork Mon Dec 4 19:34:43 2023
X-Patchwork-Submitter: Vlastimil Babka
X-Patchwork-Id: 13479023
From: Vlastimil Babka
Date: Mon, 04 Dec 2023 20:34:43 +0100
Subject: [PATCH 4/4] mm/slub: free KFENCE objects in slab_free_hook()
Message-Id: <20231204-slub-cleanup-hooks-v1-4-88b65f7cd9d5@suse.cz>
References: <20231204-slub-cleanup-hooks-v1-0-88b65f7cd9d5@suse.cz>
In-Reply-To: <20231204-slub-cleanup-hooks-v1-0-88b65f7cd9d5@suse.cz>
To: Christoph Lameter, Pekka Enberg, David Rientjes, Joonsoo Kim
Cc: Andrew Morton, Roman Gushchin, Hyeonggon Yoo <42.hyeyoo@gmail.com>,
    Alexander Potapenko, Marco Elver, Dmitry Vyukov, linux-mm@kvack.org,
    linux-kernel@vger.kernel.org, kasan-dev@googlegroups.com, Vlastimil Babka
X-Mailer: b4 0.12.4

When freeing an object that was allocated from KFENCE, we do that in
the slowpath __slab_free(), relying on the fact that a KFENCE "slab"
cannot be the cpu slab, so the fastpath has to fall back to the
slowpath.

This optimization doesn't help much though, because is_kfence_address()
is checked earlier anyway during the free hook processing or detached
freelist building. Thus we can simplify the code by making
slab_free_hook() free the KFENCE object immediately, similarly to KASAN
quarantine.

In slab_free_hook() we can place kfence_free() above the init
processing, as callers have been making sure to set init to false for
KFENCE objects. This simplifies slab_free(). It also places it above
kasan_slab_free(), which is ok as that skips KFENCE objects anyway.

While at it, also determine the init value in slab_free_freelist_hook()
outside of the loop.

This change will also make introducing per cpu array caches easier.

Tested-by: Marco Elver
Signed-off-by: Vlastimil Babka
Reviewed-by: Chengming Zhou
---
 mm/slub.c | 22 ++++++++++------------
 1 file changed, 10 insertions(+), 12 deletions(-)

diff --git a/mm/slub.c b/mm/slub.c
index ed2fa92e914c..e38c2b712f6c 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -2039,7 +2039,7 @@ static inline void memcg_slab_free_hook(struct kmem_cache *s, struct slab *slab,
  * production configuration these hooks all should produce no code at all.
  *
  * Returns true if freeing of the object can proceed, false if its reuse
- * was delayed by KASAN quarantine.
+ * was delayed by KASAN quarantine, or it was returned to KFENCE.
  */
 static __always_inline
 bool slab_free_hook(struct kmem_cache *s, void *x, bool init)
@@ -2057,6 +2057,9 @@ bool slab_free_hook(struct kmem_cache *s, void *x, bool init)
 	__kcsan_check_access(x, s->object_size,
 			     KCSAN_ACCESS_WRITE | KCSAN_ACCESS_ASSERT);
 
+	if (kfence_free(kasan_reset_tag(x)))
+		return false;
+
 	/*
 	 * As memory initialization might be integrated into KASAN,
 	 * kasan_slab_free and initialization memset's must be
@@ -2086,23 +2089,25 @@ static inline bool slab_free_freelist_hook(struct kmem_cache *s,
 	void *object;
 	void *next = *head;
 	void *old_tail = *tail;
+	bool init;
 
 	if (is_kfence_address(next)) {
 		slab_free_hook(s, next, false);
-		return true;
+		return false;
 	}
 
 	/* Head and tail of the reconstructed freelist */
 	*head = NULL;
 	*tail = NULL;
 
+	init = slab_want_init_on_free(s);
+
 	do {
 		object = next;
 		next = get_freepointer(s, object);
 
 		/* If object's reuse doesn't have to be delayed */
-		if (likely(slab_free_hook(s, object,
-					  slab_want_init_on_free(s)))) {
+		if (likely(slab_free_hook(s, object, init))) {
 			/* Move object to the new freelist */
 			set_freepointer(s, object, *head);
 			*head = object;
@@ -4103,9 +4108,6 @@ static void __slab_free(struct kmem_cache *s, struct slab *slab,
 
 	stat(s, FREE_SLOWPATH);
 
-	if (kfence_free(head))
-		return;
-
 	if (IS_ENABLED(CONFIG_SLUB_TINY) || kmem_cache_debug(s)) {
 		free_to_partial_list(s, slab, head, tail, cnt, addr);
 		return;
@@ -4290,13 +4292,9 @@ static __fastpath_inline
 void slab_free(struct kmem_cache *s, struct slab *slab, void *object,
 	       unsigned long addr)
 {
-	bool init;
-
 	memcg_slab_free_hook(s, slab, &object, 1);
 
-	init = !is_kfence_address(object) && slab_want_init_on_free(s);
-
-	if (likely(slab_free_hook(s, object, init)))
+	if (likely(slab_free_hook(s, object, slab_want_init_on_free(s))))
 		do_slab_free(s, slab, object, object, 1, addr);
 }
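
With this change, slab_free_hook() becomes the single place where an object
can be diverted before it ever reaches a SLUB freelist. A condensed view of
the resulting flow (an informal sketch, not the literal code above):

	/*
	 * slab_free_hook(s, x, init), condensed:
	 *
	 *   kmemleak / kmsan / kcsan bookkeeping
	 *   if (kfence_free(kasan_reset_tag(x)))
	 *           return false;    object was a KFENCE allocation and has
	 *                            already been returned to KFENCE
	 *   optional init-on-free wiping
	 *   return !kasan_slab_free(s, x, init);
	 *                            false if KASAN quarantined the object
	 *
	 * A false return now uniformly means: do not put this object on any
	 * freelist; both callers (slab_free() and slab_free_freelist_hook())
	 * simply skip it.
	 */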