From patchwork Wed Nov 13 14:23:25 2019
X-Patchwork-Submitter: Carlos Maiolino
X-Patchwork-Id: 11241989
From: Carlos Maiolino
To: linux-xfs@vger.kernel.org
Subject: [PATCH 01/11] xfs: Remove slab init wrappers
Date: Wed, 13 Nov 2019 15:23:25 +0100
Message-Id: <20191113142335.1045631-2-cmaiolino@redhat.com>
In-Reply-To: <20191113142335.1045631-1-cmaiolino@redhat.com>
References: <20191113142335.1045631-1-cmaiolino@redhat.com>
X-Mailing-List: linux-xfs@vger.kernel.org

Remove kmem_zone_init() and kmem_zone_init_flags() together with their specific KM_* to SLAB_* flag wrappers. Use kmem_cache_create() directly.

Signed-off-by: Carlos Maiolino
Reviewed-by: Darrick J.
Wong --- fs/xfs/kmem.h | 18 --------- fs/xfs/xfs_buf.c | 5 ++- fs/xfs/xfs_dquot.c | 10 +++-- fs/xfs/xfs_super.c | 99 +++++++++++++++++++++++++++------------------- 4 files changed, 68 insertions(+), 64 deletions(-) diff --git a/fs/xfs/kmem.h b/fs/xfs/kmem.h index 8170d95cf930..15c5800128b3 100644 --- a/fs/xfs/kmem.h +++ b/fs/xfs/kmem.h @@ -78,27 +78,9 @@ kmem_zalloc_large(size_t size, xfs_km_flags_t flags) * Zone interfaces */ -#define KM_ZONE_HWALIGN SLAB_HWCACHE_ALIGN -#define KM_ZONE_RECLAIM SLAB_RECLAIM_ACCOUNT -#define KM_ZONE_SPREAD SLAB_MEM_SPREAD -#define KM_ZONE_ACCOUNT SLAB_ACCOUNT - #define kmem_zone kmem_cache #define kmem_zone_t struct kmem_cache -static inline kmem_zone_t * -kmem_zone_init(int size, char *zone_name) -{ - return kmem_cache_create(zone_name, size, 0, 0, NULL); -} - -static inline kmem_zone_t * -kmem_zone_init_flags(int size, char *zone_name, slab_flags_t flags, - void (*construct)(void *)) -{ - return kmem_cache_create(zone_name, size, 0, flags, construct); -} - static inline void kmem_zone_free(kmem_zone_t *zone, void *ptr) { diff --git a/fs/xfs/xfs_buf.c b/fs/xfs/xfs_buf.c index 2ed3c65c602f..3741f5b369de 100644 --- a/fs/xfs/xfs_buf.c +++ b/fs/xfs/xfs_buf.c @@ -2060,8 +2060,9 @@ xfs_buf_delwri_pushbuf( int __init xfs_buf_init(void) { - xfs_buf_zone = kmem_zone_init_flags(sizeof(xfs_buf_t), "xfs_buf", - KM_ZONE_HWALIGN, NULL); + xfs_buf_zone = kmem_cache_create("xfs_buf", + sizeof(struct xfs_buf), 0, + SLAB_HWCACHE_ALIGN, NULL); if (!xfs_buf_zone) goto out; diff --git a/fs/xfs/xfs_dquot.c b/fs/xfs/xfs_dquot.c index bcd4247b5014..90dd1623de5a 100644 --- a/fs/xfs/xfs_dquot.c +++ b/fs/xfs/xfs_dquot.c @@ -1211,13 +1211,15 @@ xfs_dqlock2( int __init xfs_qm_init(void) { - xfs_qm_dqzone = - kmem_zone_init(sizeof(struct xfs_dquot), "xfs_dquot"); + xfs_qm_dqzone = kmem_cache_create("xfs_dquot", + sizeof(struct xfs_dquot), + 0, 0, NULL); if (!xfs_qm_dqzone) goto out; - xfs_qm_dqtrxzone = - kmem_zone_init(sizeof(struct xfs_dquot_acct), "xfs_dqtrx"); + xfs_qm_dqtrxzone = kmem_cache_create("xfs_dqtrx", + sizeof(struct xfs_dquot_acct), + 0, 0, NULL); if (!xfs_qm_dqtrxzone) goto out_free_dqzone; diff --git a/fs/xfs/xfs_super.c b/fs/xfs/xfs_super.c index 7f1fc76376f5..d3c3f7b5bdcf 100644 --- a/fs/xfs/xfs_super.c +++ b/fs/xfs/xfs_super.c @@ -1797,32 +1797,39 @@ MODULE_ALIAS_FS("xfs"); STATIC int __init xfs_init_zones(void) { - xfs_log_ticket_zone = kmem_zone_init(sizeof(xlog_ticket_t), - "xfs_log_ticket"); + xfs_log_ticket_zone = kmem_cache_create("xfs_log_ticket", + sizeof(struct xlog_ticket), + 0, 0, NULL); if (!xfs_log_ticket_zone) goto out; - xfs_bmap_free_item_zone = kmem_zone_init( - sizeof(struct xfs_extent_free_item), - "xfs_bmap_free_item"); + xfs_bmap_free_item_zone = kmem_cache_create("xfs_bmap_free_item", + sizeof(struct xfs_extent_free_item), + 0, 0, NULL); if (!xfs_bmap_free_item_zone) goto out_destroy_log_ticket_zone; - xfs_btree_cur_zone = kmem_zone_init(sizeof(xfs_btree_cur_t), - "xfs_btree_cur"); + xfs_btree_cur_zone = kmem_cache_create("xfs_btree_cur", + sizeof(struct xfs_btree_cur), + 0, 0, NULL); if (!xfs_btree_cur_zone) goto out_destroy_bmap_free_item_zone; - xfs_da_state_zone = kmem_zone_init(sizeof(xfs_da_state_t), - "xfs_da_state"); + xfs_da_state_zone = kmem_cache_create("xfs_da_state", + sizeof(struct xfs_da_state), + 0, 0, NULL); if (!xfs_da_state_zone) goto out_destroy_btree_cur_zone; - xfs_ifork_zone = kmem_zone_init(sizeof(struct xfs_ifork), "xfs_ifork"); + xfs_ifork_zone = kmem_cache_create("xfs_ifork", + sizeof(struct xfs_ifork), + 0, 0, NULL); 
if (!xfs_ifork_zone) goto out_destroy_da_state_zone; - xfs_trans_zone = kmem_zone_init(sizeof(xfs_trans_t), "xfs_trans"); + xfs_trans_zone = kmem_cache_create("xf_trans", + sizeof(struct xfs_trans), + 0, 0, NULL); if (!xfs_trans_zone) goto out_destroy_ifork_zone; @@ -1832,70 +1839,82 @@ xfs_init_zones(void) * size possible under XFS. This wastes a little bit of memory, * but it is much faster. */ - xfs_buf_item_zone = kmem_zone_init(sizeof(struct xfs_buf_log_item), - "xfs_buf_item"); + xfs_buf_item_zone = kmem_cache_create("xfs_buf_item", + sizeof(struct xfs_buf_log_item), + 0, 0, NULL); if (!xfs_buf_item_zone) goto out_destroy_trans_zone; - xfs_efd_zone = kmem_zone_init((sizeof(xfs_efd_log_item_t) + - ((XFS_EFD_MAX_FAST_EXTENTS - 1) * - sizeof(xfs_extent_t))), "xfs_efd_item"); + xfs_efd_zone = kmem_cache_create("xfs_efd_item", + (sizeof(struct xfs_efd_log_item) + + (XFS_EFD_MAX_FAST_EXTENTS - 1) * + sizeof(struct xfs_extent)), + 0, 0, NULL); if (!xfs_efd_zone) goto out_destroy_buf_item_zone; - xfs_efi_zone = kmem_zone_init((sizeof(xfs_efi_log_item_t) + - ((XFS_EFI_MAX_FAST_EXTENTS - 1) * - sizeof(xfs_extent_t))), "xfs_efi_item"); + xfs_efi_zone = kmem_cache_create("xfs_efi_item", + (sizeof(struct xfs_efi_log_item) + + (XFS_EFI_MAX_FAST_EXTENTS - 1) * + sizeof(struct xfs_extent)), + 0, 0, NULL); if (!xfs_efi_zone) goto out_destroy_efd_zone; - xfs_inode_zone = - kmem_zone_init_flags(sizeof(xfs_inode_t), "xfs_inode", - KM_ZONE_HWALIGN | KM_ZONE_RECLAIM | KM_ZONE_SPREAD | - KM_ZONE_ACCOUNT, xfs_fs_inode_init_once); + xfs_inode_zone = kmem_cache_create("xfs_inode", + sizeof(struct xfs_inode), 0, + (SLAB_HWCACHE_ALIGN | + SLAB_RECLAIM_ACCOUNT | + SLAB_MEM_SPREAD | SLAB_ACCOUNT), + xfs_fs_inode_init_once); if (!xfs_inode_zone) goto out_destroy_efi_zone; - xfs_ili_zone = - kmem_zone_init_flags(sizeof(xfs_inode_log_item_t), "xfs_ili", - KM_ZONE_SPREAD, NULL); + xfs_ili_zone = kmem_cache_create("xfs_ili", + sizeof(struct xfs_inode_log_item), 0, + SLAB_MEM_SPREAD, NULL); if (!xfs_ili_zone) goto out_destroy_inode_zone; - xfs_icreate_zone = kmem_zone_init(sizeof(struct xfs_icreate_item), - "xfs_icr"); + + xfs_icreate_zone = kmem_cache_create("xfs_icr", + sizeof(struct xfs_icreate_item), + 0, 0, NULL); if (!xfs_icreate_zone) goto out_destroy_ili_zone; - xfs_rud_zone = kmem_zone_init(sizeof(struct xfs_rud_log_item), - "xfs_rud_item"); + xfs_rud_zone = kmem_cache_create("xfs_rud_item", + sizeof(struct xfs_rud_log_item), + 0, 0, NULL); if (!xfs_rud_zone) goto out_destroy_icreate_zone; - xfs_rui_zone = kmem_zone_init( + xfs_rui_zone = kmem_cache_create("xfs_rui_item", xfs_rui_log_item_sizeof(XFS_RUI_MAX_FAST_EXTENTS), - "xfs_rui_item"); + 0, 0, NULL); if (!xfs_rui_zone) goto out_destroy_rud_zone; - xfs_cud_zone = kmem_zone_init(sizeof(struct xfs_cud_log_item), - "xfs_cud_item"); + xfs_cud_zone = kmem_cache_create("xfs_cud_item", + sizeof(struct xfs_cud_log_item), + 0, 0, NULL); if (!xfs_cud_zone) goto out_destroy_rui_zone; - xfs_cui_zone = kmem_zone_init( + xfs_cui_zone = kmem_cache_create("xfs_cui_item", xfs_cui_log_item_sizeof(XFS_CUI_MAX_FAST_EXTENTS), - "xfs_cui_item"); + 0, 0, NULL); if (!xfs_cui_zone) goto out_destroy_cud_zone; - xfs_bud_zone = kmem_zone_init(sizeof(struct xfs_bud_log_item), - "xfs_bud_item"); + xfs_bud_zone = kmem_cache_create("xfs_bud_item", + sizeof(struct xfs_bud_log_item), + 0, 0, NULL); if (!xfs_bud_zone) goto out_destroy_cui_zone; - xfs_bui_zone = kmem_zone_init( + xfs_bui_zone = kmem_cache_create("xfs_bui_item", xfs_bui_log_item_sizeof(XFS_BUI_MAX_FAST_EXTENTS), - 
"xfs_bui_item"); + 0, 0, NULL); if (!xfs_bui_zone) goto out_destroy_bud_zone; From patchwork Wed Nov 13 14:23:26 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Carlos Maiolino X-Patchwork-Id: 11242007 Return-Path: Received: from mail.kernel.org (pdx-korg-mail-1.web.codeaurora.org [172.30.200.123]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id D17381390 for ; Wed, 13 Nov 2019 14:24:00 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [209.132.180.67]) by mail.kernel.org (Postfix) with ESMTP id B2CD7222D3 for ; Wed, 13 Nov 2019 14:24:00 +0000 (UTC) Authentication-Results: mail.kernel.org; dkim=pass (1024-bit key) header.d=redhat.com header.i=@redhat.com header.b="QsWB93aV" Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1727617AbfKMOYA (ORCPT ); Wed, 13 Nov 2019 09:24:00 -0500 Received: from us-smtp-1.mimecast.com ([205.139.110.61]:52250 "EHLO us-smtp-delivery-1.mimecast.com" rhost-flags-OK-OK-OK-FAIL) by vger.kernel.org with ESMTP id S1727578AbfKMOX7 (ORCPT ); Wed, 13 Nov 2019 09:23:59 -0500 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com; s=mimecast20190719; t=1573655038; h=from:from:reply-to:subject:subject:date:date:message-id:message-id: to:to:cc:mime-version:mime-version:content-type:content-type: content-transfer-encoding:content-transfer-encoding: in-reply-to:in-reply-to:references:references; bh=4X1Ju/LKeAfKB/py4qCle3gS2xTYwAryffWbdk5otek=; b=QsWB93aVF/bgnn3BiP7iuvC2Y1BGLuBTTLX3azusNxkebI5yXiO9HUAwq+K9DMwhMsLZZ/ sN4D5lVBZkA25FpsWBO2Il8xTPqaaVo91weGkLr7PwdzAC6WAu7/Ss9GYy7VIwLL2xO0U2 5q5C1XSyt1kE9U0ySuN1Rtg3mwXhu0c= Received: from mimecast-mx01.redhat.com (mimecast-mx01.redhat.com [209.132.183.4]) (Using TLS) by relay.mimecast.com with ESMTP id us-mta-46--Xge2yftOKeB_cMeI0YI-g-1; Wed, 13 Nov 2019 09:23:47 -0500 Received: from smtp.corp.redhat.com (int-mx06.intmail.prod.int.phx2.redhat.com [10.5.11.16]) (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits)) (No client certificate requested) by mimecast-mx01.redhat.com (Postfix) with ESMTPS id 92D151345A5 for ; Wed, 13 Nov 2019 14:23:46 +0000 (UTC) Received: from orion.redhat.com (ovpn-204-203.brq.redhat.com [10.40.204.203]) by smtp.corp.redhat.com (Postfix) with ESMTP id EAEC44D9E1 for ; Wed, 13 Nov 2019 14:23:45 +0000 (UTC) From: Carlos Maiolino To: linux-xfs@vger.kernel.org Subject: [PATCH 02/11] xfs: Remove kmem_zone_destroy() wrapper Date: Wed, 13 Nov 2019 15:23:26 +0100 Message-Id: <20191113142335.1045631-3-cmaiolino@redhat.com> In-Reply-To: <20191113142335.1045631-1-cmaiolino@redhat.com> References: <20191113142335.1045631-1-cmaiolino@redhat.com> MIME-Version: 1.0 X-Scanned-By: MIMEDefang 2.79 on 10.5.11.16 X-MC-Unique: -Xge2yftOKeB_cMeI0YI-g-1 X-Mimecast-Spam-Score: 0 Sender: linux-xfs-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: linux-xfs@vger.kernel.org Use kmem_cache_destroy directly Signed-off-by: Carlos Maiolino Reviewed-by: Darrick J. 
Wong --- fs/xfs/kmem.h | 6 ---- fs/xfs/xfs_buf.c | 2 +- fs/xfs/xfs_dquot.c | 6 ++-- fs/xfs/xfs_super.c | 70 +++++++++++++++++++++++----------------------- 4 files changed, 39 insertions(+), 45 deletions(-) diff --git a/fs/xfs/kmem.h b/fs/xfs/kmem.h index 15c5800128b3..70ed74c7f37e 100644 --- a/fs/xfs/kmem.h +++ b/fs/xfs/kmem.h @@ -87,12 +87,6 @@ kmem_zone_free(kmem_zone_t *zone, void *ptr) kmem_cache_free(zone, ptr); } -static inline void -kmem_zone_destroy(kmem_zone_t *zone) -{ - kmem_cache_destroy(zone); -} - extern void *kmem_zone_alloc(kmem_zone_t *, xfs_km_flags_t); static inline void * diff --git a/fs/xfs/xfs_buf.c b/fs/xfs/xfs_buf.c index 3741f5b369de..ccccfb792ff8 100644 --- a/fs/xfs/xfs_buf.c +++ b/fs/xfs/xfs_buf.c @@ -2075,7 +2075,7 @@ xfs_buf_init(void) void xfs_buf_terminate(void) { - kmem_zone_destroy(xfs_buf_zone); + kmem_cache_destroy(xfs_buf_zone); } void xfs_buf_set_ref(struct xfs_buf *bp, int lru_ref) diff --git a/fs/xfs/xfs_dquot.c b/fs/xfs/xfs_dquot.c index 90dd1623de5a..4f969d94fb74 100644 --- a/fs/xfs/xfs_dquot.c +++ b/fs/xfs/xfs_dquot.c @@ -1226,7 +1226,7 @@ xfs_qm_init(void) return 0; out_free_dqzone: - kmem_zone_destroy(xfs_qm_dqzone); + kmem_cache_destroy(xfs_qm_dqzone); out: return -ENOMEM; } @@ -1234,8 +1234,8 @@ xfs_qm_init(void) void xfs_qm_exit(void) { - kmem_zone_destroy(xfs_qm_dqtrxzone); - kmem_zone_destroy(xfs_qm_dqzone); + kmem_cache_destroy(xfs_qm_dqtrxzone); + kmem_cache_destroy(xfs_qm_dqzone); } /* diff --git a/fs/xfs/xfs_super.c b/fs/xfs/xfs_super.c index d3c3f7b5bdcf..d9ae27ddf253 100644 --- a/fs/xfs/xfs_super.c +++ b/fs/xfs/xfs_super.c @@ -1921,39 +1921,39 @@ xfs_init_zones(void) return 0; out_destroy_bud_zone: - kmem_zone_destroy(xfs_bud_zone); + kmem_cache_destroy(xfs_bud_zone); out_destroy_cui_zone: - kmem_zone_destroy(xfs_cui_zone); + kmem_cache_destroy(xfs_cui_zone); out_destroy_cud_zone: - kmem_zone_destroy(xfs_cud_zone); + kmem_cache_destroy(xfs_cud_zone); out_destroy_rui_zone: - kmem_zone_destroy(xfs_rui_zone); + kmem_cache_destroy(xfs_rui_zone); out_destroy_rud_zone: - kmem_zone_destroy(xfs_rud_zone); + kmem_cache_destroy(xfs_rud_zone); out_destroy_icreate_zone: - kmem_zone_destroy(xfs_icreate_zone); + kmem_cache_destroy(xfs_icreate_zone); out_destroy_ili_zone: - kmem_zone_destroy(xfs_ili_zone); + kmem_cache_destroy(xfs_ili_zone); out_destroy_inode_zone: - kmem_zone_destroy(xfs_inode_zone); + kmem_cache_destroy(xfs_inode_zone); out_destroy_efi_zone: - kmem_zone_destroy(xfs_efi_zone); + kmem_cache_destroy(xfs_efi_zone); out_destroy_efd_zone: - kmem_zone_destroy(xfs_efd_zone); + kmem_cache_destroy(xfs_efd_zone); out_destroy_buf_item_zone: - kmem_zone_destroy(xfs_buf_item_zone); + kmem_cache_destroy(xfs_buf_item_zone); out_destroy_trans_zone: - kmem_zone_destroy(xfs_trans_zone); + kmem_cache_destroy(xfs_trans_zone); out_destroy_ifork_zone: - kmem_zone_destroy(xfs_ifork_zone); + kmem_cache_destroy(xfs_ifork_zone); out_destroy_da_state_zone: - kmem_zone_destroy(xfs_da_state_zone); + kmem_cache_destroy(xfs_da_state_zone); out_destroy_btree_cur_zone: - kmem_zone_destroy(xfs_btree_cur_zone); + kmem_cache_destroy(xfs_btree_cur_zone); out_destroy_bmap_free_item_zone: - kmem_zone_destroy(xfs_bmap_free_item_zone); + kmem_cache_destroy(xfs_bmap_free_item_zone); out_destroy_log_ticket_zone: - kmem_zone_destroy(xfs_log_ticket_zone); + kmem_cache_destroy(xfs_log_ticket_zone); out: return -ENOMEM; } @@ -1966,24 +1966,24 @@ xfs_destroy_zones(void) * destroy caches. 
*/ rcu_barrier(); - kmem_zone_destroy(xfs_bui_zone); - kmem_zone_destroy(xfs_bud_zone); - kmem_zone_destroy(xfs_cui_zone); - kmem_zone_destroy(xfs_cud_zone); - kmem_zone_destroy(xfs_rui_zone); - kmem_zone_destroy(xfs_rud_zone); - kmem_zone_destroy(xfs_icreate_zone); - kmem_zone_destroy(xfs_ili_zone); - kmem_zone_destroy(xfs_inode_zone); - kmem_zone_destroy(xfs_efi_zone); - kmem_zone_destroy(xfs_efd_zone); - kmem_zone_destroy(xfs_buf_item_zone); - kmem_zone_destroy(xfs_trans_zone); - kmem_zone_destroy(xfs_ifork_zone); - kmem_zone_destroy(xfs_da_state_zone); - kmem_zone_destroy(xfs_btree_cur_zone); - kmem_zone_destroy(xfs_bmap_free_item_zone); - kmem_zone_destroy(xfs_log_ticket_zone); + kmem_cache_destroy(xfs_bui_zone); + kmem_cache_destroy(xfs_bud_zone); + kmem_cache_destroy(xfs_cui_zone); + kmem_cache_destroy(xfs_cud_zone); + kmem_cache_destroy(xfs_rui_zone); + kmem_cache_destroy(xfs_rud_zone); + kmem_cache_destroy(xfs_icreate_zone); + kmem_cache_destroy(xfs_ili_zone); + kmem_cache_destroy(xfs_inode_zone); + kmem_cache_destroy(xfs_efi_zone); + kmem_cache_destroy(xfs_efd_zone); + kmem_cache_destroy(xfs_buf_item_zone); + kmem_cache_destroy(xfs_trans_zone); + kmem_cache_destroy(xfs_ifork_zone); + kmem_cache_destroy(xfs_da_state_zone); + kmem_cache_destroy(xfs_btree_cur_zone); + kmem_cache_destroy(xfs_bmap_free_item_zone); + kmem_cache_destroy(xfs_log_ticket_zone); } STATIC int __init From patchwork Wed Nov 13 14:23:27 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Carlos Maiolino X-Patchwork-Id: 11241999 Return-Path: Received: from mail.kernel.org (pdx-korg-mail-1.web.codeaurora.org [172.30.200.123]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id 215A11390 for ; Wed, 13 Nov 2019 14:23:57 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [209.132.180.67]) by mail.kernel.org (Postfix) with ESMTP id EE3102245D for ; Wed, 13 Nov 2019 14:23:56 +0000 (UTC) Authentication-Results: mail.kernel.org; dkim=pass (1024-bit key) header.d=redhat.com header.i=@redhat.com header.b="h3km7fGW" Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1727542AbfKMOX4 (ORCPT ); Wed, 13 Nov 2019 09:23:56 -0500 Received: from us-smtp-2.mimecast.com ([205.139.110.61]:31005 "EHLO us-smtp-delivery-1.mimecast.com" rhost-flags-OK-OK-OK-FAIL) by vger.kernel.org with ESMTP id S1727450AbfKMOX4 (ORCPT ); Wed, 13 Nov 2019 09:23:56 -0500 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com; s=mimecast20190719; t=1573655034; h=from:from:reply-to:subject:subject:date:date:message-id:message-id: to:to:cc:mime-version:mime-version:content-type:content-type: content-transfer-encoding:content-transfer-encoding: in-reply-to:in-reply-to:references:references; bh=6RNw2S2gNwpVOhiCiQLryRg1p0d9sRXFcyZm4Bc3KvY=; b=h3km7fGW9Duy26koZKRRKwUK5qKmHmWDvqHmXNgGxC1cW6KzFqCbAlFydTp8sVALQmJQ8v sFp6shvCqQRKNNLiKDwmcCUrIXbceW17bL7n6pMwO2rGHXANOOzyqRz5AgPXAFKsHIUo0Y vpXgdWp0Ijo2ifa3NHIpa1iOtB0QgNY= Received: from mimecast-mx01.redhat.com (mimecast-mx01.redhat.com [209.132.183.4]) (Using TLS) by relay.mimecast.com with ESMTP id us-mta-48-Tz7gWfH6PGuzzUCylCklsg-1; Wed, 13 Nov 2019 09:23:48 -0500 Received: from smtp.corp.redhat.com (int-mx06.intmail.prod.int.phx2.redhat.com [10.5.11.16]) (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits)) (No client certificate requested) by mimecast-mx01.redhat.com (Postfix) with ESMTPS id 98BD5102C8BE for ; Wed, 13 Nov 2019 14:23:47 +0000 (UTC) Received: from orion.redhat.com 
(ovpn-204-203.brq.redhat.com [10.40.204.203]) by smtp.corp.redhat.com (Postfix) with ESMTP id F090F4D9E1 for ; Wed, 13 Nov 2019 14:23:46 +0000 (UTC) From: Carlos Maiolino To: linux-xfs@vger.kernel.org Subject: [PATCH 03/11] xfs: Remove kmem_zone_free() wrapper Date: Wed, 13 Nov 2019 15:23:27 +0100 Message-Id: <20191113142335.1045631-4-cmaiolino@redhat.com> In-Reply-To: <20191113142335.1045631-1-cmaiolino@redhat.com> References: <20191113142335.1045631-1-cmaiolino@redhat.com> MIME-Version: 1.0 X-Scanned-By: MIMEDefang 2.79 on 10.5.11.16 X-MC-Unique: Tz7gWfH6PGuzzUCylCklsg-1 X-Mimecast-Spam-Score: 0 Sender: linux-xfs-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: linux-xfs@vger.kernel.org We can remove it now, without needing to rework the KM_ flags. Use kmem_cache_free() directly. Signed-off-by: Carlos Maiolino Reviewed-by: Darrick J. Wong --- fs/xfs/kmem.h | 6 ------ fs/xfs/libxfs/xfs_btree.c | 2 +- fs/xfs/libxfs/xfs_da_btree.c | 2 +- fs/xfs/libxfs/xfs_inode_fork.c | 8 ++++---- fs/xfs/xfs_bmap_item.c | 4 ++-- fs/xfs/xfs_buf.c | 6 +++--- fs/xfs/xfs_buf_item.c | 4 ++-- fs/xfs/xfs_dquot.c | 2 +- fs/xfs/xfs_extfree_item.c | 4 ++-- fs/xfs/xfs_icache.c | 4 ++-- fs/xfs/xfs_icreate_item.c | 2 +- fs/xfs/xfs_inode_item.c | 2 +- fs/xfs/xfs_log.c | 2 +- fs/xfs/xfs_refcount_item.c | 4 ++-- fs/xfs/xfs_rmap_item.c | 4 ++-- fs/xfs/xfs_trans.c | 2 +- fs/xfs/xfs_trans_dquot.c | 2 +- 17 files changed, 27 insertions(+), 33 deletions(-) diff --git a/fs/xfs/kmem.h b/fs/xfs/kmem.h index 70ed74c7f37e..6143117770e9 100644 --- a/fs/xfs/kmem.h +++ b/fs/xfs/kmem.h @@ -81,12 +81,6 @@ kmem_zalloc_large(size_t size, xfs_km_flags_t flags) #define kmem_zone kmem_cache #define kmem_zone_t struct kmem_cache -static inline void -kmem_zone_free(kmem_zone_t *zone, void *ptr) -{ - kmem_cache_free(zone, ptr); -} - extern void *kmem_zone_alloc(kmem_zone_t *, xfs_km_flags_t); static inline void * diff --git a/fs/xfs/libxfs/xfs_btree.c b/fs/xfs/libxfs/xfs_btree.c index 98843f1258b8..ac0b78ea417b 100644 --- a/fs/xfs/libxfs/xfs_btree.c +++ b/fs/xfs/libxfs/xfs_btree.c @@ -384,7 +384,7 @@ xfs_btree_del_cursor( /* * Free the cursor. 
*/ - kmem_zone_free(xfs_btree_cur_zone, cur); + kmem_cache_free(xfs_btree_cur_zone, cur); } /* diff --git a/fs/xfs/libxfs/xfs_da_btree.c b/fs/xfs/libxfs/xfs_da_btree.c index 46b1c3fb305c..c5c0b73febae 100644 --- a/fs/xfs/libxfs/xfs_da_btree.c +++ b/fs/xfs/libxfs/xfs_da_btree.c @@ -107,7 +107,7 @@ xfs_da_state_free(xfs_da_state_t *state) #ifdef DEBUG memset((char *)state, 0, sizeof(*state)); #endif /* DEBUG */ - kmem_zone_free(xfs_da_state_zone, state); + kmem_cache_free(xfs_da_state_zone, state); } void diff --git a/fs/xfs/libxfs/xfs_inode_fork.c b/fs/xfs/libxfs/xfs_inode_fork.c index 15d6f947620f..ad2b9c313fd2 100644 --- a/fs/xfs/libxfs/xfs_inode_fork.c +++ b/fs/xfs/libxfs/xfs_inode_fork.c @@ -120,10 +120,10 @@ xfs_iformat_fork( break; } if (error) { - kmem_zone_free(xfs_ifork_zone, ip->i_afp); + kmem_cache_free(xfs_ifork_zone, ip->i_afp); ip->i_afp = NULL; if (ip->i_cowfp) - kmem_zone_free(xfs_ifork_zone, ip->i_cowfp); + kmem_cache_free(xfs_ifork_zone, ip->i_cowfp); ip->i_cowfp = NULL; xfs_idestroy_fork(ip, XFS_DATA_FORK); } @@ -531,10 +531,10 @@ xfs_idestroy_fork( } if (whichfork == XFS_ATTR_FORK) { - kmem_zone_free(xfs_ifork_zone, ip->i_afp); + kmem_cache_free(xfs_ifork_zone, ip->i_afp); ip->i_afp = NULL; } else if (whichfork == XFS_COW_FORK) { - kmem_zone_free(xfs_ifork_zone, ip->i_cowfp); + kmem_cache_free(xfs_ifork_zone, ip->i_cowfp); ip->i_cowfp = NULL; } } diff --git a/fs/xfs/xfs_bmap_item.c b/fs/xfs/xfs_bmap_item.c index 243e5e0f82a3..ee6f4229cebc 100644 --- a/fs/xfs/xfs_bmap_item.c +++ b/fs/xfs/xfs_bmap_item.c @@ -35,7 +35,7 @@ void xfs_bui_item_free( struct xfs_bui_log_item *buip) { - kmem_zone_free(xfs_bui_zone, buip); + kmem_cache_free(xfs_bui_zone, buip); } /* @@ -201,7 +201,7 @@ xfs_bud_item_release( struct xfs_bud_log_item *budp = BUD_ITEM(lip); xfs_bui_release(budp->bud_buip); - kmem_zone_free(xfs_bud_zone, budp); + kmem_cache_free(xfs_bud_zone, budp); } static const struct xfs_item_ops xfs_bud_item_ops = { diff --git a/fs/xfs/xfs_buf.c b/fs/xfs/xfs_buf.c index ccccfb792ff8..a0229c368e78 100644 --- a/fs/xfs/xfs_buf.c +++ b/fs/xfs/xfs_buf.c @@ -238,7 +238,7 @@ _xfs_buf_alloc( */ error = xfs_buf_get_maps(bp, nmaps); if (error) { - kmem_zone_free(xfs_buf_zone, bp); + kmem_cache_free(xfs_buf_zone, bp); return NULL; } @@ -328,7 +328,7 @@ xfs_buf_free( kmem_free(bp->b_addr); _xfs_buf_free_pages(bp); xfs_buf_free_maps(bp); - kmem_zone_free(xfs_buf_zone, bp); + kmem_cache_free(xfs_buf_zone, bp); } /* @@ -949,7 +949,7 @@ xfs_buf_get_uncached( _xfs_buf_free_pages(bp); fail_free_buf: xfs_buf_free_maps(bp); - kmem_zone_free(xfs_buf_zone, bp); + kmem_cache_free(xfs_buf_zone, bp); fail: return NULL; } diff --git a/fs/xfs/xfs_buf_item.c b/fs/xfs/xfs_buf_item.c index 6b69e6137b2b..3458a1264a3f 100644 --- a/fs/xfs/xfs_buf_item.c +++ b/fs/xfs/xfs_buf_item.c @@ -763,7 +763,7 @@ xfs_buf_item_init( error = xfs_buf_item_get_format(bip, bp->b_map_count); ASSERT(error == 0); if (error) { /* to stop gcc throwing set-but-unused warnings */ - kmem_zone_free(xfs_buf_item_zone, bip); + kmem_cache_free(xfs_buf_item_zone, bip); return error; } @@ -939,7 +939,7 @@ xfs_buf_item_free( { xfs_buf_item_free_format(bip); kmem_free(bip->bli_item.li_lv_shadow); - kmem_zone_free(xfs_buf_item_zone, bip); + kmem_cache_free(xfs_buf_item_zone, bip); } /* diff --git a/fs/xfs/xfs_dquot.c b/fs/xfs/xfs_dquot.c index 4f969d94fb74..153815bf18fc 100644 --- a/fs/xfs/xfs_dquot.c +++ b/fs/xfs/xfs_dquot.c @@ -56,7 +56,7 @@ xfs_qm_dqdestroy( mutex_destroy(&dqp->q_qlock); XFS_STATS_DEC(dqp->q_mount, xs_qm_dquot); - 
kmem_zone_free(xfs_qm_dqzone, dqp); + kmem_cache_free(xfs_qm_dqzone, dqp); } /* diff --git a/fs/xfs/xfs_extfree_item.c b/fs/xfs/xfs_extfree_item.c index a05a1074e8f8..6ea847f6e298 100644 --- a/fs/xfs/xfs_extfree_item.c +++ b/fs/xfs/xfs_extfree_item.c @@ -39,7 +39,7 @@ xfs_efi_item_free( if (efip->efi_format.efi_nextents > XFS_EFI_MAX_FAST_EXTENTS) kmem_free(efip); else - kmem_zone_free(xfs_efi_zone, efip); + kmem_cache_free(xfs_efi_zone, efip); } /* @@ -244,7 +244,7 @@ xfs_efd_item_free(struct xfs_efd_log_item *efdp) if (efdp->efd_format.efd_nextents > XFS_EFD_MAX_FAST_EXTENTS) kmem_free(efdp); else - kmem_zone_free(xfs_efd_zone, efdp); + kmem_cache_free(xfs_efd_zone, efdp); } /* diff --git a/fs/xfs/xfs_icache.c b/fs/xfs/xfs_icache.c index 944add5ff8e0..950e8a51ec66 100644 --- a/fs/xfs/xfs_icache.c +++ b/fs/xfs/xfs_icache.c @@ -44,7 +44,7 @@ xfs_inode_alloc( if (!ip) return NULL; if (inode_init_always(mp->m_super, VFS_I(ip))) { - kmem_zone_free(xfs_inode_zone, ip); + kmem_cache_free(xfs_inode_zone, ip); return NULL; } @@ -104,7 +104,7 @@ xfs_inode_free_callback( ip->i_itemp = NULL; } - kmem_zone_free(xfs_inode_zone, ip); + kmem_cache_free(xfs_inode_zone, ip); } static void diff --git a/fs/xfs/xfs_icreate_item.c b/fs/xfs/xfs_icreate_item.c index 3ebd1b7f49d8..490fee22b878 100644 --- a/fs/xfs/xfs_icreate_item.c +++ b/fs/xfs/xfs_icreate_item.c @@ -55,7 +55,7 @@ STATIC void xfs_icreate_item_release( struct xfs_log_item *lip) { - kmem_zone_free(xfs_icreate_zone, ICR_ITEM(lip)); + kmem_cache_free(xfs_icreate_zone, ICR_ITEM(lip)); } static const struct xfs_item_ops xfs_icreate_item_ops = { diff --git a/fs/xfs/xfs_inode_item.c b/fs/xfs/xfs_inode_item.c index 726aa3bfd6e8..3a62976291a1 100644 --- a/fs/xfs/xfs_inode_item.c +++ b/fs/xfs/xfs_inode_item.c @@ -667,7 +667,7 @@ xfs_inode_item_destroy( xfs_inode_t *ip) { kmem_free(ip->i_itemp->ili_item.li_lv_shadow); - kmem_zone_free(xfs_ili_zone, ip->i_itemp); + kmem_cache_free(xfs_ili_zone, ip->i_itemp); } diff --git a/fs/xfs/xfs_log.c b/fs/xfs/xfs_log.c index 3806674090ed..6a147c63a8a6 100644 --- a/fs/xfs/xfs_log.c +++ b/fs/xfs/xfs_log.c @@ -3468,7 +3468,7 @@ xfs_log_ticket_put( { ASSERT(atomic_read(&ticket->t_ref) > 0); if (atomic_dec_and_test(&ticket->t_ref)) - kmem_zone_free(xfs_log_ticket_zone, ticket); + kmem_cache_free(xfs_log_ticket_zone, ticket); } xlog_ticket_t * diff --git a/fs/xfs/xfs_refcount_item.c b/fs/xfs/xfs_refcount_item.c index d5708d40ad87..8eeed73928cd 100644 --- a/fs/xfs/xfs_refcount_item.c +++ b/fs/xfs/xfs_refcount_item.c @@ -34,7 +34,7 @@ xfs_cui_item_free( if (cuip->cui_format.cui_nextents > XFS_CUI_MAX_FAST_EXTENTS) kmem_free(cuip); else - kmem_zone_free(xfs_cui_zone, cuip); + kmem_cache_free(xfs_cui_zone, cuip); } /* @@ -206,7 +206,7 @@ xfs_cud_item_release( struct xfs_cud_log_item *cudp = CUD_ITEM(lip); xfs_cui_release(cudp->cud_cuip); - kmem_zone_free(xfs_cud_zone, cudp); + kmem_cache_free(xfs_cud_zone, cudp); } static const struct xfs_item_ops xfs_cud_item_ops = { diff --git a/fs/xfs/xfs_rmap_item.c b/fs/xfs/xfs_rmap_item.c index 02f84d9a511c..4911b68f95dd 100644 --- a/fs/xfs/xfs_rmap_item.c +++ b/fs/xfs/xfs_rmap_item.c @@ -34,7 +34,7 @@ xfs_rui_item_free( if (ruip->rui_format.rui_nextents > XFS_RUI_MAX_FAST_EXTENTS) kmem_free(ruip); else - kmem_zone_free(xfs_rui_zone, ruip); + kmem_cache_free(xfs_rui_zone, ruip); } /* @@ -229,7 +229,7 @@ xfs_rud_item_release( struct xfs_rud_log_item *rudp = RUD_ITEM(lip); xfs_rui_release(rudp->rud_ruip); - kmem_zone_free(xfs_rud_zone, rudp); + kmem_cache_free(xfs_rud_zone, rudp); } static 
const struct xfs_item_ops xfs_rud_item_ops = { diff --git a/fs/xfs/xfs_trans.c b/fs/xfs/xfs_trans.c index f4795fdb7389..3b208f9a865c 100644 --- a/fs/xfs/xfs_trans.c +++ b/fs/xfs/xfs_trans.c @@ -71,7 +71,7 @@ xfs_trans_free( if (!(tp->t_flags & XFS_TRANS_NO_WRITECOUNT)) sb_end_intwrite(tp->t_mountp->m_super); xfs_trans_free_dqinfo(tp); - kmem_zone_free(xfs_trans_zone, tp); + kmem_cache_free(xfs_trans_zone, tp); } /* diff --git a/fs/xfs/xfs_trans_dquot.c b/fs/xfs/xfs_trans_dquot.c index 16457465833b..ff1c326826d3 100644 --- a/fs/xfs/xfs_trans_dquot.c +++ b/fs/xfs/xfs_trans_dquot.c @@ -872,6 +872,6 @@ xfs_trans_free_dqinfo( { if (!tp->t_dqinfo) return; - kmem_zone_free(xfs_qm_dqtrxzone, tp->t_dqinfo); + kmem_cache_free(xfs_qm_dqtrxzone, tp->t_dqinfo); tp->t_dqinfo = NULL; } From patchwork Wed Nov 13 14:23:28 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Carlos Maiolino X-Patchwork-Id: 11241993 Return-Path: Received: from mail.kernel.org (pdx-korg-mail-1.web.codeaurora.org [172.30.200.123]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id 869D61709 for ; Wed, 13 Nov 2019 14:23:53 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [209.132.180.67]) by mail.kernel.org (Postfix) with ESMTP id 5F10C222D3 for ; Wed, 13 Nov 2019 14:23:53 +0000 (UTC) Authentication-Results: mail.kernel.org; dkim=pass (1024-bit key) header.d=redhat.com header.i=@redhat.com header.b="BKwPzM09" Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1727481AbfKMOXw (ORCPT ); Wed, 13 Nov 2019 09:23:52 -0500 Received: from us-smtp-delivery-1.mimecast.com ([207.211.31.120]:24369 "EHLO us-smtp-1.mimecast.com" rhost-flags-OK-OK-OK-FAIL) by vger.kernel.org with ESMTP id S1727450AbfKMOXw (ORCPT ); Wed, 13 Nov 2019 09:23:52 -0500 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com; s=mimecast20190719; t=1573655031; h=from:from:reply-to:subject:subject:date:date:message-id:message-id: to:to:cc:mime-version:mime-version:content-type:content-type: content-transfer-encoding:content-transfer-encoding: in-reply-to:in-reply-to:references:references; bh=oDxROk7DczfwEULzpxU0Gm+CvKw7ZfB4ZXyXqACDCG0=; b=BKwPzM096AlFhydgPSMhE55pe0e9YRUHE1hJfJHDk85ONTVBZNuKyctiXWphWa2HhUHJp/ BdCsi1KysqUXa/Xamumrwv6QVZiEJmcf1k7y/Kpd7RXX3N+3aG6G9NBRVAyOc0VPh7YhbR jjyoXUBnmZav7Nnb1gSisbjQfJzXJWI= Received: from mimecast-mx01.redhat.com (mimecast-mx01.redhat.com [209.132.183.4]) (Using TLS) by relay.mimecast.com with ESMTP id us-mta-77-KZ79FwVKPYmfVnecHAeH0g-1; Wed, 13 Nov 2019 09:23:49 -0500 Received: from smtp.corp.redhat.com (int-mx06.intmail.prod.int.phx2.redhat.com [10.5.11.16]) (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits)) (No client certificate requested) by mimecast-mx01.redhat.com (Postfix) with ESMTPS id A0333102C85E for ; Wed, 13 Nov 2019 14:23:48 +0000 (UTC) Received: from orion.redhat.com (ovpn-204-203.brq.redhat.com [10.40.204.203]) by smtp.corp.redhat.com (Postfix) with ESMTP id 027124D9E1 for ; Wed, 13 Nov 2019 14:23:47 +0000 (UTC) From: Carlos Maiolino To: linux-xfs@vger.kernel.org Subject: [PATCH 04/11] xfs: remove kmem_zone_zalloc() Date: Wed, 13 Nov 2019 15:23:28 +0100 Message-Id: <20191113142335.1045631-5-cmaiolino@redhat.com> In-Reply-To: <20191113142335.1045631-1-cmaiolino@redhat.com> References: <20191113142335.1045631-1-cmaiolino@redhat.com> MIME-Version: 1.0 X-Scanned-By: MIMEDefang 2.79 on 10.5.11.16 X-MC-Unique: KZ79FwVKPYmfVnecHAeH0g-1 X-Mimecast-Spam-Score: 0 Sender: 
linux-xfs-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: linux-xfs@vger.kernel.org Use kmem_cache_zalloc() directly. Signed-off-by: Carlos Maiolino Reviewed-by: Darrick J. Wong --- fs/xfs/kmem.h | 6 ------ fs/xfs/libxfs/xfs_alloc_btree.c | 2 +- fs/xfs/libxfs/xfs_bmap.c | 3 ++- fs/xfs/libxfs/xfs_bmap_btree.c | 2 +- fs/xfs/libxfs/xfs_da_btree.c | 2 +- fs/xfs/libxfs/xfs_ialloc_btree.c | 2 +- fs/xfs/libxfs/xfs_inode_fork.c | 6 +++--- fs/xfs/libxfs/xfs_refcount_btree.c | 2 +- fs/xfs/libxfs/xfs_rmap_btree.c | 2 +- fs/xfs/xfs_bmap_item.c | 4 ++-- fs/xfs/xfs_buf.c | 2 +- fs/xfs/xfs_buf_item.c | 2 +- fs/xfs/xfs_dquot.c | 2 +- fs/xfs/xfs_extfree_item.c | 6 ++++-- fs/xfs/xfs_icreate_item.c | 2 +- fs/xfs/xfs_inode_item.c | 3 ++- fs/xfs/xfs_log.c | 7 ++++--- fs/xfs/xfs_log_cil.c | 2 +- fs/xfs/xfs_log_priv.h | 2 +- fs/xfs/xfs_refcount_item.c | 4 ++-- fs/xfs/xfs_rmap_item.c | 5 +++-- fs/xfs/xfs_trans.c | 4 ++-- fs/xfs/xfs_trans_dquot.c | 3 ++- 23 files changed, 38 insertions(+), 37 deletions(-) diff --git a/fs/xfs/kmem.h b/fs/xfs/kmem.h index 6143117770e9..c12ab170c396 100644 --- a/fs/xfs/kmem.h +++ b/fs/xfs/kmem.h @@ -83,12 +83,6 @@ kmem_zalloc_large(size_t size, xfs_km_flags_t flags) extern void *kmem_zone_alloc(kmem_zone_t *, xfs_km_flags_t); -static inline void * -kmem_zone_zalloc(kmem_zone_t *zone, xfs_km_flags_t flags) -{ - return kmem_zone_alloc(zone, flags | KM_ZERO); -} - static inline struct page * kmem_to_page(void *addr) { diff --git a/fs/xfs/libxfs/xfs_alloc_btree.c b/fs/xfs/libxfs/xfs_alloc_btree.c index 279694d73e4e..0867c1fad11b 100644 --- a/fs/xfs/libxfs/xfs_alloc_btree.c +++ b/fs/xfs/libxfs/xfs_alloc_btree.c @@ -487,7 +487,7 @@ xfs_allocbt_init_cursor( ASSERT(btnum == XFS_BTNUM_BNO || btnum == XFS_BTNUM_CNT); - cur = kmem_zone_zalloc(xfs_btree_cur_zone, KM_NOFS); + cur = kmem_cache_zalloc(xfs_btree_cur_zone, GFP_NOFS | __GFP_NOFAIL); cur->bc_tp = tp; cur->bc_mp = mp; diff --git a/fs/xfs/libxfs/xfs_bmap.c b/fs/xfs/libxfs/xfs_bmap.c index b7cc2f9eae7b..9fbdca183465 100644 --- a/fs/xfs/libxfs/xfs_bmap.c +++ b/fs/xfs/libxfs/xfs_bmap.c @@ -1104,7 +1104,8 @@ xfs_bmap_add_attrfork( if (error) goto trans_cancel; ASSERT(ip->i_afp == NULL); - ip->i_afp = kmem_zone_zalloc(xfs_ifork_zone, 0); + ip->i_afp = kmem_cache_zalloc(xfs_ifork_zone, + GFP_KERNEL | __GFP_NOFAIL); ip->i_afp->if_flags = XFS_IFEXTENTS; logflags = 0; switch (ip->i_d.di_format) { diff --git a/fs/xfs/libxfs/xfs_bmap_btree.c b/fs/xfs/libxfs/xfs_bmap_btree.c index ffe608d2a2d9..77fe4ae671e5 100644 --- a/fs/xfs/libxfs/xfs_bmap_btree.c +++ b/fs/xfs/libxfs/xfs_bmap_btree.c @@ -552,7 +552,7 @@ xfs_bmbt_init_cursor( struct xfs_btree_cur *cur; ASSERT(whichfork != XFS_COW_FORK); - cur = kmem_zone_zalloc(xfs_btree_cur_zone, KM_NOFS); + cur = kmem_cache_zalloc(xfs_btree_cur_zone, GFP_NOFS | __GFP_NOFAIL); cur->bc_tp = tp; cur->bc_mp = mp; diff --git a/fs/xfs/libxfs/xfs_da_btree.c b/fs/xfs/libxfs/xfs_da_btree.c index c5c0b73febae..4e0ec46aec78 100644 --- a/fs/xfs/libxfs/xfs_da_btree.c +++ b/fs/xfs/libxfs/xfs_da_btree.c @@ -81,7 +81,7 @@ kmem_zone_t *xfs_da_state_zone; /* anchor for state struct zone */ xfs_da_state_t * xfs_da_state_alloc(void) { - return kmem_zone_zalloc(xfs_da_state_zone, KM_NOFS); + return kmem_cache_zalloc(xfs_da_state_zone, GFP_NOFS | __GFP_NOFAIL); } /* diff --git a/fs/xfs/libxfs/xfs_ialloc_btree.c b/fs/xfs/libxfs/xfs_ialloc_btree.c index b82992f795aa..5366a874b076 100644 --- a/fs/xfs/libxfs/xfs_ialloc_btree.c +++ b/fs/xfs/libxfs/xfs_ialloc_btree.c @@ -413,7 +413,7 @@ xfs_inobt_init_cursor( struct xfs_agi 
*agi = XFS_BUF_TO_AGI(agbp); struct xfs_btree_cur *cur; - cur = kmem_zone_zalloc(xfs_btree_cur_zone, KM_NOFS); + cur = kmem_cache_zalloc(xfs_btree_cur_zone, GFP_NOFS | __GFP_NOFAIL); cur->bc_tp = tp; cur->bc_mp = mp; diff --git a/fs/xfs/libxfs/xfs_inode_fork.c b/fs/xfs/libxfs/xfs_inode_fork.c index ad2b9c313fd2..2bffaa31d62a 100644 --- a/fs/xfs/libxfs/xfs_inode_fork.c +++ b/fs/xfs/libxfs/xfs_inode_fork.c @@ -98,7 +98,7 @@ xfs_iformat_fork( return 0; ASSERT(ip->i_afp == NULL); - ip->i_afp = kmem_zone_zalloc(xfs_ifork_zone, KM_NOFS); + ip->i_afp = kmem_cache_zalloc(xfs_ifork_zone, GFP_NOFS | __GFP_NOFAIL); switch (dip->di_aformat) { case XFS_DINODE_FMT_LOCAL: @@ -688,8 +688,8 @@ xfs_ifork_init_cow( if (ip->i_cowfp) return; - ip->i_cowfp = kmem_zone_zalloc(xfs_ifork_zone, - KM_NOFS); + ip->i_cowfp = kmem_cache_zalloc(xfs_ifork_zone, + GFP_NOFS | __GFP_NOFAIL); ip->i_cowfp->if_flags = XFS_IFEXTENTS; ip->i_cformat = XFS_DINODE_FMT_EXTENTS; ip->i_cnextents = 0; diff --git a/fs/xfs/libxfs/xfs_refcount_btree.c b/fs/xfs/libxfs/xfs_refcount_btree.c index 38529dbacd55..bb86988780ea 100644 --- a/fs/xfs/libxfs/xfs_refcount_btree.c +++ b/fs/xfs/libxfs/xfs_refcount_btree.c @@ -325,7 +325,7 @@ xfs_refcountbt_init_cursor( ASSERT(agno != NULLAGNUMBER); ASSERT(agno < mp->m_sb.sb_agcount); - cur = kmem_zone_zalloc(xfs_btree_cur_zone, KM_NOFS); + cur = kmem_cache_zalloc(xfs_btree_cur_zone, GFP_NOFS | __GFP_NOFAIL); cur->bc_tp = tp; cur->bc_mp = mp; diff --git a/fs/xfs/libxfs/xfs_rmap_btree.c b/fs/xfs/libxfs/xfs_rmap_btree.c index fc78efa52c94..8d84dd98e8d3 100644 --- a/fs/xfs/libxfs/xfs_rmap_btree.c +++ b/fs/xfs/libxfs/xfs_rmap_btree.c @@ -461,7 +461,7 @@ xfs_rmapbt_init_cursor( struct xfs_agf *agf = XFS_BUF_TO_AGF(agbp); struct xfs_btree_cur *cur; - cur = kmem_zone_zalloc(xfs_btree_cur_zone, KM_NOFS); + cur = kmem_cache_zalloc(xfs_btree_cur_zone, GFP_NOFS | __GFP_NOFAIL); cur->bc_tp = tp; cur->bc_mp = mp; /* Overlapping btree; 2 keys per pointer. 
*/ diff --git a/fs/xfs/xfs_bmap_item.c b/fs/xfs/xfs_bmap_item.c index ee6f4229cebc..451d6b925930 100644 --- a/fs/xfs/xfs_bmap_item.c +++ b/fs/xfs/xfs_bmap_item.c @@ -141,7 +141,7 @@ xfs_bui_init( { struct xfs_bui_log_item *buip; - buip = kmem_zone_zalloc(xfs_bui_zone, 0); + buip = kmem_cache_zalloc(xfs_bui_zone, GFP_KERNEL | __GFP_NOFAIL); xfs_log_item_init(mp, &buip->bui_item, XFS_LI_BUI, &xfs_bui_item_ops); buip->bui_format.bui_nextents = XFS_BUI_MAX_FAST_EXTENTS; @@ -218,7 +218,7 @@ xfs_trans_get_bud( { struct xfs_bud_log_item *budp; - budp = kmem_zone_zalloc(xfs_bud_zone, 0); + budp = kmem_cache_zalloc(xfs_bud_zone, GFP_KERNEL | __GFP_NOFAIL); xfs_log_item_init(tp->t_mountp, &budp->bud_item, XFS_LI_BUD, &xfs_bud_item_ops); budp->bud_buip = buip; diff --git a/fs/xfs/xfs_buf.c b/fs/xfs/xfs_buf.c index a0229c368e78..85f9ef4f504e 100644 --- a/fs/xfs/xfs_buf.c +++ b/fs/xfs/xfs_buf.c @@ -209,7 +209,7 @@ _xfs_buf_alloc( int error; int i; - bp = kmem_zone_zalloc(xfs_buf_zone, KM_NOFS); + bp = kmem_cache_zalloc(xfs_buf_zone, GFP_NOFS | __GFP_NOFAIL); if (unlikely(!bp)) return NULL; diff --git a/fs/xfs/xfs_buf_item.c b/fs/xfs/xfs_buf_item.c index 3458a1264a3f..676149ac09a3 100644 --- a/fs/xfs/xfs_buf_item.c +++ b/fs/xfs/xfs_buf_item.c @@ -747,7 +747,7 @@ xfs_buf_item_init( return 0; } - bip = kmem_zone_zalloc(xfs_buf_item_zone, 0); + bip = kmem_cache_zalloc(xfs_buf_item_zone, GFP_KERNEL | __GFP_NOFAIL); xfs_log_item_init(mp, &bip->bli_item, XFS_LI_BUF, &xfs_buf_item_ops); bip->bli_buf = bp; diff --git a/fs/xfs/xfs_dquot.c b/fs/xfs/xfs_dquot.c index 153815bf18fc..79f0de378123 100644 --- a/fs/xfs/xfs_dquot.c +++ b/fs/xfs/xfs_dquot.c @@ -440,7 +440,7 @@ xfs_dquot_alloc( { struct xfs_dquot *dqp; - dqp = kmem_zone_zalloc(xfs_qm_dqzone, 0); + dqp = kmem_cache_zalloc(xfs_qm_dqzone, GFP_KERNEL | __GFP_NOFAIL); dqp->dq_flags = type; dqp->q_core.d_id = cpu_to_be32(id); diff --git a/fs/xfs/xfs_extfree_item.c b/fs/xfs/xfs_extfree_item.c index 6ea847f6e298..49ce6d6c4bb9 100644 --- a/fs/xfs/xfs_extfree_item.c +++ b/fs/xfs/xfs_extfree_item.c @@ -165,7 +165,8 @@ xfs_efi_init( ((nextents - 1) * sizeof(xfs_extent_t))); efip = kmem_zalloc(size, 0); } else { - efip = kmem_zone_zalloc(xfs_efi_zone, 0); + efip = kmem_cache_zalloc(xfs_efi_zone, + GFP_KERNEL | __GFP_NOFAIL); } xfs_log_item_init(mp, &efip->efi_item, XFS_LI_EFI, &xfs_efi_item_ops); @@ -336,7 +337,8 @@ xfs_trans_get_efd( (nextents - 1) * sizeof(struct xfs_extent), 0); } else { - efdp = kmem_zone_zalloc(xfs_efd_zone, 0); + efdp = kmem_cache_zalloc(xfs_efd_zone, + GFP_KERNEL | __GFP_NOFAIL); } xfs_log_item_init(tp->t_mountp, &efdp->efd_item, XFS_LI_EFD, diff --git a/fs/xfs/xfs_icreate_item.c b/fs/xfs/xfs_icreate_item.c index 490fee22b878..85bbf9dbe095 100644 --- a/fs/xfs/xfs_icreate_item.c +++ b/fs/xfs/xfs_icreate_item.c @@ -89,7 +89,7 @@ xfs_icreate_log( { struct xfs_icreate_item *icp; - icp = kmem_zone_zalloc(xfs_icreate_zone, 0); + icp = kmem_cache_zalloc(xfs_icreate_zone, GFP_KERNEL | __GFP_NOFAIL); xfs_log_item_init(tp->t_mountp, &icp->ic_item, XFS_LI_ICREATE, &xfs_icreate_item_ops); diff --git a/fs/xfs/xfs_inode_item.c b/fs/xfs/xfs_inode_item.c index 3a62976291a1..2097e6932a48 100644 --- a/fs/xfs/xfs_inode_item.c +++ b/fs/xfs/xfs_inode_item.c @@ -652,7 +652,8 @@ xfs_inode_item_init( struct xfs_inode_log_item *iip; ASSERT(ip->i_itemp == NULL); - iip = ip->i_itemp = kmem_zone_zalloc(xfs_ili_zone, 0); + iip = ip->i_itemp = kmem_cache_zalloc(xfs_ili_zone, + GFP_KERNEL | __GFP_NOFAIL); iip->ili_inode = ip; xfs_log_item_init(mp, &iip->ili_item, 
XFS_LI_INODE, diff --git a/fs/xfs/xfs_log.c b/fs/xfs/xfs_log.c index 6a147c63a8a6..30447bd477d2 100644 --- a/fs/xfs/xfs_log.c +++ b/fs/xfs/xfs_log.c @@ -454,7 +454,8 @@ xfs_log_reserve( XFS_STATS_INC(mp, xs_try_logspace); ASSERT(*ticp == NULL); - tic = xlog_ticket_alloc(log, unit_bytes, cnt, client, permanent, 0); + tic = xlog_ticket_alloc(log, unit_bytes, cnt, client, permanent, + GFP_KERNEL | __GFP_NOFAIL); *ticp = tic; xlog_grant_push_ail(log, tic->t_cnt ? tic->t_unit_res * tic->t_cnt @@ -3587,12 +3588,12 @@ xlog_ticket_alloc( int cnt, char client, bool permanent, - xfs_km_flags_t alloc_flags) + gfp_t alloc_flags) { struct xlog_ticket *tic; int unit_res; - tic = kmem_zone_zalloc(xfs_log_ticket_zone, alloc_flags); + tic = kmem_cache_zalloc(xfs_log_ticket_zone, alloc_flags); if (!tic) return NULL; diff --git a/fs/xfs/xfs_log_cil.c b/fs/xfs/xfs_log_cil.c index 48435cf2aa16..630c2482c8f1 100644 --- a/fs/xfs/xfs_log_cil.c +++ b/fs/xfs/xfs_log_cil.c @@ -38,7 +38,7 @@ xlog_cil_ticket_alloc( struct xlog_ticket *tic; tic = xlog_ticket_alloc(log, 0, 1, XFS_TRANSACTION, 0, - KM_NOFS); + GFP_NOFS | __GFP_NOFAIL); /* * set the current reservation to zero so we know to steal the basic diff --git a/fs/xfs/xfs_log_priv.h b/fs/xfs/xfs_log_priv.h index c47aa2ca6dc7..54c95fee9dc4 100644 --- a/fs/xfs/xfs_log_priv.h +++ b/fs/xfs/xfs_log_priv.h @@ -427,7 +427,7 @@ xlog_ticket_alloc( int count, char client, bool permanent, - xfs_km_flags_t alloc_flags); + gfp_t alloc_flags); static inline void diff --git a/fs/xfs/xfs_refcount_item.c b/fs/xfs/xfs_refcount_item.c index 8eeed73928cd..a242bc9874a6 100644 --- a/fs/xfs/xfs_refcount_item.c +++ b/fs/xfs/xfs_refcount_item.c @@ -146,7 +146,7 @@ xfs_cui_init( cuip = kmem_zalloc(xfs_cui_log_item_sizeof(nextents), 0); else - cuip = kmem_zone_zalloc(xfs_cui_zone, 0); + cuip = kmem_cache_zalloc(xfs_cui_zone, GFP_KERNEL | __GFP_NOFAIL); xfs_log_item_init(mp, &cuip->cui_item, XFS_LI_CUI, &xfs_cui_item_ops); cuip->cui_format.cui_nextents = nextents; @@ -223,7 +223,7 @@ xfs_trans_get_cud( { struct xfs_cud_log_item *cudp; - cudp = kmem_zone_zalloc(xfs_cud_zone, 0); + cudp = kmem_cache_zalloc(xfs_cud_zone, GFP_KERNEL | __GFP_NOFAIL); xfs_log_item_init(tp->t_mountp, &cudp->cud_item, XFS_LI_CUD, &xfs_cud_item_ops); cudp->cud_cuip = cuip; diff --git a/fs/xfs/xfs_rmap_item.c b/fs/xfs/xfs_rmap_item.c index 4911b68f95dd..857cc78dc440 100644 --- a/fs/xfs/xfs_rmap_item.c +++ b/fs/xfs/xfs_rmap_item.c @@ -144,7 +144,8 @@ xfs_rui_init( if (nextents > XFS_RUI_MAX_FAST_EXTENTS) ruip = kmem_zalloc(xfs_rui_log_item_sizeof(nextents), 0); else - ruip = kmem_zone_zalloc(xfs_rui_zone, 0); + ruip = kmem_cache_zalloc(xfs_rui_zone, + GFP_KERNEL | __GFP_NOFAIL); xfs_log_item_init(mp, &ruip->rui_item, XFS_LI_RUI, &xfs_rui_item_ops); ruip->rui_format.rui_nextents = nextents; @@ -246,7 +247,7 @@ xfs_trans_get_rud( { struct xfs_rud_log_item *rudp; - rudp = kmem_zone_zalloc(xfs_rud_zone, 0); + rudp = kmem_cache_zalloc(xfs_rud_zone, GFP_KERNEL | __GFP_NOFAIL); xfs_log_item_init(tp->t_mountp, &rudp->rud_item, XFS_LI_RUD, &xfs_rud_item_ops); rudp->rud_ruip = ruip; diff --git a/fs/xfs/xfs_trans.c b/fs/xfs/xfs_trans.c index 3b208f9a865c..29f34492d5f4 100644 --- a/fs/xfs/xfs_trans.c +++ b/fs/xfs/xfs_trans.c @@ -90,7 +90,7 @@ xfs_trans_dup( trace_xfs_trans_dup(tp, _RET_IP_); - ntp = kmem_zone_zalloc(xfs_trans_zone, 0); + ntp = kmem_cache_zalloc(xfs_trans_zone, GFP_KERNEL | __GFP_NOFAIL); /* * Initialize the new transaction structure. 
@@ -263,7 +263,7 @@ xfs_trans_alloc( * GFP_NOFS allocation context so that we avoid lockdep false positives * by doing GFP_KERNEL allocations inside sb_start_intwrite(). */ - tp = kmem_zone_zalloc(xfs_trans_zone, 0); + tp = kmem_cache_zalloc(xfs_trans_zone, GFP_KERNEL | __GFP_NOFAIL); if (!(flags & XFS_TRANS_NO_WRITECOUNT)) sb_start_intwrite(mp->m_super); diff --git a/fs/xfs/xfs_trans_dquot.c b/fs/xfs/xfs_trans_dquot.c index ff1c326826d3..69e8f6d049aa 100644 --- a/fs/xfs/xfs_trans_dquot.c +++ b/fs/xfs/xfs_trans_dquot.c @@ -863,7 +863,8 @@ STATIC void xfs_trans_alloc_dqinfo( xfs_trans_t *tp) { - tp->t_dqinfo = kmem_zone_zalloc(xfs_qm_dqtrxzone, 0); + tp->t_dqinfo = kmem_cache_zalloc(xfs_qm_dqtrxzone, + GFP_KERNEL | __GFP_NOFAIL); } void From patchwork Wed Nov 13 14:23:29 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Carlos Maiolino X-Patchwork-Id: 11241991 Return-Path: Received: from mail.kernel.org (pdx-korg-mail-1.web.codeaurora.org [172.30.200.123]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id 4CFB11390 for ; Wed, 13 Nov 2019 14:23:53 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [209.132.180.67]) by mail.kernel.org (Postfix) with ESMTP id 2D5582245B for ; Wed, 13 Nov 2019 14:23:53 +0000 (UTC) Authentication-Results: mail.kernel.org; dkim=pass (1024-bit key) header.d=redhat.com header.i=@redhat.com header.b="ejWqgDxb" Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1727540AbfKMOXw (ORCPT ); Wed, 13 Nov 2019 09:23:52 -0500 Received: from us-smtp-1.mimecast.com ([205.139.110.61]:35999 "EHLO us-smtp-delivery-1.mimecast.com" rhost-flags-OK-OK-OK-FAIL) by vger.kernel.org with ESMTP id S1727481AbfKMOXw (ORCPT ); Wed, 13 Nov 2019 09:23:52 -0500 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com; s=mimecast20190719; t=1573655031; h=from:from:reply-to:subject:subject:date:date:message-id:message-id: to:to:cc:mime-version:mime-version:content-type:content-type: content-transfer-encoding:content-transfer-encoding: in-reply-to:in-reply-to:references:references; bh=znB/YEulxrbr0008vyBylitQtloZz5evIJsXy8Xjwt4=; b=ejWqgDxbwl4dIs03w5S0LHiTPgi7XiBg/fUhXFO63koJRRp+xYuU3PWwwRzndccoTWQrVI DHHz1o8ynBrCH2twkB/qt4k0lhXf4vSXcQWVoGjqMSUwoi+BAJKmeAOBJ6xdWB0MCOcNjc d2Kqh4OXXUtry4odRmFk1U9i7zxBJ6U= Received: from mimecast-mx01.redhat.com (mimecast-mx01.redhat.com [209.132.183.4]) (Using TLS) by relay.mimecast.com with ESMTP id us-mta-154-qm6tbNMXNFqhzpBP3udLcQ-1; Wed, 13 Nov 2019 09:23:50 -0500 Received: from smtp.corp.redhat.com (int-mx06.intmail.prod.int.phx2.redhat.com [10.5.11.16]) (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits)) (No client certificate requested) by mimecast-mx01.redhat.com (Postfix) with ESMTPS id A5EA9102C860 for ; Wed, 13 Nov 2019 14:23:49 +0000 (UTC) Received: from orion.redhat.com (ovpn-204-203.brq.redhat.com [10.40.204.203]) by smtp.corp.redhat.com (Postfix) with ESMTP id 086144D9E1 for ; Wed, 13 Nov 2019 14:23:48 +0000 (UTC) From: Carlos Maiolino To: linux-xfs@vger.kernel.org Subject: [PATCH 05/11] xfs: Remove kmem_zone_alloc() wrapper Date: Wed, 13 Nov 2019 15:23:29 +0100 Message-Id: <20191113142335.1045631-6-cmaiolino@redhat.com> In-Reply-To: <20191113142335.1045631-1-cmaiolino@redhat.com> References: <20191113142335.1045631-1-cmaiolino@redhat.com> MIME-Version: 1.0 X-Scanned-By: MIMEDefang 2.79 on 10.5.11.16 X-MC-Unique: qm6tbNMXNFqhzpBP3udLcQ-1 X-Mimecast-Spam-Score: 0 Sender: linux-xfs-owner@vger.kernel.org Precedence: 
bulk List-ID: X-Mailing-List: linux-xfs@vger.kernel.org Use kmem_cache_alloc() directly. Signed-off-by: Carlos Maiolino Reviewed-by: Darrick J. Wong --- fs/xfs/kmem.c | 21 --------------------- fs/xfs/kmem.h | 2 -- fs/xfs/libxfs/xfs_alloc.c | 3 ++- fs/xfs/libxfs/xfs_bmap.c | 3 ++- fs/xfs/xfs_icache.c | 2 +- fs/xfs/xfs_trace.h | 1 - 6 files changed, 5 insertions(+), 27 deletions(-) diff --git a/fs/xfs/kmem.c b/fs/xfs/kmem.c index 1da94237a8cf..2644fdaa0549 100644 --- a/fs/xfs/kmem.c +++ b/fs/xfs/kmem.c @@ -115,24 +115,3 @@ kmem_realloc(const void *old, size_t newsize, xfs_km_flags_t flags) congestion_wait(BLK_RW_ASYNC, HZ/50); } while (1); } - -void * -kmem_zone_alloc(kmem_zone_t *zone, xfs_km_flags_t flags) -{ - int retries = 0; - gfp_t lflags = kmem_flags_convert(flags); - void *ptr; - - trace_kmem_zone_alloc(kmem_cache_size(zone), flags, _RET_IP_); - do { - ptr = kmem_cache_alloc(zone, lflags); - if (ptr || (flags & KM_MAYFAIL)) - return ptr; - if (!(++retries % 100)) - xfs_err(NULL, - "%s(%u) possible memory allocation deadlock in %s (mode:0x%x)", - current->comm, current->pid, - __func__, lflags); - congestion_wait(BLK_RW_ASYNC, HZ/50); - } while (1); -} diff --git a/fs/xfs/kmem.h b/fs/xfs/kmem.h index c12ab170c396..33523a0b5801 100644 --- a/fs/xfs/kmem.h +++ b/fs/xfs/kmem.h @@ -81,8 +81,6 @@ kmem_zalloc_large(size_t size, xfs_km_flags_t flags) #define kmem_zone kmem_cache #define kmem_zone_t struct kmem_cache -extern void *kmem_zone_alloc(kmem_zone_t *, xfs_km_flags_t); - static inline struct page * kmem_to_page(void *addr) { diff --git a/fs/xfs/libxfs/xfs_alloc.c b/fs/xfs/libxfs/xfs_alloc.c index 675613c7bacb..42cae87bdd2d 100644 --- a/fs/xfs/libxfs/xfs_alloc.c +++ b/fs/xfs/libxfs/xfs_alloc.c @@ -2351,7 +2351,8 @@ xfs_defer_agfl_block( ASSERT(xfs_bmap_free_item_zone != NULL); ASSERT(oinfo != NULL); - new = kmem_zone_alloc(xfs_bmap_free_item_zone, 0); + new = kmem_cache_alloc(xfs_bmap_free_item_zone, + GFP_KERNEL | __GFP_NOFAIL); new->xefi_startblock = XFS_AGB_TO_FSB(mp, agno, agbno); new->xefi_blockcount = 1; new->xefi_oinfo = *oinfo; diff --git a/fs/xfs/libxfs/xfs_bmap.c b/fs/xfs/libxfs/xfs_bmap.c index 9fbdca183465..37596e49b92e 100644 --- a/fs/xfs/libxfs/xfs_bmap.c +++ b/fs/xfs/libxfs/xfs_bmap.c @@ -554,7 +554,8 @@ __xfs_bmap_add_free( #endif ASSERT(xfs_bmap_free_item_zone != NULL); - new = kmem_zone_alloc(xfs_bmap_free_item_zone, 0); + new = kmem_cache_alloc(xfs_bmap_free_item_zone, + GFP_KERNEL | __GFP_NOFAIL); new->xefi_startblock = bno; new->xefi_blockcount = (xfs_extlen_t)len; if (oinfo) diff --git a/fs/xfs/xfs_icache.c b/fs/xfs/xfs_icache.c index 950e8a51ec66..985f48e3795f 100644 --- a/fs/xfs/xfs_icache.c +++ b/fs/xfs/xfs_icache.c @@ -40,7 +40,7 @@ xfs_inode_alloc( * KM_MAYFAIL and return NULL here on ENOMEM. Set the * code up to do this anyway. 
*/ - ip = kmem_zone_alloc(xfs_inode_zone, 0); + ip = kmem_cache_alloc(xfs_inode_zone, GFP_KERNEL | __GFP_NOFAIL); if (!ip) return NULL; if (inode_init_always(mp->m_super, VFS_I(ip))) { diff --git a/fs/xfs/xfs_trace.h b/fs/xfs/xfs_trace.h index c13bb3655e48..192f499ccd7e 100644 --- a/fs/xfs/xfs_trace.h +++ b/fs/xfs/xfs_trace.h @@ -3571,7 +3571,6 @@ DEFINE_KMEM_EVENT(kmem_alloc); DEFINE_KMEM_EVENT(kmem_alloc_io); DEFINE_KMEM_EVENT(kmem_alloc_large); DEFINE_KMEM_EVENT(kmem_realloc); -DEFINE_KMEM_EVENT(kmem_zone_alloc); #endif /* _TRACE_XFS_H */ From patchwork Wed Nov 13 14:23:30 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Carlos Maiolino X-Patchwork-Id: 11241995 Return-Path: Received: from mail.kernel.org (pdx-korg-mail-1.web.codeaurora.org [172.30.200.123]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id 81BC316B1 for ; Wed, 13 Nov 2019 14:23:55 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [209.132.180.67]) by mail.kernel.org (Postfix) with ESMTP id 52090222D3 for ; Wed, 13 Nov 2019 14:23:55 +0000 (UTC) Authentication-Results: mail.kernel.org; dkim=pass (1024-bit key) header.d=redhat.com header.i=@redhat.com header.b="HYjgSRbP" Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1727550AbfKMOXz (ORCPT ); Wed, 13 Nov 2019 09:23:55 -0500 Received: from us-smtp-delivery-1.mimecast.com ([205.139.110.120]:60497 "EHLO us-smtp-1.mimecast.com" rhost-flags-OK-OK-OK-FAIL) by vger.kernel.org with ESMTP id S1727450AbfKMOXy (ORCPT ); Wed, 13 Nov 2019 09:23:54 -0500 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com; s=mimecast20190719; t=1573655032; h=from:from:reply-to:subject:subject:date:date:message-id:message-id: to:to:cc:mime-version:mime-version:content-type:content-type: content-transfer-encoding:content-transfer-encoding: in-reply-to:in-reply-to:references:references; bh=kxGPOxPp8KgCyRi/h597b5P2SpC2FDTMSGeoLYYlPnM=; b=HYjgSRbPHs20wmRvaKgLFm5rST3OqWwFf5mm5QYEJIeEuMxzjRoBX/lIp72a0UVvb1EccJ VZKMQiRdLM+AQDtjPJrEJbZPFX+6QmDqDMYIIG30Lq1QQnuZTkGNPZI2OoXDOt07FpBogK 6GPkOqDpHC1gca2xG33BOYG03uCByO8= Received: from mimecast-mx01.redhat.com (mimecast-mx01.redhat.com [209.132.183.4]) (Using TLS) by relay.mimecast.com with ESMTP id us-mta-336-aFdsSnT_O7mgfYH5VSPoog-1; Wed, 13 Nov 2019 09:23:51 -0500 Received: from smtp.corp.redhat.com (int-mx06.intmail.prod.int.phx2.redhat.com [10.5.11.16]) (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits)) (No client certificate requested) by mimecast-mx01.redhat.com (Postfix) with ESMTPS id DEC031345B0 for ; Wed, 13 Nov 2019 14:23:50 +0000 (UTC) Received: from orion.redhat.com (ovpn-204-203.brq.redhat.com [10.40.204.203]) by smtp.corp.redhat.com (Postfix) with ESMTP id 0EA4666835 for ; Wed, 13 Nov 2019 14:23:49 +0000 (UTC) From: Carlos Maiolino To: linux-xfs@vger.kernel.org Subject: [PATCH 06/11] xfs: remove kmem_zalloc() wrapper Date: Wed, 13 Nov 2019 15:23:30 +0100 Message-Id: <20191113142335.1045631-7-cmaiolino@redhat.com> In-Reply-To: <20191113142335.1045631-1-cmaiolino@redhat.com> References: <20191113142335.1045631-1-cmaiolino@redhat.com> MIME-Version: 1.0 X-Scanned-By: MIMEDefang 2.79 on 10.5.11.16 X-MC-Unique: aFdsSnT_O7mgfYH5VSPoog-1 X-Mimecast-Spam-Score: 0 Sender: linux-xfs-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: linux-xfs@vger.kernel.org Use kzalloc() directly Special attention goes to function xfs_buf_map_from_irec(). 
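Roughly, the change there has the following shape (abbreviated from the fs/xfs/libxfs/xfs_da_btree.c hunk later in this patch):

	/* Before: the allocation's return value was checked for NULL. */
	map = kmem_zalloc(nirecs * sizeof(struct xfs_buf_map), KM_NOFS);
	if (!map)
		return -ENOMEM;

	/* After: __GFP_NOFAIL means kzalloc() cannot return NULL here. */
	map = kzalloc(nirecs * sizeof(struct xfs_buf_map),
		      GFP_NOFS | __GFP_NOFAIL);
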
Given that we are not allowed to fail there, I removed the 'if (!map)' conditional; I'd just like somebody to double-check that it's fine, as I believe it is. Signed-off-by: Carlos Maiolino --- fs/xfs/kmem.h | 6 ------ fs/xfs/libxfs/xfs_attr_leaf.c | 3 ++- fs/xfs/libxfs/xfs_da_btree.c | 10 ++++------ fs/xfs/libxfs/xfs_dir2.c | 18 +++++++++--------- fs/xfs/libxfs/xfs_iext_tree.c | 12 ++++++++---- fs/xfs/scrub/agheader.c | 4 ++-- fs/xfs/scrub/fscounters.c | 3 ++- fs/xfs/xfs_buf.c | 6 +++--- fs/xfs/xfs_buf_item.c | 4 ++-- fs/xfs/xfs_dquot_item.c | 3 ++- fs/xfs/xfs_error.c | 4 ++-- fs/xfs/xfs_extent_busy.c | 3 ++- fs/xfs/xfs_extfree_item.c | 6 +++--- fs/xfs/xfs_inode.c | 2 +- fs/xfs/xfs_itable.c | 8 ++++---- fs/xfs/xfs_iwalk.c | 3 ++- fs/xfs/xfs_log.c | 5 +++-- fs/xfs/xfs_log_cil.c | 6 +++--- fs/xfs/xfs_log_recover.c | 12 ++++++------ fs/xfs/xfs_mount.c | 3 ++- fs/xfs/xfs_mru_cache.c | 5 +++-- fs/xfs/xfs_qm.c | 3 ++- fs/xfs/xfs_refcount_item.c | 4 ++-- fs/xfs/xfs_rmap_item.c | 3 ++- fs/xfs/xfs_trans_ail.c | 3 ++- 25 files changed, 73 insertions(+), 66 deletions(-) diff --git a/fs/xfs/kmem.h b/fs/xfs/kmem.h index 33523a0b5801..46c8c5546674 100644 --- a/fs/xfs/kmem.h +++ b/fs/xfs/kmem.h @@ -62,12 +62,6 @@ static inline void kmem_free(const void *ptr) } -static inline void * -kmem_zalloc(size_t size, xfs_km_flags_t flags) -{ - return kmem_alloc(size, flags | KM_ZERO); -} - static inline void * kmem_zalloc_large(size_t size, xfs_km_flags_t flags) { diff --git a/fs/xfs/libxfs/xfs_attr_leaf.c b/fs/xfs/libxfs/xfs_attr_leaf.c index 85ec5945d29f..9f54e59f4004 100644 --- a/fs/xfs/libxfs/xfs_attr_leaf.c +++ b/fs/xfs/libxfs/xfs_attr_leaf.c @@ -2253,7 +2253,8 @@ xfs_attr3_leaf_unbalance( struct xfs_attr_leafblock *tmp_leaf; struct xfs_attr3_icleaf_hdr tmphdr; - tmp_leaf = kmem_zalloc(state->args->geo->blksize, 0); + tmp_leaf = kzalloc(state->args->geo->blksize, + GFP_KERNEL | __GFP_NOFAIL); /* * Copy the header into the temp leaf so that all the stuff diff --git a/fs/xfs/libxfs/xfs_da_btree.c b/fs/xfs/libxfs/xfs_da_btree.c index 4e0ec46aec78..dbd2434e68b5 100644 --- a/fs/xfs/libxfs/xfs_da_btree.c +++ b/fs/xfs/libxfs/xfs_da_btree.c @@ -2534,10 +2534,8 @@ xfs_buf_map_from_irec( ASSERT(nirecs >= 1); if (nirecs > 1) { - map = kmem_zalloc(nirecs * sizeof(struct xfs_buf_map), - KM_NOFS); - if (!map) - return -ENOMEM; + map = kzalloc(nirecs * sizeof(struct xfs_buf_map), + GFP_NOFS | __GFP_NOFAIL); *mapp = map; } @@ -2593,8 +2591,8 @@ xfs_dabuf_map( * Optimize the one-block case.
*/ if (nfsb != 1) - irecs = kmem_zalloc(sizeof(irec) * nfsb, - KM_NOFS); + irecs = kzalloc(sizeof(irec) * nfsb, + GFP_NOFS | __GFP_NOFAIL); nirecs = nfsb; error = xfs_bmapi_read(dp, (xfs_fileoff_t)bno, nfsb, irecs, diff --git a/fs/xfs/libxfs/xfs_dir2.c b/fs/xfs/libxfs/xfs_dir2.c index 624c05e77ab4..67172e376e1d 100644 --- a/fs/xfs/libxfs/xfs_dir2.c +++ b/fs/xfs/libxfs/xfs_dir2.c @@ -104,10 +104,10 @@ xfs_da_mount( ASSERT(mp->m_sb.sb_versionnum & XFS_SB_VERSION_DIRV2BIT); ASSERT(xfs_dir2_dirblock_bytes(&mp->m_sb) <= XFS_MAX_BLOCKSIZE); - mp->m_dir_geo = kmem_zalloc(sizeof(struct xfs_da_geometry), - KM_MAYFAIL); - mp->m_attr_geo = kmem_zalloc(sizeof(struct xfs_da_geometry), - KM_MAYFAIL); + mp->m_dir_geo = kzalloc(sizeof(struct xfs_da_geometry), + GFP_KERNEL | __GFP_RETRY_MAYFAIL); + mp->m_attr_geo = kzalloc(sizeof(struct xfs_da_geometry), + GFP_KERNEL | __GFP_RETRY_MAYFAIL); if (!mp->m_dir_geo || !mp->m_attr_geo) { kmem_free(mp->m_dir_geo); kmem_free(mp->m_attr_geo); @@ -234,7 +234,7 @@ xfs_dir_init( if (error) return error; - args = kmem_zalloc(sizeof(*args), KM_NOFS); + args = kzalloc(sizeof(*args), GFP_NOFS | __GFP_NOFAIL); if (!args) return -ENOMEM; @@ -271,7 +271,7 @@ xfs_dir_createname( XFS_STATS_INC(dp->i_mount, xs_dir_create); } - args = kmem_zalloc(sizeof(*args), KM_NOFS); + args = kzalloc(sizeof(*args), GFP_NOFS | __GFP_NOFAIL); if (!args) return -ENOMEM; @@ -370,7 +370,7 @@ xfs_dir_lookup( * lockdep Doing this avoids having to add a bunch of lockdep class * annotations into the reclaim path for the ilock. */ - args = kmem_zalloc(sizeof(*args), KM_NOFS); + args = kzalloc(sizeof(*args), GFP_NOFS | __GFP_NOFAIL); args->geo = dp->i_mount->m_dir_geo; args->name = name->name; args->namelen = name->len; @@ -439,7 +439,7 @@ xfs_dir_removename( ASSERT(S_ISDIR(VFS_I(dp)->i_mode)); XFS_STATS_INC(dp->i_mount, xs_dir_remove); - args = kmem_zalloc(sizeof(*args), KM_NOFS); + args = kzalloc(sizeof(*args), GFP_NOFS | __GFP_NOFAIL); if (!args) return -ENOMEM; @@ -500,7 +500,7 @@ xfs_dir_replace( if (rval) return rval; - args = kmem_zalloc(sizeof(*args), KM_NOFS); + args = kzalloc(sizeof(*args), GFP_NOFS | __GFP_NOFAIL); if (!args) return -ENOMEM; diff --git a/fs/xfs/libxfs/xfs_iext_tree.c b/fs/xfs/libxfs/xfs_iext_tree.c index 52451809c478..f2005671e86c 100644 --- a/fs/xfs/libxfs/xfs_iext_tree.c +++ b/fs/xfs/libxfs/xfs_iext_tree.c @@ -398,7 +398,8 @@ static void xfs_iext_grow( struct xfs_ifork *ifp) { - struct xfs_iext_node *node = kmem_zalloc(NODE_SIZE, KM_NOFS); + struct xfs_iext_node *node = kzalloc(NODE_SIZE, + GFP_NOFS | __GFP_NOFAIL); int i; if (ifp->if_height == 1) { @@ -454,7 +455,8 @@ xfs_iext_split_node( int *nr_entries) { struct xfs_iext_node *node = *nodep; - struct xfs_iext_node *new = kmem_zalloc(NODE_SIZE, KM_NOFS); + struct xfs_iext_node *new = kzalloc(NODE_SIZE, + GFP_NOFS | __GFP_NOFAIL); const int nr_move = KEYS_PER_NODE / 2; int nr_keep = nr_move + (KEYS_PER_NODE & 1); int i = 0; @@ -542,7 +544,8 @@ xfs_iext_split_leaf( int *nr_entries) { struct xfs_iext_leaf *leaf = cur->leaf; - struct xfs_iext_leaf *new = kmem_zalloc(NODE_SIZE, KM_NOFS); + struct xfs_iext_leaf *new = kzalloc(NODE_SIZE, + GFP_NOFS | __GFP_NOFAIL); const int nr_move = RECS_PER_LEAF / 2; int nr_keep = nr_move + (RECS_PER_LEAF & 1); int i; @@ -583,7 +586,8 @@ xfs_iext_alloc_root( { ASSERT(ifp->if_bytes == 0); - ifp->if_u1.if_root = kmem_zalloc(sizeof(struct xfs_iext_rec), KM_NOFS); + ifp->if_u1.if_root = kzalloc(sizeof(struct xfs_iext_rec), + GFP_NOFS | __GFP_NOFAIL); ifp->if_height = 1; /* now that we have a node 
step into it */ diff --git a/fs/xfs/scrub/agheader.c b/fs/xfs/scrub/agheader.c index ba0f747c82e8..93b9a93b40f3 100644 --- a/fs/xfs/scrub/agheader.c +++ b/fs/xfs/scrub/agheader.c @@ -720,8 +720,8 @@ xchk_agfl( memset(&sai, 0, sizeof(sai)); sai.sc = sc; sai.sz_entries = agflcount; - sai.entries = kmem_zalloc(sizeof(xfs_agblock_t) * agflcount, - KM_MAYFAIL); + sai.entries = kzalloc(sizeof(xfs_agblock_t) * agflcount, + GFP_KERNEL | __GFP_RETRY_MAYFAIL); if (!sai.entries) { error = -ENOMEM; goto out; diff --git a/fs/xfs/scrub/fscounters.c b/fs/xfs/scrub/fscounters.c index 7251c66a82c9..bb036c5a6f21 100644 --- a/fs/xfs/scrub/fscounters.c +++ b/fs/xfs/scrub/fscounters.c @@ -125,7 +125,8 @@ xchk_setup_fscounters( struct xchk_fscounters *fsc; int error; - sc->buf = kmem_zalloc(sizeof(struct xchk_fscounters), 0); + sc->buf = kzalloc(sizeof(struct xchk_fscounters), + GFP_KERNEL | __GFP_NOFAIL); if (!sc->buf) return -ENOMEM; fsc = sc->buf; diff --git a/fs/xfs/xfs_buf.c b/fs/xfs/xfs_buf.c index 85f9ef4f504e..e2a7eac03d04 100644 --- a/fs/xfs/xfs_buf.c +++ b/fs/xfs/xfs_buf.c @@ -178,8 +178,8 @@ xfs_buf_get_maps( return 0; } - bp->b_maps = kmem_zalloc(map_count * sizeof(struct xfs_buf_map), - KM_NOFS); + bp->b_maps = kzalloc(map_count * sizeof(struct xfs_buf_map), + GFP_NOFS | __GFP_NOFAIL); if (!bp->b_maps) return -ENOMEM; return 0; @@ -1749,7 +1749,7 @@ xfs_alloc_buftarg( { xfs_buftarg_t *btp; - btp = kmem_zalloc(sizeof(*btp), KM_NOFS); + btp = kzalloc(sizeof(*btp), GFP_NOFS | __GFP_NOFAIL); btp->bt_mount = mp; btp->bt_dev = bdev->bd_dev; diff --git a/fs/xfs/xfs_buf_item.c b/fs/xfs/xfs_buf_item.c index 676149ac09a3..e6f48fe24537 100644 --- a/fs/xfs/xfs_buf_item.c +++ b/fs/xfs/xfs_buf_item.c @@ -701,8 +701,8 @@ xfs_buf_item_get_format( return 0; } - bip->bli_formats = kmem_zalloc(count * sizeof(struct xfs_buf_log_format), - 0); + bip->bli_formats = kzalloc(count * sizeof(struct xfs_buf_log_format), + GFP_KERNEL | __GFP_NOFAIL); if (!bip->bli_formats) return -ENOMEM; return 0; diff --git a/fs/xfs/xfs_dquot_item.c b/fs/xfs/xfs_dquot_item.c index d60647d7197b..a720425d0728 100644 --- a/fs/xfs/xfs_dquot_item.c +++ b/fs/xfs/xfs_dquot_item.c @@ -347,7 +347,8 @@ xfs_qm_qoff_logitem_init( { struct xfs_qoff_logitem *qf; - qf = kmem_zalloc(sizeof(struct xfs_qoff_logitem), 0); + qf = kzalloc(sizeof(struct xfs_qoff_logitem), + GFP_KERNEL | __GFP_NOFAIL); xfs_log_item_init(mp, &qf->qql_item, XFS_LI_QUOTAOFF, start ? 
&xfs_qm_qoffend_logitem_ops : &xfs_qm_qoff_logitem_ops); diff --git a/fs/xfs/xfs_error.c b/fs/xfs/xfs_error.c index 51dd1f43d12f..51ca07eed4f3 100644 --- a/fs/xfs/xfs_error.c +++ b/fs/xfs/xfs_error.c @@ -212,8 +212,8 @@ int xfs_errortag_init( struct xfs_mount *mp) { - mp->m_errortag = kmem_zalloc(sizeof(unsigned int) * XFS_ERRTAG_MAX, - KM_MAYFAIL); + mp->m_errortag = kzalloc(sizeof(unsigned int) * XFS_ERRTAG_MAX, + GFP_KERNEL | __GFP_RETRY_MAYFAIL); if (!mp->m_errortag) return -ENOMEM; diff --git a/fs/xfs/xfs_extent_busy.c b/fs/xfs/xfs_extent_busy.c index 3991e59cfd18..76422684449c 100644 --- a/fs/xfs/xfs_extent_busy.c +++ b/fs/xfs/xfs_extent_busy.c @@ -33,7 +33,8 @@ xfs_extent_busy_insert( struct rb_node **rbp; struct rb_node *parent = NULL; - new = kmem_zalloc(sizeof(struct xfs_extent_busy), 0); + new = kzalloc(sizeof(struct xfs_extent_busy), + GFP_KERNEL | __GFP_NOFAIL); new->agno = agno; new->bno = bno; new->length = len; diff --git a/fs/xfs/xfs_extfree_item.c b/fs/xfs/xfs_extfree_item.c index 49ce6d6c4bb9..f8f0efe42513 100644 --- a/fs/xfs/xfs_extfree_item.c +++ b/fs/xfs/xfs_extfree_item.c @@ -163,7 +163,7 @@ xfs_efi_init( if (nextents > XFS_EFI_MAX_FAST_EXTENTS) { size = (uint)(sizeof(xfs_efi_log_item_t) + ((nextents - 1) * sizeof(xfs_extent_t))); - efip = kmem_zalloc(size, 0); + efip = kzalloc(size, GFP_KERNEL | __GFP_NOFAIL); } else { efip = kmem_cache_zalloc(xfs_efi_zone, GFP_KERNEL | __GFP_NOFAIL); @@ -333,9 +333,9 @@ xfs_trans_get_efd( ASSERT(nextents > 0); if (nextents > XFS_EFD_MAX_FAST_EXTENTS) { - efdp = kmem_zalloc(sizeof(struct xfs_efd_log_item) + + efdp = kzalloc(sizeof(struct xfs_efd_log_item) + (nextents - 1) * sizeof(struct xfs_extent), - 0); + GFP_KERNEL | __GFP_NOFAIL); } else { efdp = kmem_cache_zalloc(xfs_efd_zone, GFP_KERNEL | __GFP_NOFAIL); diff --git a/fs/xfs/xfs_inode.c b/fs/xfs/xfs_inode.c index a92d4521748d..8a67e97ecbfc 100644 --- a/fs/xfs/xfs_inode.c +++ b/fs/xfs/xfs_inode.c @@ -2024,7 +2024,7 @@ xfs_iunlink_add_backref( if (XFS_TEST_ERROR(false, pag->pag_mount, XFS_ERRTAG_IUNLINK_FALLBACK)) return 0; - iu = kmem_zalloc(sizeof(*iu), KM_NOFS); + iu = kzalloc(sizeof(*iu), GFP_NOFS | __GFP_NOFAIL); iu->iu_agino = prev_agino; iu->iu_next_unlinked = this_agino; diff --git a/fs/xfs/xfs_itable.c b/fs/xfs/xfs_itable.c index 884950adbd16..b9b78874e60d 100644 --- a/fs/xfs/xfs_itable.c +++ b/fs/xfs/xfs_itable.c @@ -168,8 +168,8 @@ xfs_bulkstat_one( ASSERT(breq->icount == 1); - bc.buf = kmem_zalloc(sizeof(struct xfs_bulkstat), - KM_MAYFAIL); + bc.buf = kzalloc(sizeof(struct xfs_bulkstat), + GFP_KERNEL | __GFP_RETRY_MAYFAIL); if (!bc.buf) return -ENOMEM; @@ -242,8 +242,8 @@ xfs_bulkstat( if (xfs_bulkstat_already_done(breq->mp, breq->startino)) return 0; - bc.buf = kmem_zalloc(sizeof(struct xfs_bulkstat), - KM_MAYFAIL); + bc.buf = kzalloc(sizeof(struct xfs_bulkstat), + GFP_KERNEL | __GFP_RETRY_MAYFAIL); if (!bc.buf) return -ENOMEM; diff --git a/fs/xfs/xfs_iwalk.c b/fs/xfs/xfs_iwalk.c index aa375cf53021..c812b14af3bb 100644 --- a/fs/xfs/xfs_iwalk.c +++ b/fs/xfs/xfs_iwalk.c @@ -616,7 +616,8 @@ xfs_iwalk_threaded( if (xfs_pwork_ctl_want_abort(&pctl)) break; - iwag = kmem_zalloc(sizeof(struct xfs_iwalk_ag), 0); + iwag = kzalloc(sizeof(struct xfs_iwalk_ag), + GFP_KERNEL | __GFP_NOFAIL); iwag->mp = mp; iwag->iwalk_fn = iwalk_fn; iwag->data = data; diff --git a/fs/xfs/xfs_log.c b/fs/xfs/xfs_log.c index 30447bd477d2..28e82d5d5943 100644 --- a/fs/xfs/xfs_log.c +++ b/fs/xfs/xfs_log.c @@ -1412,7 +1412,7 @@ xlog_alloc_log( int error = -ENOMEM; uint log2_size = 0; - log = 
kmem_zalloc(sizeof(struct xlog), KM_MAYFAIL); + log = kzalloc(sizeof(struct xlog), GFP_KERNEL | __GFP_RETRY_MAYFAIL); if (!log) { xfs_warn(mp, "Log allocation failed: No memory!"); goto out; @@ -1482,7 +1482,8 @@ xlog_alloc_log( size_t bvec_size = howmany(log->l_iclog_size, PAGE_SIZE) * sizeof(struct bio_vec); - iclog = kmem_zalloc(sizeof(*iclog) + bvec_size, KM_MAYFAIL); + iclog = kzalloc(sizeof(*iclog) + bvec_size, + GFP_KERNEL | __GFP_RETRY_MAYFAIL); if (!iclog) goto out_free_iclog; diff --git a/fs/xfs/xfs_log_cil.c b/fs/xfs/xfs_log_cil.c index 630c2482c8f1..aa1b923f7293 100644 --- a/fs/xfs/xfs_log_cil.c +++ b/fs/xfs/xfs_log_cil.c @@ -660,7 +660,7 @@ xlog_cil_push( if (!cil) return 0; - new_ctx = kmem_zalloc(sizeof(*new_ctx), KM_NOFS); + new_ctx = kzalloc(sizeof(*new_ctx), GFP_NOFS | __GFP_NOFAIL); new_ctx->ticket = xlog_cil_ticket_alloc(log); down_write(&cil->xc_ctx_lock); @@ -1179,11 +1179,11 @@ xlog_cil_init( struct xfs_cil *cil; struct xfs_cil_ctx *ctx; - cil = kmem_zalloc(sizeof(*cil), KM_MAYFAIL); + cil = kzalloc(sizeof(*cil), GFP_KERNEL | __GFP_RETRY_MAYFAIL); if (!cil) return -ENOMEM; - ctx = kmem_zalloc(sizeof(*ctx), KM_MAYFAIL); + ctx = kzalloc(sizeof(*ctx), GFP_KERNEL | __GFP_RETRY_MAYFAIL); if (!ctx) { kmem_free(cil); return -ENOMEM; diff --git a/fs/xfs/xfs_log_recover.c b/fs/xfs/xfs_log_recover.c index 02f2147952b3..bc5c0aef051c 100644 --- a/fs/xfs/xfs_log_recover.c +++ b/fs/xfs/xfs_log_recover.c @@ -4171,7 +4171,7 @@ xlog_recover_add_item( { xlog_recover_item_t *item; - item = kmem_zalloc(sizeof(xlog_recover_item_t), 0); + item = kzalloc(sizeof(xlog_recover_item_t), GFP_KERNEL | __GFP_NOFAIL); INIT_LIST_HEAD(&item->ri_list); list_add_tail(&item->ri_list, head); } @@ -4298,8 +4298,8 @@ xlog_recover_add_to_trans( item->ri_total = in_f->ilf_size; item->ri_buf = - kmem_zalloc(item->ri_total * sizeof(xfs_log_iovec_t), - 0); + kzalloc(item->ri_total * sizeof(xfs_log_iovec_t), + GFP_KERNEL | __GFP_NOFAIL); } if (item->ri_total <= item->ri_cnt) { @@ -4442,7 +4442,7 @@ xlog_recover_ophdr_to_trans( * This is a new transaction so allocate a new recovery container to * hold the recovery ops that will follow. */ - trans = kmem_zalloc(sizeof(struct xlog_recover), 0); + trans = kzalloc(sizeof(struct xlog_recover), GFP_KERNEL | __GFP_NOFAIL); trans->r_log_tid = tid; trans->r_lsn = be64_to_cpu(rhead->h_lsn); INIT_LIST_HEAD(&trans->r_itemq); @@ -5561,9 +5561,9 @@ xlog_do_log_recovery( * First do a pass to find all of the cancelled buf log items. * Store them in the buf_cancel_table for use in the second pass. 
*/ - log->l_buf_cancel_table = kmem_zalloc(XLOG_BC_TABLE_SIZE * + log->l_buf_cancel_table = kzalloc(XLOG_BC_TABLE_SIZE * sizeof(struct list_head), - 0); + GFP_KERNEL | __GFP_NOFAIL); for (i = 0; i < XLOG_BC_TABLE_SIZE; i++) INIT_LIST_HEAD(&log->l_buf_cancel_table[i]); diff --git a/fs/xfs/xfs_mount.c b/fs/xfs/xfs_mount.c index 5ea95247a37f..91a5354f20fb 100644 --- a/fs/xfs/xfs_mount.c +++ b/fs/xfs/xfs_mount.c @@ -194,7 +194,8 @@ xfs_initialize_perag( continue; } - pag = kmem_zalloc(sizeof(*pag), KM_MAYFAIL); + pag = kzalloc(sizeof(*pag), + GFP_KERNEL | __GFP_RETRY_MAYFAIL); if (!pag) goto out_unwind_new_pags; pag->pag_agno = index; diff --git a/fs/xfs/xfs_mru_cache.c b/fs/xfs/xfs_mru_cache.c index a06661dac5be..d281db58934e 100644 --- a/fs/xfs/xfs_mru_cache.c +++ b/fs/xfs/xfs_mru_cache.c @@ -333,12 +333,13 @@ xfs_mru_cache_create( if (!(grp_time = msecs_to_jiffies(lifetime_ms) / grp_count)) return -EINVAL; - if (!(mru = kmem_zalloc(sizeof(*mru), 0))) + if (!(mru = kzalloc(sizeof(*mru), GFP_KERNEL | __GFP_NOFAIL))) return -ENOMEM; /* An extra list is needed to avoid reaping up to a grp_time early. */ mru->grp_count = grp_count + 1; - mru->lists = kmem_zalloc(mru->grp_count * sizeof(*mru->lists), 0); + mru->lists = kzalloc(mru->grp_count * sizeof(*mru->lists), + GFP_KERNEL | __GFP_NOFAIL); if (!mru->lists) { err = -ENOMEM; diff --git a/fs/xfs/xfs_qm.c b/fs/xfs/xfs_qm.c index 66ea8e4fca86..771f695d8092 100644 --- a/fs/xfs/xfs_qm.c +++ b/fs/xfs/xfs_qm.c @@ -643,7 +643,8 @@ xfs_qm_init_quotainfo( ASSERT(XFS_IS_QUOTA_RUNNING(mp)); - qinf = mp->m_quotainfo = kmem_zalloc(sizeof(xfs_quotainfo_t), 0); + qinf = mp->m_quotainfo = kzalloc(sizeof(xfs_quotainfo_t), + GFP_KERNEL | __GFP_NOFAIL); error = list_lru_init(&qinf->qi_lru); if (error) diff --git a/fs/xfs/xfs_refcount_item.c b/fs/xfs/xfs_refcount_item.c index a242bc9874a6..37e46a908784 100644 --- a/fs/xfs/xfs_refcount_item.c +++ b/fs/xfs/xfs_refcount_item.c @@ -143,8 +143,8 @@ xfs_cui_init( ASSERT(nextents > 0); if (nextents > XFS_CUI_MAX_FAST_EXTENTS) - cuip = kmem_zalloc(xfs_cui_log_item_sizeof(nextents), - 0); + cuip = kzalloc(xfs_cui_log_item_sizeof(nextents), + GFP_KERNEL | __GFP_NOFAIL); else cuip = kmem_cache_zalloc(xfs_cui_zone, GFP_KERNEL | __GFP_NOFAIL); diff --git a/fs/xfs/xfs_rmap_item.c b/fs/xfs/xfs_rmap_item.c index 857cc78dc440..e7ae8f99305c 100644 --- a/fs/xfs/xfs_rmap_item.c +++ b/fs/xfs/xfs_rmap_item.c @@ -142,7 +142,8 @@ xfs_rui_init( ASSERT(nextents > 0); if (nextents > XFS_RUI_MAX_FAST_EXTENTS) - ruip = kmem_zalloc(xfs_rui_log_item_sizeof(nextents), 0); + ruip = kzalloc(xfs_rui_log_item_sizeof(nextents), + GFP_KERNEL | __GFP_NOFAIL); else ruip = kmem_cache_zalloc(xfs_rui_zone, GFP_KERNEL | __GFP_NOFAIL); diff --git a/fs/xfs/xfs_trans_ail.c b/fs/xfs/xfs_trans_ail.c index 00cc5b8734be..d8ef4fa033eb 100644 --- a/fs/xfs/xfs_trans_ail.c +++ b/fs/xfs/xfs_trans_ail.c @@ -824,7 +824,8 @@ xfs_trans_ail_init( { struct xfs_ail *ailp; - ailp = kmem_zalloc(sizeof(struct xfs_ail), KM_MAYFAIL); + ailp = kzalloc(sizeof(struct xfs_ail), + GFP_KERNEL | __GFP_RETRY_MAYFAIL); if (!ailp) return -ENOMEM; From patchwork Wed Nov 13 14:23:31 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Carlos Maiolino X-Patchwork-Id: 11241997 Return-Path: Received: from mail.kernel.org (pdx-korg-mail-1.web.codeaurora.org [172.30.200.123]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id E4F9B16B1 for ; Wed, 13 Nov 2019 14:23:56 +0000 (UTC) Received: from vger.kernel.org 
(vger.kernel.org [209.132.180.67]) by mail.kernel.org (Postfix) with ESMTP id C6F6A222D3 for ; Wed, 13 Nov 2019 14:23:56 +0000 (UTC) Authentication-Results: mail.kernel.org; dkim=pass (1024-bit key) header.d=redhat.com header.i=@redhat.com header.b="hZPr/9Sc" Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1727573AbfKMOX4 (ORCPT ); Wed, 13 Nov 2019 09:23:56 -0500 Received: from us-smtp-2.mimecast.com ([207.211.31.81]:34412 "EHLO us-smtp-delivery-1.mimecast.com" rhost-flags-OK-OK-OK-FAIL) by vger.kernel.org with ESMTP id S1727542AbfKMOX4 (ORCPT ); Wed, 13 Nov 2019 09:23:56 -0500 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com; s=mimecast20190719; t=1573655034; h=from:from:reply-to:subject:subject:date:date:message-id:message-id: to:to:cc:mime-version:mime-version:content-type:content-type: content-transfer-encoding:content-transfer-encoding: in-reply-to:in-reply-to:references:references; bh=Towf53ptrEqnsIv0OY2h1+es/G4e+qZyfojxBLUewys=; b=hZPr/9ScoiqikWhFfanl/PAPEidewV6fopYvi6AMDhKSGc/NRKH6cPVeV2/3OTf/U9DUcU erX7qCI8iFq79l8ZKRDMveSGyx+qHmF9DC9+CMzD6KYdolF9UBX8rSOw1tQLXK5EyNOArH QdgcDaPd6VE182tbPhuQ9qtOTO1kxkw= Received: from mimecast-mx01.redhat.com (mimecast-mx01.redhat.com [209.132.183.4]) (Using TLS) by relay.mimecast.com with ESMTP id us-mta-137-xi4g4ES-MM-K4iwg198jJA-1; Wed, 13 Nov 2019 09:23:52 -0500 Received: from smtp.corp.redhat.com (int-mx06.intmail.prod.int.phx2.redhat.com [10.5.11.16]) (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits)) (No client certificate requested) by mimecast-mx01.redhat.com (Postfix) with ESMTPS id E14A1102CB91 for ; Wed, 13 Nov 2019 14:23:51 +0000 (UTC) Received: from orion.redhat.com (ovpn-204-203.brq.redhat.com [10.40.204.203]) by smtp.corp.redhat.com (Postfix) with ESMTP id 470D94D9E1 for ; Wed, 13 Nov 2019 14:23:51 +0000 (UTC) From: Carlos Maiolino To: linux-xfs@vger.kernel.org Subject: [PATCH 07/11] xfs: Remove kmem_realloc Date: Wed, 13 Nov 2019 15:23:31 +0100 Message-Id: <20191113142335.1045631-8-cmaiolino@redhat.com> In-Reply-To: <20191113142335.1045631-1-cmaiolino@redhat.com> References: <20191113142335.1045631-1-cmaiolino@redhat.com> MIME-Version: 1.0 X-Scanned-By: MIMEDefang 2.79 on 10.5.11.16 X-MC-Unique: xi4g4ES-MM-K4iwg198jJA-1 X-Mimecast-Spam-Score: 0 Sender: linux-xfs-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: linux-xfs@vger.kernel.org We can use krealloc() with __GFP_NOFAIL directly Signed-off-by: Carlos Maiolino Reviewed-by: Darrick J. 
Wong --- fs/xfs/kmem.c | 22 ---------------------- fs/xfs/kmem.h | 1 - fs/xfs/libxfs/xfs_iext_tree.c | 2 +- fs/xfs/libxfs/xfs_inode_fork.c | 8 ++++---- fs/xfs/xfs_log_recover.c | 2 +- fs/xfs/xfs_mount.c | 4 ++-- 6 files changed, 8 insertions(+), 31 deletions(-) diff --git a/fs/xfs/kmem.c b/fs/xfs/kmem.c index 2644fdaa0549..6e10e565632c 100644 --- a/fs/xfs/kmem.c +++ b/fs/xfs/kmem.c @@ -93,25 +93,3 @@ kmem_alloc_large(size_t size, xfs_km_flags_t flags) return ptr; return __kmem_vmalloc(size, flags); } - -void * -kmem_realloc(const void *old, size_t newsize, xfs_km_flags_t flags) -{ - int retries = 0; - gfp_t lflags = kmem_flags_convert(flags); - void *ptr; - - trace_kmem_realloc(newsize, flags, _RET_IP_); - - do { - ptr = krealloc(old, newsize, lflags); - if (ptr || (flags & KM_MAYFAIL)) - return ptr; - if (!(++retries % 100)) - xfs_err(NULL, - "%s(%u) possible memory allocation deadlock size %zu in %s (mode:0x%x)", - current->comm, current->pid, - newsize, __func__, lflags); - congestion_wait(BLK_RW_ASYNC, HZ/50); - } while (1); -} diff --git a/fs/xfs/kmem.h b/fs/xfs/kmem.h index 46c8c5546674..18b62eee3177 100644 --- a/fs/xfs/kmem.h +++ b/fs/xfs/kmem.h @@ -55,7 +55,6 @@ kmem_flags_convert(xfs_km_flags_t flags) extern void *kmem_alloc(size_t, xfs_km_flags_t); extern void *kmem_alloc_io(size_t size, int align_mask, xfs_km_flags_t flags); extern void *kmem_alloc_large(size_t size, xfs_km_flags_t); -extern void *kmem_realloc(const void *, size_t, xfs_km_flags_t); static inline void kmem_free(const void *ptr) { kvfree(ptr); diff --git a/fs/xfs/libxfs/xfs_iext_tree.c b/fs/xfs/libxfs/xfs_iext_tree.c index f2005671e86c..a929ea0b09b7 100644 --- a/fs/xfs/libxfs/xfs_iext_tree.c +++ b/fs/xfs/libxfs/xfs_iext_tree.c @@ -607,7 +607,7 @@ xfs_iext_realloc_root( if (new_size / sizeof(struct xfs_iext_rec) == RECS_PER_LEAF) new_size = NODE_SIZE; - new = kmem_realloc(ifp->if_u1.if_root, new_size, KM_NOFS); + new = krealloc(ifp->if_u1.if_root, new_size, GFP_NOFS | __GFP_NOFAIL); memset(new + ifp->if_bytes, 0, new_size - ifp->if_bytes); ifp->if_u1.if_root = new; cur->leaf = new; diff --git a/fs/xfs/libxfs/xfs_inode_fork.c b/fs/xfs/libxfs/xfs_inode_fork.c index 2bffaa31d62a..34c336f45796 100644 --- a/fs/xfs/libxfs/xfs_inode_fork.c +++ b/fs/xfs/libxfs/xfs_inode_fork.c @@ -387,8 +387,8 @@ xfs_iroot_realloc( cur_max = xfs_bmbt_maxrecs(mp, ifp->if_broot_bytes, 0); new_max = cur_max + rec_diff; new_size = XFS_BMAP_BROOT_SPACE_CALC(mp, new_max); - ifp->if_broot = kmem_realloc(ifp->if_broot, new_size, - KM_NOFS); + ifp->if_broot = krealloc(ifp->if_broot, new_size, + GFP_NOFS | __GFP_NOFAIL); op = (char *)XFS_BMAP_BROOT_PTR_ADDR(mp, ifp->if_broot, 1, ifp->if_broot_bytes); np = (char *)XFS_BMAP_BROOT_PTR_ADDR(mp, ifp->if_broot, 1, @@ -497,8 +497,8 @@ xfs_idata_realloc( * in size so that it can be logged and stay on word boundaries. * We enforce that here. 
*/ - ifp->if_u1.if_data = kmem_realloc(ifp->if_u1.if_data, - roundup(new_size, 4), KM_NOFS); + ifp->if_u1.if_data = krealloc(ifp->if_u1.if_data, roundup(new_size, 4), + GFP_NOFS | __GFP_NOFAIL); ifp->if_bytes = new_size; } diff --git a/fs/xfs/xfs_log_recover.c b/fs/xfs/xfs_log_recover.c index bc5c0aef051c..a7f1dcecc640 100644 --- a/fs/xfs/xfs_log_recover.c +++ b/fs/xfs/xfs_log_recover.c @@ -4211,7 +4211,7 @@ xlog_recover_add_to_cont_trans( old_ptr = item->ri_buf[item->ri_cnt-1].i_addr; old_len = item->ri_buf[item->ri_cnt-1].i_len; - ptr = kmem_realloc(old_ptr, len + old_len, 0); + ptr = krealloc(old_ptr, len + old_len, GFP_KERNEL | __GFP_NOFAIL); memcpy(&ptr[old_len], dp, len); item->ri_buf[item->ri_cnt-1].i_len += len; item->ri_buf[item->ri_cnt-1].i_addr = ptr; diff --git a/fs/xfs/xfs_mount.c b/fs/xfs/xfs_mount.c index 91a5354f20fb..a14046314c1f 100644 --- a/fs/xfs/xfs_mount.c +++ b/fs/xfs/xfs_mount.c @@ -80,9 +80,9 @@ xfs_uuid_mount( } if (hole < 0) { - xfs_uuid_table = kmem_realloc(xfs_uuid_table, + xfs_uuid_table = krealloc(xfs_uuid_table, (xfs_uuid_table_size + 1) * sizeof(*xfs_uuid_table), - 0); + GFP_KERNEL | __GFP_NOFAIL); hole = xfs_uuid_table_size++; } xfs_uuid_table[hole] = *uuid; From patchwork Wed Nov 13 14:23:32 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Carlos Maiolino X-Patchwork-Id: 11242001 Return-Path: Received: from mail.kernel.org (pdx-korg-mail-1.web.codeaurora.org [172.30.200.123]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id 5252B18B6 for ; Wed, 13 Nov 2019 14:23:57 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [209.132.180.67]) by mail.kernel.org (Postfix) with ESMTP id 2A21C2245D for ; Wed, 13 Nov 2019 14:23:57 +0000 (UTC) Authentication-Results: mail.kernel.org; dkim=pass (1024-bit key) header.d=redhat.com header.i=@redhat.com header.b="bTHQfwrl" Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1727450AbfKMOX4 (ORCPT ); Wed, 13 Nov 2019 09:23:56 -0500 Received: from us-smtp-2.mimecast.com ([207.211.31.81]:43806 "EHLO us-smtp-1.mimecast.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1727559AbfKMOX4 (ORCPT ); Wed, 13 Nov 2019 09:23:56 -0500 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com; s=mimecast20190719; t=1573655035; h=from:from:reply-to:subject:subject:date:date:message-id:message-id: to:to:cc:mime-version:mime-version:content-type:content-type: content-transfer-encoding:content-transfer-encoding: in-reply-to:in-reply-to:references:references; bh=yNuaejvAsY3pweHbuPpRZ0zJIDhB/9GPCWtubPEEAfo=; b=bTHQfwrl1vNDyOTuhb7VWXcc5TG3e5zJ9xDyMZivsooPsSwL2YQUC9SurBx6yySqpqxLc5 hd1MXwTcFhOSH9RU7v4POGf8hr+pLrrNR0aUpdUWvhzK0Nv6Osxx9mOx7kFJ17BAkJ5ogs Ti1XUxzVQ67gSc5SoMpGSXUkVAxXwZg= Received: from mimecast-mx01.redhat.com (mimecast-mx01.redhat.com [209.132.183.4]) (Using TLS) by relay.mimecast.com with ESMTP id us-mta-11-se8IeDVTOxSdCPYshm99Yg-1; Wed, 13 Nov 2019 09:23:54 -0500 Received: from smtp.corp.redhat.com (int-mx06.intmail.prod.int.phx2.redhat.com [10.5.11.16]) (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits)) (No client certificate requested) by mimecast-mx01.redhat.com (Postfix) with ESMTPS id E86AB19248A8 for ; Wed, 13 Nov 2019 14:23:52 +0000 (UTC) Received: from orion.redhat.com (ovpn-204-203.brq.redhat.com [10.40.204.203]) by smtp.corp.redhat.com (Postfix) with ESMTP id 4C4B54D9E1 for ; Wed, 13 Nov 2019 14:23:52 +0000 (UTC) From: Carlos Maiolino To: linux-xfs@vger.kernel.org 
Subject: [PATCH 08/11] xfs: Convert kmem_alloc() users Date: Wed, 13 Nov 2019 15:23:32 +0100 Message-Id: <20191113142335.1045631-9-cmaiolino@redhat.com> In-Reply-To: <20191113142335.1045631-1-cmaiolino@redhat.com> References: <20191113142335.1045631-1-cmaiolino@redhat.com> MIME-Version: 1.0 X-Scanned-By: MIMEDefang 2.79 on 10.5.11.16 X-MC-Unique: se8IeDVTOxSdCPYshm99Yg-1 X-Mimecast-Spam-Score: 0 Sender: linux-xfs-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: linux-xfs@vger.kernel.org Use kmalloc() directly. kmem_alloc_io() and kmem_alloc_large() still have use for kmem_alloc(), due their fallback to vmalloc() and also the alignment check. But for that, there is no need to export kmem_alloc() to the whole XFS driver, so, also convert kmem_alloc() into a static, local function __kmem_alloc(). Signed-off-by: Carlos Maiolino Reviewed-by: Darrick J. Wong --- fs/xfs/kmem.c | 8 ++++---- fs/xfs/kmem.h | 1 - fs/xfs/libxfs/xfs_attr_leaf.c | 6 +++--- fs/xfs/libxfs/xfs_bmap.c | 2 +- fs/xfs/libxfs/xfs_da_btree.c | 4 +++- fs/xfs/libxfs/xfs_defer.c | 4 ++-- fs/xfs/libxfs/xfs_dir2.c | 2 +- fs/xfs/libxfs/xfs_dir2_block.c | 2 +- fs/xfs/libxfs/xfs_dir2_sf.c | 8 ++++---- fs/xfs/libxfs/xfs_inode_fork.c | 10 ++++++---- fs/xfs/libxfs/xfs_refcount.c | 9 +++++---- fs/xfs/libxfs/xfs_rmap.c | 2 +- fs/xfs/scrub/bitmap.c | 7 ++++--- fs/xfs/scrub/btree.c | 4 ++-- fs/xfs/scrub/refcount.c | 4 ++-- fs/xfs/xfs_attr_inactive.c | 2 +- fs/xfs/xfs_attr_list.c | 2 +- fs/xfs/xfs_buf.c | 5 +++-- fs/xfs/xfs_filestream.c | 2 +- fs/xfs/xfs_inode.c | 2 +- fs/xfs/xfs_iwalk.c | 2 +- fs/xfs/xfs_log_recover.c | 7 ++++--- fs/xfs/xfs_qm.c | 3 ++- fs/xfs/xfs_rtalloc.c | 2 +- fs/xfs/xfs_super.c | 2 +- 25 files changed, 55 insertions(+), 47 deletions(-) diff --git a/fs/xfs/kmem.c b/fs/xfs/kmem.c index 6e10e565632c..79467813d810 100644 --- a/fs/xfs/kmem.c +++ b/fs/xfs/kmem.c @@ -8,8 +8,8 @@ #include "xfs_message.h" #include "xfs_trace.h" -void * -kmem_alloc(size_t size, xfs_km_flags_t flags) +static void * +__kmem_alloc(size_t size, xfs_km_flags_t flags) { int retries = 0; gfp_t lflags = kmem_flags_convert(flags); @@ -72,7 +72,7 @@ kmem_alloc_io(size_t size, int align_mask, xfs_km_flags_t flags) if (WARN_ON_ONCE(align_mask >= PAGE_SIZE)) align_mask = PAGE_SIZE - 1; - ptr = kmem_alloc(size, flags | KM_MAYFAIL); + ptr = __kmem_alloc(size, flags | KM_MAYFAIL); if (ptr) { if (!((uintptr_t)ptr & align_mask)) return ptr; @@ -88,7 +88,7 @@ kmem_alloc_large(size_t size, xfs_km_flags_t flags) trace_kmem_alloc_large(size, flags, _RET_IP_); - ptr = kmem_alloc(size, flags | KM_MAYFAIL); + ptr = __kmem_alloc(size, flags | KM_MAYFAIL); if (ptr) return ptr; return __kmem_vmalloc(size, flags); diff --git a/fs/xfs/kmem.h b/fs/xfs/kmem.h index 18b62eee3177..29d02c71fb22 100644 --- a/fs/xfs/kmem.h +++ b/fs/xfs/kmem.h @@ -52,7 +52,6 @@ kmem_flags_convert(xfs_km_flags_t flags) return lflags; } -extern void *kmem_alloc(size_t, xfs_km_flags_t); extern void *kmem_alloc_io(size_t size, int align_mask, xfs_km_flags_t flags); extern void *kmem_alloc_large(size_t size, xfs_km_flags_t); static inline void kmem_free(const void *ptr) diff --git a/fs/xfs/libxfs/xfs_attr_leaf.c b/fs/xfs/libxfs/xfs_attr_leaf.c index 9f54e59f4004..e78cba993eae 100644 --- a/fs/xfs/libxfs/xfs_attr_leaf.c +++ b/fs/xfs/libxfs/xfs_attr_leaf.c @@ -885,7 +885,7 @@ xfs_attr_shortform_to_leaf( ifp = dp->i_afp; sf = (xfs_attr_shortform_t *)ifp->if_u1.if_data; size = be16_to_cpu(sf->hdr.totsize); - tmpbuffer = kmem_alloc(size, 0); + tmpbuffer = kmalloc(size, GFP_KERNEL | __GFP_NOFAIL); 
ASSERT(tmpbuffer != NULL); memcpy(tmpbuffer, ifp->if_u1.if_data, size); sf = (xfs_attr_shortform_t *)tmpbuffer; @@ -1073,7 +1073,7 @@ xfs_attr3_leaf_to_shortform( trace_xfs_attr_leaf_to_sf(args); - tmpbuffer = kmem_alloc(args->geo->blksize, 0); + tmpbuffer = kmalloc(args->geo->blksize, GFP_KERNEL | __GFP_NOFAIL); if (!tmpbuffer) return -ENOMEM; @@ -1534,7 +1534,7 @@ xfs_attr3_leaf_compact( trace_xfs_attr_leaf_compact(args); - tmpbuffer = kmem_alloc(args->geo->blksize, 0); + tmpbuffer = kmalloc(args->geo->blksize, GFP_KERNEL | __GFP_NOFAIL); memcpy(tmpbuffer, bp->b_addr, args->geo->blksize); memset(bp->b_addr, 0, args->geo->blksize); leaf_src = (xfs_attr_leafblock_t *)tmpbuffer; diff --git a/fs/xfs/libxfs/xfs_bmap.c b/fs/xfs/libxfs/xfs_bmap.c index 37596e49b92e..fc5bed95bd44 100644 --- a/fs/xfs/libxfs/xfs_bmap.c +++ b/fs/xfs/libxfs/xfs_bmap.c @@ -6045,7 +6045,7 @@ __xfs_bmap_add( bmap->br_blockcount, bmap->br_state); - bi = kmem_alloc(sizeof(struct xfs_bmap_intent), KM_NOFS); + bi = kmalloc(sizeof(struct xfs_bmap_intent), GFP_NOFS | __GFP_NOFAIL); INIT_LIST_HEAD(&bi->bi_list); bi->bi_type = type; bi->bi_owner = ip; diff --git a/fs/xfs/libxfs/xfs_da_btree.c b/fs/xfs/libxfs/xfs_da_btree.c index dbd2434e68b5..d1211153e7d9 100644 --- a/fs/xfs/libxfs/xfs_da_btree.c +++ b/fs/xfs/libxfs/xfs_da_btree.c @@ -2152,7 +2152,9 @@ xfs_da_grow_inode_int( * If we didn't get it and the block might work if fragmented, * try without the CONTIG flag. Loop until we get it all. */ - mapp = kmem_alloc(sizeof(*mapp) * count, 0); + mapp = kmalloc(sizeof(*mapp) * count, + GFP_KERNEL | __GFP_NOFAIL); + for (b = *bno, mapi = 0; b < *bno + count; ) { nmap = min(XFS_BMAP_MAX_NMAP, count); c = (int)(*bno + count - b); diff --git a/fs/xfs/libxfs/xfs_defer.c b/fs/xfs/libxfs/xfs_defer.c index 22557527cfdb..24f71a59462f 100644 --- a/fs/xfs/libxfs/xfs_defer.c +++ b/fs/xfs/libxfs/xfs_defer.c @@ -516,8 +516,8 @@ xfs_defer_add( dfp = NULL; } if (!dfp) { - dfp = kmem_alloc(sizeof(struct xfs_defer_pending), - KM_NOFS); + dfp = kmalloc(sizeof(struct xfs_defer_pending), + GFP_NOFS | __GFP_NOFAIL); dfp->dfp_type = type; dfp->dfp_intent = NULL; dfp->dfp_done = NULL; diff --git a/fs/xfs/libxfs/xfs_dir2.c b/fs/xfs/libxfs/xfs_dir2.c index 67172e376e1d..2606d3070cba 100644 --- a/fs/xfs/libxfs/xfs_dir2.c +++ b/fs/xfs/libxfs/xfs_dir2.c @@ -331,7 +331,7 @@ xfs_dir_cilookup_result( !(args->op_flags & XFS_DA_OP_CILOOKUP)) return -EEXIST; - args->value = kmem_alloc(len, KM_NOFS | KM_MAYFAIL); + args->value = kmalloc(len, GFP_NOFS | __GFP_RETRY_MAYFAIL); if (!args->value) return -ENOMEM; diff --git a/fs/xfs/libxfs/xfs_dir2_block.c b/fs/xfs/libxfs/xfs_dir2_block.c index 358151ddfa75..c90d2d001815 100644 --- a/fs/xfs/libxfs/xfs_dir2_block.c +++ b/fs/xfs/libxfs/xfs_dir2_block.c @@ -1083,7 +1083,7 @@ xfs_dir2_sf_to_block( * Copy the directory into a temporary buffer. * Then pitch the incore inode data so we can make extents. */ - sfp = kmem_alloc(ifp->if_bytes, 0); + sfp = kmalloc(ifp->if_bytes, GFP_KERNEL | __GFP_NOFAIL); memcpy(sfp, oldsfp, ifp->if_bytes); xfs_idata_realloc(dp, -ifp->if_bytes, XFS_DATA_FORK); diff --git a/fs/xfs/libxfs/xfs_dir2_sf.c b/fs/xfs/libxfs/xfs_dir2_sf.c index db1a82972d9e..bf38294ba785 100644 --- a/fs/xfs/libxfs/xfs_dir2_sf.c +++ b/fs/xfs/libxfs/xfs_dir2_sf.c @@ -276,7 +276,7 @@ xfs_dir2_block_to_sf( * format the data into. Once we have formatted the data, we can free * the block and copy the formatted data into the inode literal area. 
*/ - sfp = kmem_alloc(mp->m_sb.sb_inodesize, 0); + sfp = kmalloc(mp->m_sb.sb_inodesize, GFP_KERNEL | __GFP_NOFAIL); memcpy(sfp, sfhp, xfs_dir2_sf_hdr_size(sfhp->i8count)); /* @@ -530,7 +530,7 @@ xfs_dir2_sf_addname_hard( */ sfp = (xfs_dir2_sf_hdr_t *)dp->i_df.if_u1.if_data; old_isize = (int)dp->i_d.di_size; - buf = kmem_alloc(old_isize, 0); + buf = kmalloc(old_isize, GFP_KERNEL | __GFP_NOFAIL); oldsfp = (xfs_dir2_sf_hdr_t *)buf; memcpy(oldsfp, sfp, old_isize); /* @@ -1162,7 +1162,7 @@ xfs_dir2_sf_toino4( * Don't want xfs_idata_realloc copying the data here. */ oldsize = dp->i_df.if_bytes; - buf = kmem_alloc(oldsize, 0); + buf = kmalloc(oldsize, GFP_KERNEL | __GFP_NOFAIL); oldsfp = (xfs_dir2_sf_hdr_t *)dp->i_df.if_u1.if_data; ASSERT(oldsfp->i8count == 1); memcpy(buf, oldsfp, oldsize); @@ -1235,7 +1235,7 @@ xfs_dir2_sf_toino8( * Don't want xfs_idata_realloc copying the data here. */ oldsize = dp->i_df.if_bytes; - buf = kmem_alloc(oldsize, 0); + buf = kmalloc(oldsize, GFP_KERNEL | __GFP_NOFAIL); oldsfp = (xfs_dir2_sf_hdr_t *)dp->i_df.if_u1.if_data; ASSERT(oldsfp->i8count == 0); memcpy(buf, oldsfp, oldsize); diff --git a/fs/xfs/libxfs/xfs_inode_fork.c b/fs/xfs/libxfs/xfs_inode_fork.c index 34c336f45796..1e4c93cde07e 100644 --- a/fs/xfs/libxfs/xfs_inode_fork.c +++ b/fs/xfs/libxfs/xfs_inode_fork.c @@ -153,7 +153,8 @@ xfs_init_local_fork( if (size) { real_size = roundup(mem_size, 4); - ifp->if_u1.if_data = kmem_alloc(real_size, KM_NOFS); + ifp->if_u1.if_data = kmalloc(real_size, + GFP_NOFS | __GFP_NOFAIL); memcpy(ifp->if_u1.if_data, data, size); if (zero_terminate) ifp->if_u1.if_data[size] = '\0'; @@ -308,7 +309,7 @@ xfs_iformat_btree( } ifp->if_broot_bytes = size; - ifp->if_broot = kmem_alloc(size, KM_NOFS); + ifp->if_broot = kmalloc(size, GFP_NOFS | __GFP_NOFAIL); ASSERT(ifp->if_broot != NULL); /* * Copy and convert from the on-disk structure @@ -373,7 +374,8 @@ xfs_iroot_realloc( */ if (ifp->if_broot_bytes == 0) { new_size = XFS_BMAP_BROOT_SPACE_CALC(mp, rec_diff); - ifp->if_broot = kmem_alloc(new_size, KM_NOFS); + ifp->if_broot = kmalloc(new_size, + GFP_NOFS | __GFP_NOFAIL); ifp->if_broot_bytes = (int)new_size; return; } @@ -414,7 +416,7 @@ xfs_iroot_realloc( else new_size = 0; if (new_size > 0) { - new_broot = kmem_alloc(new_size, KM_NOFS); + new_broot = kmalloc(new_size, GFP_NOFS | __GFP_NOFAIL); /* * First copy over the btree block header. */ diff --git a/fs/xfs/libxfs/xfs_refcount.c b/fs/xfs/libxfs/xfs_refcount.c index 78236bd6c64f..5b76d6bbfa58 100644 --- a/fs/xfs/libxfs/xfs_refcount.c +++ b/fs/xfs/libxfs/xfs_refcount.c @@ -1188,8 +1188,8 @@ __xfs_refcount_add( type, XFS_FSB_TO_AGBNO(tp->t_mountp, startblock), blockcount); - ri = kmem_alloc(sizeof(struct xfs_refcount_intent), - KM_NOFS); + ri = kmalloc(sizeof(struct xfs_refcount_intent), + GFP_NOFS | __GFP_NOFAIL); INIT_LIST_HEAD(&ri->ri_list); ri->ri_type = type; ri->ri_startblock = startblock; @@ -1584,7 +1584,7 @@ struct xfs_refcount_recovery { /* Stuff an extent on the recovery list. 
*/ STATIC int xfs_refcount_recover_extent( - struct xfs_btree_cur *cur, + struct xfs_btree_cur *cur, union xfs_btree_rec *rec, void *priv) { @@ -1596,7 +1596,8 @@ xfs_refcount_recover_extent( return -EFSCORRUPTED; } - rr = kmem_alloc(sizeof(struct xfs_refcount_recovery), 0); + rr = kmalloc(sizeof(struct xfs_refcount_recovery), + GFP_KERNEL | __GFP_NOFAIL); xfs_refcount_btrec_to_irec(rec, &rr->rr_rrec); list_add_tail(&rr->rr_list, debris); diff --git a/fs/xfs/libxfs/xfs_rmap.c b/fs/xfs/libxfs/xfs_rmap.c index 38e9414878b3..0e1e8cbb8862 100644 --- a/fs/xfs/libxfs/xfs_rmap.c +++ b/fs/xfs/libxfs/xfs_rmap.c @@ -2286,7 +2286,7 @@ __xfs_rmap_add( bmap->br_blockcount, bmap->br_state); - ri = kmem_alloc(sizeof(struct xfs_rmap_intent), KM_NOFS); + ri = kmalloc(sizeof(struct xfs_rmap_intent), GFP_NOFS | __GFP_NOFAIL); INIT_LIST_HEAD(&ri->ri_list); ri->ri_type = type; ri->ri_owner = owner; diff --git a/fs/xfs/scrub/bitmap.c b/fs/xfs/scrub/bitmap.c index 18a684e18a69..37aaab8cca7f 100644 --- a/fs/xfs/scrub/bitmap.c +++ b/fs/xfs/scrub/bitmap.c @@ -25,7 +25,8 @@ xfs_bitmap_set( { struct xfs_bitmap_range *bmr; - bmr = kmem_alloc(sizeof(struct xfs_bitmap_range), KM_MAYFAIL); + bmr = kmalloc(sizeof(struct xfs_bitmap_range), + GFP_KERNEL | __GFP_RETRY_MAYFAIL); if (!bmr) return -ENOMEM; @@ -181,8 +182,8 @@ xfs_bitmap_disunion( * Deleting from the middle: add the new right extent * and then shrink the left extent. */ - new_br = kmem_alloc(sizeof(struct xfs_bitmap_range), - KM_MAYFAIL); + new_br = kmalloc(sizeof(struct xfs_bitmap_range), + GFP_KERNEL | __GFP_RETRY_MAYFAIL); if (!new_br) { error = -ENOMEM; goto out; diff --git a/fs/xfs/scrub/btree.c b/fs/xfs/scrub/btree.c index f52a7b8256f9..93c2371d128b 100644 --- a/fs/xfs/scrub/btree.c +++ b/fs/xfs/scrub/btree.c @@ -429,8 +429,8 @@ xchk_btree_check_owner( * later scanning. */ if (cur->bc_btnum == XFS_BTNUM_BNO || cur->bc_btnum == XFS_BTNUM_RMAP) { - co = kmem_alloc(sizeof(struct check_owner), - KM_MAYFAIL); + co = kmalloc(sizeof(struct check_owner), + GFP_KERNEL | __GFP_RETRY_MAYFAIL); if (!co) return -ENOMEM; co->level = level; diff --git a/fs/xfs/scrub/refcount.c b/fs/xfs/scrub/refcount.c index 0cab11a5d390..468b739b90b5 100644 --- a/fs/xfs/scrub/refcount.c +++ b/fs/xfs/scrub/refcount.c @@ -125,8 +125,8 @@ xchk_refcountbt_rmap_check( * is healthy each rmap_irec we see will be in agbno order * so we don't need insertion sort here. */ - frag = kmem_alloc(sizeof(struct xchk_refcnt_frag), - KM_MAYFAIL); + frag = kmalloc(sizeof(struct xchk_refcnt_frag), + GFP_KERNEL | __GFP_RETRY_MAYFAIL); if (!frag) return -ENOMEM; memcpy(&frag->rm, rec, sizeof(frag->rm)); diff --git a/fs/xfs/xfs_attr_inactive.c b/fs/xfs/xfs_attr_inactive.c index a78c501f6fb1..ac0931919999 100644 --- a/fs/xfs/xfs_attr_inactive.c +++ b/fs/xfs/xfs_attr_inactive.c @@ -148,7 +148,7 @@ xfs_attr3_leaf_inactive( * Allocate storage for a list of all the "remote" value extents. */ size = count * sizeof(xfs_attr_inactive_list_t); - list = kmem_alloc(size, 0); + list = kmalloc(size, GFP_KERNEL | __GFP_NOFAIL); /* * Identify each of the "remote" value extents. diff --git a/fs/xfs/xfs_attr_list.c b/fs/xfs/xfs_attr_list.c index 0ec6606149a2..1b39bbff113e 100644 --- a/fs/xfs/xfs_attr_list.c +++ b/fs/xfs/xfs_attr_list.c @@ -116,7 +116,7 @@ xfs_attr_shortform_list( * It didn't all fit, so we have to sort everything on hashval. 
*/ sbsize = sf->hdr.count * sizeof(*sbuf); - sbp = sbuf = kmem_alloc(sbsize, KM_NOFS); + sbp = sbuf = kmalloc(sbsize, GFP_NOFS | __GFP_NOFAIL); /* * Scan the attribute list for the rest of the entries, storing diff --git a/fs/xfs/xfs_buf.c b/fs/xfs/xfs_buf.c index e2a7eac03d04..8a0cc7593212 100644 --- a/fs/xfs/xfs_buf.c +++ b/fs/xfs/xfs_buf.c @@ -274,8 +274,9 @@ _xfs_buf_get_pages( if (page_count <= XB_PAGES) { bp->b_pages = bp->b_page_array; } else { - bp->b_pages = kmem_alloc(sizeof(struct page *) * - page_count, KM_NOFS); + bp->b_pages = kmalloc(sizeof(struct page *) * + page_count, + GFP_NOFS | __GFP_NOFAIL); if (bp->b_pages == NULL) return -ENOMEM; } diff --git a/fs/xfs/xfs_filestream.c b/fs/xfs/xfs_filestream.c index 2ae356775f63..0a4bd510e631 100644 --- a/fs/xfs/xfs_filestream.c +++ b/fs/xfs/xfs_filestream.c @@ -247,7 +247,7 @@ xfs_filestream_pick_ag( return 0; err = -ENOMEM; - item = kmem_alloc(sizeof(*item), KM_MAYFAIL); + item = kmalloc(sizeof(*item), GFP_KERNEL | __GFP_RETRY_MAYFAIL); if (!item) goto out_put_ag; diff --git a/fs/xfs/xfs_inode.c b/fs/xfs/xfs_inode.c index 8a67e97ecbfc..48d162b0c254 100644 --- a/fs/xfs/xfs_inode.c +++ b/fs/xfs/xfs_inode.c @@ -3493,7 +3493,7 @@ xfs_iflush_cluster( pag = xfs_perag_get(mp, XFS_INO_TO_AGNO(mp, ip->i_ino)); cilist_size = igeo->inodes_per_cluster * sizeof(struct xfs_inode *); - cilist = kmem_alloc(cilist_size, KM_MAYFAIL|KM_NOFS); + cilist = kmalloc(cilist_size, GFP_NOFS | __GFP_RETRY_MAYFAIL); if (!cilist) goto out_put; diff --git a/fs/xfs/xfs_iwalk.c b/fs/xfs/xfs_iwalk.c index c812b14af3bb..d6b93a8ee1dc 100644 --- a/fs/xfs/xfs_iwalk.c +++ b/fs/xfs/xfs_iwalk.c @@ -152,7 +152,7 @@ xfs_iwalk_alloc( /* Allocate a prefetch buffer for inobt records. */ size = iwag->sz_recs * sizeof(struct xfs_inobt_rec_incore); - iwag->recs = kmem_alloc(size, KM_MAYFAIL); + iwag->recs = kmalloc(size, GFP_KERNEL | __GFP_RETRY_MAYFAIL); if (iwag->recs == NULL) return -ENOMEM; diff --git a/fs/xfs/xfs_log_recover.c b/fs/xfs/xfs_log_recover.c index a7f1dcecc640..d46240152518 100644 --- a/fs/xfs/xfs_log_recover.c +++ b/fs/xfs/xfs_log_recover.c @@ -1962,7 +1962,7 @@ xlog_recover_buffer_pass1( } } - bcp = kmem_alloc(sizeof(struct xfs_buf_cancel), 0); + bcp = kmalloc(sizeof(struct xfs_buf_cancel), GFP_KERNEL | __GFP_NOFAIL); bcp->bc_blkno = buf_f->blf_blkno; bcp->bc_len = buf_f->blf_len; bcp->bc_refcount = 1; @@ -2932,7 +2932,8 @@ xlog_recover_inode_pass2( if (item->ri_buf[0].i_len == sizeof(struct xfs_inode_log_format)) { in_f = item->ri_buf[0].i_addr; } else { - in_f = kmem_alloc(sizeof(struct xfs_inode_log_format), 0); + in_f = kmalloc(sizeof(struct xfs_inode_log_format), + GFP_KERNEL | __GFP_NOFAIL); need_free = 1; error = xfs_inode_item_format_convert(&item->ri_buf[0], in_f); if (error) @@ -4271,7 +4272,7 @@ xlog_recover_add_to_trans( return 0; } - ptr = kmem_alloc(len, 0); + ptr = kmalloc(len, GFP_KERNEL | __GFP_NOFAIL); memcpy(ptr, dp, len); in_f = (struct xfs_inode_log_format *)ptr; diff --git a/fs/xfs/xfs_qm.c b/fs/xfs/xfs_qm.c index 771f695d8092..ce0c1dddb784 100644 --- a/fs/xfs/xfs_qm.c +++ b/fs/xfs/xfs_qm.c @@ -988,7 +988,8 @@ xfs_qm_reset_dqcounts_buf( if (qip->i_d.di_nblocks == 0) return 0; - map = kmem_alloc(XFS_DQITER_MAP_SIZE * sizeof(*map), 0); + map = kmalloc(XFS_DQITER_MAP_SIZE * sizeof(*map), + GFP_KERNEL | __GFP_NOFAIL); lblkno = 0; maxlblkcnt = XFS_B_TO_FSB(mp, mp->m_super->s_maxbytes); diff --git a/fs/xfs/xfs_rtalloc.c b/fs/xfs/xfs_rtalloc.c index d42b5a2047e0..1875484123d7 100644 --- a/fs/xfs/xfs_rtalloc.c +++ b/fs/xfs/xfs_rtalloc.c @@ -962,7 
+962,7 @@ xfs_growfs_rt( /* * Allocate a new (fake) mount/sb. */ - nmp = kmem_alloc(sizeof(*nmp), 0); + nmp = kmalloc(sizeof(*nmp), GFP_KERNEL | __GFP_NOFAIL); /* * Loop over the bitmap blocks. * We will do everything one bitmap block at a time. diff --git a/fs/xfs/xfs_super.c b/fs/xfs/xfs_super.c index d9ae27ddf253..c6c423f76447 100644 --- a/fs/xfs/xfs_super.c +++ b/fs/xfs/xfs_super.c @@ -1739,7 +1739,7 @@ static int xfs_init_fs_context( { struct xfs_mount *mp; - mp = kmem_alloc(sizeof(struct xfs_mount), KM_ZERO); + mp = kzalloc(sizeof(struct xfs_mount), GFP_KERNEL | __GFP_NOFAIL); if (!mp) return -ENOMEM; From patchwork Wed Nov 13 14:23:33 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Carlos Maiolino X-Patchwork-Id: 11242005 Return-Path: Received: from mail.kernel.org (pdx-korg-mail-1.web.codeaurora.org [172.30.200.123]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id 936E91390 for ; Wed, 13 Nov 2019 14:23:59 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [209.132.180.67]) by mail.kernel.org (Postfix) with ESMTP id 6AA99222C9 for ; Wed, 13 Nov 2019 14:23:59 +0000 (UTC) Authentication-Results: mail.kernel.org; dkim=pass (1024-bit key) header.d=redhat.com header.i=@redhat.com header.b="ZENO1lWj" Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1727606AbfKMOX6 (ORCPT ); Wed, 13 Nov 2019 09:23:58 -0500 Received: from us-smtp-1.mimecast.com ([207.211.31.81]:55051 "EHLO us-smtp-delivery-1.mimecast.com" rhost-flags-OK-OK-OK-FAIL) by vger.kernel.org with ESMTP id S1727578AbfKMOX6 (ORCPT ); Wed, 13 Nov 2019 09:23:58 -0500 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com; s=mimecast20190719; t=1573655037; h=from:from:reply-to:subject:subject:date:date:message-id:message-id: to:to:cc:mime-version:mime-version:content-type:content-type: content-transfer-encoding:content-transfer-encoding: in-reply-to:in-reply-to:references:references; bh=yLuo1Do5wfqp2XuwnOkr6ZHc0JTGJJKIP4rxk5RSiro=; b=ZENO1lWj0m0UWtIlc0xO54OnjBB+nvkPlkoND5QqDzZcnbVwS6Cq/3hylGRAVCrx4TWJFx ZaeCVZ+AgrzLXgylCORAQMRuEwiDgrVjH6x4bSB1/FY4ZJRcixu+0pJoIlLqZR2riHEOO0 rPd28VmxVoWsppSMTRnVYAkXI93tmV8= Received: from mimecast-mx01.redhat.com (mimecast-mx01.redhat.com [209.132.183.4]) (Using TLS) by relay.mimecast.com with ESMTP id us-mta-401-sXwqSnNtN0eoPFt6iKLPNA-1; Wed, 13 Nov 2019 09:23:54 -0500 Received: from smtp.corp.redhat.com (int-mx06.intmail.prod.int.phx2.redhat.com [10.5.11.16]) (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits)) (No client certificate requested) by mimecast-mx01.redhat.com (Postfix) with ESMTPS id EF1E7102CB91 for ; Wed, 13 Nov 2019 14:23:53 +0000 (UTC) Received: from orion.redhat.com (ovpn-204-203.brq.redhat.com [10.40.204.203]) by smtp.corp.redhat.com (Postfix) with ESMTP id 51C794D9E1 for ; Wed, 13 Nov 2019 14:23:53 +0000 (UTC) From: Carlos Maiolino To: linux-xfs@vger.kernel.org Subject: [PATCH 09/11] xfs: rework kmem_alloc_{io,large} to use GFP_* flags Date: Wed, 13 Nov 2019 15:23:33 +0100 Message-Id: <20191113142335.1045631-10-cmaiolino@redhat.com> In-Reply-To: <20191113142335.1045631-1-cmaiolino@redhat.com> References: <20191113142335.1045631-1-cmaiolino@redhat.com> MIME-Version: 1.0 X-Scanned-By: MIMEDefang 2.79 on 10.5.11.16 X-MC-Unique: sXwqSnNtN0eoPFt6iKLPNA-1 X-Mimecast-Spam-Score: 0 Sender: linux-xfs-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: linux-xfs@vger.kernel.org Pass slab flags directly to these functions Signed-off-by: 
Carlos Maiolino --- fs/xfs/kmem.c | 60 ++++------------------------------- fs/xfs/kmem.h | 8 ++--- fs/xfs/libxfs/xfs_attr_leaf.c | 2 +- fs/xfs/scrub/attr.c | 8 ++--- fs/xfs/scrub/attr.h | 3 +- fs/xfs/xfs_buf.c | 7 ++-- fs/xfs/xfs_log.c | 2 +- fs/xfs/xfs_log_cil.c | 2 +- fs/xfs/xfs_log_recover.c | 3 +- 9 files changed, 22 insertions(+), 73 deletions(-) diff --git a/fs/xfs/kmem.c b/fs/xfs/kmem.c index 79467813d810..44145293cfc9 100644 --- a/fs/xfs/kmem.c +++ b/fs/xfs/kmem.c @@ -8,54 +8,6 @@ #include "xfs_message.h" #include "xfs_trace.h" -static void * -__kmem_alloc(size_t size, xfs_km_flags_t flags) -{ - int retries = 0; - gfp_t lflags = kmem_flags_convert(flags); - void *ptr; - - trace_kmem_alloc(size, flags, _RET_IP_); - - do { - ptr = kmalloc(size, lflags); - if (ptr || (flags & KM_MAYFAIL)) - return ptr; - if (!(++retries % 100)) - xfs_err(NULL, - "%s(%u) possible memory allocation deadlock size %u in %s (mode:0x%x)", - current->comm, current->pid, - (unsigned int)size, __func__, lflags); - congestion_wait(BLK_RW_ASYNC, HZ/50); - } while (1); -} - - -/* - * __vmalloc() will allocate data pages and auxiliary structures (e.g. - * pagetables) with GFP_KERNEL, yet we may be under GFP_NOFS context here. Hence - * we need to tell memory reclaim that we are in such a context via - * PF_MEMALLOC_NOFS to prevent memory reclaim re-entering the filesystem here - * and potentially deadlocking. - */ -static void * -__kmem_vmalloc(size_t size, xfs_km_flags_t flags) -{ - unsigned nofs_flag = 0; - void *ptr; - gfp_t lflags = kmem_flags_convert(flags); - - if (flags & KM_NOFS) - nofs_flag = memalloc_nofs_save(); - - ptr = __vmalloc(size, lflags, PAGE_KERNEL); - - if (flags & KM_NOFS) - memalloc_nofs_restore(nofs_flag); - - return ptr; -} - /* * Same as kmem_alloc_large, except we guarantee the buffer returned is aligned * to the @align_mask. We only guarantee alignment up to page size, we'll clamp @@ -63,7 +15,7 @@ __kmem_vmalloc(size_t size, xfs_km_flags_t flags) * aligned region. 
*/ void * -kmem_alloc_io(size_t size, int align_mask, xfs_km_flags_t flags) +kmem_alloc_io(size_t size, int align_mask, gfp_t flags) { void *ptr; @@ -72,24 +24,24 @@ kmem_alloc_io(size_t size, int align_mask, xfs_km_flags_t flags) if (WARN_ON_ONCE(align_mask >= PAGE_SIZE)) align_mask = PAGE_SIZE - 1; - ptr = __kmem_alloc(size, flags | KM_MAYFAIL); + ptr = kmalloc(size, flags | __GFP_RETRY_MAYFAIL); if (ptr) { if (!((uintptr_t)ptr & align_mask)) return ptr; kfree(ptr); } - return __kmem_vmalloc(size, flags); + return __vmalloc(size, flags | __GFP_NOFAIL, PAGE_KERNEL); } void * -kmem_alloc_large(size_t size, xfs_km_flags_t flags) +kmem_alloc_large(size_t size, gfp_t flags) { void *ptr; trace_kmem_alloc_large(size, flags, _RET_IP_); - ptr = __kmem_alloc(size, flags | KM_MAYFAIL); + ptr = kmalloc(size, flags | __GFP_RETRY_MAYFAIL); if (ptr) return ptr; - return __kmem_vmalloc(size, flags); + return __vmalloc(size, flags | __GFP_NOFAIL, PAGE_KERNEL); } diff --git a/fs/xfs/kmem.h b/fs/xfs/kmem.h index 29d02c71fb22..9249323567ce 100644 --- a/fs/xfs/kmem.h +++ b/fs/xfs/kmem.h @@ -52,8 +52,8 @@ kmem_flags_convert(xfs_km_flags_t flags) return lflags; } -extern void *kmem_alloc_io(size_t size, int align_mask, xfs_km_flags_t flags); -extern void *kmem_alloc_large(size_t size, xfs_km_flags_t); +extern void *kmem_alloc_io(size_t size, int align_mask, gfp_t flags); +extern void *kmem_alloc_large(size_t size, gfp_t); static inline void kmem_free(const void *ptr) { kvfree(ptr); @@ -61,9 +61,9 @@ static inline void kmem_free(const void *ptr) static inline void * -kmem_zalloc_large(size_t size, xfs_km_flags_t flags) +kmem_zalloc_large(size_t size, gfp_t flags) { - return kmem_alloc_large(size, flags | KM_ZERO); + return kmem_alloc_large(size, flags | __GFP_ZERO); } /* diff --git a/fs/xfs/libxfs/xfs_attr_leaf.c b/fs/xfs/libxfs/xfs_attr_leaf.c index e78cba993eae..d3f872460ea6 100644 --- a/fs/xfs/libxfs/xfs_attr_leaf.c +++ b/fs/xfs/libxfs/xfs_attr_leaf.c @@ -479,7 +479,7 @@ xfs_attr_copy_value( } if (args->op_flags & XFS_DA_OP_ALLOCVAL) { - args->value = kmem_alloc_large(valuelen, 0); + args->value = kmem_alloc_large(valuelen, GFP_KERNEL); if (!args->value) return -ENOMEM; } diff --git a/fs/xfs/scrub/attr.c b/fs/xfs/scrub/attr.c index d9f0dd444b80..bc09c46f4ff2 100644 --- a/fs/xfs/scrub/attr.c +++ b/fs/xfs/scrub/attr.c @@ -29,7 +29,7 @@ int xchk_setup_xattr_buf( struct xfs_scrub *sc, size_t value_size, - xfs_km_flags_t flags) + gfp_t flags) { size_t sz; struct xchk_xattr_buf *ab = sc->buf; @@ -80,7 +80,7 @@ xchk_setup_xattr( * without the inode lock held, which means we can sleep. */ if (sc->flags & XCHK_TRY_HARDER) { - error = xchk_setup_xattr_buf(sc, XATTR_SIZE_MAX, 0); + error = xchk_setup_xattr_buf(sc, XATTR_SIZE_MAX, GFP_KERNEL); if (error) return error; } @@ -139,7 +139,7 @@ xchk_xattr_listent( * doesn't work, we overload the seen_enough variable to convey * the error message back to the main scrub function. */ - error = xchk_setup_xattr_buf(sx->sc, valuelen, KM_MAYFAIL); + error = xchk_setup_xattr_buf(sx->sc, valuelen, GFP_KERNEL); if (error == -ENOMEM) error = -EDEADLOCK; if (error) { @@ -324,7 +324,7 @@ xchk_xattr_block( return 0; /* Allocate memory for block usage checking. 
*/ - error = xchk_setup_xattr_buf(ds->sc, 0, KM_MAYFAIL); + error = xchk_setup_xattr_buf(ds->sc, 0, GFP_KERNEL); if (error == -ENOMEM) return -EDEADLOCK; if (error) diff --git a/fs/xfs/scrub/attr.h b/fs/xfs/scrub/attr.h index 13a1d2e8424d..2c27a82574cb 100644 --- a/fs/xfs/scrub/attr.h +++ b/fs/xfs/scrub/attr.h @@ -65,7 +65,6 @@ xchk_xattr_dstmap( BITS_TO_LONGS(sc->mp->m_attr_geo->blksize); } -int xchk_setup_xattr_buf(struct xfs_scrub *sc, size_t value_size, - xfs_km_flags_t flags); +int xchk_setup_xattr_buf(struct xfs_scrub *sc, size_t value_size, gfp_t flags); #endif /* __XFS_SCRUB_ATTR_H__ */ diff --git a/fs/xfs/xfs_buf.c b/fs/xfs/xfs_buf.c index 8a0cc7593212..678e024f7f1c 100644 --- a/fs/xfs/xfs_buf.c +++ b/fs/xfs/xfs_buf.c @@ -346,15 +346,12 @@ xfs_buf_allocate_memory( unsigned short page_count, i; xfs_off_t start, end; int error; - xfs_km_flags_t kmflag_mask = 0; /* * assure zeroed buffer for non-read cases. */ - if (!(flags & XBF_READ)) { - kmflag_mask |= KM_ZERO; + if (!(flags & XBF_READ)) gfp_mask |= __GFP_ZERO; - } /* * for buffers that are contained within a single page, just allocate @@ -365,7 +362,7 @@ xfs_buf_allocate_memory( if (size < PAGE_SIZE) { int align_mask = xfs_buftarg_dma_alignment(bp->b_target); bp->b_addr = kmem_alloc_io(size, align_mask, - KM_NOFS | kmflag_mask); + GFP_NOFS | __GFP_ZERO); if (!bp->b_addr) { /* low memory - use alloc_page loop instead */ goto use_alloc_page; diff --git a/fs/xfs/xfs_log.c b/fs/xfs/xfs_log.c index 28e82d5d5943..dd65fdabf50e 100644 --- a/fs/xfs/xfs_log.c +++ b/fs/xfs/xfs_log.c @@ -1492,7 +1492,7 @@ xlog_alloc_log( prev_iclog = iclog; iclog->ic_data = kmem_alloc_io(log->l_iclog_size, align_mask, - KM_MAYFAIL | KM_ZERO); + GFP_KERNEL | __GFP_ZERO); if (!iclog->ic_data) goto out_free_iclog; #ifdef DEBUG diff --git a/fs/xfs/xfs_log_cil.c b/fs/xfs/xfs_log_cil.c index aa1b923f7293..9250b6b2f0fd 100644 --- a/fs/xfs/xfs_log_cil.c +++ b/fs/xfs/xfs_log_cil.c @@ -186,7 +186,7 @@ xlog_cil_alloc_shadow_bufs( */ kmem_free(lip->li_lv_shadow); - lv = kmem_alloc_large(buf_size, KM_NOFS); + lv = kmem_alloc_large(buf_size, GFP_NOFS); memset(lv, 0, xlog_cil_iovec_space(niovecs)); lv->lv_item = lip; diff --git a/fs/xfs/xfs_log_recover.c b/fs/xfs/xfs_log_recover.c index d46240152518..76b99ebdfcd9 100644 --- a/fs/xfs/xfs_log_recover.c +++ b/fs/xfs/xfs_log_recover.c @@ -127,7 +127,8 @@ xlog_alloc_buffer( if (nbblks > 1 && log->l_sectBBsize > 1) nbblks += log->l_sectBBsize; nbblks = round_up(nbblks, log->l_sectBBsize); - return kmem_alloc_io(BBTOB(nbblks), align_mask, KM_MAYFAIL | KM_ZERO); + return kmem_alloc_io(BBTOB(nbblks), align_mask, + GFP_KERNEL | __GFP_ZERO); } /* From patchwork Wed Nov 13 14:23:34 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Carlos Maiolino X-Patchwork-Id: 11242003 Return-Path: Received: from mail.kernel.org (pdx-korg-mail-1.web.codeaurora.org [172.30.200.123]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id B9DA21390 for ; Wed, 13 Nov 2019 14:23:58 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [209.132.180.67]) by mail.kernel.org (Postfix) with ESMTP id 9B1FA222D3 for ; Wed, 13 Nov 2019 14:23:58 +0000 (UTC) Authentication-Results: mail.kernel.org; dkim=pass (1024-bit key) header.d=redhat.com header.i=@redhat.com header.b="X2kmhAME" Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1727608AbfKMOX6 (ORCPT ); Wed, 13 Nov 2019 09:23:58 -0500 Received: from us-smtp-delivery-1.mimecast.com 
([205.139.110.120]:46298 "EHLO us-smtp-1.mimecast.com" rhost-flags-OK-OK-OK-FAIL) by vger.kernel.org with ESMTP id S1727559AbfKMOX5 (ORCPT ); Wed, 13 Nov 2019 09:23:57 -0500 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com; s=mimecast20190719; t=1573655037; h=from:from:reply-to:subject:subject:date:date:message-id:message-id: to:to:cc:mime-version:mime-version:content-type:content-type: content-transfer-encoding:content-transfer-encoding: in-reply-to:in-reply-to:references:references; bh=zdCt0DF4Oq8T2gUoVXbx4ynJe9R0exXFbbZMgjxSZKI=; b=X2kmhAMEf556cA+PG5QRL44ghphsATzFgf92bKm/cAAHN/SawfWVzA0pE1o7tX3KM9iCgw 2gHhv6JH0WIxGER0ETY/S2W23eT6hS+7qpjL53DuO70jlaovSujmijZRADu8m+7z17ljMY U4ff++d5v5CMaFTD3ruCg1SrNZ58OUY= Received: from mimecast-mx01.redhat.com (mimecast-mx01.redhat.com [209.132.183.4]) (Using TLS) by relay.mimecast.com with ESMTP id us-mta-262-cAxSJrIPO7KsinIl3MzEuA-1; Wed, 13 Nov 2019 09:23:55 -0500 Received: from smtp.corp.redhat.com (int-mx06.intmail.prod.int.phx2.redhat.com [10.5.11.16]) (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits)) (No client certificate requested) by mimecast-mx01.redhat.com (Postfix) with ESMTPS id F317C1345C1 for ; Wed, 13 Nov 2019 14:23:54 +0000 (UTC) Received: from orion.redhat.com (ovpn-204-203.brq.redhat.com [10.40.204.203]) by smtp.corp.redhat.com (Postfix) with ESMTP id 5874D4D9E1 for ; Wed, 13 Nov 2019 14:23:54 +0000 (UTC) From: Carlos Maiolino To: linux-xfs@vger.kernel.org Subject: [PATCH 10/11] xfs: Remove KM_* flags Date: Wed, 13 Nov 2019 15:23:34 +0100 Message-Id: <20191113142335.1045631-11-cmaiolino@redhat.com> In-Reply-To: <20191113142335.1045631-1-cmaiolino@redhat.com> References: <20191113142335.1045631-1-cmaiolino@redhat.com> MIME-Version: 1.0 X-Scanned-By: MIMEDefang 2.79 on 10.5.11.16 X-MC-Unique: cAxSJrIPO7KsinIl3MzEuA-1 X-Mimecast-Spam-Score: 0 Sender: linux-xfs-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: linux-xfs@vger.kernel.org We now use slab flags directly, so get rid of KM_flags and the kmem_flags_convert() function. Signed-off-by: Carlos Maiolino Reviewed-by: Darrick J. Wong --- fs/xfs/kmem.h | 37 ------------------------------------- 1 file changed, 37 deletions(-) diff --git a/fs/xfs/kmem.h b/fs/xfs/kmem.h index 9249323567ce..791e770be0eb 100644 --- a/fs/xfs/kmem.h +++ b/fs/xfs/kmem.h @@ -15,43 +15,6 @@ * General memory allocation interfaces */ -typedef unsigned __bitwise xfs_km_flags_t; -#define KM_NOFS ((__force xfs_km_flags_t)0x0004u) -#define KM_MAYFAIL ((__force xfs_km_flags_t)0x0008u) -#define KM_ZERO ((__force xfs_km_flags_t)0x0010u) - -/* - * We use a special process flag to avoid recursive callbacks into - * the filesystem during transactions. We will also issue our own - * warnings, so we explicitly skip any generic ones (silly of us). - */ -static inline gfp_t -kmem_flags_convert(xfs_km_flags_t flags) -{ - gfp_t lflags; - - BUG_ON(flags & ~(KM_NOFS|KM_MAYFAIL|KM_ZERO)); - - lflags = GFP_KERNEL | __GFP_NOWARN; - if (flags & KM_NOFS) - lflags &= ~__GFP_FS; - - /* - * Default page/slab allocator behavior is to retry for ever - * for small allocations. We can override this behavior by using - * __GFP_RETRY_MAYFAIL which will tell the allocator to retry as long - * as it is feasible but rather fail than retry forever for all - * request sizes. 
- */ - if (flags & KM_MAYFAIL) - lflags |= __GFP_RETRY_MAYFAIL; - - if (flags & KM_ZERO) - lflags |= __GFP_ZERO; - - return lflags; -} - extern void *kmem_alloc_io(size_t size, int align_mask, gfp_t flags); extern void *kmem_alloc_large(size_t size, gfp_t); static inline void kmem_free(const void *ptr) From patchwork Wed Nov 13 14:23:35 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Carlos Maiolino X-Patchwork-Id: 11242009 Return-Path: Received: from mail.kernel.org (pdx-korg-mail-1.web.codeaurora.org [172.30.200.123]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id 05F1D16B1 for ; Wed, 13 Nov 2019 14:24:01 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [209.132.180.67]) by mail.kernel.org (Postfix) with ESMTP id DB201222D3 for ; Wed, 13 Nov 2019 14:24:00 +0000 (UTC) Authentication-Results: mail.kernel.org; dkim=pass (1024-bit key) header.d=redhat.com header.i=@redhat.com header.b="hleTIj7U" Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1727578AbfKMOYA (ORCPT ); Wed, 13 Nov 2019 09:24:00 -0500 Received: from us-smtp-1.mimecast.com ([207.211.31.81]:58755 "EHLO us-smtp-delivery-1.mimecast.com" rhost-flags-OK-OK-OK-FAIL) by vger.kernel.org with ESMTP id S1727559AbfKMOX7 (ORCPT ); Wed, 13 Nov 2019 09:23:59 -0500 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com; s=mimecast20190719; t=1573655038; h=from:from:reply-to:subject:subject:date:date:message-id:message-id: to:to:cc:mime-version:mime-version:content-type:content-type: content-transfer-encoding:content-transfer-encoding: in-reply-to:in-reply-to:references:references; bh=ZbWC5AKkO5UkYH5atxBlcNswchSvVpcUYcY0S4PPZWQ=; b=hleTIj7UrD0BLQZxKpfVBCNQTI9G1ljE33sDiqJ9Wabzyu1R4UP8yJwNjypyIPUKqv58VO 3mnf3tgVZPetjWh5ozxJQ3HZNUMAexl5r85Q97gcuVjjNgxptiIASoi1BJubmhQUhxodp8 R3+GTGLY1fIGAOYXHGkpr7WubJ82N+o= Received: from mimecast-mx01.redhat.com (mimecast-mx01.redhat.com [209.132.183.4]) (Using TLS) by relay.mimecast.com with ESMTP id us-mta-355-Y3GHNF19PYmIH1yBI_QoVw-1; Wed, 13 Nov 2019 09:23:57 -0500 Received: from smtp.corp.redhat.com (int-mx06.intmail.prod.int.phx2.redhat.com [10.5.11.16]) (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits)) (No client certificate requested) by mimecast-mx01.redhat.com (Postfix) with ESMTPS id 0575C104ED1A for ; Wed, 13 Nov 2019 14:23:56 +0000 (UTC) Received: from orion.redhat.com (ovpn-204-203.brq.redhat.com [10.40.204.203]) by smtp.corp.redhat.com (Postfix) with ESMTP id 5DDED63742 for ; Wed, 13 Nov 2019 14:23:55 +0000 (UTC) From: Carlos Maiolino To: linux-xfs@vger.kernel.org Subject: [PATCH 11/11] xfs: Remove kmem_alloc_{io, large} and kmem_zalloc_large Date: Wed, 13 Nov 2019 15:23:35 +0100 Message-Id: <20191113142335.1045631-12-cmaiolino@redhat.com> In-Reply-To: <20191113142335.1045631-1-cmaiolino@redhat.com> References: <20191113142335.1045631-1-cmaiolino@redhat.com> MIME-Version: 1.0 X-Scanned-By: MIMEDefang 2.79 on 10.5.11.16 X-MC-Unique: Y3GHNF19PYmIH1yBI_QoVw-1 X-Mimecast-Spam-Score: 0 Sender: linux-xfs-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: linux-xfs@vger.kernel.org Getting rid of these functions is a bit more complicated: both use a vmalloc fallback, and the _io version also performs an alignment check, so they still have their uses.
Instead of keeping both of them, I think it makes more sense to share a single function for both cases, given that they serve the same purpose and differ only in the alignment check, which callers can now request with a flag; a short before/after sketch of the call sites follows the diff. Signed-off-by: Carlos Maiolino --- fs/xfs/kmem.c | 39 +++++++++++------------------------ fs/xfs/kmem.h | 10 +-------- fs/xfs/libxfs/xfs_attr_leaf.c | 2 +- fs/xfs/scrub/attr.c | 2 +- fs/xfs/scrub/symlink.c | 3 ++- fs/xfs/xfs_acl.c | 3 ++- fs/xfs/xfs_buf.c | 4 ++-- fs/xfs/xfs_ioctl.c | 8 ++++--- fs/xfs/xfs_ioctl32.c | 3 ++- fs/xfs/xfs_log.c | 5 +++-- fs/xfs/xfs_log_cil.c | 2 +- fs/xfs/xfs_log_recover.c | 4 ++-- fs/xfs/xfs_rtalloc.c | 3 ++- 13 files changed, 36 insertions(+), 52 deletions(-) diff --git a/fs/xfs/kmem.c b/fs/xfs/kmem.c index 44145293cfc9..bb4990970647 100644 --- a/fs/xfs/kmem.c +++ b/fs/xfs/kmem.c @@ -8,40 +8,25 @@ #include "xfs_message.h" #include "xfs_trace.h" -/* - * Same as kmem_alloc_large, except we guarantee the buffer returned is aligned - * to the @align_mask. We only guarantee alignment up to page size, we'll clamp - * alignment at page size if it is larger. vmalloc always returns a PAGE_SIZE - * aligned region. - */ void * -kmem_alloc_io(size_t size, int align_mask, gfp_t flags) +xfs_kmem_alloc(size_t size, gfp_t flags, bool align, int align_mask) { void *ptr; - trace_kmem_alloc_io(size, flags, _RET_IP_); - - if (WARN_ON_ONCE(align_mask >= PAGE_SIZE)) - align_mask = PAGE_SIZE - 1; - ptr = kmalloc(size, flags | __GFP_RETRY_MAYFAIL); if (ptr) { - if (!((uintptr_t)ptr & align_mask)) + if (align) { + trace_kmem_alloc_io(size, flags, _RET_IP_); + if (WARN_ON_ONCE(align_mask >= PAGE_SIZE)) + align_mask = PAGE_SIZE - 1; + + if (!((uintptr_t)ptr & align_mask)) + return ptr; + kfree(ptr); + } else { + trace_kmem_alloc_large(size, flags, _RET_IP_); return ptr; - kfree(ptr); + } } return __vmalloc(size, flags | __GFP_NOFAIL, PAGE_KERNEL); } - -void * -kmem_alloc_large(size_t size, gfp_t flags) -{ - void *ptr; - - trace_kmem_alloc_large(size, flags, _RET_IP_); - - ptr = kmalloc(size, flags | __GFP_RETRY_MAYFAIL); - if (ptr) - return ptr; - return __vmalloc(size, flags | __GFP_NOFAIL, PAGE_KERNEL); -} diff --git a/fs/xfs/kmem.h b/fs/xfs/kmem.h index 791e770be0eb..ee4c0152cdeb 100644 --- a/fs/xfs/kmem.h +++ b/fs/xfs/kmem.h @@ -15,20 +15,12 @@ * General memory allocation interfaces */ -extern void *kmem_alloc_io(size_t size, int align_mask, gfp_t flags); -extern void *kmem_alloc_large(size_t size, gfp_t); +extern void *xfs_kmem_alloc(size_t, gfp_t, bool, int); static inline void kmem_free(const void *ptr) { kvfree(ptr); } - -static inline void * -kmem_zalloc_large(size_t size, gfp_t flags) -{ - return kmem_alloc_large(size, flags | __GFP_ZERO); -} - /* * Zone interfaces */ diff --git a/fs/xfs/libxfs/xfs_attr_leaf.c b/fs/xfs/libxfs/xfs_attr_leaf.c index d3f872460ea6..eeb90f63cf2e 100644 --- a/fs/xfs/libxfs/xfs_attr_leaf.c +++ b/fs/xfs/libxfs/xfs_attr_leaf.c @@ -479,7 +479,7 @@ xfs_attr_copy_value( } if (args->op_flags & XFS_DA_OP_ALLOCVAL) { - args->value = kmem_alloc_large(valuelen, GFP_KERNEL); + args->value = xfs_kmem_alloc(valuelen, GFP_KERNEL, false, 0); if (!args->value) return -ENOMEM; } diff --git a/fs/xfs/scrub/attr.c b/fs/xfs/scrub/attr.c index bc09c46f4ff2..90239b902b47 100644 --- a/fs/xfs/scrub/attr.c +++ b/fs/xfs/scrub/attr.c @@ -57,7 +57,7 @@ xchk_setup_xattr_buf( * Don't zero the buffer upon allocation to avoid runtime overhead. * All users must be careful never to read uninitialized contents.
*/ - ab = kmem_alloc_large(sizeof(*ab) + sz, flags); + ab = xfs_kmem_alloc(sizeof(*ab) + sz, flags, false, 0); if (!ab) return -ENOMEM; diff --git a/fs/xfs/scrub/symlink.c b/fs/xfs/scrub/symlink.c index 5641ae512c9e..78f6d0dd8f2e 100644 --- a/fs/xfs/scrub/symlink.c +++ b/fs/xfs/scrub/symlink.c @@ -22,7 +22,8 @@ xchk_setup_symlink( struct xfs_inode *ip) { /* Allocate the buffer without the inode lock held. */ - sc->buf = kmem_zalloc_large(XFS_SYMLINK_MAXLEN + 1, 0); + sc->buf = xfs_kmem_alloc(XFS_SYMLINK_MAXLEN + 1, + GFP_KERNEL | __GFP_ZERO, false, 0); if (!sc->buf) return -ENOMEM; diff --git a/fs/xfs/xfs_acl.c b/fs/xfs/xfs_acl.c index 91693fce34a8..988598e4e07c 100644 --- a/fs/xfs/xfs_acl.c +++ b/fs/xfs/xfs_acl.c @@ -186,7 +186,8 @@ __xfs_set_acl(struct inode *inode, struct posix_acl *acl, int type) struct xfs_acl *xfs_acl; int len = XFS_ACL_MAX_SIZE(ip->i_mount); - xfs_acl = kmem_zalloc_large(len, 0); + xfs_acl = xfs_kmem_alloc(len, GFP_KERNEL | __GFP_ZERO, + false, 0); if (!xfs_acl) return -ENOMEM; diff --git a/fs/xfs/xfs_buf.c b/fs/xfs/xfs_buf.c index 678e024f7f1c..b36e4c4d3b9a 100644 --- a/fs/xfs/xfs_buf.c +++ b/fs/xfs/xfs_buf.c @@ -361,8 +361,8 @@ xfs_buf_allocate_memory( size = BBTOB(bp->b_length); if (size < PAGE_SIZE) { int align_mask = xfs_buftarg_dma_alignment(bp->b_target); - bp->b_addr = kmem_alloc_io(size, align_mask, - GFP_NOFS | __GFP_ZERO); + bp->b_addr = xfs_kmem_alloc(size, GFP_NOFS | __GFP_ZERO, true, + align_mask); if (!bp->b_addr) { /* low memory - use alloc_page loop instead */ goto use_alloc_page; diff --git a/fs/xfs/xfs_ioctl.c b/fs/xfs/xfs_ioctl.c index 364961c23cd0..72e26b7ac48f 100644 --- a/fs/xfs/xfs_ioctl.c +++ b/fs/xfs/xfs_ioctl.c @@ -398,7 +398,8 @@ xfs_attrlist_by_handle( if (IS_ERR(dentry)) return PTR_ERR(dentry); - kbuf = kmem_zalloc_large(al_hreq.buflen, 0); + kbuf = xfs_kmem_alloc(al_hreq.buflen, GFP_KERNEL | __GFP_ZERO, + false, 0); if (!kbuf) goto out_dput; @@ -436,7 +437,7 @@ xfs_attrmulti_attr_get( if (*len > XFS_XATTR_SIZE_MAX) return -EINVAL; - kbuf = kmem_zalloc_large(*len, 0); + kbuf = xfs_kmem_alloc(*len, GFP_KERNEL | __GFP_ZERO, false, 0); if (!kbuf) return -ENOMEM; @@ -1756,7 +1757,8 @@ xfs_ioc_getbmap( if (bmx.bmv_count > ULONG_MAX / recsize) return -ENOMEM; - buf = kmem_zalloc_large(bmx.bmv_count * sizeof(*buf), 0); + buf = xfs_kmem_alloc(bmx.bmv_count * sizeof(*buf), + GFP_KERNEL | __GFP_ZERO, false, 0); if (!buf) return -ENOMEM; diff --git a/fs/xfs/xfs_ioctl32.c b/fs/xfs/xfs_ioctl32.c index 3c0d518e1039..99886b1ba319 100644 --- a/fs/xfs/xfs_ioctl32.c +++ b/fs/xfs/xfs_ioctl32.c @@ -381,7 +381,8 @@ xfs_compat_attrlist_by_handle( return PTR_ERR(dentry); error = -ENOMEM; - kbuf = kmem_zalloc_large(al_hreq.buflen, 0); + kbuf = xfs_kmem_alloc(al_hreq.buflen, GFP_KERNEL | __GFP_ZERO, + false, 0); if (!kbuf) goto out_dput; diff --git a/fs/xfs/xfs_log.c b/fs/xfs/xfs_log.c index dd65fdabf50e..c5e26080262c 100644 --- a/fs/xfs/xfs_log.c +++ b/fs/xfs/xfs_log.c @@ -1491,8 +1491,9 @@ xlog_alloc_log( iclog->ic_prev = prev_iclog; prev_iclog = iclog; - iclog->ic_data = kmem_alloc_io(log->l_iclog_size, align_mask, - GFP_KERNEL | __GFP_ZERO); + iclog->ic_data = xfs_kmem_alloc(log->l_iclog_size, + GFP_KERNEL | __GFP_ZERO, + true, align_mask); if (!iclog->ic_data) goto out_free_iclog; #ifdef DEBUG diff --git a/fs/xfs/xfs_log_cil.c b/fs/xfs/xfs_log_cil.c index 9250b6b2f0fd..2585dbf653cc 100644 --- a/fs/xfs/xfs_log_cil.c +++ b/fs/xfs/xfs_log_cil.c @@ -186,7 +186,7 @@ xlog_cil_alloc_shadow_bufs( */ kmem_free(lip->li_lv_shadow); - lv = kmem_alloc_large(buf_size, 
GFP_NOFS); + lv = xfs_kmem_alloc(buf_size, GFP_NOFS, false, 0); memset(lv, 0, xlog_cil_iovec_space(niovecs)); lv->lv_item = lip; diff --git a/fs/xfs/xfs_log_recover.c b/fs/xfs/xfs_log_recover.c index 76b99ebdfcd9..3eb23f71a415 100644 --- a/fs/xfs/xfs_log_recover.c +++ b/fs/xfs/xfs_log_recover.c @@ -127,8 +127,8 @@ xlog_alloc_buffer( if (nbblks > 1 && log->l_sectBBsize > 1) nbblks += log->l_sectBBsize; nbblks = round_up(nbblks, log->l_sectBBsize); - return kmem_alloc_io(BBTOB(nbblks), align_mask, - GFP_KERNEL | __GFP_ZERO); + return xfs_kmem_alloc(BBTOB(nbblks), GFP_KERNEL | __GFP_ZERO, true, + align_mask); } /* diff --git a/fs/xfs/xfs_rtalloc.c b/fs/xfs/xfs_rtalloc.c index 1875484123d7..b2fa5f1a6acb 100644 --- a/fs/xfs/xfs_rtalloc.c +++ b/fs/xfs/xfs_rtalloc.c @@ -864,7 +864,8 @@ xfs_alloc_rsum_cache( * lower bound on the minimum level with any free extents. We can * continue without the cache if it couldn't be allocated. */ - mp->m_rsum_cache = kmem_zalloc_large(rbmblocks, 0); + mp->m_rsum_cache = xfs_kmem_alloc(rbmblocks, GFP_KERNEL | __GFP_ZERO, + false, 0); if (!mp->m_rsum_cache) xfs_warn(mp, "could not allocate realtime summary cache"); }
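For quick reference, here is a minimal before/after sketch of how call sites change with the merged helper. The pairings are illustrative only: the flag combinations are taken from the hunks in this patch, and the variable names are borrowed from the touched functions rather than forming a complete, compilable unit.

	/* Old: aligned I/O allocation; KM_* flags were converted to GFP internally. */
	bp->b_addr = kmem_alloc_io(size, align_mask, KM_NOFS | KM_ZERO);
	/* New: GFP flags passed directly; 'true' requests the alignment check. */
	bp->b_addr = xfs_kmem_alloc(size, GFP_NOFS | __GFP_ZERO, true, align_mask);

	/* Old: large allocation with vmalloc fallback, zeroed by the wrapper. */
	kbuf = kmem_zalloc_large(al_hreq.buflen, 0);
	/* New: same fallback path, no alignment check, zeroing via __GFP_ZERO. */
	kbuf = xfs_kmem_alloc(al_hreq.buflen, GFP_KERNEL | __GFP_ZERO, false, 0);

In both cases xfs_kmem_alloc() still falls back to __vmalloc() when kmalloc() fails or when the requested alignment check rejects the kmalloc'd buffer, matching the behaviour of the two helpers it replaces.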