From patchwork Mon Jan 15 22:59:46 2024
X-Patchwork-Submitter: Dave Chinner <david@fromorbit.com>
X-Patchwork-Id: 13520256
From: Dave Chinner <david@fromorbit.com>
To: linux-xfs@vger.kernel.org
Cc: willy@infradead.org, linux-mm@kvack.org
Subject: [PATCH 08/12] xfs: use GFP_KERNEL in pure transaction contexts
Date: Tue, 16 Jan 2024 09:59:46 +1100
Message-ID: <20240115230113.4080105-9-david@fromorbit.com>
X-Mailer: git-send-email 2.43.0
In-Reply-To: <20240115230113.4080105-1-david@fromorbit.com>
References: <20240115230113.4080105-1-david@fromorbit.com>
MIME-Version: 1.0

From: Dave Chinner <david@fromorbit.com>

When running in a transaction context, memory allocations are scoped
to GFP_NOFS. Hence we don't need to use GFP_NOFS contexts in pure
transaction context allocations - GFP_KERNEL will automatically get
converted to GFP_NOFS as appropriate.

Go through the code and convert all the obvious GFP_NOFS allocations
in transaction context to use GFP_KERNEL. This further reduces the
explicit use of GFP_NOFS in XFS.

Signed-off-by: Dave Chinner <david@fromorbit.com>
Reviewed-by: Darrick J. Wong
---
 fs/xfs/libxfs/xfs_attr.c       |  3 ++-
 fs/xfs/libxfs/xfs_bmap.c       |  2 +-
 fs/xfs/libxfs/xfs_defer.c      |  6 +++---
 fs/xfs/libxfs/xfs_dir2.c       |  8 ++++----
 fs/xfs/libxfs/xfs_inode_fork.c |  8 ++++----
 fs/xfs/libxfs/xfs_refcount.c   |  2 +-
 fs/xfs/libxfs/xfs_rmap.c       |  2 +-
 fs/xfs/xfs_attr_item.c         |  4 ++--
 fs/xfs/xfs_bmap_util.c         |  2 +-
 fs/xfs/xfs_buf.c               | 28 +++++++++++++++++-----------
 fs/xfs/xfs_log.c               |  3 ++-
 fs/xfs/xfs_mru_cache.c         |  2 +-
 12 files changed, 39 insertions(+), 31 deletions(-)
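
Not part of the patch, just context for the conversion: the "scoped to
GFP_NOFS" behaviour relied on above comes from the task-level
PF_MEMALLOC_NOFS flag. xfs_trans_alloc() enters such a scope via
memalloc_nofs_save(), and the page allocator masks __GFP_FS off any
request made inside that scope before it can enter reclaim (see
current_gfp_context()), so a GFP_KERNEL allocation there behaves exactly
like GFP_NOFS. A minimal sketch of that mechanism - the helper below is
made up for illustration and is not an XFS function:

  #include <linux/sched/mm.h>
  #include <linux/slab.h>

  static void *alloc_inside_transaction_scope(size_t size)
  {
          unsigned int nofs_flag;
          void *p;

          /* Roughly what entering an XFS transaction does for the task. */
          nofs_flag = memalloc_nofs_save();

          /*
           * While the scope is active the allocator strips __GFP_FS from
           * this request, so it cannot recurse into filesystem reclaim
           * even though no explicit GFP_NOFS is passed.
           */
          p = kzalloc(size, GFP_KERNEL | __GFP_NOFAIL);

          memalloc_nofs_restore(nofs_flag);
          return p;
  }
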
diff --git a/fs/xfs/libxfs/xfs_attr.c b/fs/xfs/libxfs/xfs_attr.c
index 9976a00a73f9..269a57420859 100644
--- a/fs/xfs/libxfs/xfs_attr.c
+++ b/fs/xfs/libxfs/xfs_attr.c
@@ -891,7 +891,8 @@ xfs_attr_defer_add(
 
 	struct xfs_attr_intent	*new;
 
-	new = kmem_cache_zalloc(xfs_attr_intent_cache, GFP_NOFS | __GFP_NOFAIL);
+	new = kmem_cache_zalloc(xfs_attr_intent_cache,
+			GFP_KERNEL | __GFP_NOFAIL);
 	new->xattri_op_flags = op_flags;
 	new->xattri_da_args = args;
 
diff --git a/fs/xfs/libxfs/xfs_bmap.c b/fs/xfs/libxfs/xfs_bmap.c
index 98aaca933bdd..fbdaa53deecd 100644
--- a/fs/xfs/libxfs/xfs_bmap.c
+++ b/fs/xfs/libxfs/xfs_bmap.c
@@ -6098,7 +6098,7 @@ __xfs_bmap_add(
 			bmap->br_blockcount,
 			bmap->br_state);
 
-	bi = kmem_cache_alloc(xfs_bmap_intent_cache, GFP_NOFS | __GFP_NOFAIL);
+	bi = kmem_cache_alloc(xfs_bmap_intent_cache, GFP_KERNEL | __GFP_NOFAIL);
 	INIT_LIST_HEAD(&bi->bi_list);
 	bi->bi_type = type;
 	bi->bi_owner = ip;
diff --git a/fs/xfs/libxfs/xfs_defer.c b/fs/xfs/libxfs/xfs_defer.c
index 75689c151a54..8ae4401f6810 100644
--- a/fs/xfs/libxfs/xfs_defer.c
+++ b/fs/xfs/libxfs/xfs_defer.c
@@ -825,7 +825,7 @@ xfs_defer_alloc(
 	struct xfs_defer_pending	*dfp;
 
 	dfp = kmem_cache_zalloc(xfs_defer_pending_cache,
-			GFP_NOFS | __GFP_NOFAIL);
+			GFP_KERNEL | __GFP_NOFAIL);
 	dfp->dfp_ops = ops;
 	INIT_LIST_HEAD(&dfp->dfp_work);
 	list_add_tail(&dfp->dfp_list, &tp->t_dfops);
@@ -888,7 +888,7 @@ xfs_defer_start_recovery(
 	struct xfs_defer_pending	*dfp;
 
 	dfp = kmem_cache_zalloc(xfs_defer_pending_cache,
-			GFP_NOFS | __GFP_NOFAIL);
+			GFP_KERNEL | __GFP_NOFAIL);
 	dfp->dfp_ops = ops;
 	dfp->dfp_intent = lip;
 	INIT_LIST_HEAD(&dfp->dfp_work);
@@ -979,7 +979,7 @@ xfs_defer_ops_capture(
 		return ERR_PTR(error);
 
 	/* Create an object to capture the defer ops. */
-	dfc = kzalloc(sizeof(*dfc), GFP_NOFS | __GFP_NOFAIL);
+	dfc = kzalloc(sizeof(*dfc), GFP_KERNEL | __GFP_NOFAIL);
 	INIT_LIST_HEAD(&dfc->dfc_list);
 	INIT_LIST_HEAD(&dfc->dfc_dfops);
 
diff --git a/fs/xfs/libxfs/xfs_dir2.c b/fs/xfs/libxfs/xfs_dir2.c
index 728f72f0d078..8c9403b33191 100644
--- a/fs/xfs/libxfs/xfs_dir2.c
+++ b/fs/xfs/libxfs/xfs_dir2.c
@@ -236,7 +236,7 @@ xfs_dir_init(
 	if (error)
 		return error;
 
-	args = kzalloc(sizeof(*args), GFP_NOFS | __GFP_NOFAIL);
+	args = kzalloc(sizeof(*args), GFP_KERNEL | __GFP_NOFAIL);
 	if (!args)
 		return -ENOMEM;
 
@@ -273,7 +273,7 @@ xfs_dir_createname(
 		XFS_STATS_INC(dp->i_mount, xs_dir_create);
 	}
 
-	args = kzalloc(sizeof(*args), GFP_NOFS | __GFP_NOFAIL);
+	args = kzalloc(sizeof(*args), GFP_KERNEL | __GFP_NOFAIL);
 	if (!args)
 		return -ENOMEM;
 
@@ -435,7 +435,7 @@ xfs_dir_removename(
 	ASSERT(S_ISDIR(VFS_I(dp)->i_mode));
 	XFS_STATS_INC(dp->i_mount, xs_dir_remove);
 
-	args = kzalloc(sizeof(*args), GFP_NOFS | __GFP_NOFAIL);
+	args = kzalloc(sizeof(*args), GFP_KERNEL | __GFP_NOFAIL);
 	if (!args)
 		return -ENOMEM;
 
@@ -496,7 +496,7 @@ xfs_dir_replace(
 	if (rval)
 		return rval;
 
-	args = kzalloc(sizeof(*args), GFP_NOFS | __GFP_NOFAIL);
+	args = kzalloc(sizeof(*args), GFP_KERNEL | __GFP_NOFAIL);
 	if (!args)
 		return -ENOMEM;
 
diff --git a/fs/xfs/libxfs/xfs_inode_fork.c b/fs/xfs/libxfs/xfs_inode_fork.c
index 709fda3d742f..136d5d7b9de9 100644
--- a/fs/xfs/libxfs/xfs_inode_fork.c
+++ b/fs/xfs/libxfs/xfs_inode_fork.c
@@ -402,7 +402,7 @@ xfs_iroot_realloc(
 	if (ifp->if_broot_bytes == 0) {
 		new_size = XFS_BMAP_BROOT_SPACE_CALC(mp, rec_diff);
 		ifp->if_broot = kmalloc(new_size,
-					GFP_NOFS | __GFP_NOFAIL);
+					GFP_KERNEL | __GFP_NOFAIL);
 		ifp->if_broot_bytes = (int)new_size;
 		return;
 	}
@@ -417,7 +417,7 @@ xfs_iroot_realloc(
 		new_max = cur_max + rec_diff;
 		new_size = XFS_BMAP_BROOT_SPACE_CALC(mp, new_max);
 		ifp->if_broot = krealloc(ifp->if_broot, new_size,
-					 GFP_NOFS | __GFP_NOFAIL);
+					 GFP_KERNEL | __GFP_NOFAIL);
 		op = (char *)XFS_BMAP_BROOT_PTR_ADDR(mp, ifp->if_broot, 1,
 						     ifp->if_broot_bytes);
 		np = (char *)XFS_BMAP_BROOT_PTR_ADDR(mp, ifp->if_broot, 1,
@@ -443,7 +443,7 @@ xfs_iroot_realloc(
 	else
 		new_size = 0;
 	if (new_size > 0) {
-		new_broot = kmalloc(new_size, GFP_NOFS | __GFP_NOFAIL);
+		new_broot = kmalloc(new_size, GFP_KERNEL | __GFP_NOFAIL);
 		/*
 		 * First copy over the btree block header.
 		 */
@@ -512,7 +512,7 @@ xfs_idata_realloc(
 
 	if (byte_diff) {
 		ifp->if_data = krealloc(ifp->if_data, new_size,
-					GFP_NOFS | __GFP_NOFAIL);
+					GFP_KERNEL | __GFP_NOFAIL);
 		if (new_size == 0)
 			ifp->if_data = NULL;
 		ifp->if_bytes = new_size;
diff --git a/fs/xfs/libxfs/xfs_refcount.c b/fs/xfs/libxfs/xfs_refcount.c
index 6709a7f8bad5..7df52daa22cf 100644
--- a/fs/xfs/libxfs/xfs_refcount.c
+++ b/fs/xfs/libxfs/xfs_refcount.c
@@ -1449,7 +1449,7 @@ __xfs_refcount_add(
 			blockcount);
 
 	ri = kmem_cache_alloc(xfs_refcount_intent_cache,
-			GFP_NOFS | __GFP_NOFAIL);
+			GFP_KERNEL | __GFP_NOFAIL);
 	INIT_LIST_HEAD(&ri->ri_list);
 	ri->ri_type = type;
 	ri->ri_startblock = startblock;
diff --git a/fs/xfs/libxfs/xfs_rmap.c b/fs/xfs/libxfs/xfs_rmap.c
index 76bf7f48cb5a..0bd1f47b2c2b 100644
--- a/fs/xfs/libxfs/xfs_rmap.c
+++ b/fs/xfs/libxfs/xfs_rmap.c
@@ -2559,7 +2559,7 @@ __xfs_rmap_add(
 			bmap->br_blockcount,
 			bmap->br_state);
 
-	ri = kmem_cache_alloc(xfs_rmap_intent_cache, GFP_NOFS | __GFP_NOFAIL);
+	ri = kmem_cache_alloc(xfs_rmap_intent_cache, GFP_KERNEL | __GFP_NOFAIL);
 	INIT_LIST_HEAD(&ri->ri_list);
 	ri->ri_type = type;
 	ri->ri_owner = owner;
diff --git a/fs/xfs/xfs_attr_item.c b/fs/xfs/xfs_attr_item.c
index 2a142cefdc3d..0bf25a2ba3b6 100644
--- a/fs/xfs/xfs_attr_item.c
+++ b/fs/xfs/xfs_attr_item.c
@@ -226,7 +226,7 @@ xfs_attri_init(
 {
 	struct xfs_attri_log_item	*attrip;
 
-	attrip = kmem_cache_zalloc(xfs_attri_cache, GFP_NOFS | __GFP_NOFAIL);
+	attrip = kmem_cache_zalloc(xfs_attri_cache, GFP_KERNEL | __GFP_NOFAIL);
 
 	/*
 	 * Grab an extra reference to the name/value buffer for this log item.
@@ -666,7 +666,7 @@ xfs_attr_create_done(
 
 	attrip = ATTRI_ITEM(intent);
 
-	attrdp = kmem_cache_zalloc(xfs_attrd_cache, GFP_NOFS | __GFP_NOFAIL);
+	attrdp = kmem_cache_zalloc(xfs_attrd_cache, GFP_KERNEL | __GFP_NOFAIL);
 
 	xfs_log_item_init(tp->t_mountp, &attrdp->attrd_item, XFS_LI_ATTRD,
 			  &xfs_attrd_item_ops);
diff --git a/fs/xfs/xfs_bmap_util.c b/fs/xfs/xfs_bmap_util.c
index c2531c28905c..cb2a4b940292 100644
--- a/fs/xfs/xfs_bmap_util.c
+++ b/fs/xfs/xfs_bmap_util.c
@@ -66,7 +66,7 @@ xfs_zero_extent(
 	return blkdev_issue_zeroout(target->bt_bdev,
 		block << (mp->m_super->s_blocksize_bits - 9),
 		count_fsb << (mp->m_super->s_blocksize_bits - 9),
-		GFP_NOFS, 0);
+		GFP_KERNEL, 0);
 }
 
 /*
diff --git a/fs/xfs/xfs_buf.c b/fs/xfs/xfs_buf.c
index a09ffbbb0dda..de99368000b4 100644
--- a/fs/xfs/xfs_buf.c
+++ b/fs/xfs/xfs_buf.c
@@ -190,7 +190,7 @@ xfs_buf_get_maps(
 	}
 
 	bp->b_maps = kzalloc(map_count * sizeof(struct xfs_buf_map),
-				GFP_NOFS | __GFP_NOFAIL);
+				GFP_KERNEL | __GFP_NOLOCKDEP | __GFP_NOFAIL);
 	if (!bp->b_maps)
 		return -ENOMEM;
 	return 0;
@@ -222,7 +222,8 @@ _xfs_buf_alloc(
 	int			i;
 
 	*bpp = NULL;
-	bp = kmem_cache_zalloc(xfs_buf_cache, GFP_NOFS | __GFP_NOFAIL);
+	bp = kmem_cache_zalloc(xfs_buf_cache,
+			GFP_KERNEL | __GFP_NOLOCKDEP | __GFP_NOFAIL);
 
 	/*
 	 * We don't want certain flags to appear in b_flags unless they are
@@ -325,7 +326,7 @@ xfs_buf_alloc_kmem(
 	struct xfs_buf	*bp,
 	xfs_buf_flags_t	flags)
 {
-	gfp_t		gfp_mask = GFP_NOFS | __GFP_NOFAIL;
+	gfp_t		gfp_mask = GFP_KERNEL | __GFP_NOLOCKDEP | __GFP_NOFAIL;
 	size_t		size = BBTOB(bp->b_length);
 
 	/* Assure zeroed buffer for non-read cases. */
@@ -356,13 +357,11 @@ xfs_buf_alloc_pages(
 	struct xfs_buf	*bp,
 	xfs_buf_flags_t	flags)
 {
-	gfp_t		gfp_mask = __GFP_NOWARN;
+	gfp_t		gfp_mask = GFP_KERNEL | __GFP_NOLOCKDEP | __GFP_NOWARN;
 	long		filled = 0;
 
 	if (flags & XBF_READ_AHEAD)
 		gfp_mask |= __GFP_NORETRY;
-	else
-		gfp_mask |= GFP_NOFS;
 
 	/* Make sure that we have a page list */
 	bp->b_page_count = DIV_ROUND_UP(BBTOB(bp->b_length), PAGE_SIZE);
@@ -429,11 +428,18 @@ _xfs_buf_map_pages(
 
 		/*
 		 * vm_map_ram() will allocate auxiliary structures (e.g.
-		 * pagetables) with GFP_KERNEL, yet we are likely to be under
-		 * GFP_NOFS context here. Hence we need to tell memory reclaim
-		 * that we are in such a context via PF_MEMALLOC_NOFS to prevent
-		 * memory reclaim re-entering the filesystem here and
-		 * potentially deadlocking.
+		 * pagetables) with GFP_KERNEL, yet we are often under a scoped nofs
+		 * context here. Mixing GFP_KERNEL with GFP_NOFS allocations
+		 * from the same call site that can be run from both above and
+		 * below memory reclaim causes lockdep false positives. Hence we
+		 * always need to force this allocation to nofs context because
+		 * we can't pass __GFP_NOLOCKDEP down to auxiliary structures to
+		 * prevent false positive lockdep reports.
+		 *
+		 * XXX(dgc): I think dquot reclaim is the only place we can get
+		 * to this function from memory reclaim context now. If we fix
+		 * that like we've fixed inode reclaim to avoid writeback from
+		 * reclaim, this nofs wrapping can go away.
 		 */
 		nofs_flag = memalloc_nofs_save();
 		do {
diff --git a/fs/xfs/xfs_log.c b/fs/xfs/xfs_log.c
index ee39639bb92b..1f68569e62ca 100644
--- a/fs/xfs/xfs_log.c
+++ b/fs/xfs/xfs_log.c
@@ -3518,7 +3518,8 @@ xlog_ticket_alloc(
 	struct xlog_ticket	*tic;
 	int			unit_res;
 
-	tic = kmem_cache_zalloc(xfs_log_ticket_cache, GFP_NOFS | __GFP_NOFAIL);
+	tic = kmem_cache_zalloc(xfs_log_ticket_cache,
+			GFP_KERNEL | __GFP_NOFAIL);
 
 	unit_res = xlog_calc_unit_res(log, unit_bytes, &tic->t_iclog_hdrs);
 
diff --git a/fs/xfs/xfs_mru_cache.c b/fs/xfs/xfs_mru_cache.c
index ce496704748d..7443debaffd6 100644
--- a/fs/xfs/xfs_mru_cache.c
+++ b/fs/xfs/xfs_mru_cache.c
@@ -428,7 +428,7 @@ xfs_mru_cache_insert(
 	if (!mru || !mru->lists)
 		return -EINVAL;
 
-	if (radix_tree_preload(GFP_NOFS))
+	if (radix_tree_preload(GFP_KERNEL))
 		return -ENOMEM;
 
 	INIT_LIST_HEAD(&elem->list_node);
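
(Not part of the applied patch.) The _xfs_buf_map_pages() comment above
boils down to the following pattern: __GFP_NOLOCKDEP cannot be handed down
to the auxiliary allocations vm_map_ram() performs internally, so the whole
call is forced into a scoped NOFS section instead. A sketch only - the
wrapper name is illustrative, not an XFS function:

  #include <linux/mm.h>
  #include <linux/sched/mm.h>
  #include <linux/vmalloc.h>

  static void *map_buffer_pages_nofs(struct page **pages, unsigned int count)
  {
          unsigned int nofs_flag;
          void *addr;

          /*
           * vm_map_ram() allocates its auxiliary structures (e.g.
           * pagetables) with GFP_KERNEL; scoping the call to NOFS avoids
           * both reclaim recursion and the lockdep false positives the
           * comment above describes.
           */
          nofs_flag = memalloc_nofs_save();
          addr = vm_map_ram(pages, count, NUMA_NO_NODE);
          memalloc_nofs_restore(nofs_flag);

          return addr;
  }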