From patchwork Tue Aug 24 10:50:19 2010
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: David Rientjes
X-Patchwork-Id: 126121
Date: Tue, 24 Aug 2010 03:50:19 -0700 (PDT)
From: David Rientjes
X-X-Sender: rientjes@chino.kir.corp.google.com
To: Andrew Morton
cc: Neil Brown, Alasdair G Kergon, Chris Mason, Steven Whitehouse,
    Jens Axboe, Jan Kara, Frederic Weisbecker,
    linux-raid@vger.kernel.org, linux-btrfs@vger.kernel.org,
    cluster-devel@redhat.com, linux-ext4@vger.kernel.org,
    reiserfs-devel@vger.kernel.org, linux-kernel@vger.kernel.org
Subject: [patch 1/5] mm: add nofail variants of kmalloc kcalloc and kzalloc
User-Agent: Alpine 2.00 (DEB 1167 2008-08-23)
MIME-Version: 1.0
X-Mailing-List: linux-btrfs@vger.kernel.org

diff --git a/drivers/md/dm-region-hash.c b/drivers/md/dm-region-hash.c
--- a/drivers/md/dm-region-hash.c
+++ b/drivers/md/dm-region-hash.c
@@ -290,7 +290,7 @@ static struct dm_region *__rh_alloc(struct dm_region_hash *rh, region_t region)
 
 	nreg = mempool_alloc(rh->region_pool, GFP_ATOMIC);
 	if (unlikely(!nreg))
-		nreg = kmalloc(sizeof(*nreg), GFP_NOIO | __GFP_NOFAIL);
+		nreg = kmalloc_nofail(sizeof(*nreg), GFP_NOIO);
 
 	nreg->state = rh->log->type->in_sync(rh->log, region, 1) ?
		      DM_RH_CLEAN : DM_RH_NOSYNC;
diff --git a/fs/btrfs/inode.c b/fs/btrfs/inode.c
--- a/fs/btrfs/inode.c
+++ b/fs/btrfs/inode.c
@@ -1967,7 +1967,7 @@ void btrfs_add_delayed_iput(struct inode *inode)
 	if (atomic_add_unless(&inode->i_count, -1, 1))
 		return;
 
-	delayed = kmalloc(sizeof(*delayed), GFP_NOFS | __GFP_NOFAIL);
+	delayed = kmalloc_nofail(sizeof(*delayed), GFP_NOFS);
 	delayed->inode = inode;
 
 	spin_lock(&fs_info->delayed_iput_lock);
diff --git a/fs/gfs2/log.c b/fs/gfs2/log.c
--- a/fs/gfs2/log.c
+++ b/fs/gfs2/log.c
@@ -709,7 +709,7 @@ void gfs2_log_flush(struct gfs2_sbd *sdp, struct gfs2_glock *gl)
 	}
 	trace_gfs2_log_flush(sdp, 1);
 
-	ai = kzalloc(sizeof(struct gfs2_ail), GFP_NOFS | __GFP_NOFAIL);
+	ai = kzalloc_nofail(sizeof(struct gfs2_ail), GFP_NOFS);
 	INIT_LIST_HEAD(&ai->ai_ail1_list);
 	INIT_LIST_HEAD(&ai->ai_ail2_list);
diff --git a/fs/gfs2/rgrp.c b/fs/gfs2/rgrp.c
--- a/fs/gfs2/rgrp.c
+++ b/fs/gfs2/rgrp.c
@@ -1440,8 +1440,8 @@ static struct gfs2_rgrpd *rgblk_free(struct gfs2_sbd *sdp, u64 bstart,
 		rgrp_blk++;
 
 		if (!bi->bi_clone) {
-			bi->bi_clone = kmalloc(bi->bi_bh->b_size,
-					       GFP_NOFS | __GFP_NOFAIL);
+			bi->bi_clone = kmalloc_nofail(bi->bi_bh->b_size,
+						      GFP_NOFS);
 			memcpy(bi->bi_clone + bi->bi_offset,
 			       bi->bi_bh->b_data + bi->bi_offset,
 			       bi->bi_len);
@@ -1759,9 +1759,6 @@ fail:
  * @block: the block
  *
  * Figure out what RG a block belongs to and add that RG to the list
- *
- * FIXME: Don't use NOFAIL
- *
  */
 
 void gfs2_rlist_add(struct gfs2_sbd *sdp, struct gfs2_rgrp_list *rlist,
@@ -1789,8 +1786,8 @@ void gfs2_rlist_add(struct gfs2_sbd *sdp, struct gfs2_rgrp_list *rlist,
 	if (rlist->rl_rgrps == rlist->rl_space) {
 		new_space = rlist->rl_space + 10;
 
-		tmp = kcalloc(new_space, sizeof(struct gfs2_rgrpd *),
-			      GFP_NOFS | __GFP_NOFAIL);
+		tmp = kcalloc_nofail(new_space, sizeof(struct gfs2_rgrpd *),
+				     GFP_NOFS);
 
 		if (rlist->rl_rgd) {
 			memcpy(tmp, rlist->rl_rgd,
@@ -1811,17 +1808,14 @@ void gfs2_rlist_add(struct gfs2_sbd *sdp, struct gfs2_rgrp_list *rlist,
  * @rlist: the list of resource groups
  * @state: the lock state to acquire the RG lock in
  * @flags: the modifier flags for the holder structures
- *
- * FIXME: Don't use NOFAIL
- *
  */
 
 void gfs2_rlist_alloc(struct gfs2_rgrp_list *rlist, unsigned int state)
 {
 	unsigned int x;
 
-	rlist->rl_ghs = kcalloc(rlist->rl_rgrps, sizeof(struct gfs2_holder),
-				GFP_NOFS | __GFP_NOFAIL);
+	rlist->rl_ghs = kcalloc_nofail(rlist->rl_rgrps,
+				       sizeof(struct gfs2_holder), GFP_NOFS);
 	for (x = 0; x < rlist->rl_rgrps; x++)
 		gfs2_holder_init(rlist->rl_rgd[x]->rd_gl, state, 0,
diff --git a/fs/jbd/transaction.c b/fs/jbd/transaction.c
--- a/fs/jbd/transaction.c
+++ b/fs/jbd/transaction.c
@@ -98,14 +98,9 @@ static int start_this_handle(journal_t *journal, handle_t *handle)
 	}
 
 alloc_transaction:
-	if (!journal->j_running_transaction) {
-		new_transaction = kzalloc(sizeof(*new_transaction),
-						GFP_NOFS|__GFP_NOFAIL);
-		if (!new_transaction) {
-			ret = -ENOMEM;
-			goto out;
-		}
-	}
+	if (!journal->j_running_transaction)
+		new_transaction = kzalloc_nofail(sizeof(*new_transaction),
+						 GFP_NOFS);
 
 	jbd_debug(3, "New handle %p going live.\n", handle);
diff --git a/fs/reiserfs/journal.c b/fs/reiserfs/journal.c
--- a/fs/reiserfs/journal.c
+++ b/fs/reiserfs/journal.c
@@ -2593,8 +2593,7 @@ static int journal_read(struct super_block *sb)
 static struct reiserfs_journal_list *alloc_journal_list(struct super_block *s)
 {
 	struct reiserfs_journal_list *jl;
-	jl = kzalloc(sizeof(struct reiserfs_journal_list),
-		     GFP_NOFS | __GFP_NOFAIL);
+	jl = kzalloc_nofail(sizeof(struct reiserfs_journal_list), GFP_NOFS);
 	INIT_LIST_HEAD(&jl->j_list);
 	INIT_LIST_HEAD(&jl->j_working_list);
 	INIT_LIST_HEAD(&jl->j_tail_bh_list);
diff --git a/include/linux/slab.h b/include/linux/slab.h
--- a/include/linux/slab.h
+++ b/include/linux/slab.h
@@ -334,6 +334,61 @@ static inline void *kzalloc_node(size_t size, gfp_t flags, int node)
 	return kmalloc_node(size, flags | __GFP_ZERO, node);
 }
 
+/**
+ * kmalloc_nofail - infinitely loop until kmalloc() succeeds.
+ * @size: how many bytes of memory are required.
+ * @flags: the type of memory to allocate (see kmalloc).
+ *
+ * NOTE: no new callers of this function should be implemented!
+ * All memory allocations should be failable whenever possible.
+ */
+static inline void *kmalloc_nofail(size_t size, gfp_t flags)
+{
+	void *ret;
+
+	for (;;) {
+		ret = kmalloc(size, flags);
+		if (ret)
+			return ret;
+		WARN_ONCE(1, "Out of memory, no fallback implemented "
+			     "(size=%zu flags=0x%x)\n",
+			  size, flags);
+	}
+}
+
+/**
+ * kcalloc_nofail - infinitely loop until kcalloc() succeeds.
+ * @n: number of elements.
+ * @size: element size.
+ * @flags: the type of memory to allocate (see kcalloc).
+ *
+ * NOTE: no new callers of this function should be implemented!
+ * All memory allocations should be failable whenever possible.
+ */
+static inline void *kcalloc_nofail(size_t n, size_t size, gfp_t flags)
+{
+	void *ret;
+
+	for (;;) {
+		ret = kcalloc(n, size, flags);
+		if (ret)
+			return ret;
+		WARN_ONCE(1, "Out of memory, no fallback implemented "
+			     "(n=%zu size=%zu flags=0x%x)\n",
+			  n, size, flags);
+	}
+}
+
+/**
+ * kzalloc_nofail - infinitely loop until kzalloc() succeeds.
+ * @size: how many bytes of memory are required.
+ * @flags: the type of memory to allocate (see kzalloc).
+ */
+static inline void *kzalloc_nofail(size_t size, gfp_t flags)
+{
+	return kmalloc_nofail(size, flags | __GFP_ZERO);
+}
+
 void __init kmem_cache_init_late(void);
 
 #endif	/* _LINUX_SLAB_H */