From patchwork Thu Sep 2 01:02:25 2010
X-Patchwork-Submitter: David Rientjes
X-Patchwork-Id: 147991
Date: Wed, 1 Sep 2010 18:02:25 -0700 (PDT)
From: David Rientjes
To: Andrew Morton
cc: Neil Brown, Alasdair G Kergon, Chris Mason, Steven Whitehouse,
    Jens Axboe, Jan Kara, Frederic Weisbecker,
    linux-raid@vger.kernel.org, linux-btrfs@vger.kernel.org,
    cluster-devel@redhat.com, linux-ext4@vger.kernel.org,
    reiserfs-devel@vger.kernel.org, linux-kernel@vger.kernel.org
Subject: [patch v2 1/5] mm: add nofail variants of kmalloc kcalloc and kzalloc

diff --git a/drivers/md/dm-region-hash.c b/drivers/md/dm-region-hash.c
--- a/drivers/md/dm-region-hash.c
+++ b/drivers/md/dm-region-hash.c
@@ -290,7 +290,7 @@ static struct dm_region *__rh_alloc(struct dm_region_hash *rh, region_t region)
 
 	nreg = mempool_alloc(rh->region_pool, GFP_ATOMIC);
 	if (unlikely(!nreg))
-		nreg = kmalloc(sizeof(*nreg), GFP_NOIO | __GFP_NOFAIL);
+		nreg = kmalloc_nofail(sizeof(*nreg), GFP_NOIO);
 
 	nreg->state = rh->log->type->in_sync(rh->log, region, 1) ?
 		      DM_RH_CLEAN : DM_RH_NOSYNC;
diff --git a/fs/btrfs/inode.c b/fs/btrfs/inode.c
--- a/fs/btrfs/inode.c
+++ b/fs/btrfs/inode.c
@@ -1967,7 +1967,7 @@ void btrfs_add_delayed_iput(struct inode *inode)
 	if (atomic_add_unless(&inode->i_count, -1, 1))
 		return;
 
-	delayed = kmalloc(sizeof(*delayed), GFP_NOFS | __GFP_NOFAIL);
+	delayed = kmalloc_nofail(sizeof(*delayed), GFP_NOFS);
 	delayed->inode = inode;
 
 	spin_lock(&fs_info->delayed_iput_lock);
diff --git a/fs/gfs2/log.c b/fs/gfs2/log.c
--- a/fs/gfs2/log.c
+++ b/fs/gfs2/log.c
@@ -709,7 +709,7 @@ void gfs2_log_flush(struct gfs2_sbd *sdp, struct gfs2_glock *gl)
 	}
 	trace_gfs2_log_flush(sdp, 1);
 
-	ai = kzalloc(sizeof(struct gfs2_ail), GFP_NOFS | __GFP_NOFAIL);
+	ai = kzalloc_nofail(sizeof(struct gfs2_ail), GFP_NOFS);
 	INIT_LIST_HEAD(&ai->ai_ail1_list);
 	INIT_LIST_HEAD(&ai->ai_ail2_list);
diff --git a/fs/gfs2/rgrp.c b/fs/gfs2/rgrp.c
--- a/fs/gfs2/rgrp.c
+++ b/fs/gfs2/rgrp.c
@@ -1440,8 +1440,8 @@ static struct gfs2_rgrpd *rgblk_free(struct gfs2_sbd *sdp, u64 bstart,
 		rgrp_blk++;
 
 		if (!bi->bi_clone) {
-			bi->bi_clone = kmalloc(bi->bi_bh->b_size,
-					       GFP_NOFS | __GFP_NOFAIL);
+			bi->bi_clone = kmalloc_nofail(bi->bi_bh->b_size,
+						      GFP_NOFS);
 			memcpy(bi->bi_clone + bi->bi_offset,
 			       bi->bi_bh->b_data + bi->bi_offset,
 			       bi->bi_len);
@@ -1759,9 +1759,6 @@ fail:
  * @block: the block
  *
  * Figure out what RG a block belongs to and add that RG to the list
- *
- * FIXME: Don't use NOFAIL
- *
  */
 
 void gfs2_rlist_add(struct gfs2_sbd *sdp, struct gfs2_rgrp_list *rlist,
@@ -1789,8 +1786,8 @@ void gfs2_rlist_add(struct gfs2_sbd *sdp, struct gfs2_rgrp_list *rlist,
 
 	if (rlist->rl_rgrps == rlist->rl_space) {
 		new_space = rlist->rl_space + 10;
 
-		tmp = kcalloc(new_space, sizeof(struct gfs2_rgrpd *),
-			      GFP_NOFS | __GFP_NOFAIL);
+		tmp = kcalloc_nofail(new_space, sizeof(struct gfs2_rgrpd *),
+				     GFP_NOFS);
 
 		if (rlist->rl_rgd) {
 			memcpy(tmp, rlist->rl_rgd,
@@ -1811,17 +1808,14 @@ void gfs2_rlist_add(struct gfs2_sbd *sdp, struct gfs2_rgrp_list *rlist,
  * @rlist: the list of resource groups
  * @state: the lock state to acquire the RG lock in
  * @flags: the modifier flags for the holder structures
- *
- * FIXME: Don't use NOFAIL
- *
  */
 
 void gfs2_rlist_alloc(struct gfs2_rgrp_list *rlist, unsigned int state)
 {
 	unsigned int x;
 
-	rlist->rl_ghs = kcalloc(rlist->rl_rgrps, sizeof(struct gfs2_holder),
-				GFP_NOFS | __GFP_NOFAIL);
+	rlist->rl_ghs = kcalloc_nofail(rlist->rl_rgrps,
+				       sizeof(struct gfs2_holder), GFP_NOFS);
 
 	for (x = 0; x < rlist->rl_rgrps; x++)
 		gfs2_holder_init(rlist->rl_rgd[x]->rd_gl, state, 0,
diff --git a/fs/jbd/transaction.c b/fs/jbd/transaction.c
--- a/fs/jbd/transaction.c
+++ b/fs/jbd/transaction.c
@@ -98,14 +98,9 @@ static int start_this_handle(journal_t *journal, handle_t *handle)
 	}
 
 alloc_transaction:
-	if (!journal->j_running_transaction) {
-		new_transaction = kzalloc(sizeof(*new_transaction),
-					  GFP_NOFS|__GFP_NOFAIL);
-		if (!new_transaction) {
-			ret = -ENOMEM;
-			goto out;
-		}
-	}
+	if (!journal->j_running_transaction)
+		new_transaction = kzalloc_nofail(sizeof(*new_transaction),
+						 GFP_NOFS);
 
 	jbd_debug(3, "New handle %p going live.\n", handle);
diff --git a/fs/reiserfs/journal.c b/fs/reiserfs/journal.c
--- a/fs/reiserfs/journal.c
+++ b/fs/reiserfs/journal.c
@@ -2593,8 +2593,7 @@ static int journal_read(struct super_block *sb)
 static struct reiserfs_journal_list *alloc_journal_list(struct super_block *s)
 {
 	struct reiserfs_journal_list *jl;
-	jl = kzalloc(sizeof(struct reiserfs_journal_list),
-		     GFP_NOFS | __GFP_NOFAIL);
+	jl = kzalloc_nofail(sizeof(struct reiserfs_journal_list), GFP_NOFS);
 	INIT_LIST_HEAD(&jl->j_list);
 	INIT_LIST_HEAD(&jl->j_working_list);
 	INIT_LIST_HEAD(&jl->j_tail_bh_list);
diff --git a/include/linux/slab.h b/include/linux/slab.h
--- a/include/linux/slab.h
+++ b/include/linux/slab.h
@@ -334,6 +334,57 @@ static inline void *kzalloc_node(size_t size, gfp_t flags, int node)
 	return kmalloc_node(size, flags | __GFP_ZERO, node);
 }
 
+/**
+ * kmalloc_nofail - infinitely loop until kmalloc() succeeds.
+ * @size: how many bytes of memory are required.
+ * @flags: the type of memory to allocate (see kmalloc).
+ *
+ * NOTE: no new callers of this function should be implemented!
+ * All memory allocations should be failable whenever possible.
+ */
+static inline void *kmalloc_nofail(size_t size, gfp_t flags)
+{
+	void *ret;
+
+	for (;;) {
+		ret = kmalloc(size, flags);
+		if (ret)
+			return ret;
+		WARN_ON_ONCE(get_order(size) > PAGE_ALLOC_COSTLY_ORDER);
+	}
+}
+
+/**
+ * kcalloc_nofail - infinitely loop until kcalloc() succeeds.
+ * @n: number of elements.
+ * @size: element size.
+ * @flags: the type of memory to allocate (see kcalloc).
+ *
+ * NOTE: no new callers of this function should be implemented!
+ * All memory allocations should be failable whenever possible.
+ */
+static inline void *kcalloc_nofail(size_t n, size_t size, gfp_t flags)
+{
+	void *ret;
+
+	for (;;) {
+		ret = kcalloc(n, size, flags);
+		if (ret)
+			return ret;
+		WARN_ON_ONCE(get_order(n * size) > PAGE_ALLOC_COSTLY_ORDER);
+	}
+}
+
+/**
+ * kzalloc_nofail - infinitely loop until kzalloc() succeeds.
+ * @size: how many bytes of memory are required.
+ * @flags: the type of memory to allocate (see kzalloc).
+ */
+static inline void *kzalloc_nofail(size_t size, gfp_t flags)
+{
+	return kmalloc_nofail(size, flags | __GFP_ZERO);
+}
+
 void __init kmem_cache_init_late(void);
 
 #endif	/* _LINUX_SLAB_H */
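
For reference, converting a __GFP_NOFAIL call site follows the same mechanical
pattern as the hunks above: drop __GFP_NOFAIL from the gfp mask and call the
explicit _nofail variant, so the "cannot fail" policy lives at the call site
instead of in the page allocator's flag handling. The sketch below is
illustrative only and not part of this patch; struct my_record,
my_record_alloc(), and the GFP_NOFS context are hypothetical:

#include <linux/types.h>
#include <linux/list.h>
#include <linux/slab.h>

/* Hypothetical caller on a writeback path that cannot back out, so the
 * allocation must not fail.  Assumes the _nofail variants above are in
 * slab.h.
 */
struct my_record {
	struct list_head link;
	u64 blkno;
};

static struct my_record *my_record_alloc(u64 blkno)
{
	struct my_record *rec;

	/*
	 * Before: rec = kzalloc(sizeof(*rec), GFP_NOFS | __GFP_NOFAIL);
	 * After: the looping moves into kzalloc_nofail(), which retries
	 * until the allocation succeeds and warns once if the request is
	 * costly enough (order > PAGE_ALLOC_COSTLY_ORDER) that it could
	 * loop indefinitely.
	 */
	rec = kzalloc_nofail(sizeof(*rec), GFP_NOFS);

	/* No NULL check needed: kzalloc_nofail() never returns NULL. */
	INIT_LIST_HEAD(&rec->link);
	rec->blkno = blkno;
	return rec;
}

Per the kernel-doc NOTE above, no new callers of the _nofail variants should
be introduced; the sketch only shows the shape of the conversion for existing
__GFP_NOFAIL users.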