From patchwork Tue Sep 8 08:56:54 2015
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Qu Wenruo
X-Patchwork-Id: 7139231
From: Qu Wenruo
To: 
Subject: [PATCH 02/19] btrfs: qgroup: Implement data_rsv_map init/free functions
Date: Tue, 8 Sep 2015 16:56:54 +0800
Message-ID: <1441702615-18333-3-git-send-email-quwenruo@cn.fujitsu.com>
X-Mailer: git-send-email 2.5.1
In-Reply-To:
 <1441702615-18333-1-git-send-email-quwenruo@cn.fujitsu.com>
References: <1441702615-18333-1-git-send-email-quwenruo@cn.fujitsu.com>
X-Mailing-List: linux-btrfs@vger.kernel.org

Add new functions btrfs_qgroup_init/free_data_rsv_map() to initialize
and free the data reserve map.

The data reserve map is used to mark which ranges already hold reserved
space, to avoid leaking reserved space.

Signed-off-by: Qu Wenruo
---
 fs/btrfs/btrfs_inode.h |  2 ++
 fs/btrfs/inode.c       | 10 +++++++
 fs/btrfs/qgroup.c      | 77 ++++++++++++++++++++++++++++++++++++++++++++++++++
 fs/btrfs/qgroup.h      |  3 ++
 4 files changed, 92 insertions(+)

diff --git a/fs/btrfs/btrfs_inode.h b/fs/btrfs/btrfs_inode.h
index e3ece65..27cc338 100644
--- a/fs/btrfs/btrfs_inode.h
+++ b/fs/btrfs/btrfs_inode.h
@@ -199,6 +199,8 @@ struct btrfs_inode {
 
 	/* qgroup dirty map for data space reserve */
 	struct btrfs_qgroup_data_rsv_map *qgroup_rsv_map;
+	/* lock to ensure rsv_map will only be initialized once */
+	spinlock_t qgroup_init_lock;
 };
 
 extern unsigned char btrfs_filetype_table[];
diff --git a/fs/btrfs/inode.c b/fs/btrfs/inode.c
index 37dd8d0..61b2c17 100644
--- a/fs/btrfs/inode.c
+++ b/fs/btrfs/inode.c
@@ -8939,6 +8939,14 @@ struct inode *btrfs_alloc_inode(struct super_block *sb)
 	INIT_LIST_HEAD(&ei->delalloc_inodes);
 	RB_CLEAR_NODE(&ei->rb_node);
 
+	/*
+	 * Init qgroup info to empty, as it will be initialized at write
+	 * time.
+	 * This is needed for the case where quota is enabled later.
+	 */
+	spin_lock_init(&ei->qgroup_init_lock);
+	ei->qgroup_rsv_map = NULL;
+
 	return inode;
 }
 
@@ -8996,6 +9004,8 @@ void btrfs_destroy_inode(struct inode *inode)
 			btrfs_put_ordered_extent(ordered);
 		}
 	}
+	/* free and check data rsv map */
+	btrfs_qgroup_free_data_rsv_map(inode);
 	inode_tree_del(inode);
 	btrfs_drop_extent_cache(inode, 0, (u64)-1, 0);
 free:
diff --git a/fs/btrfs/qgroup.c b/fs/btrfs/qgroup.c
index 561c36d..cf07c17 100644
--- a/fs/btrfs/qgroup.c
+++ b/fs/btrfs/qgroup.c
@@ -2539,3 +2539,80 @@ btrfs_qgroup_rescan_resume(struct btrfs_fs_info *fs_info)
 	btrfs_queue_work(fs_info->qgroup_rescan_workers,
 			 &fs_info->qgroup_rescan_work);
 }
+
+/*
+ * Init data_rsv_map for a given inode.
+ *
+ * This is needed at write time as quota can be disabled and then enabled.
+ */
+int btrfs_qgroup_init_data_rsv_map(struct inode *inode)
+{
+	struct btrfs_inode *binode = BTRFS_I(inode);
+	struct btrfs_root *root = binode->root;
+	struct btrfs_qgroup_data_rsv_map *dirty_map;
+
+	if (!root->fs_info->quota_enabled || !is_fstree(root->objectid))
+		return 0;
+
+	spin_lock(&binode->qgroup_init_lock);
+	/* Quick route for init */
+	if (likely(binode->qgroup_rsv_map))
+		goto out;
+	spin_unlock(&binode->qgroup_init_lock);
+
+	/*
+	 * Slow allocation route
+	 *
+	 * TODO: Use kmem_cache to speed up allocation
+	 */
+	dirty_map = kmalloc(sizeof(*dirty_map), GFP_NOFS);
+	if (!dirty_map)
+		return -ENOMEM;
+
+	dirty_map->reserved = 0;
+	dirty_map->root = RB_ROOT;
+	spin_lock_init(&dirty_map->lock);
+
+	/* Lock again to ensure no one has already initialized it */
+	spin_lock(&binode->qgroup_init_lock);
+	if (binode->qgroup_rsv_map) {
+		spin_unlock(&binode->qgroup_init_lock);
+		kfree(dirty_map);
+		return 0;
+	}
+	binode->qgroup_rsv_map = dirty_map;
+out:
+	spin_unlock(&binode->qgroup_init_lock);
+	return 0;
+}
+
+void btrfs_qgroup_free_data_rsv_map(struct inode *inode)
+{
+	struct btrfs_inode *binode = BTRFS_I(inode);
+	struct btrfs_root *root = binode->root;
+	struct btrfs_qgroup_data_rsv_map *dirty_map = binode->qgroup_rsv_map;
+	struct rb_node *node;
+
+	/*
+	 * This function is called from the inode destroy routine, so no
+	 * concurrent access can happen; no need to take the lock.
+	 */
+	if (!dirty_map)
+		return;
+
+	/* sanity check */
+	WARN_ON(!root->fs_info->quota_enabled || !is_fstree(root->objectid));
+
+	btrfs_qgroup_free(root, dirty_map->reserved);
+	spin_lock(&dirty_map->lock);
+	while ((node = rb_first(&dirty_map->root)) != NULL) {
+		struct data_rsv_range *range;
+
+		range = rb_entry(node, struct data_rsv_range, node);
+		rb_erase(node, &dirty_map->root);
+		kfree(range);
+	}
+	spin_unlock(&dirty_map->lock);
+	kfree(dirty_map);
+	binode->qgroup_rsv_map = NULL;
+}
diff --git a/fs/btrfs/qgroup.h b/fs/btrfs/qgroup.h
index 2f863a4..c87b7dc 100644
--- a/fs/btrfs/qgroup.h
+++ b/fs/btrfs/qgroup.h
@@ -84,4 +84,7 @@ int btrfs_verify_qgroup_counts(struct btrfs_fs_info *fs_info, u64 qgroupid,
 			       u64 rfer, u64 excl);
 #endif
 
+/* for qgroup reserve */
+int btrfs_qgroup_init_data_rsv_map(struct inode *inode);
+void btrfs_qgroup_free_data_rsv_map(struct inode *inode);
 #endif /* __BTRFS_QGROUP__ */
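
For readers unfamiliar with the locking scheme, the init path above is a
double-checked lazy initialization: check under the lock, drop the lock to
allocate (allocation may sleep), then re-check under the lock and discard the
duplicate if another writer won the race. A minimal user-space sketch of the
same pattern follows; all names (`rsv_map`, `fake_inode`, `init_rsv_map`) are
hypothetical, and `pthread_mutex_t` stands in for the kernel `spinlock_t`:

```c
#include <assert.h>
#include <pthread.h>
#include <stdlib.h>

/* Stand-in for struct btrfs_qgroup_data_rsv_map (illustrative only). */
struct rsv_map {
	unsigned long long reserved;
};

/* Stand-in for the qgroup fields of struct btrfs_inode. */
struct fake_inode {
	pthread_mutex_t init_lock; /* plays the role of qgroup_init_lock */
	struct rsv_map *rsv_map;   /* NULL until the first write */
};

int init_rsv_map(struct fake_inode *ino)
{
	struct rsv_map *map;

	/* Fast path: already initialized, nothing to do. */
	pthread_mutex_lock(&ino->init_lock);
	if (ino->rsv_map) {
		pthread_mutex_unlock(&ino->init_lock);
		return 0;
	}
	pthread_mutex_unlock(&ino->init_lock);

	/* Slow path: allocate without holding the lock. */
	map = calloc(1, sizeof(*map));
	if (!map)
		return -1;

	/* Re-check under the lock; drop our copy if we lost the race. */
	pthread_mutex_lock(&ino->init_lock);
	if (ino->rsv_map) {
		pthread_mutex_unlock(&ino->init_lock);
		free(map);
		return 0;
	}
	ino->rsv_map = map;
	pthread_mutex_unlock(&ino->init_lock);
	return 0;
}
```

Calling `init_rsv_map()` a second time is a cheap no-op that leaves the first
map in place, which is exactly the property the patch relies on when quotas
are enabled after the inode has already been instantiated.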