From patchwork Tue Sep 8 09:01:59 2015
X-Patchwork-Submitter: Qu Wenruo
X-Patchwork-Id: 7139271
From: Qu Wenruo <quwenruo@cn.fujitsu.com>
To: linux-btrfs@vger.kernel.org
Subject: [PATCH 06/19] btrfs: qgroup: Introduce btrfs_qgroup_reserve_data function
Date: Tue, 8 Sep 2015 17:01:59 +0800
Message-ID: <1441702920-21278-3-git-send-email-quwenruo@cn.fujitsu.com>
X-Mailer: git-send-email 2.5.1
In-Reply-To: <1441702615-18333-1-git-send-email-quwenruo@cn.fujitsu.com>
References: <1441702615-18333-1-git-send-email-quwenruo@cn.fujitsu.com>

This new function does all the hard work of reserving space for a write.

The overall workflow is as follows.

File A already has some dirty pages:

0       4K      8K      12K     16K
|///////|       |///////|

Then someone wants to write data into the range [4K, 16K):

        |<-------desired------->|

Unlike the old, incorrect implementation, which would reserve the full
12K, this function only reserves space for the newly dirtied parts:

        |\\\\\\\|       |\\\\\\\|

This takes only 8K of reserved space, as the other parts have already
allocated their own reservations.

So the final reserve map will be:

|///////////////////////////////|

This provides the basis for resolving the long-standing qgroup limit bug.
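To make the idea above concrete, here is a minimal, stand-alone user-space
sketch of the "charge only the bytes not already covered" calculation the
commit message describes. It is not the kernel implementation; rsv_range,
bytes_to_reserve and the sample numbers are purely illustrative, and it
assumes the existing reserved ranges do not overlap each other.

/*
 * Illustrative model only: given a set of already-reserved ranges,
 * compute how many new bytes a write into [start, start + len) needs.
 */
#include <stdio.h>

typedef unsigned long long u64;

struct rsv_range {
	u64 start;
	u64 len;
};

/* Bytes of [start, start + len) not covered by any existing range. */
static u64 bytes_to_reserve(const struct rsv_range *map, int nr,
			    u64 start, u64 len)
{
	u64 covered = 0;
	u64 end = start + len;
	int i;

	for (i = 0; i < nr; i++) {
		u64 rs = map[i].start;
		u64 re = map[i].start + map[i].len;
		u64 lo = rs > start ? rs : start;
		u64 hi = re < end ? re : end;

		if (hi > lo)
			covered += hi - lo;
	}
	return len - covered;
}

int main(void)
{
	/* File A: already-reserved dirty ranges [0, 4K) and [8K, 12K). */
	struct rsv_range map[] = {
		{ 0, 4096 },
		{ 8192, 4096 },
	};

	/* New write into [4K, 16K): only 8K of new reservation is needed. */
	printf("need to reserve %llu bytes\n",
	       bytes_to_reserve(map, 2, 4096, 12288));
	return 0;
}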
Signed-off-by: Qu Wenruo <quwenruo@cn.fujitsu.com>
---
 fs/btrfs/qgroup.c | 57 +++++++++++++++++++++++++++++++++++++++++++++++++++++++
 fs/btrfs/qgroup.h |  1 +
 2 files changed, 58 insertions(+)

diff --git a/fs/btrfs/qgroup.c b/fs/btrfs/qgroup.c
index 77a2e07..337b784 100644
--- a/fs/btrfs/qgroup.c
+++ b/fs/btrfs/qgroup.c
@@ -2793,6 +2793,63 @@ insert:
 }
 
 /*
+ * Make sure the data space for [start, start + len) is reserved.
+ * It will either reserve new space from given qgroup or reuse the already
+ * reserved space.
+ *
+ * Return 0 for successful reserve.
+ * Return <0 for error.
+ *
+ * TODO: to handle nocow case, like NODATACOW or write into prealloc space
+ * along with other mixed case.
+ * Like write 2M, first 1M can be nocowed, but next 1M is on hole and need COW.
+ */
+int btrfs_qgroup_reserve_data(struct inode *inode, u64 start, u64 len)
+{
+	struct btrfs_inode *binode = BTRFS_I(inode);
+	struct btrfs_root *root = binode->root;
+	struct btrfs_qgroup_data_rsv_map *reserve_map;
+	struct data_rsv_range *tmp = NULL;
+	struct ulist *insert_list;
+	int ret;
+
+	if (!root->fs_info->quota_enabled || !is_fstree(root->objectid) ||
+	    len == 0)
+		return 0;
+
+	if (!binode->qgroup_rsv_map) {
+		ret = btrfs_qgroup_init_data_rsv_map(inode);
+		if (ret < 0)
+			return ret;
+	}
+	reserve_map = binode->qgroup_rsv_map;
+	insert_list = ulist_alloc(GFP_NOFS);
+	if (!insert_list)
+		return -ENOMEM;
+	tmp = kzalloc(sizeof(*tmp), GFP_NOFS);
+	if (!tmp) {
+		ulist_free(insert_list);
+		return -ENOMEM;
+	}
+
+	spin_lock(&reserve_map->lock);
+	ret = reserve_data_range(root, reserve_map, tmp, insert_list, start,
+				 len);
+	/*
+	 * For error and already exists case, free tmp memory.
+	 * For tmp used case, set ret to 0, as some careless
+	 * caller consider >0 as error.
+	 */
+	if (ret <= 0)
+		kfree(tmp);
+	else
+		ret = 0;
+	spin_unlock(&reserve_map->lock);
+	ulist_free(insert_list);
+	return ret;
+}
+
+/*
  * Init data_rsv_map for a given inode.
  *
  * This is needed at write time as quota can be disabled and then enabled
diff --git a/fs/btrfs/qgroup.h b/fs/btrfs/qgroup.h
index c87b7dc..366b853 100644
--- a/fs/btrfs/qgroup.h
+++ b/fs/btrfs/qgroup.h
@@ -87,4 +87,5 @@ int btrfs_verify_qgroup_counts(struct btrfs_fs_info *fs_info, u64 qgroupid,
 /* for qgroup reserve */
 int btrfs_qgroup_init_data_rsv_map(struct inode *inode);
 void btrfs_qgroup_free_data_rsv_map(struct inode *inode);
+int btrfs_qgroup_reserve_data(struct inode *inode, u64 start, u64 len);
 #endif /* __BTRFS_QGROUP__ */
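For context, a hypothetical caller sketch follows. It is not part of this
patch: example_prepare_write and the page-size rounding shown are
assumptions about how a write path could use the new helper; the actual
wiring into the write path happens later in the series. The only contract
taken from the patch itself is that btrfs_qgroup_reserve_data() returns 0
on success and <0 on error.

/* Hypothetical caller sketch, kernel context assumed (circa v4.2). */
static int example_prepare_write(struct inode *inode, loff_t pos,
				 size_t count)
{
	u64 start = round_down(pos, PAGE_CACHE_SIZE);
	u64 len = round_up(pos + count, PAGE_CACHE_SIZE) - start;
	int ret;

	/* Reserve qgroup data space for the exact range being dirtied. */
	ret = btrfs_qgroup_reserve_data(inode, start, len);
	if (ret < 0)
		return ret;

	/* ... lock and dirty the pages as usual ... */
	return 0;
}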