From patchwork Sun Sep 8 14:07:03 2024
X-Patchwork-Submitter: "heming.zhao@suse.com"
X-Patchwork-Id: 13795496
From: Heming Zhao
To: joseph.qi@linux.alibaba.com, glass.su@suse.com
Cc: Heming Zhao, ocfs2-devel@lists.linux.dev, linux-kernel@vger.kernel.org
Subject: [PATCH v3 1/3] ocfs2: give ocfs2 the ability to reclaim suballoc free bg
Date: Sun, 8 Sep 2024 22:07:03 +0800
Message-Id: <20240908140705.19169-2-heming.zhao@suse.com>
X-Mailer: git-send-email 2.35.3
In-Reply-To: <20240908140705.19169-1-heming.zhao@suse.com>
References: <20240908140705.19169-1-heming.zhao@suse.com>
X-Mailing-List: ocfs2-devel@lists.linux.dev

The current ocfs2 code can't reclaim suballocator block group space, which causes ocfs2 to hold onto a lot of space in some cases. For example, when lots of small files are created, the space is held/managed by '//inode_alloc'. After the user deletes all the small files, that space never returns to '//global_bitmap'. This prevents ocfs2 from providing space even when a small ocfs2 volume still has plenty of free space.

This patch gives ocfs2 the ability to reclaim suballocator free space when a block group becomes free. For performance reasons, the first suballocator block group is kept.

Signed-off-by: Heming Zhao
Reviewed-by: Su Yue
---
 fs/ocfs2/suballoc.c | 302 ++++++++++++++++++++++++++++++++++++++++++--
 1 file changed, 292 insertions(+), 10 deletions(-)

diff --git a/fs/ocfs2/suballoc.c b/fs/ocfs2/suballoc.c index f7b483f0de2a..d62010166c34 100644 --- a/fs/ocfs2/suballoc.c +++ b/fs/ocfs2/suballoc.c @@ -294,6 +294,68 @@ static int ocfs2_validate_group_descriptor(struct super_block *sb, return ocfs2_validate_gd_self(sb, bh, 0); } +/* + * The hint gd may already have been released in _ocfs2_free_suballoc_bits(), + * so first check the gd descriptor signature, then do the + * ocfs2_read_group_descriptor() jobs.
+ * + * When the group descriptor is invalid, we return 'rc=0' and + * '*released=1'. The caller should handle this case. Otherwise, + * we return the real error code. + */ +static int ocfs2_read_hint_group_descriptor(struct inode *inode, + struct ocfs2_dinode *di, u64 gd_blkno, + struct buffer_head **bh, int *released) +{ + int rc; + struct buffer_head *tmp = *bh; + struct ocfs2_group_desc *gd; + + *released = 0; + + rc = ocfs2_read_block(INODE_CACHE(inode), gd_blkno, &tmp, NULL); + if (rc) + goto out; + + gd = (struct ocfs2_group_desc *) tmp->b_data; + if (!OCFS2_IS_VALID_GROUP_DESC(gd)) { + /* + * An invalid gd cache entry was set in ocfs2_read_block(), + * which will affect block_group allocation. + * Path: + * ocfs2_reserve_suballoc_bits + * ocfs2_block_group_alloc + * ocfs2_block_group_alloc_contig + * ocfs2_set_new_buffer_uptodate + */ + ocfs2_remove_from_cache(INODE_CACHE(inode), tmp); + *released = 1; /* we return 'rc=0' for this case */ + goto free_bh; + } + + /* the jobs below are the same as in ocfs2_read_group_descriptor() */ + if (!buffer_jbd(tmp)) { + rc = ocfs2_validate_group_descriptor(inode->i_sb, tmp); + if (rc) + goto free_bh; + } + + rc = ocfs2_validate_gd_parent(inode->i_sb, di, tmp, 0); + if (rc) + goto free_bh; + + /* If ocfs2_read_block() got us a new bh, pass it up.
*/ + if (!*bh) + *bh = tmp; + + return rc; + +free_bh: + brelse(tmp); +out: + return rc; +} + int ocfs2_read_group_descriptor(struct inode *inode, struct ocfs2_dinode *di, u64 gd_blkno, struct buffer_head **bh) { @@ -1722,7 +1784,7 @@ static int ocfs2_search_one_group(struct ocfs2_alloc_context *ac, u32 bits_wanted, u32 min_bits, struct ocfs2_suballoc_result *res, - u16 *bits_left) + u16 *bits_left, int *released) { int ret; struct buffer_head *group_bh = NULL; @@ -1730,9 +1792,11 @@ static int ocfs2_search_one_group(struct ocfs2_alloc_context *ac, struct ocfs2_dinode *di = (struct ocfs2_dinode *)ac->ac_bh->b_data; struct inode *alloc_inode = ac->ac_inode; - ret = ocfs2_read_group_descriptor(alloc_inode, di, - res->sr_bg_blkno, &group_bh); - if (ret < 0) { + ret = ocfs2_read_hint_group_descriptor(alloc_inode, di, + res->sr_bg_blkno, &group_bh, released); + if (*released) { + return 0; + } else if (ret < 0) { mlog_errno(ret); return ret; } @@ -1934,7 +1998,7 @@ static int ocfs2_claim_suballoc_bits(struct ocfs2_alloc_context *ac, u32 min_bits, struct ocfs2_suballoc_result *res) { - int status; + int status, released; u16 victim, i; u16 bits_left = 0; u64 hint = ac->ac_last_group; @@ -1961,6 +2025,7 @@ static int ocfs2_claim_suballoc_bits(struct ocfs2_alloc_context *ac, goto bail; } + /* The hint bg may already have been released; if so, quietly fall back to the chain search. */ res->sr_bg_blkno = hint; if (res->sr_bg_blkno) { /* Attempt to short-circuit the usual search mechanism * by jumping straight to the most recently used * allocation group. This helps us maintain some * contiguousness across allocations.
*/ status = ocfs2_search_one_group(ac, handle, bits_wanted, - min_bits, res, &bits_left); + min_bits, res, &bits_left, + &released); + if (released) { + res->sr_bg_blkno = 0; + goto chain_search; + } if (!status) goto set_hint; if (status < 0 && status != -ENOSPC) { @@ -1976,7 +2046,7 @@ static int ocfs2_claim_suballoc_bits(struct ocfs2_alloc_context *ac, goto bail; } } - +chain_search: cl = (struct ocfs2_chain_list *) &fe->id2.i_chain; victim = ocfs2_find_victim_chain(cl); @@ -2077,6 +2147,12 @@ int ocfs2_claim_metadata(handle_t *handle, return status; } +/* + * Now that ocfs2 can release unused block group space, the + * ->ip_last_used_group may be invalid. So the ac->ac_last_group value + * set by this function needs to be verified by the caller. + * Refer to the 'hint' handling in ocfs2_claim_suballoc_bits() for details. + */ static void ocfs2_init_inode_ac_group(struct inode *dir, struct buffer_head *parent_di_bh, struct ocfs2_alloc_context *ac) @@ -2514,6 +2590,197 @@ static int ocfs2_block_group_clear_bits(handle_t *handle, return status; } +/* + * Reclaim suballocator-managed space back to the main bitmap. + * This function first works on the suballocator, then switches to the + * main bitmap. + * + * handle: The transaction handle + * alloc_inode: The suballoc inode + * alloc_bh: The buffer_head of the suballoc inode + * group_bh: The buffer_head of the group descriptor managed by the + * suballocator. The caller should release the input group_bh.
+ */ +static int _reclaim_to_main_bm(handle_t *handle, + struct inode *alloc_inode, + struct buffer_head *alloc_bh, + struct buffer_head *group_bh) +{ + int idx, status = 0; + int i, next_free_rec, len = 0; + __le16 old_bg_contig_free_bits = 0; + u16 start_bit; + u32 tmp_used; + u64 bg_blkno, start_blk; + unsigned int count; + struct ocfs2_chain_rec *rec; + struct buffer_head *main_bm_bh = NULL; + struct inode *main_bm_inode = NULL; + struct ocfs2_super *osb = OCFS2_SB(alloc_inode->i_sb); + struct ocfs2_dinode *fe = (struct ocfs2_dinode *) alloc_bh->b_data; + struct ocfs2_chain_list *cl = &fe->id2.i_chain; + struct ocfs2_group_desc *group = (struct ocfs2_group_desc *) group_bh->b_data; + + idx = le16_to_cpu(group->bg_chain); + rec = &(cl->cl_recs[idx]); + + status = ocfs2_extend_trans(handle, + ocfs2_calc_group_alloc_credits(osb->sb, + le16_to_cpu(cl->cl_cpg))); + if (status) { + mlog_errno(status); + goto bail; + } + status = ocfs2_journal_access_di(handle, INODE_CACHE(alloc_inode), + alloc_bh, OCFS2_JOURNAL_ACCESS_WRITE); + if (status < 0) { + mlog_errno(status); + goto bail; + } + + /* + * Only clear the suballocator rec item in-place. + * + * If idx is not the last record, we don't compress (remove the empty + * item from) cl_recs[]; doing that would require a lot of extra work. + * + * Compress cl_recs[] code example: + * if (idx != cl->cl_next_free_rec - 1) + * memmove(&cl->cl_recs[idx], &cl->cl_recs[idx + 1], + * sizeof(struct ocfs2_chain_rec) * + * (cl->cl_next_free_rec - idx - 1)); + * for(i = idx; i < cl->cl_next_free_rec-1; i++) { + * group->bg_chain = "later group->bg_chain"; + * group->bg_blkno = xxx; + * ... ...
+ * } + */ + + tmp_used = le32_to_cpu(fe->id1.bitmap1.i_total); + fe->id1.bitmap1.i_total = cpu_to_le32(tmp_used - le32_to_cpu(rec->c_total)); + + /* Substraction 1 for the block group itself */ + tmp_used = le32_to_cpu(fe->id1.bitmap1.i_used); + fe->id1.bitmap1.i_used = cpu_to_le32(tmp_used - 1); + + tmp_used = le32_to_cpu(fe->i_clusters); + fe->i_clusters = cpu_to_le32(tmp_used - le16_to_cpu(cl->cl_cpg)); + + spin_lock(&OCFS2_I(alloc_inode)->ip_lock); + OCFS2_I(alloc_inode)->ip_clusters -= le32_to_cpu(fe->i_clusters); + fe->i_size = cpu_to_le64(ocfs2_clusters_to_bytes(alloc_inode->i_sb, + le32_to_cpu(fe->i_clusters))); + spin_unlock(&OCFS2_I(alloc_inode)->ip_lock); + i_size_write(alloc_inode, le64_to_cpu(fe->i_size)); + alloc_inode->i_blocks = ocfs2_inode_sector_count(alloc_inode); + + ocfs2_journal_dirty(handle, alloc_bh); + ocfs2_update_inode_fsync_trans(handle, alloc_inode, 0); + + start_blk = le64_to_cpu(rec->c_blkno); + count = le32_to_cpu(rec->c_total) / le16_to_cpu(cl->cl_bpc); + + /* + * If the rec is the last one, let's compress the chain list by + * removing the empty cl_recs[] at the end. 
+ */ + next_free_rec = le16_to_cpu(cl->cl_next_free_rec); + if (idx == (next_free_rec - 1)) { + len++; /* the last item should be counted first */ + for (i = (next_free_rec - 2); i > 0; i--) { + if (cl->cl_recs[i].c_free == cl->cl_recs[i].c_total) + len++; + else + break; + } + } + le16_add_cpu(&cl->cl_next_free_rec, -len); + + rec->c_free = 0; + rec->c_total = 0; + rec->c_blkno = 0; + ocfs2_remove_from_cache(INODE_CACHE(alloc_inode), group_bh); + memset(group, 0, sizeof(struct ocfs2_group_desc)); + + /* prepare to reclaim the clusters */ + main_bm_inode = ocfs2_get_system_file_inode(osb, + GLOBAL_BITMAP_SYSTEM_INODE, + OCFS2_INVALID_SLOT); + if (!main_bm_inode) + goto bail; /* ignore the error in reclaim path */ + + inode_lock(main_bm_inode); + + status = ocfs2_inode_lock(main_bm_inode, &main_bm_bh, 1); + if (status < 0) + goto free_bm_inode; /* ignore the error in reclaim path */ + + ocfs2_block_to_cluster_group(main_bm_inode, start_blk, &bg_blkno, + &start_bit); + fe = (struct ocfs2_dinode *) main_bm_bh->b_data; + cl = &fe->id2.i_chain; + /* reuse group_bh, caller will release the input group_bh */ + group_bh = NULL; + + /* reclaim clusters to global_bitmap */ + status = ocfs2_read_group_descriptor(main_bm_inode, fe, bg_blkno, + &group_bh); + if (status < 0) { + mlog_errno(status); + goto free_bm_bh; + } + group = (struct ocfs2_group_desc *) group_bh->b_data; + + if ((count + start_bit) > le16_to_cpu(group->bg_bits)) { + ocfs2_error(alloc_inode->i_sb, + "reclaim length (%d) beyond block group length (%d)", + count + start_bit, le16_to_cpu(group->bg_bits)); + goto free_group_bh; + } + + old_bg_contig_free_bits = group->bg_contig_free_bits; + status = ocfs2_block_group_clear_bits(handle, main_bm_inode, + group, group_bh, + start_bit, count, 0, + _ocfs2_clear_bit); + if (status < 0) { + mlog_errno(status); + goto free_group_bh; + } + + status = ocfs2_journal_access_di(handle, INODE_CACHE(main_bm_inode), + main_bm_bh, OCFS2_JOURNAL_ACCESS_WRITE); + if (status < 0)
{ + mlog_errno(status); + ocfs2_block_group_set_bits(handle, main_bm_inode, group, group_bh, + start_bit, count, + le16_to_cpu(old_bg_contig_free_bits), 1); + goto free_group_bh; + } + + idx = le16_to_cpu(group->bg_chain); + rec = &(cl->cl_recs[idx]); + + le32_add_cpu(&rec->c_free, count); + tmp_used = le32_to_cpu(fe->id1.bitmap1.i_used); + fe->id1.bitmap1.i_used = cpu_to_le32(tmp_used - count); + ocfs2_journal_dirty(handle, main_bm_bh); + +free_group_bh: + brelse(group_bh); + +free_bm_bh: + ocfs2_inode_unlock(main_bm_inode, 1); + brelse(main_bm_bh); + +free_bm_inode: + inode_unlock(main_bm_inode); + iput(main_bm_inode); + +bail: + return status; +} + /* * expects the suballoc inode to already be locked. */ @@ -2526,12 +2793,13 @@ static int _ocfs2_free_suballoc_bits(handle_t *handle, void (*undo_fn)(unsigned int bit, unsigned long *bitmap)) { - int status = 0; + int idx, status = 0; u32 tmp_used; struct ocfs2_dinode *fe = (struct ocfs2_dinode *) alloc_bh->b_data; struct ocfs2_chain_list *cl = &fe->id2.i_chain; struct buffer_head *group_bh = NULL; struct ocfs2_group_desc *group; + struct ocfs2_chain_rec *rec; __le16 old_bg_contig_free_bits = 0; /* The alloc_bh comes from ocfs2_free_dinode() or @@ -2577,12 +2845,26 @@ static int _ocfs2_free_suballoc_bits(handle_t *handle, goto bail; } - le32_add_cpu(&cl->cl_recs[le16_to_cpu(group->bg_chain)].c_free, - count); + idx = le16_to_cpu(group->bg_chain); + rec = &(cl->cl_recs[idx]); + + le32_add_cpu(&rec->c_free, count); tmp_used = le32_to_cpu(fe->id1.bitmap1.i_used); fe->id1.bitmap1.i_used = cpu_to_le32(tmp_used - count); ocfs2_journal_dirty(handle, alloc_bh); + /* + * Reclaim suballocator free space. 
+ * Skip: the global bitmap, a rec that is not yet empty, and the first rec in cl_recs[]. + */ + if (ocfs2_is_cluster_bitmap(alloc_inode) || + (le32_to_cpu(rec->c_free) != (le32_to_cpu(rec->c_total) - 1)) || + (le16_to_cpu(cl->cl_next_free_rec) == 1)) { + goto bail; + } + + _reclaim_to_main_bm(handle, alloc_inode, alloc_bh, group_bh); + bail: brelse(group_bh); return status;

From patchwork Sun Sep 8 14:07:04 2024
X-Patchwork-Submitter: "heming.zhao@suse.com"
X-Patchwork-Id: 13795497
From: Heming Zhao
To: joseph.qi@linux.alibaba.com, glass.su@suse.com
Cc: Heming Zhao, ocfs2-devel@lists.linux.dev, linux-kernel@vger.kernel.org
Subject: [PATCH v3 2/3] ocfs2: detect released suballocator bg for fh_to_[dentry|parent]
Date: Sun, 8 Sep 2024 22:07:04 +0800
Message-Id: <20240908140705.19169-3-heming.zhao@suse.com>
X-Mailer: git-send-email 2.35.3
In-Reply-To: <20240908140705.19169-1-heming.zhao@suse.com>
References: <20240908140705.19169-1-heming.zhao@suse.com>
X-Mailing-List: ocfs2-devel@lists.linux.dev

Now that ocfs2 can reclaim suballoc free bg, a suballocator block group may be released. This makes xfstests case 426 fail. The existing call stack:

ocfs2_fh_to_dentry //or ocfs2_fh_to_parent
 ocfs2_get_dentry
  ocfs2_test_inode_bit
   ocfs2_test_suballoc_bit
    ocfs2_read_group_descriptor
    + reading a released bg fails validation, which then causes -EROFS

How to fix: the read failure for a released bg is expected, so we should ignore this error.
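The quiet-failure contract above can be sketched in userspace C. This is a hypothetical model, not the kernel code: `group_desc`, `read_hint_group_descriptor()` and `test_suballoc_bit()` are simplified stand-ins for the ocfs2 structures and functions, and the signature constant is only illustrative. The point is the shape of the contract: a released group reports `*released = 1` with `rc == 0`, and the caller converts that into a quiet -EINVAL instead of a logged hard error.

```c
#include <assert.h>
#include <errno.h>
#include <string.h>

#define GROUP_DESC_SIGNATURE "GROUP01"	/* illustrative signature value */

struct group_desc {
	char bg_signature[8];
	int bg_bits;
};

/* Model of ocfs2_read_hint_group_descriptor(): a descriptor whose
 * signature no longer matches is treated as "released", reported via
 * *released with rc == 0, not as a read error. */
static int read_hint_group_descriptor(const struct group_desc *gd, int *released)
{
	*released = 0;
	if (memcmp(gd->bg_signature, GROUP_DESC_SIGNATURE, 8) != 0) {
		*released = 1;	/* bg was reclaimed: rc stays 0 */
		return 0;
	}
	return 0;		/* normal validation would continue here */
}

/* Model of the caller (ocfs2_test_suballoc_bit()): a released bg becomes
 * a quiet -EINVAL, so no error is logged for the expected case. */
static int test_suballoc_bit(const struct group_desc *gd, int *quiet)
{
	int released, rc;

	*quiet = 0;
	rc = read_hint_group_descriptor(gd, &released);
	if (released) {
		*quiet = 1;
		return -EINVAL;
	}
	return rc;
}
```

A valid descriptor passes through with `quiet == 0`; a zeroed (reclaimed) one returns -EINVAL with `quiet == 1`, matching the hunks in this patch.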
Signed-off-by: Heming Zhao Reviewed-by: Su Yue --- fs/ocfs2/suballoc.c | 28 ++++++++++++++++++---------- 1 file changed, 18 insertions(+), 10 deletions(-) diff --git a/fs/ocfs2/suballoc.c b/fs/ocfs2/suballoc.c index d62010166c34..9e847f59c9ef 100644 --- a/fs/ocfs2/suballoc.c +++ b/fs/ocfs2/suballoc.c @@ -3118,7 +3118,7 @@ static int ocfs2_test_suballoc_bit(struct ocfs2_super *osb, struct ocfs2_group_desc *group; struct buffer_head *group_bh = NULL; u64 bg_blkno; - int status; + int status, quiet = 0, released; trace_ocfs2_test_suballoc_bit((unsigned long long)blkno, (unsigned int)bit); @@ -3134,11 +3134,15 @@ static int ocfs2_test_suballoc_bit(struct ocfs2_super *osb, bg_blkno = group_blkno ? group_blkno : ocfs2_which_suballoc_group(blkno, bit); - status = ocfs2_read_group_descriptor(suballoc, alloc_di, bg_blkno, - &group_bh); - if (status < 0) { + status = ocfs2_read_hint_group_descriptor(suballoc, alloc_di, bg_blkno, + &group_bh, &released); + if (released) { + quiet = 1; + status = -EINVAL; + goto bail; + } else if (status < 0) { mlog(ML_ERROR, "read group %llu failed %d\n", - (unsigned long long)bg_blkno, status); + (unsigned long long)bg_blkno, status); goto bail; } @@ -3148,7 +3152,7 @@ static int ocfs2_test_suballoc_bit(struct ocfs2_super *osb, bail: brelse(group_bh); - if (status) + if (status && (!quiet)) mlog_errno(status); return status; } @@ -3168,7 +3172,7 @@ static int ocfs2_test_suballoc_bit(struct ocfs2_super *osb, */ int ocfs2_test_inode_bit(struct ocfs2_super *osb, u64 blkno, int *res) { - int status; + int status, quiet = 0; u64 group_blkno = 0; u16 suballoc_bit = 0, suballoc_slot = 0; struct inode *inode_alloc_inode; @@ -3210,8 +3214,12 @@ int ocfs2_test_inode_bit(struct ocfs2_super *osb, u64 blkno, int *res) status = ocfs2_test_suballoc_bit(osb, inode_alloc_inode, alloc_bh, group_blkno, blkno, suballoc_bit, res); - if (status < 0) - mlog(ML_ERROR, "test suballoc bit failed %d\n", status); + if (status < 0) { + if (status == -EINVAL) + quiet = 
1; + else + mlog(ML_ERROR, "test suballoc bit failed %d\n", status); + } ocfs2_inode_unlock(inode_alloc_inode, 0); inode_unlock(inode_alloc_inode); @@ -3219,7 +3227,7 @@ int ocfs2_test_inode_bit(struct ocfs2_super *osb, u64 blkno, int *res) iput(inode_alloc_inode); brelse(alloc_bh); bail: - if (status) + if (status && !quiet) mlog_errno(status); return status; }

From patchwork Sun Sep 8 14:07:05 2024
X-Patchwork-Submitter: "heming.zhao@suse.com"
X-Patchwork-Id: 13795498
From: Heming Zhao
To: joseph.qi@linux.alibaba.com, glass.su@suse.com
Cc: Heming Zhao, ocfs2-devel@lists.linux.dev, linux-kernel@vger.kernel.org
Subject: [PATCH v3 3/3] ocfs2: adjust spinlock_t ip_lock protection scope
Date: Sun, 8 Sep 2024 22:07:05 +0800
Message-Id: <20240908140705.19169-4-heming.zhao@suse.com>
X-Mailer: git-send-email 2.35.3
In-Reply-To: <20240908140705.19169-1-heming.zhao@suse.com>
References: <20240908140705.19169-1-heming.zhao@suse.com>
X-Mailing-List: ocfs2-devel@lists.linux.dev

Some of the spinlock_t ip_lock critical sections are scoped incorrectly: they also cover fields that ip_lock does not protect. Narrow them to match the field annotations in 'struct ocfs2_inode_info'.
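The rule the hunks below apply can be sketched in userspace C. This is a hypothetical model, not the kernel code: `model_lock`, `model_inode` and `refresh_inode()` are illustrative stand-ins; the toy lock only tracks held/not-held so the critical-section boundary is visible. The idea is that only the ip_lock-annotated field (here `ip_clusters`) is updated under the lock, while timestamp updates move outside the narrowed section, as this patch does in ocfs2_refresh_inode() and ocfs2_refresh_inode_from_lvb().

```c
#include <assert.h>

/* Toy spinlock stand-in: tracks held/not-held so the critical-section
 * boundaries are checkable in userspace. */
struct model_lock { int held; };
static void model_spin_lock(struct model_lock *l)   { assert(!l->held); l->held = 1; }
static void model_spin_unlock(struct model_lock *l) { assert(l->held);  l->held = 0; }

struct model_inode {
	struct model_lock ip_lock;
	unsigned int ip_clusters;	/* protected by ip_lock */
	long long mtime;		/* protected elsewhere, not by ip_lock */
};

static void refresh_inode(struct model_inode *oi, unsigned int clusters,
			  long long now)
{
	model_spin_lock(&oi->ip_lock);
	oi->ip_clusters = clusters;	/* the only ip_lock-protected update */
	model_spin_unlock(&oi->ip_lock);

	oi->mtime = now;		/* moved outside the critical section */
}
```

Keeping the timestamp writes out of the spinlock shortens the hold time without changing the result, since ip_lock was never what serialized them.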
Signed-off-by: Heming Zhao Reviewed-by: Su Yue --- fs/ocfs2/dlmglue.c | 3 ++- fs/ocfs2/inode.c | 5 +++-- fs/ocfs2/resize.c | 4 ++-- fs/ocfs2/suballoc.c | 2 +- 4 files changed, 8 insertions(+), 6 deletions(-) diff --git a/fs/ocfs2/dlmglue.c b/fs/ocfs2/dlmglue.c index da78a04d6f0b..4a5900c8dc8f 100644 --- a/fs/ocfs2/dlmglue.c +++ b/fs/ocfs2/dlmglue.c @@ -2232,6 +2232,8 @@ static int ocfs2_refresh_inode_from_lvb(struct inode *inode) else inode->i_blocks = ocfs2_inode_sector_count(inode); + spin_unlock(&oi->ip_lock); + i_uid_write(inode, be32_to_cpu(lvb->lvb_iuid)); i_gid_write(inode, be32_to_cpu(lvb->lvb_igid)); inode->i_mode = be16_to_cpu(lvb->lvb_imode); @@ -2242,7 +2244,6 @@ static int ocfs2_refresh_inode_from_lvb(struct inode *inode) inode_set_mtime_to_ts(inode, ts); ocfs2_unpack_timespec(&ts, be64_to_cpu(lvb->lvb_ictime_packed)); inode_set_ctime_to_ts(inode, ts); - spin_unlock(&oi->ip_lock); return 0; } diff --git a/fs/ocfs2/inode.c b/fs/ocfs2/inode.c index 2cc5c99fe941..4af9a6dfddd2 100644 --- a/fs/ocfs2/inode.c +++ b/fs/ocfs2/inode.c @@ -1348,14 +1348,15 @@ void ocfs2_refresh_inode(struct inode *inode, inode->i_blocks = 0; else inode->i_blocks = ocfs2_inode_sector_count(inode); + + spin_unlock(&OCFS2_I(inode)->ip_lock); + inode_set_atime(inode, le64_to_cpu(fe->i_atime), le32_to_cpu(fe->i_atime_nsec)); inode_set_mtime(inode, le64_to_cpu(fe->i_mtime), le32_to_cpu(fe->i_mtime_nsec)); inode_set_ctime(inode, le64_to_cpu(fe->i_ctime), le32_to_cpu(fe->i_ctime_nsec)); - - spin_unlock(&OCFS2_I(inode)->ip_lock); } int ocfs2_validate_inode_block(struct super_block *sb, diff --git a/fs/ocfs2/resize.c b/fs/ocfs2/resize.c index c4a4016d3866..b29f71357d63 100644 --- a/fs/ocfs2/resize.c +++ b/fs/ocfs2/resize.c @@ -153,8 +153,8 @@ static int ocfs2_update_last_group_and_inode(handle_t *handle, spin_lock(&OCFS2_I(bm_inode)->ip_lock); OCFS2_I(bm_inode)->ip_clusters = le32_to_cpu(fe->i_clusters); - le64_add_cpu(&fe->i_size, (u64)new_clusters << osb->s_clustersize_bits); 
spin_unlock(&OCFS2_I(bm_inode)->ip_lock); + le64_add_cpu(&fe->i_size, (u64)new_clusters << osb->s_clustersize_bits); i_size_write(bm_inode, le64_to_cpu(fe->i_size)); ocfs2_journal_dirty(handle, bm_bh); @@ -564,8 +564,8 @@ int ocfs2_group_add(struct inode *inode, struct ocfs2_new_group_input *input) spin_lock(&OCFS2_I(main_bm_inode)->ip_lock); OCFS2_I(main_bm_inode)->ip_clusters = le32_to_cpu(fe->i_clusters); - le64_add_cpu(&fe->i_size, (u64)input->clusters << osb->s_clustersize_bits); spin_unlock(&OCFS2_I(main_bm_inode)->ip_lock); + le64_add_cpu(&fe->i_size, (u64)input->clusters << osb->s_clustersize_bits); i_size_write(main_bm_inode, le64_to_cpu(fe->i_size)); ocfs2_update_super_and_backups(main_bm_inode, input->clusters); diff --git a/fs/ocfs2/suballoc.c b/fs/ocfs2/suballoc.c index 9e847f59c9ef..3f91615d8702 100644 --- a/fs/ocfs2/suballoc.c +++ b/fs/ocfs2/suballoc.c @@ -798,9 +798,9 @@ static int ocfs2_block_group_alloc(struct ocfs2_super *osb, spin_lock(&OCFS2_I(alloc_inode)->ip_lock); OCFS2_I(alloc_inode)->ip_clusters = le32_to_cpu(fe->i_clusters); + spin_unlock(&OCFS2_I(alloc_inode)->ip_lock); fe->i_size = cpu_to_le64(ocfs2_clusters_to_bytes(alloc_inode->i_sb, le32_to_cpu(fe->i_clusters))); - spin_unlock(&OCFS2_I(alloc_inode)->ip_lock); i_size_write(alloc_inode, le64_to_cpu(fe->i_size)); alloc_inode->i_blocks = ocfs2_inode_sector_count(alloc_inode); ocfs2_update_inode_fsync_trans(handle, alloc_inode, 0);
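Stepping back to patch 1 of this series, the reclaim trigger added in _ocfs2_free_suballoc_bits() and the chain-list tail compression in _reclaim_to_main_bm() can be modeled in userspace C. This is a hypothetical sketch: `should_reclaim()` and `tail_compress_len()` are illustrative names, not kernel functions, and the struct is stripped to the two fields the logic reads. A group is reclaimed only when it is not the global bitmap, the freeing leaves every bit free except the group descriptor's own bit (`c_free == c_total - 1`), and it is not the first (kept-for-performance) record; trailing fully-free records are then dropped from cl_recs[], never record 0.

```c
#include <assert.h>

struct chain_rec { unsigned int c_free, c_total; };

/* Model of the skip conditions added at the end of
 * _ocfs2_free_suballoc_bits(): return 1 when reclaim should run. */
static int should_reclaim(int is_cluster_bitmap, const struct chain_rec *rec,
			  int next_free_rec)
{
	if (is_cluster_bitmap)			/* never reclaim global_bitmap */
		return 0;
	if (rec->c_free != rec->c_total - 1)	/* group not fully free */
		return 0;
	if (next_free_rec == 1)			/* keep the first block group */
		return 0;
	return 1;
}

/* Model of the tail-compression loop in _reclaim_to_main_bm(): count how
 * many trailing recs can be dropped when the last rec empties. The loop
 * stops at index 0, so the first rec is always kept. */
static int tail_compress_len(const struct chain_rec *recs, int next_free_rec)
{
	int i, len = 1;	/* the last rec itself is counted first */

	for (i = next_free_rec - 2; i > 0; i--) {
		if (recs[i].c_free == recs[i].c_total)
			len++;
		else
			break;
	}
	return len;
}
```

With a 64-bit group, `c_free == 63` marks "everything free except the descriptor bit", which is exactly the state the patch reclaims.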