From patchwork Wed Jan 30 17:50:24 2019
X-Patchwork-Submitter: "Darrick J. Wong"
X-Patchwork-Id: 10789111
Subject: [PATCH 1/2] libfrog: hoist bitmap out of scrub
From: "Darrick J. Wong"
To: sandeen@redhat.com, darrick.wong@oracle.com
Cc: linux-xfs@vger.kernel.org
Date: Wed, 30 Jan 2019 09:50:24 -0800
Message-ID: <154887062444.5611.12746798208748899665.stgit@magnolia>
User-Agent: StGit/0.17.1-dirty

From: Darrick J. Wong

Move the bitmap code to libfrog so that we can use bitmaps in xfs_repair.

Signed-off-by: Darrick J. Wong
---
 include/bitmap.h |   24 +++
 libfrog/Makefile |    1 
 libfrog/bitmap.c |  393 ++++++++++++++++++++++++++++++++++++++++++++++++++++++
 scrub/Makefile   |    2 
 scrub/bitmap.c   |  393 ------------------------------------------------------
 scrub/bitmap.h   |   24 ---
 6 files changed, 418 insertions(+), 419 deletions(-)
 create mode 100644 include/bitmap.h
 create mode 100644 libfrog/bitmap.c
 delete mode 100644 scrub/bitmap.c
 delete mode 100644 scrub/bitmap.h

diff --git a/include/bitmap.h b/include/bitmap.h
new file mode 100644
index 00000000..e29a4335
--- /dev/null
+++ b/include/bitmap.h
@@ -0,0 +1,24 @@
+// SPDX-License-Identifier: GPL-2.0+
+/*
+ * Copyright (C) 2018 Oracle. All Rights Reserved.
+ * Author: Darrick J. Wong
+ */
+#ifndef LIBFROG_BITMAP_H_
+#define LIBFROG_BITMAP_H_
+
+struct bitmap {
+	pthread_mutex_t		bt_lock;
+	struct avl64tree_desc	*bt_tree;
+};
+
+bool bitmap_init(struct bitmap **bmap);
+void bitmap_free(struct bitmap **bmap);
+bool bitmap_set(struct bitmap *bmap, uint64_t start, uint64_t length);
+bool bitmap_iterate(struct bitmap *bmap,
+		bool (*fn)(uint64_t, uint64_t, void *), void *arg);
+bool bitmap_test(struct bitmap *bmap, uint64_t start,
+		uint64_t len);
+bool bitmap_empty(struct bitmap *bmap);
+void bitmap_dump(struct bitmap *bmap);
+
+#endif /* LIBFROG_BITMAP_H_ */
diff --git a/libfrog/Makefile b/libfrog/Makefile
index dbff9596..f5a0539b 100644
--- a/libfrog/Makefile
+++ b/libfrog/Makefile
@@ -12,6 +12,7 @@ LT_AGE = 0
 
 CFILES = \
 avl64.c \
+bitmap.c \
 convert.c \
 crc32.c \
 fsgeom.c \
diff --git a/libfrog/bitmap.c b/libfrog/bitmap.c
new file mode 100644
index 00000000..aecdba0d
--- /dev/null
+++ b/libfrog/bitmap.c
@@ -0,0 +1,393 @@
+// SPDX-License-Identifier: GPL-2.0+
+/*
+ * Copyright (C) 2018 Oracle. All Rights Reserved.
+ * Author: Darrick J. Wong
+ */
+#include "xfs.h"
+#include
+#include
+#include
+#include
+#include "platform_defs.h"
+#include "avl64.h"
+#include "list.h"
+#include "bitmap.h"
+
+/*
+ * Space Efficient Bitmap
+ *
+ * Implements a space-efficient bitmap. We use an AVL tree to manage
+ * extent records that tell us which ranges are set; the bitmap key is
+ * an arbitrary uint64_t. The usual bitmap operations (set, clear,
+ * test, test and set) are supported, plus we can iterate set ranges.
+ */
+
+#define avl_for_each_range_safe(pos, n, l, first, last) \
+	for (pos = (first), n = pos->avl_nextino, l = (last)->avl_nextino; pos != (l); \
+	     pos = n, n = pos ? pos->avl_nextino : NULL)
+
+#define avl_for_each_safe(tree, pos, n) \
+	for (pos = (tree)->avl_firstino, n = pos ? pos->avl_nextino : NULL; \
+	     pos != NULL; \
+	     pos = n, n = pos ?
pos->avl_nextino : NULL) + +#define avl_for_each(tree, pos) \ + for (pos = (tree)->avl_firstino; pos != NULL; pos = pos->avl_nextino) + +struct bitmap_node { + struct avl64node btn_node; + uint64_t btn_start; + uint64_t btn_length; +}; + +static uint64_t +extent_start( + struct avl64node *node) +{ + struct bitmap_node *btn; + + btn = container_of(node, struct bitmap_node, btn_node); + return btn->btn_start; +} + +static uint64_t +extent_end( + struct avl64node *node) +{ + struct bitmap_node *btn; + + btn = container_of(node, struct bitmap_node, btn_node); + return btn->btn_start + btn->btn_length; +} + +static struct avl64ops bitmap_ops = { + extent_start, + extent_end, +}; + +/* Initialize a bitmap. */ +bool +bitmap_init( + struct bitmap **bmapp) +{ + struct bitmap *bmap; + + bmap = calloc(1, sizeof(struct bitmap)); + if (!bmap) + return false; + bmap->bt_tree = malloc(sizeof(struct avl64tree_desc)); + if (!bmap->bt_tree) { + free(bmap); + return false; + } + + pthread_mutex_init(&bmap->bt_lock, NULL); + avl64_init_tree(bmap->bt_tree, &bitmap_ops); + *bmapp = bmap; + + return true; +} + +/* Free a bitmap. */ +void +bitmap_free( + struct bitmap **bmapp) +{ + struct bitmap *bmap; + struct avl64node *node; + struct avl64node *n; + struct bitmap_node *ext; + + bmap = *bmapp; + avl_for_each_safe(bmap->bt_tree, node, n) { + ext = container_of(node, struct bitmap_node, btn_node); + free(ext); + } + free(bmap->bt_tree); + *bmapp = NULL; +} + +/* Create a new bitmap extent node. */ +static struct bitmap_node * +bitmap_node_init( + uint64_t start, + uint64_t len) +{ + struct bitmap_node *ext; + + ext = malloc(sizeof(struct bitmap_node)); + if (!ext) + return NULL; + + ext->btn_node.avl_nextino = NULL; + ext->btn_start = start; + ext->btn_length = len; + + return ext; +} + +/* Set a region of bits (locked). */ +static bool +__bitmap_set( + struct bitmap *bmap, + uint64_t start, + uint64_t length) +{ + struct avl64node *firstn; + struct avl64node *lastn; + struct avl64node *pos; + struct avl64node *n; + struct avl64node *l; + struct bitmap_node *ext; + uint64_t new_start; + uint64_t new_length; + struct avl64node *node; + bool res = true; + + /* Find any existing nodes adjacent or within that range. */ + avl64_findranges(bmap->bt_tree, start - 1, start + length + 1, + &firstn, &lastn); + + /* Nothing, just insert a new extent. */ + if (firstn == NULL && lastn == NULL) { + ext = bitmap_node_init(start, length); + if (!ext) + return false; + + node = avl64_insert(bmap->bt_tree, &ext->btn_node); + if (node == NULL) { + free(ext); + errno = EEXIST; + return false; + } + + return true; + } + + assert(firstn != NULL && lastn != NULL); + new_start = start; + new_length = length; + + avl_for_each_range_safe(pos, n, l, firstn, lastn) { + ext = container_of(pos, struct bitmap_node, btn_node); + + /* Bail if the new extent is contained within an old one. */ + if (ext->btn_start <= start && + ext->btn_start + ext->btn_length >= start + length) + return res; + + /* Check for overlapping and adjacent extents. 
*/ + if (ext->btn_start + ext->btn_length >= start || + ext->btn_start <= start + length) { + if (ext->btn_start < start) { + new_start = ext->btn_start; + new_length += ext->btn_length; + } + + if (ext->btn_start + ext->btn_length > + new_start + new_length) + new_length = ext->btn_start + ext->btn_length - + new_start; + + avl64_delete(bmap->bt_tree, pos); + free(ext); + } + } + + ext = bitmap_node_init(new_start, new_length); + if (!ext) + return false; + + node = avl64_insert(bmap->bt_tree, &ext->btn_node); + if (node == NULL) { + free(ext); + errno = EEXIST; + return false; + } + + return res; +} + +/* Set a region of bits. */ +bool +bitmap_set( + struct bitmap *bmap, + uint64_t start, + uint64_t length) +{ + bool res; + + pthread_mutex_lock(&bmap->bt_lock); + res = __bitmap_set(bmap, start, length); + pthread_mutex_unlock(&bmap->bt_lock); + + return res; +} + +#if 0 /* Unused, provided for completeness. */ +/* Clear a region of bits. */ +bool +bitmap_clear( + struct bitmap *bmap, + uint64_t start, + uint64_t len) +{ + struct avl64node *firstn; + struct avl64node *lastn; + struct avl64node *pos; + struct avl64node *n; + struct avl64node *l; + struct bitmap_node *ext; + uint64_t new_start; + uint64_t new_length; + struct avl64node *node; + int stat; + + pthread_mutex_lock(&bmap->bt_lock); + /* Find any existing nodes over that range. */ + avl64_findranges(bmap->bt_tree, start, start + len, &firstn, &lastn); + + /* Nothing, we're done. */ + if (firstn == NULL && lastn == NULL) { + pthread_mutex_unlock(&bmap->bt_lock); + return true; + } + + assert(firstn != NULL && lastn != NULL); + + /* Delete or truncate everything in sight. */ + avl_for_each_range_safe(pos, n, l, firstn, lastn) { + ext = container_of(pos, struct bitmap_node, btn_node); + + stat = 0; + if (ext->btn_start < start) + stat |= 1; + if (ext->btn_start + ext->btn_length > start + len) + stat |= 2; + switch (stat) { + case 0: + /* Extent totally within range; delete. */ + avl64_delete(bmap->bt_tree, pos); + free(ext); + break; + case 1: + /* Extent is left-adjacent; truncate. */ + ext->btn_length = start - ext->btn_start; + break; + case 2: + /* Extent is right-adjacent; move it. */ + ext->btn_length = ext->btn_start + ext->btn_length - + (start + len); + ext->btn_start = start + len; + break; + case 3: + /* Extent overlaps both ends. */ + ext->btn_length = start - ext->btn_start; + new_start = start + len; + new_length = ext->btn_start + ext->btn_length - + new_start; + + ext = bitmap_node_init(new_start, new_length); + if (!ext) + return false; + + node = avl64_insert(bmap->bt_tree, &ext->btn_node); + if (node == NULL) { + errno = EEXIST; + return false; + } + break; + } + } + + pthread_mutex_unlock(&bmap->bt_lock); + return true; +} +#endif + +#ifdef DEBUG +/* Iterate the set regions of this bitmap. */ +bool +bitmap_iterate( + struct bitmap *bmap, + bool (*fn)(uint64_t, uint64_t, void *), + void *arg) +{ + struct avl64node *node; + struct bitmap_node *ext; + bool moveon = true; + + pthread_mutex_lock(&bmap->bt_lock); + avl_for_each(bmap->bt_tree, node) { + ext = container_of(node, struct bitmap_node, btn_node); + moveon = fn(ext->btn_start, ext->btn_length, arg); + if (!moveon) + break; + } + pthread_mutex_unlock(&bmap->bt_lock); + + return moveon; +} +#endif + +/* Do any bitmap extents overlap the given one? (locked) */ +static bool +__bitmap_test( + struct bitmap *bmap, + uint64_t start, + uint64_t len) +{ + struct avl64node *firstn; + struct avl64node *lastn; + + /* Find any existing nodes over that range. 
*/ + avl64_findranges(bmap->bt_tree, start, start + len, &firstn, &lastn); + + return firstn != NULL && lastn != NULL; +} + +/* Is any part of this range set? */ +bool +bitmap_test( + struct bitmap *bmap, + uint64_t start, + uint64_t len) +{ + bool res; + + pthread_mutex_lock(&bmap->bt_lock); + res = __bitmap_test(bmap, start, len); + pthread_mutex_unlock(&bmap->bt_lock); + + return res; +} + +/* Are none of the bits set? */ +bool +bitmap_empty( + struct bitmap *bmap) +{ + return bmap->bt_tree->avl_firstino == NULL; +} + +#ifdef DEBUG +static bool +bitmap_dump_fn( + uint64_t startblock, + uint64_t blockcount, + void *arg) +{ + printf("%"PRIu64":%"PRIu64"\n", startblock, blockcount); + return true; +} + +/* Dump bitmap. */ +void +bitmap_dump( + struct bitmap *bmap) +{ + printf("BITMAP DUMP %p\n", bmap); + bitmap_iterate(bmap, bitmap_dump_fn, NULL); + printf("BITMAP DUMP DONE\n"); +} +#endif diff --git a/scrub/Makefile b/scrub/Makefile index c6473c12..9c56d550 100644 --- a/scrub/Makefile +++ b/scrub/Makefile @@ -31,7 +31,6 @@ endif endif # scrub_prereqs HFILES = \ -bitmap.h \ common.h \ counter.h \ disk.h \ @@ -48,7 +47,6 @@ vfs.h \ xfs_scrub.h CFILES = \ -bitmap.c \ common.c \ counter.c \ disk.c \ diff --git a/scrub/bitmap.c b/scrub/bitmap.c deleted file mode 100644 index aecdba0d..00000000 --- a/scrub/bitmap.c +++ /dev/null @@ -1,393 +0,0 @@ -// SPDX-License-Identifier: GPL-2.0+ -/* - * Copyright (C) 2018 Oracle. All Rights Reserved. - * Author: Darrick J. Wong - */ -#include "xfs.h" -#include -#include -#include -#include -#include "platform_defs.h" -#include "avl64.h" -#include "list.h" -#include "bitmap.h" - -/* - * Space Efficient Bitmap - * - * Implements a space-efficient bitmap. We use an AVL tree to manage - * extent records that tell us which ranges are set; the bitmap key is - * an arbitrary uint64_t. The usual bitmap operations (set, clear, - * test, test and set) are supported, plus we can iterate set ranges. - */ - -#define avl_for_each_range_safe(pos, n, l, first, last) \ - for (pos = (first), n = pos->avl_nextino, l = (last)->avl_nextino; pos != (l); \ - pos = n, n = pos ? pos->avl_nextino : NULL) - -#define avl_for_each_safe(tree, pos, n) \ - for (pos = (tree)->avl_firstino, n = pos ? pos->avl_nextino : NULL; \ - pos != NULL; \ - pos = n, n = pos ? pos->avl_nextino : NULL) - -#define avl_for_each(tree, pos) \ - for (pos = (tree)->avl_firstino; pos != NULL; pos = pos->avl_nextino) - -struct bitmap_node { - struct avl64node btn_node; - uint64_t btn_start; - uint64_t btn_length; -}; - -static uint64_t -extent_start( - struct avl64node *node) -{ - struct bitmap_node *btn; - - btn = container_of(node, struct bitmap_node, btn_node); - return btn->btn_start; -} - -static uint64_t -extent_end( - struct avl64node *node) -{ - struct bitmap_node *btn; - - btn = container_of(node, struct bitmap_node, btn_node); - return btn->btn_start + btn->btn_length; -} - -static struct avl64ops bitmap_ops = { - extent_start, - extent_end, -}; - -/* Initialize a bitmap. */ -bool -bitmap_init( - struct bitmap **bmapp) -{ - struct bitmap *bmap; - - bmap = calloc(1, sizeof(struct bitmap)); - if (!bmap) - return false; - bmap->bt_tree = malloc(sizeof(struct avl64tree_desc)); - if (!bmap->bt_tree) { - free(bmap); - return false; - } - - pthread_mutex_init(&bmap->bt_lock, NULL); - avl64_init_tree(bmap->bt_tree, &bitmap_ops); - *bmapp = bmap; - - return true; -} - -/* Free a bitmap. 
*/ -void -bitmap_free( - struct bitmap **bmapp) -{ - struct bitmap *bmap; - struct avl64node *node; - struct avl64node *n; - struct bitmap_node *ext; - - bmap = *bmapp; - avl_for_each_safe(bmap->bt_tree, node, n) { - ext = container_of(node, struct bitmap_node, btn_node); - free(ext); - } - free(bmap->bt_tree); - *bmapp = NULL; -} - -/* Create a new bitmap extent node. */ -static struct bitmap_node * -bitmap_node_init( - uint64_t start, - uint64_t len) -{ - struct bitmap_node *ext; - - ext = malloc(sizeof(struct bitmap_node)); - if (!ext) - return NULL; - - ext->btn_node.avl_nextino = NULL; - ext->btn_start = start; - ext->btn_length = len; - - return ext; -} - -/* Set a region of bits (locked). */ -static bool -__bitmap_set( - struct bitmap *bmap, - uint64_t start, - uint64_t length) -{ - struct avl64node *firstn; - struct avl64node *lastn; - struct avl64node *pos; - struct avl64node *n; - struct avl64node *l; - struct bitmap_node *ext; - uint64_t new_start; - uint64_t new_length; - struct avl64node *node; - bool res = true; - - /* Find any existing nodes adjacent or within that range. */ - avl64_findranges(bmap->bt_tree, start - 1, start + length + 1, - &firstn, &lastn); - - /* Nothing, just insert a new extent. */ - if (firstn == NULL && lastn == NULL) { - ext = bitmap_node_init(start, length); - if (!ext) - return false; - - node = avl64_insert(bmap->bt_tree, &ext->btn_node); - if (node == NULL) { - free(ext); - errno = EEXIST; - return false; - } - - return true; - } - - assert(firstn != NULL && lastn != NULL); - new_start = start; - new_length = length; - - avl_for_each_range_safe(pos, n, l, firstn, lastn) { - ext = container_of(pos, struct bitmap_node, btn_node); - - /* Bail if the new extent is contained within an old one. */ - if (ext->btn_start <= start && - ext->btn_start + ext->btn_length >= start + length) - return res; - - /* Check for overlapping and adjacent extents. */ - if (ext->btn_start + ext->btn_length >= start || - ext->btn_start <= start + length) { - if (ext->btn_start < start) { - new_start = ext->btn_start; - new_length += ext->btn_length; - } - - if (ext->btn_start + ext->btn_length > - new_start + new_length) - new_length = ext->btn_start + ext->btn_length - - new_start; - - avl64_delete(bmap->bt_tree, pos); - free(ext); - } - } - - ext = bitmap_node_init(new_start, new_length); - if (!ext) - return false; - - node = avl64_insert(bmap->bt_tree, &ext->btn_node); - if (node == NULL) { - free(ext); - errno = EEXIST; - return false; - } - - return res; -} - -/* Set a region of bits. */ -bool -bitmap_set( - struct bitmap *bmap, - uint64_t start, - uint64_t length) -{ - bool res; - - pthread_mutex_lock(&bmap->bt_lock); - res = __bitmap_set(bmap, start, length); - pthread_mutex_unlock(&bmap->bt_lock); - - return res; -} - -#if 0 /* Unused, provided for completeness. */ -/* Clear a region of bits. */ -bool -bitmap_clear( - struct bitmap *bmap, - uint64_t start, - uint64_t len) -{ - struct avl64node *firstn; - struct avl64node *lastn; - struct avl64node *pos; - struct avl64node *n; - struct avl64node *l; - struct bitmap_node *ext; - uint64_t new_start; - uint64_t new_length; - struct avl64node *node; - int stat; - - pthread_mutex_lock(&bmap->bt_lock); - /* Find any existing nodes over that range. */ - avl64_findranges(bmap->bt_tree, start, start + len, &firstn, &lastn); - - /* Nothing, we're done. 
*/ - if (firstn == NULL && lastn == NULL) { - pthread_mutex_unlock(&bmap->bt_lock); - return true; - } - - assert(firstn != NULL && lastn != NULL); - - /* Delete or truncate everything in sight. */ - avl_for_each_range_safe(pos, n, l, firstn, lastn) { - ext = container_of(pos, struct bitmap_node, btn_node); - - stat = 0; - if (ext->btn_start < start) - stat |= 1; - if (ext->btn_start + ext->btn_length > start + len) - stat |= 2; - switch (stat) { - case 0: - /* Extent totally within range; delete. */ - avl64_delete(bmap->bt_tree, pos); - free(ext); - break; - case 1: - /* Extent is left-adjacent; truncate. */ - ext->btn_length = start - ext->btn_start; - break; - case 2: - /* Extent is right-adjacent; move it. */ - ext->btn_length = ext->btn_start + ext->btn_length - - (start + len); - ext->btn_start = start + len; - break; - case 3: - /* Extent overlaps both ends. */ - ext->btn_length = start - ext->btn_start; - new_start = start + len; - new_length = ext->btn_start + ext->btn_length - - new_start; - - ext = bitmap_node_init(new_start, new_length); - if (!ext) - return false; - - node = avl64_insert(bmap->bt_tree, &ext->btn_node); - if (node == NULL) { - errno = EEXIST; - return false; - } - break; - } - } - - pthread_mutex_unlock(&bmap->bt_lock); - return true; -} -#endif - -#ifdef DEBUG -/* Iterate the set regions of this bitmap. */ -bool -bitmap_iterate( - struct bitmap *bmap, - bool (*fn)(uint64_t, uint64_t, void *), - void *arg) -{ - struct avl64node *node; - struct bitmap_node *ext; - bool moveon = true; - - pthread_mutex_lock(&bmap->bt_lock); - avl_for_each(bmap->bt_tree, node) { - ext = container_of(node, struct bitmap_node, btn_node); - moveon = fn(ext->btn_start, ext->btn_length, arg); - if (!moveon) - break; - } - pthread_mutex_unlock(&bmap->bt_lock); - - return moveon; -} -#endif - -/* Do any bitmap extents overlap the given one? (locked) */ -static bool -__bitmap_test( - struct bitmap *bmap, - uint64_t start, - uint64_t len) -{ - struct avl64node *firstn; - struct avl64node *lastn; - - /* Find any existing nodes over that range. */ - avl64_findranges(bmap->bt_tree, start, start + len, &firstn, &lastn); - - return firstn != NULL && lastn != NULL; -} - -/* Is any part of this range set? */ -bool -bitmap_test( - struct bitmap *bmap, - uint64_t start, - uint64_t len) -{ - bool res; - - pthread_mutex_lock(&bmap->bt_lock); - res = __bitmap_test(bmap, start, len); - pthread_mutex_unlock(&bmap->bt_lock); - - return res; -} - -/* Are none of the bits set? */ -bool -bitmap_empty( - struct bitmap *bmap) -{ - return bmap->bt_tree->avl_firstino == NULL; -} - -#ifdef DEBUG -static bool -bitmap_dump_fn( - uint64_t startblock, - uint64_t blockcount, - void *arg) -{ - printf("%"PRIu64":%"PRIu64"\n", startblock, blockcount); - return true; -} - -/* Dump bitmap. */ -void -bitmap_dump( - struct bitmap *bmap) -{ - printf("BITMAP DUMP %p\n", bmap); - bitmap_iterate(bmap, bitmap_dump_fn, NULL); - printf("BITMAP DUMP DONE\n"); -} -#endif diff --git a/scrub/bitmap.h b/scrub/bitmap.h deleted file mode 100644 index e9746a12..00000000 --- a/scrub/bitmap.h +++ /dev/null @@ -1,24 +0,0 @@ -// SPDX-License-Identifier: GPL-2.0+ -/* - * Copyright (C) 2018 Oracle. All Rights Reserved. - * Author: Darrick J. 
Wong
- */
-#ifndef XFS_SCRUB_BITMAP_H_
-#define XFS_SCRUB_BITMAP_H_
-
-struct bitmap {
-	pthread_mutex_t		bt_lock;
-	struct avl64tree_desc	*bt_tree;
-};
-
-bool bitmap_init(struct bitmap **bmap);
-void bitmap_free(struct bitmap **bmap);
-bool bitmap_set(struct bitmap *bmap, uint64_t start, uint64_t length);
-bool bitmap_iterate(struct bitmap *bmap,
-		bool (*fn)(uint64_t, uint64_t, void *), void *arg);
-bool bitmap_test(struct bitmap *bmap, uint64_t start,
-		uint64_t len);
-bool bitmap_empty(struct bitmap *bmap);
-void bitmap_dump(struct bitmap *bmap);
-
-#endif /* XFS_SCRUB_BITMAP_H_ */
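
For readers who have not used the libfrog bitmap before, here is a minimal usage
sketch of the API declared in include/bitmap.h above. It is an editorial
illustration only and not part of either patch: the standalone main(), the
system includes, and the assumption that the object links against libfrog (and
pthreads) are editorial additions, not taken from the series.

#include <stdio.h>
#include <stdint.h>
#include <stdbool.h>
#include <pthread.h>
#include "bitmap.h"

int
main(void)
{
	struct bitmap	*bmap;

	if (!bitmap_init(&bmap))
		return 1;

	/* Mark two adjacent ranges; the AVL tree merges them into one extent. */
	if (!bitmap_set(bmap, 10, 5) || !bitmap_set(bmap, 15, 5))
		goto out;

	/* bitmap_test() answers: is any part of this range set? */
	printf("blocks 12-13 set? %d\n", bitmap_test(bmap, 12, 2));	/* 1 */
	printf("blocks 100-104 set? %d\n", bitmap_test(bmap, 100, 5));	/* 0 */
	printf("bitmap empty? %d\n", bitmap_empty(bmap));		/* 0 */

out:
	bitmap_free(&bmap);
	return 0;
}

Note that in this version of the code bitmap_iterate() and bitmap_dump() are
compiled only when DEBUG is defined, so the sketch sticks to the
unconditionally built calls.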

From patchwork Wed Jan 30 17:50:30 2019
X-Patchwork-Submitter: "Darrick J. Wong"
X-Patchwork-Id: 10789113
Subject: [PATCH 2/2] xfs_repair: correctly account for free space btree shrinks when fixing freelist
From: "Darrick J. Wong"
To: sandeen@redhat.com, darrick.wong@oracle.com
Cc: linux-xfs@vger.kernel.org
Date: Wed, 30 Jan 2019 09:50:30 -0800
Message-ID: <154887063081.5611.2097236157481583596.stgit@magnolia>
In-Reply-To: <154887062444.5611.12746798208748899665.stgit@magnolia>
References: <154887062444.5611.12746798208748899665.stgit@magnolia>
User-Agent: StGit/0.17.1-dirty

From: Darrick J. Wong

When we fix the freelist at the end of build_agf_agfl in phase 5 of
repair, we need to create incore rmap records for the blocks that get
added to the AGFL.  We can't let the regular freelist fixing code use
the regular on-disk rmapbt update code because the rmapbt isn't fully
set up yet.

Unfortunately, the original code fails to account for the fact that the
free space btrees can shrink when we allocate blocks to fix the
freelist; those blocks are also put on the freelist, but there are
already incore rmaps for all the free space btree blocks.  We must not
create (redundant) incore rmaps for those blocks.  If we do, repair
fails with a complaint that rebuilding the rmapbt failed during phase 5.
xfs/137 on a 1k block size occasionally triggers this bug.

To fix the problem, construct a bitmap of all OWN_AG blocks that we know
about before traversing the AGFL, and only create new incore rmaps for
those AGFL blocks that are not already tracked in the bitmap.

Signed-off-by: Darrick J. Wong
---
 repair/rmap.c |   54 ++++++++++++++++++++++++++++++++++++++++++++++++++++----------
 1 file changed, 44 insertions(+), 10 deletions(-)

diff --git a/repair/rmap.c b/repair/rmap.c
index c5a86646..3bb05065 100644
--- a/repair/rmap.c
+++ b/repair/rmap.c
@@ -12,6 +12,7 @@
 #include "dinode.h"
 #include "slab.h"
 #include "rmap.h"
+#include "bitmap.h"
 
 #undef RMAP_DEBUG
 
@@ -450,6 +451,8 @@ rmap_store_ag_btree_rec(
 	struct xfs_buf		*agflbp = NULL;
 	struct xfs_trans	*tp;
 	__be32			*agfl_bno, *b;
+	struct xfs_ag_rmap	*ag_rmap = &ag_rmaps[agno];
+	struct bitmap		*own_ag_bitmap = NULL;
 	int			error = 0;
 	struct xfs_owner_info	oinfo;
 
@@ -457,9 +460,8 @@ rmap_store_ag_btree_rec(
 		return 0;
 
 	/* Release the ar_rmaps; they were put into the rmapbt during p5. */
-	free_slab(&ag_rmaps[agno].ar_rmaps);
-	error = init_slab(&ag_rmaps[agno].ar_rmaps,
-			sizeof(struct xfs_rmap_irec));
+	free_slab(&ag_rmap->ar_rmaps);
+	error = init_slab(&ag_rmap->ar_rmaps, sizeof(struct xfs_rmap_irec));
 	if (error)
 		goto err;
 
@@ -479,19 +481,50 @@ rmap_store_ag_btree_rec(
 	 * rmap, we only need to add rmap records for AGFL blocks past
 	 * that point in the AGFL because those blocks are a result of a
 	 * no-rmap no-shrink freelist fixup that we did earlier.
+	 *
+	 * However, some blocks end up on the AGFL because the free space
+	 * btrees shed blocks as a result of allocating space to fix the
+	 * freelist.  We already created in-core rmap records for the free
+	 * space btree blocks, so we must be careful not to create those
+	 * records again.  Create a bitmap of already-recorded OWN_AG rmaps.
 	 */
+	error = init_slab_cursor(ag_rmap->ar_raw_rmaps, rmap_compare, &rm_cur);
+	if (error)
+		goto err;
+	if (!bitmap_init(&own_ag_bitmap)) {
+		error = -ENOMEM;
+		goto err;
+	}
+	while ((rm_rec = pop_slab_cursor(rm_cur)) != NULL) {
+		if (rm_rec->rm_owner != XFS_RMAP_OWN_AG)
+			continue;
+		if (!bitmap_set(own_ag_bitmap, rm_rec->rm_startblock,
+				rm_rec->rm_blockcount)) {
+			error = EFSCORRUPTED;
+			goto err;
+		}
+	}
+	free_slab_cursor(&rm_cur);
+
+	/* Create rmaps for any AGFL blocks that aren't already rmapped. */
 	agfl_bno = XFS_BUF_TO_AGFL_BNO(mp, agflbp);
-	b = agfl_bno + ag_rmaps[agno].ar_flcount;
+	b = agfl_bno + ag_rmap->ar_flcount;
 	while (*b != cpu_to_be32(NULLAGBLOCK) &&
 			b - agfl_bno < libxfs_agfl_size(mp)) {
-		error = rmap_add_ag_rec(mp, agno, be32_to_cpu(*b), 1,
-				XFS_RMAP_OWN_AG);
-		if (error)
-			goto err;
+		xfs_agblock_t	agbno;
+
+		agbno = be32_to_cpu(*b);
+		if (!bitmap_test(own_ag_bitmap, agbno, 1)) {
+			error = rmap_add_ag_rec(mp, agno, agbno, 1,
+					XFS_RMAP_OWN_AG);
+			if (error)
+				goto err;
+		}
 		b++;
 	}
 	libxfs_putbuf(agflbp);
 	agflbp = NULL;
+	bitmap_free(&own_ag_bitmap);
 
 	/* Merge all the raw rmaps into the main list */
 	error = rmap_fold_raw_recs(mp, agno);
@@ -499,8 +532,7 @@
 		goto err;
 
 	/* Create cursors to refcount structures */
-	error = init_slab_cursor(ag_rmaps[agno].ar_rmaps, rmap_compare,
-			&rm_cur);
+	error = init_slab_cursor(ag_rmap->ar_rmaps, rmap_compare, &rm_cur);
 	if (error)
 		goto err;
 
@@ -541,6 +573,8 @@ rmap_store_ag_btree_rec(
 err:
	if (agflbp)
 		libxfs_putbuf(agflbp);
+	if (own_ag_bitmap)
+		bitmap_free(&own_ag_bitmap);
 	return error;
 }
 
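
To show the shape of the fix apart from the surrounding repair machinery, here
is a simplified, hypothetical sketch of the same pattern: record every block
already known to belong to OWN_AG in a bitmap, then walk the AGFL and create
rmaps only for blocks the bitmap does not already cover. The agfl_walk
structure and the add_rmap callback are illustrative stand-ins, not xfsprogs
APIs; only the bitmap calls come from the libfrog header hoisted in patch 1.

#include <stdint.h>
#include <stdbool.h>
#include <pthread.h>
#include "bitmap.h"

struct agfl_walk {
	uint64_t	*blocks;	/* AGFL block numbers, in order */
	unsigned int	nr;
};

/* Hypothetical callback that records one OWN_AG rmap; returns 0 on success. */
typedef int (*add_rmap_fn)(uint64_t agbno);

int
add_missing_agfl_rmaps(
	struct bitmap		*own_ag,	/* blocks already rmapped */
	struct agfl_walk	*agfl,
	add_rmap_fn		add_rmap)
{
	unsigned int		i;
	int			error;

	for (i = 0; i < agfl->nr; i++) {
		/* Skip blocks that the free space btrees already own. */
		if (bitmap_test(own_ag, agfl->blocks[i], 1))
			continue;
		error = add_rmap(agfl->blocks[i]);
		if (error)
			return error;
	}
	return 0;
}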