From patchwork Mon Aug 26 21:22:17 2019
X-Patchwork-Submitter: "Darrick J. Wong"
X-Patchwork-Id: 11115601
Subject: [PATCH 1/5] man: add documentation for v5 bulkstat ioctl
From: "Darrick J. Wong"
To: sandeen@sandeen.net, darrick.wong@oracle.com
Cc: linux-xfs@vger.kernel.org
Date: Mon, 26 Aug 2019 14:22:17 -0700
Message-ID: <156685453775.2840332.17792325799461085474.stgit@magnolia>
In-Reply-To: <156685453125.2840332.15645173323964762232.stgit@magnolia>
References: <156685453125.2840332.15645173323964762232.stgit@magnolia>
User-Agent: StGit/0.17.1-dirty
Sender: linux-xfs-owner@vger.kernel.org
Precedence: bulk
X-Mailing-List: linux-xfs@vger.kernel.org

From: Darrick J. Wong

Add a new manpage describing the V5 XFS_IOC_BULKSTAT ioctl.

Signed-off-by: Darrick J. Wong
---
 man/man2/ioctl_xfs_bulkstat.2   | 330 +++++++++++++++++++++++++++++++++++++++
 man/man2/ioctl_xfs_fsbulkstat.2 |   6 +
 2 files changed, 336 insertions(+)
 create mode 100644 man/man2/ioctl_xfs_bulkstat.2

diff --git a/man/man2/ioctl_xfs_bulkstat.2 b/man/man2/ioctl_xfs_bulkstat.2
new file mode 100644
index 00000000..f687cfe8
--- /dev/null
+++ b/man/man2/ioctl_xfs_bulkstat.2
@@ -0,0 +1,330 @@
+.\" Copyright (c) 2019, Oracle. All rights reserved.
+.\"
+.\" %%%LICENSE_START(GPLv2+_DOC_FULL)
+.\" SPDX-License-Identifier: GPL-2.0+
+.\" %%%LICENSE_END
+.TH IOCTL-XFS-BULKSTAT 2 2019-05-23 "XFS"
+.SH NAME
+ioctl_xfs_bulkstat \- query information for a batch of XFS inodes
+.SH SYNOPSIS
+.br
+.B #include <xfs/xfs_fs.h>
+.PP
+.BI "int ioctl(int " fd ", XFS_IOC_BULKSTAT, struct xfs_bulkstat_req *" arg );
+.SH DESCRIPTION
+Query stat information for a group of XFS inodes.
+This ioctl uses
+.B struct xfs_bulkstat_req
+to set up a bulk transfer with the kernel:
+.PP
+.in +4n
+.nf
+struct xfs_bulkstat_req {
+    struct xfs_bulk_ireq    hdr;
+    struct xfs_bulkstat     bulkstat[];
+};
+
+struct xfs_bulk_ireq {
+    uint64_t    ino;
+    uint32_t    flags;
+    uint32_t    icount;
+    uint32_t    ocount;
+    uint32_t    agno;
+    uint64_t    reserved[5];
+};
+.fi
+.in
+.PP
+.I hdr.ino
+should be set to the number of the first inode for which the caller wants
+information, or zero to start with the first inode in the filesystem.
+Note that this is a different semantic than the
+.B lastip
+in the old
+.B FSBULKSTAT
+ioctl.
+After the call, this value will be set to the number of the next inode for
+which information could be supplied.
+This sets up the next call for an iteration loop.
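The request/response iteration described above can be sketched in C. This is a hedged illustration, not code from the patch: the struct layouts are abbreviated local mirrors of the UAPI structures quoted above (real programs should include the XFS UAPI header rather than redefining them), and `bulkstat_alloc_req` is a hypothetical helper, similar in spirit to the `xfrog_bulkstat_alloc_req` function added later in this series.

```c
#include <stdint.h>
#include <stdlib.h>

/* Local mirror of the request header quoted above; not the real UAPI header. */
struct xfs_bulk_ireq {
    uint64_t ino;
    uint32_t flags;
    uint32_t icount;
    uint32_t ocount;
    uint32_t agno;
    uint64_t reserved[5];
};

/* Abbreviated stand-in; the full record layout appears later in the man page. */
struct xfs_bulkstat {
    uint64_t bs_ino;
    uint64_t bs_size;
    uint64_t bs_pad[7];
};

struct xfs_bulkstat_req {
    struct xfs_bulk_ireq hdr;
    struct xfs_bulkstat  bulkstat[];    /* flexible array of icount records */
};

/*
 * Allocate a request big enough for nr records, starting iteration at
 * startino (zero means "first inode in the filesystem").
 */
static struct xfs_bulkstat_req *
bulkstat_alloc_req(uint32_t nr, uint64_t startino)
{
    struct xfs_bulkstat_req *req;

    req = calloc(1, sizeof(*req) + nr * sizeof(req->bulkstat[0]));
    if (!req)
        return NULL;
    req->hdr.ino = startino;
    req->hdr.icount = nr;
    return req;
}
```

A caller would then loop: issue the ioctl, consume `hdr.ocount` records, and reissue the same request with the kernel-updated `hdr.ino` until `ocount` comes back zero.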
+.PP
+If the
+.B XFS_BULK_IREQ_SPECIAL
+flag is set, this field is interpreted as follows:
+.RS 0.4i
+.TP
+.B XFS_BULK_IREQ_SPECIAL_ROOT
+Return stat information for the root directory inode.
+.RE
+.PP
+.I hdr.flags
+is a bit set of operational flags:
+.RS 0.4i
+.TP
+.B XFS_BULK_IREQ_AGNO
+If this is set, the call will only return results for the allocation group (AG)
+set in
+.BR hdr.agno .
+If
+.B hdr.ino
+is set to zero, results will be returned starting with the first inode in the
+AG.
+This flag may not be set at the same time as the
+.B XFS_BULK_IREQ_SPECIAL
+flag.
+.TP
+.B XFS_BULK_IREQ_SPECIAL
+If this is set, results will be returned for only the special inode
+specified in the
+.B hdr.ino
+field.
+This flag may not be set at the same time as the
+.B XFS_BULK_IREQ_AGNO
+flag.
+.RE
+.PP
+.I hdr.icount
+is the number of inodes to examine.
+.PP
+.I hdr.ocount
+will be set to the number of records returned.
+.PP
+.I hdr.agno
+is the number of the allocation group (AG) for which we want results.
+If the
+.B XFS_BULK_IREQ_AGNO
+flag is not set, this field is ignored.
+.PP
+.I hdr.reserved
+must be set to zero.
+
+.PP
+.I bulkstat
+is an array of
+.B struct xfs_bulkstat
+which is described below.
+The array must have at least
+.I icount
+elements.
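The header rules just described (the AGNO and SPECIAL modes are mutually exclusive, and the reserved space must be zeroed) can be captured in a small validation helper. This is a sketch, not code from the patch: the struct is a local mirror of the request header, and the flag values are an assumption following the kernel UAPI convention of bits 0 and 1.

```c
#include <errno.h>
#include <stdint.h>

/* Local mirror of the request header; real code should use the UAPI header. */
struct xfs_bulk_ireq {
    uint64_t ino;
    uint32_t flags;
    uint32_t icount;
    uint32_t ocount;
    uint32_t agno;
    uint64_t reserved[5];
};

/* Assumed flag values, following the kernel UAPI definitions. */
#define XFS_BULK_IREQ_AGNO    (1U << 0)
#define XFS_BULK_IREQ_SPECIAL (1U << 1)

/*
 * Return 0 if the header obeys the documented rules, or EINVAL if both
 * the AGNO and SPECIAL modes are requested or the reserved space is dirty.
 */
static int
bulk_ireq_validate(const struct xfs_bulk_ireq *hdr)
{
    int i;

    if ((hdr->flags & XFS_BULK_IREQ_AGNO) &&
        (hdr->flags & XFS_BULK_IREQ_SPECIAL))
        return EINVAL;
    for (i = 0; i < 5; i++)
        if (hdr->reserved[i] != 0)
            return EINVAL;
    return 0;
}
```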
+.PP
+.in +4n
+.nf
+struct xfs_bulkstat {
+    uint64_t    bs_ino;
+    uint64_t    bs_size;
+
+    uint64_t    bs_blocks;
+    uint64_t    bs_xflags;
+
+    uint64_t    bs_atime;
+    uint64_t    bs_mtime;
+
+    uint64_t    bs_ctime;
+    uint64_t    bs_btime;
+
+    uint32_t    bs_gen;
+    uint32_t    bs_uid;
+    uint32_t    bs_gid;
+    uint32_t    bs_projectid;
+
+    uint32_t    bs_atime_nsec;
+    uint32_t    bs_mtime_nsec;
+    uint32_t    bs_ctime_nsec;
+    uint32_t    bs_btime_nsec;
+
+    uint32_t    bs_blksize;
+    uint32_t    bs_rdev;
+    uint32_t    bs_cowextsize_blks;
+    uint32_t    bs_extsize_blks;
+
+    uint32_t    bs_nlink;
+    uint32_t    bs_extents;
+    uint32_t    bs_aextents;
+    uint16_t    bs_version;
+    uint16_t    bs_forkoff;
+
+    uint16_t    bs_sick;
+    uint16_t    bs_checked;
+    uint16_t    bs_mode;
+    uint16_t    bs_pad2;
+
+    uint64_t    bs_pad[7];
+};
+.fi
+.in
+.PP
+.I bs_ino
+is the inode number of this record.
+.PP
+.I bs_size
+is the size of the file, in bytes.
+.PP
+.I bs_blocks
+is the number of filesystem blocks allocated to this file, including metadata.
+.PP
+.I bs_xflags
+tells us which extended flags are set on this inode.
+These flags are the same values as those defined in the
+.B XFS INODE FLAGS
+section of the
+.BR ioctl_xfs_fsgetxattr (2)
+manpage.
+.PP
+.I bs_atime
+is the last time this file was accessed, in seconds.
+.PP
+.I bs_mtime
+is the last time the contents of this file were modified, in seconds.
+.PP
+.I bs_ctime
+is the last time this inode record was modified, in seconds.
+.PP
+.I bs_btime
+is the time this inode record was created, in seconds.
+.PP
+.I bs_gen
+is the generation number of the inode record.
+.PP
+.I bs_uid
+is the user id.
+.PP
+.I bs_gid
+is the group id.
+.PP
+.I bs_projectid
+is the project id.
+.PP
+.I bs_atime_nsec
+is the nanoseconds component of the last time this file was accessed.
+.PP
+.I bs_mtime_nsec
+is the nanoseconds component of the last time the contents of this file were
+modified.
+.PP
+.I bs_ctime_nsec
+is the nanoseconds component of the last time this inode record was modified.
+.PP
+.I bs_btime_nsec
+is the nanoseconds component of the time this inode record was created.
+.PP
+.I bs_blksize
+is the size of a data block for this file, in units of bytes.
+.PP
+.I bs_rdev
+is the encoded device id if this is a special file.
+.PP
+.I bs_cowextsize_blks
+is the Copy on Write extent size hint for this file, in units of data blocks.
+.PP
+.I bs_extsize_blks
+is the extent size hint for this file, in units of data blocks.
+.PP
+.I bs_nlink
+is the number of hard links to this inode.
+.PP
+.I bs_extents
+is the number of storage mappings associated with this file's data.
+.PP
+.I bs_aextents
+is the number of storage mappings associated with this file's extended
+attributes.
+.PP
+.I bs_version
+is the version of this data structure.
+Currently, only 1 or 5 are supported.
+.PP
+.I bs_forkoff
+is the offset of the attribute fork in the inode record, in bytes.
+.PP
+The fields
+.IR bs_sick " and " bs_checked
+indicate the relative health of various inode metadata.
+Please see the section
+.B XFS INODE METADATA HEALTH REPORTING
+for more information.
+.PP
+.I bs_mode
+is the file type and mode.
+.PP
+.I bs_pad[7]
+is zeroed.
+.SH RETURN VALUE
+On error, \-1 is returned, and
+.I errno
+is set to indicate the error.
+.PP
+.SH XFS INODE METADATA HEALTH REPORTING
+.PP
+The online filesystem checking utility scans inode metadata and records what it
+finds in the kernel incore state.
+The following scheme is used for userspace to read the incore health status of
+an inode:
+.IP \[bu] 2
+If a given sick flag is set in
+.IR bs_sick ,
+then that piece of metadata has been observed to be damaged.
+The same bit should be set in
+.IR bs_checked .
+.IP \[bu]
+If a given sick flag is set in
+.I bs_checked
+but is not set in
+.IR bs_sick ,
+then that piece of metadata has been checked and is not faulty.
+.IP \[bu]
+If a given sick flag is not set in
+.IR bs_checked ,
+then no conclusion can be made.
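The three-way scheme above boils down to a small decoder. A sketch follows; the enum and helper names are illustrative and not part of the patch, only the `bs_sick`/`bs_checked` semantics come from the text.

```c
#include <stdint.h>

/* Tri-state health verdict for one XFS_BS_SICK_* flag bit. */
enum inode_health {
    HEALTH_UNKNOWN, /* flag not set in bs_checked: no conclusion */
    HEALTH_OK,      /* checked but not sick: verified clean */
    HEALTH_SICK     /* checked and sick: observed damage */
};

/* Apply the bs_sick/bs_checked rules from the man page to one flag bit. */
static enum inode_health
inode_health_state(uint16_t bs_sick, uint16_t bs_checked, uint16_t flag)
{
    if (!(bs_checked & flag))
        return HEALTH_UNKNOWN;
    return (bs_sick & flag) ? HEALTH_SICK : HEALTH_OK;
}
```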
+.PP
+The following flags apply to these fields:
+.RS 0.4i
+.TP
+.B XFS_BS_SICK_INODE
+The inode's record itself.
+.TP
+.B XFS_BS_SICK_BMBTD
+File data extent mappings.
+.TP
+.B XFS_BS_SICK_BMBTA
+Extended attribute extent mappings.
+.TP
+.B XFS_BS_SICK_BMBTC
+Copy on Write staging extent mappings.
+.TP
+.B XFS_BS_SICK_DIR
+Directory information.
+.TP
+.B XFS_BS_SICK_XATTR
+Extended attribute data.
+.TP
+.B XFS_BS_SICK_SYMLINK
+Symbolic link target.
+.TP
+.B XFS_BS_SICK_PARENT
+Parent pointers.
+.RE
+.SH ERRORS
+Error codes can be one of, but are not limited to, the following:
+.TP
+.B EFAULT
+The kernel was not able to copy into the userspace buffer.
+.TP
+.B EFSBADCRC
+Metadata checksum validation failed while performing the query.
+.TP
+.B EFSCORRUPTED
+Metadata corruption was encountered while performing the query.
+.TP
+.B EINVAL
+One of the arguments was not valid.
+.TP
+.B EIO
+An I/O error was encountered while performing the query.
+.TP
+.B ENOMEM
+There was insufficient memory to perform the query.
+.SH CONFORMING TO
+This API is specific to the XFS filesystem on the Linux kernel.
+.SH SEE ALSO
+.BR ioctl (2),
+.BR ioctl_xfs_fsgetxattr (2)

diff --git a/man/man2/ioctl_xfs_fsbulkstat.2 b/man/man2/ioctl_xfs_fsbulkstat.2
index 3e13cfa8..81f9d9bf 100644
--- a/man/man2/ioctl_xfs_fsbulkstat.2
+++ b/man/man2/ioctl_xfs_fsbulkstat.2
@@ -15,6 +15,12 @@ ioctl_xfs_fsbulkstat \- query information for a batch of XFS inodes
 .BI "int ioctl(int " fd ", XFS_IOC_FSBULKSTAT_SINGLE, struct xfs_fsop_bulkreq *" arg );
 .SH DESCRIPTION
 Query stat information for a group of XFS inodes.
+.PP
+NOTE: This ioctl has been superseded.
+Please see the
+.BR ioctl_xfs_bulkstat (2)
+manpage for information about its replacement.
+.PP
 These ioctls use
 .B struct xfs_fsop_bulkreq
 to set up a bulk transfer with the kernel:

From patchwork Mon Aug 26 21:22:24 2019
X-Patchwork-Submitter: "Darrick J. Wong"
X-Patchwork-Id: 11115605
Subject: [PATCH 2/5] man: add documentation for v5 inumbers ioctl
From: "Darrick J. Wong"
To: sandeen@sandeen.net, darrick.wong@oracle.com
Cc: linux-xfs@vger.kernel.org
Date: Mon, 26 Aug 2019 14:22:24 -0700
Message-ID: <156685454401.2840332.2689052873122428637.stgit@magnolia>
In-Reply-To: <156685453125.2840332.15645173323964762232.stgit@magnolia>
References: <156685453125.2840332.15645173323964762232.stgit@magnolia>
User-Agent: StGit/0.17.1-dirty

From: Darrick J. Wong

Add a manpage describing the new v5 XFS_IOC_INUMBERS ioctl.

Signed-off-by: Darrick J. Wong
---
 man/man2/ioctl_xfs_inumbers.2 | 118 +++++++++++++++++++++++++++++++++++++++++
 1 file changed, 118 insertions(+)
 create mode 100644 man/man2/ioctl_xfs_inumbers.2

diff --git a/man/man2/ioctl_xfs_inumbers.2 b/man/man2/ioctl_xfs_inumbers.2
new file mode 100644
index 00000000..b1e854d3
--- /dev/null
+++ b/man/man2/ioctl_xfs_inumbers.2
@@ -0,0 +1,118 @@
+.\" Copyright (c) 2019, Oracle. All rights reserved.
+.\"
+.\" %%%LICENSE_START(GPLv2+_DOC_FULL)
+.\" SPDX-License-Identifier: GPL-2.0+
+.\" %%%LICENSE_END
+.TH IOCTL-XFS-INUMBERS 2 2019-05-23 "XFS"
+.SH NAME
+ioctl_xfs_inumbers \- query allocation information for groups of XFS inodes
+.SH SYNOPSIS
+.br
+.B #include <xfs/xfs_fs.h>
+.PP
+.BI "int ioctl(int " fd ", XFS_IOC_INUMBERS, struct xfs_inumbers_req *" arg );
+.SH DESCRIPTION
+Query inode allocation information for groups of XFS inodes.
+This ioctl uses
+.B struct xfs_inumbers_req
+to set up a bulk transfer with the kernel:
+.PP
+.in +4n
+.nf
+struct xfs_inumbers_req {
+    struct xfs_bulk_ireq    hdr;
+    struct xfs_inumbers     inumbers[];
+};
+
+struct xfs_bulk_ireq {
+    uint64_t    ino;
+    uint32_t    flags;
+    uint32_t    icount;
+    uint32_t    ocount;
+    uint32_t    agno;
+    uint64_t    reserved[5];
+};
+.fi
+.in
+.PP
+.I hdr
+describes the information to query.
+The layout and behavior are documented in the
+.BR ioctl_xfs_bulkstat (2)
+manpage and will not be discussed further here.
+
+.PP
+.I inumbers
+is an array of
+.B struct xfs_inumbers
+which is described below.
+The array must have at least
+.I icount
+elements.
+.PP
+.in +4n
+.nf
+struct xfs_inumbers {
+    uint64_t    xi_startino;
+    uint64_t    xi_allocmask;
+    uint8_t     xi_alloccount;
+    uint8_t     xi_version;
+    uint8_t     xi_padding[6];
+};
+.fi
+.in
+.PP
+This structure describes inode usage information for a group of 64 consecutive
+inode numbers.
+.PP
+.I xi_startino
+is the first inode number of this group.
+.PP
+.I xi_allocmask
+is a bitmask telling which inodes in this group are allocated.
+To clarify, bit
+.B N
+is set if inode
+.BR xi_startino + N
+is allocated.
+.PP
+.I xi_alloccount
+is the number of inodes in this group that are allocated.
+This should be equal to popcnt(xi_allocmask).
+.PP
+.I xi_version
+is the version of this data structure.
+Currently, only 1 or 5 are supported.
+.PP
+.I xi_padding[6]
+is zeroed.
+.SH RETURN VALUE
+On error, \-1 is returned, and
+.I errno
+is set to indicate the error.
+.PP
+.SH ERRORS
+Error codes can be one of, but are not limited to, the following:
+.TP
+.B EFAULT
+The kernel was not able to copy into the userspace buffer.
+.TP
+.B EFSBADCRC
+Metadata checksum validation failed while performing the query.
+.TP
+.B EFSCORRUPTED
+Metadata corruption was encountered while performing the query.
+.TP
+.B EINVAL
+One of the arguments was not valid.
+.TP
+.B EIO
+An I/O error was encountered while performing the query.
+.TP
+.B ENOMEM
+There was insufficient memory to perform the query.
+.SH CONFORMING TO
+This API is specific to the XFS filesystem on the Linux kernel.
+.SH SEE ALSO
+.BR ioctl (2),
+.BR ioctl_xfs_bulkstat (2).

From patchwork Mon Aug 26 21:22:30 2019
X-Patchwork-Submitter: "Darrick J. Wong"
X-Patchwork-Id: 11115607
Subject: [PATCH 3/5] misc: convert XFS_IOC_FSBULKSTAT to XFS_IOC_BULKSTAT
From: "Darrick J. Wong"
To: sandeen@sandeen.net, darrick.wong@oracle.com
Cc: linux-xfs@vger.kernel.org
Date: Mon, 26 Aug 2019 14:22:30 -0700
Message-ID: <156685455023.2840332.15144489859945888693.stgit@magnolia>
In-Reply-To: <156685453125.2840332.15645173323964762232.stgit@magnolia>
References: <156685453125.2840332.15645173323964762232.stgit@magnolia>
User-Agent: StGit/0.17.1-dirty

From: Darrick J. Wong

Convert the v1 calls to v5 calls.

Signed-off-by: Darrick J. Wong
---
 fsr/xfs_fsr.c      |  45 ++++++--
 include/libfrog.h  |   2
 include/xfrog.h    |  19 +++
 io/open.c          |  17 ++-
 libfrog/bulkstat.c | 283 ++++++++++++++++++++++++++++++++++++++++++++++++++--
 quota/quot.c       |  36 ++++---
 scrub/inodes.c     |  45 +++++---
 scrub/inodes.h     |   2
 scrub/phase3.c     |   6 +
 scrub/phase5.c     |   8 +
 scrub/phase6.c     |   2
 scrub/unicrash.c   |   6 +
 scrub/unicrash.h   |   4 -
 spaceman/health.c  |  28 +++--
 14 files changed, 416 insertions(+), 87 deletions(-)

diff --git a/fsr/xfs_fsr.c b/fsr/xfs_fsr.c
index ceee9576..207dafc2 100644
--- a/fsr/xfs_fsr.c
+++ b/fsr/xfs_fsr.c
@@ -465,6 +465,17 @@ fsrallfs(char *mtab, int howlong, char *leftofffile)
 			ptr = strchr(ptr, ' ');
 			if (ptr) {
 				startino = strtoull(++ptr, NULL, 10);
+				/*
+				 * NOTE: The inode number read in from
+				 * the leftoff file is the last inode
+				 * to have been fsr'd. Since the new
+				 * xfrog_bulkstat function wants to be
+				 * passed the first inode that we want
+				 * to examine, increment the value that
+				 * we read in. The debug message below
+				 * prints the lastoff value.
+				 */
+				startino++;
 			}
 		}
 		if (startpass < 0)
@@ -483,7 +494,7 @@ fsrallfs(char *mtab, int howlong, char *leftofffile)
 		if (vflag) {
 			fsrprintf(_("START: pass=%d ino=%llu %s %s\n"),
-				fs->npass, (unsigned long long)startino,
+				fs->npass, (unsigned long long)startino - 1,
 				fs->dev, fs->mnt);
 		}
@@ -575,12 +586,10 @@ fsrfs(char *mntdir, xfs_ino_t startino, int targetrange)
 	int		fd;
 	int		count = 0;
 	int		ret;
-	uint32_t	buflenout;
-	struct xfs_bstat	buf[GRABSZ];
 	char		fname[64];
 	char		*tname;
 	jdm_fshandle_t	*fshandlep;
-	xfs_ino_t	lastino = startino;
+	struct xfs_bulkstat_req	*breq;
 
 	fsrprintf(_("%s start inode=%llu\n"), mntdir,
 		(unsigned long long)startino);
@@ -611,10 +620,21 @@ fsrfs(char *mntdir, xfs_ino_t startino, int targetrange)
 	tmp_init(mntdir);
 
-	while ((ret = xfrog_bulkstat(&fsxfd, &lastino, GRABSZ, &buf[0],
-				&buflenout)) == 0) {
-		struct xfs_bstat *p;
-		struct xfs_bstat *endp;
+	breq = xfrog_bulkstat_alloc_req(GRABSZ, startino);
+	if (!breq) {
+		fsrprintf(_("Skipping %s: not enough memory\n"),
+			  mntdir);
+		xfrog_close(&fsxfd);
+		free(fshandlep);
+		return -1;
+	}
+
+	while ((ret = xfrog_bulkstat(&fsxfd, breq)) == 0) {
+		struct xfs_bstat	bs1;
+		struct xfs_bulkstat	*buf = breq->bulkstat;
+		struct xfs_bulkstat	*p;
+		struct xfs_bulkstat	*endp;
+		uint32_t		buflenout = breq->hdr.ocount;
 
 		if (buflenout == 0)
 			goto out0;
@@ -622,7 +642,7 @@ fsrfs(char *mntdir, xfs_ino_t startino, int targetrange)
 		/* Each loop through, defrag targetrange percent of the files */
 		count = (buflenout * targetrange) / 100;
 
-		qsort((char *)buf, buflenout, sizeof(struct xfs_bstat), cmp);
+		qsort((char *)buf, buflenout, sizeof(struct xfs_bulkstat), cmp);
 
 		for (p = buf, endp = (buf + buflenout); p < endp ; p++) {
 			/* Do some obvious checks now */
@@ -630,7 +650,8 @@ fsrfs(char *mntdir, xfs_ino_t startino, int targetrange)
 			    (p->bs_extents < 2))
 				continue;
 
-			fd = jdm_open(fshandlep, p, O_RDWR|O_DIRECT);
+			xfrog_bulkstat_to_bstat(&fsxfd, &bs1, p);
+			fd = jdm_open(fshandlep, &bs1, O_RDWR | O_DIRECT);
 			if (fd < 0) {
 				/* This probably means the file was
 				 * removed while in progress of handling
@@ -648,7 +669,7 @@ fsrfs(char *mntdir, xfs_ino_t startino, int targetrange)
 			/* Get a tmp file name */
 			tname = tmp_next(mntdir);
 
-			ret = fsrfile_common(fname, tname, mntdir, fd, p);
+			ret = fsrfile_common(fname, tname, mntdir, fd, &bs1);
 
 			leftoffino = p->bs_ino;
 
@@ -660,6 +681,7 @@ fsrfs(char *mntdir, xfs_ino_t startino, int targetrange)
 			}
 		}
 		if (endtime && endtime < time(NULL)) {
+			free(breq);
 			tmp_close(mntdir);
 			xfrog_close(&fsxfd);
 			fsrall_cleanup(1);
@@ -669,6 +691,7 @@ fsrfs(char *mntdir, xfs_ino_t startino, int targetrange)
 	if (ret < 0)
 		fsrprintf(_("%s: xfrog_bulkstat: %s\n"), progname, strerror(errno));
 out0:
+	free(breq);
 	tmp_close(mntdir);
 	xfrog_close(&fsxfd);
 	free(fshandlep);
diff --git a/include/libfrog.h b/include/libfrog.h
index d33f0146..a28d1b2f 100644
--- a/include/libfrog.h
+++ b/include/libfrog.h
@@ -8,4 +8,6 @@
 unsigned int log2_roundup(unsigned int i);
 
+#define XFROG_ITER_ABORT	(1)
+
 #endif /* __LIBFROG_UTIL_H_ */
diff --git a/include/xfrog.h b/include/xfrog.h
index 3a43a403..f71a7786 100644
--- a/include/xfrog.h
+++ b/include/xfrog.h
@@ -48,8 +48,17 @@ struct xfs_fd {
 	/* log2 of sb_blocksize / sb_sectsize */
 	unsigned int	blkbb_log;
+
+	/* XFROG_FLAG_* state flags */
+	unsigned int	flags;
 };
 
+/* Only use v1 bulkstat/inumbers ioctls. */
+#define XFROG_FLAG_BULKSTAT_FORCE_V1	(1 << 0)
+
+/* Only use v5 bulkstat/inumbers ioctls. */
+#define XFROG_FLAG_BULKSTAT_FORCE_V5	(1 << 1)
+
 /* Static initializers */
 #define XFS_FD_INIT(_fd)	{ .fd = (_fd), }
 #define XFS_FD_INIT_EMPTY	XFS_FD_INIT(-1)
@@ -170,8 +179,14 @@ xfrog_daddr_to_agbno(
 struct xfs_bstat;
 int xfrog_bulkstat_single(struct xfs_fd *xfd, uint64_t ino,
 		struct xfs_bstat *ubuffer);
-int xfrog_bulkstat(struct xfs_fd *xfd, uint64_t *lastino, uint32_t icount,
-		struct xfs_bstat *ubuffer, uint32_t *ocount);
+int xfrog_bulkstat(struct xfs_fd *xfd, struct xfs_bulkstat_req *req);
+
+struct xfs_bulkstat_req *xfrog_bulkstat_alloc_req(uint32_t nr,
+		uint64_t startino);
+void xfrog_bulkstat_to_bstat(struct xfs_fd *xfd, struct xfs_bstat *bs1,
+		const struct xfs_bulkstat *bstat);
+void xfrog_bstat_to_bulkstat(struct xfs_fd *xfd, struct xfs_bulkstat *bstat,
+		const struct xfs_bstat *bs1);
 
 struct xfs_inogrp;
 int xfrog_inumbers(struct xfs_fd *xfd, uint64_t *lastino, uint32_t icount,
diff --git a/io/open.c b/io/open.c
index 968a9d9e..6cbce594 100644
--- a/io/open.c
+++ b/io/open.c
@@ -713,7 +713,6 @@ inode_f(
 	char		**argv)
 {
 	struct xfs_bstat	bstat;
-	uint32_t	count = 0;
 	uint64_t	result_ino = 0;
 	uint64_t	userino = NULLFSINO;
 	char		*p;
@@ -764,20 +763,30 @@ inode_f(
 		}
 	} else if (ret_next) {
 		struct xfs_fd	xfd = XFS_FD_INIT(file->fd);
+		struct xfs_bulkstat_req	*breq;
+
+		breq = xfrog_bulkstat_alloc_req(1, userino + 1);
+		if (!breq) {
+			perror("alloc bulkstat");
+			exitcode = 1;
+			return 0;
+		}
 
 		/* get next inode */
-		ret = xfrog_bulkstat(&xfd, &userino, 1, &bstat, &count);
+		ret = xfrog_bulkstat(&xfd, breq);
 		if (ret) {
 			perror("xfsctl");
+			free(breq);
 			exitcode = 1;
 			return 0;
 		}
 
 		/* The next inode in use, or 0 if none */
-		if (count)
-			result_ino = bstat.bs_ino;
+		if (breq->hdr.ocount)
+			result_ino = breq->bulkstat[0].bs_ino;
 		else
 			result_ino = 0;
+		free(breq);
 	} else {
 		struct xfs_fd	xfd = XFS_FD_INIT(file->fd);
diff --git a/libfrog/bulkstat.c b/libfrog/bulkstat.c
index 6632ffbb..ab27607c 100644
--- a/libfrog/bulkstat.c
+++ b/libfrog/bulkstat.c
@@ -3,8 +3,22 @@
  * Copyright (C) 2019 Oracle. All Rights Reserved.
  * Author: Darrick J. Wong
  */
+#include
+#include
 #include "xfs.h"
 #include "xfrog.h"
+#include "libfrog.h"
+#include "bitops.h"
+
+/* Grab fs geometry needed to degrade to v1 bulkstat/inumbers ioctls. */
+static inline int
+xfrog_bulkstat_prep_v1_emulation(
+	struct xfs_fd		*xfd)
+{
+	if (xfd->fsgeom.blocksize == 0 && xfrog_prepare_geometry(xfd))
+		return -1;
+	return 0;
+}
 
 /* Bulkstat a single inode. */
 int
@@ -24,23 +38,270 @@ xfrog_bulkstat_single(
 	return ioctl(xfd->fd, XFS_IOC_FSBULKSTAT_SINGLE, &bulkreq);
 }
 
+/*
+ * Set up emulation of a v5 bulk request ioctl with a v1 bulk request ioctl.
+ * Returns 0 if the emulation should proceed; XFROG_ITER_ABORT if there are no
+ * records; or -1 for error.
+ */
+static int
+xfrog_bulk_req_setup(
+	struct xfs_fd		*xfd,
+	struct xfs_bulk_ireq	*hdr,
+	struct xfs_fsop_bulkreq	*bulkreq,
+	size_t			rec_size)
+{
+	void			*buf;
+
+	if (hdr->flags & XFS_BULK_IREQ_AGNO) {
+		uint32_t	agno = xfrog_ino_to_agno(xfd, hdr->ino);
+
+		if (hdr->ino == 0)
+			hdr->ino = xfrog_agino_to_ino(xfd, hdr->agno, 0);
+		else if (agno < hdr->agno) {
+			errno = EINVAL;
+			return -1;
+		} else if (agno > hdr->agno)
+			goto no_results;
+	}
+
+	if (xfrog_ino_to_agno(xfd, hdr->ino) > xfd->fsgeom.agcount)
+		goto no_results;
+
+	buf = malloc(hdr->icount * rec_size);
+	if (!buf)
+		return -1;
+
+	if (hdr->ino)
+		hdr->ino--;
+	bulkreq->lastip = (__u64 *)&hdr->ino;
+	bulkreq->icount = hdr->icount;
+	bulkreq->ocount = (__s32 *)&hdr->ocount;
+	bulkreq->ubuffer = buf;
+	return 0;
+
+no_results:
+	hdr->ocount = 0;
+	return XFROG_ITER_ABORT;
+}
+
+/*
+ * Convert records and free resources used to do a v1 emulation of v5 bulk
+ * request.
+ */
+static int
+xfrog_bulk_req_teardown(
+	struct xfs_fd		*xfd,
+	struct xfs_bulk_ireq	*hdr,
+	struct xfs_fsop_bulkreq	*bulkreq,
+	size_t			v1_rec_size,
+	uint64_t		(*v1_ino)(void *v1_rec),
+	void			*v5_records,
+	size_t			v5_rec_size,
+	void			(*cvt)(struct xfs_fd *xfd, void *v5, void *v1),
+	unsigned int		startino_adj,
+	int			error)
+{
+	void			*v1_rec = bulkreq->ubuffer;
+	void			*v5_rec = v5_records;
+	unsigned int		i;
+
+	if (error == XFROG_ITER_ABORT) {
+		error = 0;
+		goto free;
+	}
+	if (error)
+		goto free;
+
+	/*
+	 * Convert each record from v1 to v5 format, keeping the startino
+	 * value up to date and (if desired) stopping at the end of the
+	 * AG.
+	 */
+	for (i = 0;
+	     i < hdr->ocount;
+	     i++, v1_rec += v1_rec_size, v5_rec += v5_rec_size) {
+		uint64_t	ino = v1_ino(v1_rec);
+
+		/* Stop if we hit a different AG. */
+		if ((hdr->flags & XFS_BULK_IREQ_AGNO) &&
+		    xfrog_ino_to_agno(xfd, ino) != hdr->agno) {
+			hdr->ocount = i;
+			break;
+		}
+		cvt(xfd, v5_rec, v1_rec);
+		hdr->ino = ino + startino_adj;
+	}
+
+free:
+	free(bulkreq->ubuffer);
+	return error;
+}
+
+static uint64_t xfrog_bstat_ino(void *v1_rec)
+{
+	return ((struct xfs_bstat *)v1_rec)->bs_ino;
+}
+
+static void xfrog_bstat_cvt(struct xfs_fd *xfd, void *v5, void *v1)
+{
+	xfrog_bstat_to_bulkstat(xfd, v5, v1);
+}
+
+/* Bulkstat a bunch of inodes using the v5 interface. */
+static int
+xfrog_bulkstat5(
+	struct xfs_fd		*xfd,
+	struct xfs_bulkstat_req	*req)
+{
+	return ioctl(xfd->fd, XFS_IOC_BULKSTAT, req);
+}
+
+/* Bulkstat a bunch of inodes using the v1 interface. */
+static int
+xfrog_bulkstat1(
+	struct xfs_fd		*xfd,
+	struct xfs_bulkstat_req	*req)
+{
+	struct xfs_fsop_bulkreq	bulkreq = { 0 };
+	int			error;
+
+	error = xfrog_bulkstat_prep_v1_emulation(xfd);
+	if (error)
+		return error;
+
+	error = xfrog_bulk_req_setup(xfd, &req->hdr, &bulkreq,
+			sizeof(struct xfs_bstat));
+	if (error == XFROG_ITER_ABORT)
+		goto out_teardown;
+	if (error < 0)
+		return error;
+
+	error = ioctl(xfd->fd, XFS_IOC_FSBULKSTAT, &bulkreq);
+
+out_teardown:
+	return xfrog_bulk_req_teardown(xfd, &req->hdr, &bulkreq,
+			sizeof(struct xfs_bstat), xfrog_bstat_ino,
+			&req->bulkstat, sizeof(struct xfs_bulkstat),
+			xfrog_bstat_cvt, 1, error);
+}
+
 /* Bulkstat a bunch of inodes. */
 int
 xfrog_bulkstat(
 	struct xfs_fd		*xfd,
-	uint64_t		*lastino,
-	uint32_t		icount,
-	struct xfs_bstat	*ubuffer,
-	uint32_t		*ocount)
+	struct xfs_bulkstat_req	*req)
 {
-	struct xfs_fsop_bulkreq	bulkreq = {
-		.lastip = (__u64 *)lastino,
-		.icount = icount,
-		.ubuffer = ubuffer,
-		.ocount = (__s32 *)ocount,
-	};
+	int			error;
+
+	if (xfd->flags & XFROG_FLAG_BULKSTAT_FORCE_V1)
+		goto try_v1;
+
+	error = xfrog_bulkstat5(xfd, req);
+	if (error == 0 || (xfd->flags & XFROG_FLAG_BULKSTAT_FORCE_V5))
+		return error;
+
+	/* If the v5 ioctl wasn't found, we punt to v1. */
+	switch (errno) {
+	case EOPNOTSUPP:
+	case ENOTTY:
+		xfd->flags |= XFROG_FLAG_BULKSTAT_FORCE_V1;
+		break;
+	}
+
+try_v1:
+	return xfrog_bulkstat1(xfd, req);
+}
+
+/* Convert bulkstat (v5) to bstat (v1).
*/ +void +xfrog_bulkstat_to_bstat( + struct xfs_fd *xfd, + struct xfs_bstat *bs1, + const struct xfs_bulkstat *bstat) +{ + bs1->bs_ino = bstat->bs_ino; + bs1->bs_mode = bstat->bs_mode; + bs1->bs_nlink = bstat->bs_nlink; + bs1->bs_uid = bstat->bs_uid; + bs1->bs_gid = bstat->bs_gid; + bs1->bs_rdev = bstat->bs_rdev; + bs1->bs_blksize = bstat->bs_blksize; + bs1->bs_size = bstat->bs_size; + bs1->bs_atime.tv_sec = bstat->bs_atime; + bs1->bs_mtime.tv_sec = bstat->bs_mtime; + bs1->bs_ctime.tv_sec = bstat->bs_ctime; + bs1->bs_atime.tv_nsec = bstat->bs_atime_nsec; + bs1->bs_mtime.tv_nsec = bstat->bs_mtime_nsec; + bs1->bs_ctime.tv_nsec = bstat->bs_ctime_nsec; + bs1->bs_blocks = bstat->bs_blocks; + bs1->bs_xflags = bstat->bs_xflags; + bs1->bs_extsize = xfrog_fsb_to_b(xfd, bstat->bs_extsize_blks); + bs1->bs_extents = bstat->bs_extents; + bs1->bs_gen = bstat->bs_gen; + bs1->bs_projid_lo = bstat->bs_projectid & 0xFFFF; + bs1->bs_forkoff = bstat->bs_forkoff; + bs1->bs_projid_hi = bstat->bs_projectid >> 16; + bs1->bs_sick = bstat->bs_sick; + bs1->bs_checked = bstat->bs_checked; + bs1->bs_cowextsize = xfrog_fsb_to_b(xfd, bstat->bs_cowextsize_blks); + bs1->bs_dmevmask = 0; + bs1->bs_dmstate = 0; + bs1->bs_aextents = bstat->bs_aextents; +} + +/* Convert bstat (v1) to bulkstat (v5). 
*/ +void +xfrog_bstat_to_bulkstat( + struct xfs_fd *xfd, + struct xfs_bulkstat *bstat, + const struct xfs_bstat *bs1) +{ + memset(bstat, 0, sizeof(*bstat)); + bstat->bs_version = XFS_BULKSTAT_VERSION_V1; + + bstat->bs_ino = bs1->bs_ino; + bstat->bs_mode = bs1->bs_mode; + bstat->bs_nlink = bs1->bs_nlink; + bstat->bs_uid = bs1->bs_uid; + bstat->bs_gid = bs1->bs_gid; + bstat->bs_rdev = bs1->bs_rdev; + bstat->bs_blksize = bs1->bs_blksize; + bstat->bs_size = bs1->bs_size; + bstat->bs_atime = bs1->bs_atime.tv_sec; + bstat->bs_mtime = bs1->bs_mtime.tv_sec; + bstat->bs_ctime = bs1->bs_ctime.tv_sec; + bstat->bs_atime_nsec = bs1->bs_atime.tv_nsec; + bstat->bs_mtime_nsec = bs1->bs_mtime.tv_nsec; + bstat->bs_ctime_nsec = bs1->bs_ctime.tv_nsec; + bstat->bs_blocks = bs1->bs_blocks; + bstat->bs_xflags = bs1->bs_xflags; + bstat->bs_extsize_blks = xfrog_b_to_fsbt(xfd, bs1->bs_extsize); + bstat->bs_extents = bs1->bs_extents; + bstat->bs_gen = bs1->bs_gen; + bstat->bs_projectid = bstat_get_projid(bs1); + bstat->bs_forkoff = bs1->bs_forkoff; + bstat->bs_sick = bs1->bs_sick; + bstat->bs_checked = bs1->bs_checked; + bstat->bs_cowextsize_blks = xfrog_b_to_fsbt(xfd, bs1->bs_cowextsize); + bstat->bs_aextents = bs1->bs_aextents; +} + +/* Allocate a bulkstat request. */ +struct xfs_bulkstat_req * +xfrog_bulkstat_alloc_req( + uint32_t nr, + uint64_t startino) +{ + struct xfs_bulkstat_req *breq; + + breq = calloc(1, XFS_BULKSTAT_REQ_SIZE(nr)); + if (!breq) + return NULL; + + breq->hdr.icount = nr; + breq->hdr.ino = startino; - return ioctl(xfd->fd, XFS_IOC_FSBULKSTAT, &bulkreq); + return breq; } /* Query inode allocation bitmask information. */ diff --git a/quota/quot.c b/quota/quot.c index 3455636c..7d09af85 100644 --- a/quota/quot.c +++ b/quota/quot.c @@ -68,7 +68,7 @@ quot_help(void) static void quot_bulkstat_add( - struct xfs_bstat *p, + struct xfs_bulkstat *p, uint flags) { du_t *dp; @@ -92,7 +92,7 @@ quot_bulkstat_add( } for (i = 0; i < 3; i++) { id = (i == 0) ? p->bs_uid : ((i == 1) ? 
- p->bs_gid : bstat_get_projid(p)); + p->bs_gid : p->bs_projectid); hp = &duhash[i][id % DUHASH]; for (dp = *hp; dp; dp = dp->next) if (dp->id == id) @@ -112,11 +112,11 @@ quot_bulkstat_add( } dp->blocks += size; - if (now - p->bs_atime.tv_sec > 30 * (60*60*24)) + if (now - p->bs_atime > 30 * (60*60*24)) dp->blocks30 += size; - if (now - p->bs_atime.tv_sec > 60 * (60*60*24)) + if (now - p->bs_atime > 60 * (60*60*24)) dp->blocks60 += size; - if (now - p->bs_atime.tv_sec > 90 * (60*60*24)) + if (now - p->bs_atime > 90 * (60*60*24)) dp->blocks90 += size; dp->nfiles++; } @@ -128,9 +128,7 @@ quot_bulkstat_mount( unsigned int flags) { struct xfs_fd fsxfd = XFS_FD_INIT_EMPTY; - struct xfs_bstat *buf; - uint64_t last = 0; - uint32_t count; + struct xfs_bulkstat_req *breq; int i, sts; du_t **dp; @@ -152,23 +150,29 @@ quot_bulkstat_mount( return; } - buf = (struct xfs_bstat *)calloc(NBSTAT, sizeof(struct xfs_bstat)); - if (!buf) { + i = xfrog_prepare_geometry(&fsxfd); + if (i) { + perror("geometry"); + xfrog_close(&fsxfd); + return; + } + + breq = xfrog_bulkstat_alloc_req(NBSTAT, 0); + if (!breq) { perror("calloc"); xfrog_close(&fsxfd); return; } - while ((sts = xfrog_bulkstat(&fsxfd, &last, NBSTAT, buf, - &count)) == 0) { - if (count == 0) + while ((sts = xfrog_bulkstat(&fsxfd, breq)) == 0) { + if (breq->hdr.ocount == 0) break; - for (i = 0; i < count; i++) - quot_bulkstat_add(&buf[i], flags); + for (i = 0; i < breq->hdr.ocount; i++) + quot_bulkstat_add(&breq->bulkstat[i], flags); } if (sts < 0) perror("XFS_IOC_FSBULKSTAT"), - free(buf); + free(breq); xfrog_close(&fsxfd); } diff --git a/scrub/inodes.c b/scrub/inodes.c index 06d1245d..da3bd82b 100644 --- a/scrub/inodes.c +++ b/scrub/inodes.c @@ -49,13 +49,15 @@ static void xfs_iterate_inodes_range_check( struct scrub_ctx *ctx, struct xfs_inogrp *inogrp, - struct xfs_bstat *bstat) + struct xfs_bulkstat *bstat) { - struct xfs_bstat *bs; + struct xfs_bulkstat *bs; int i; int error; for (i = 0, bs = bstat; i < 
XFS_INODES_PER_CHUNK; i++) { + struct xfs_bstat bs1; + if (!(inogrp->xi_allocmask & (1ULL << i))) continue; if (bs->bs_ino == inogrp->xi_startino + i) { @@ -65,11 +67,13 @@ xfs_iterate_inodes_range_check( /* Load the one inode. */ error = xfrog_bulkstat_single(&ctx->mnt, - inogrp->xi_startino + i, bs); - if (error || bs->bs_ino != inogrp->xi_startino + i) { - memset(bs, 0, sizeof(struct xfs_bstat)); + inogrp->xi_startino + i, &bs1); + if (error || bs1.bs_ino != inogrp->xi_startino + i) { + memset(bs, 0, sizeof(struct xfs_bulkstat)); bs->bs_ino = inogrp->xi_startino + i; bs->bs_blksize = ctx->mnt_sv.f_frsize; + } else { + xfrog_bstat_to_bulkstat(&ctx->mnt, bs, &bs1); } bs++; } @@ -92,50 +96,52 @@ xfs_iterate_inodes_range( { struct xfs_handle handle; struct xfs_inogrp inogrp; - struct xfs_bstat bstat[XFS_INODES_PER_CHUNK]; + struct xfs_bulkstat_req *breq; char idescr[DESCR_BUFSZ]; char buf[DESCR_BUFSZ]; - struct xfs_bstat *bs; + struct xfs_bulkstat *bs; uint64_t igrp_ino; - uint64_t ino; - uint32_t bulklen = 0; uint32_t igrplen = 0; bool moveon = true; int i; int error; int stale_count = 0; - - memset(bstat, 0, XFS_INODES_PER_CHUNK * sizeof(struct xfs_bstat)); - memcpy(&handle.ha_fsid, fshandle, sizeof(handle.ha_fsid)); handle.ha_fid.fid_len = sizeof(xfs_fid_t) - sizeof(handle.ha_fid.fid_len); handle.ha_fid.fid_pad = 0; + breq = xfrog_bulkstat_alloc_req(XFS_INODES_PER_CHUNK, 0); + if (!breq) { + str_info(ctx, descr, _("Insufficient memory; giving up.")); + return false; + } + /* Find the inode chunk & alloc mask */ igrp_ino = first_ino; error = xfrog_inumbers(&ctx->mnt, &igrp_ino, 1, &inogrp, &igrplen); while (!error && igrplen) { - /* Load the inodes. */ - ino = inogrp.xi_startino - 1; - /* * We can have totally empty inode chunks on filesystems where * there are more than 64 inodes per block. Skip these. 
*/ if (inogrp.xi_alloccount == 0) goto igrp_retry; - error = xfrog_bulkstat(&ctx->mnt, &ino, inogrp.xi_alloccount, - bstat, &bulklen); + + breq->hdr.ino = inogrp.xi_startino; + breq->hdr.icount = inogrp.xi_alloccount; + error = xfrog_bulkstat(&ctx->mnt, breq); if (error) str_info(ctx, descr, "%s", strerror_r(errno, buf, DESCR_BUFSZ)); - xfs_iterate_inodes_range_check(ctx, &inogrp, bstat); + xfs_iterate_inodes_range_check(ctx, &inogrp, breq->bulkstat); /* Iterate all the inodes. */ - for (i = 0, bs = bstat; i < inogrp.xi_alloccount; i++, bs++) { + for (i = 0, bs = breq->bulkstat; + i < inogrp.xi_alloccount; + i++, bs++) { if (bs->bs_ino > last_ino) goto out; @@ -181,6 +187,7 @@ _("Changed too many times during scan; giving up.")); str_errno(ctx, descr); moveon = false; } + free(breq); out: return moveon; } diff --git a/scrub/inodes.h b/scrub/inodes.h index 631848c3..3341c6d9 100644 --- a/scrub/inodes.h +++ b/scrub/inodes.h @@ -7,7 +7,7 @@ #define XFS_SCRUB_INODES_H_ typedef int (*xfs_inode_iter_fn)(struct scrub_ctx *ctx, - struct xfs_handle *handle, struct xfs_bstat *bs, void *arg); + struct xfs_handle *handle, struct xfs_bulkstat *bs, void *arg); #define XFS_ITERATE_INODES_ABORT (-1) bool xfs_scan_all_inodes(struct scrub_ctx *ctx, xfs_inode_iter_fn fn, diff --git a/scrub/phase3.c b/scrub/phase3.c index def9a0de..7f1c528a 100644 --- a/scrub/phase3.c +++ b/scrub/phase3.c @@ -30,7 +30,7 @@ xfs_scrub_fd( struct scrub_ctx *ctx, bool (*fn)(struct scrub_ctx *, uint64_t, uint32_t, int, struct xfs_action_list *), - struct xfs_bstat *bs, + struct xfs_bulkstat *bs, struct xfs_action_list *alist) { return fn(ctx, bs->bs_ino, bs->bs_gen, ctx->mnt.fd, alist); @@ -45,7 +45,7 @@ struct scrub_inode_ctx { static void xfs_scrub_inode_vfs_error( struct scrub_ctx *ctx, - struct xfs_bstat *bstat) + struct xfs_bulkstat *bstat) { char descr[DESCR_BUFSZ]; xfs_agnumber_t agno; @@ -65,7 +65,7 @@ static int xfs_scrub_inode( struct scrub_ctx *ctx, struct xfs_handle *handle, - struct xfs_bstat 
*bstat, + struct xfs_bulkstat *bstat, void *arg) { struct xfs_action_list alist; diff --git a/scrub/phase5.c b/scrub/phase5.c index 2189c9e4..335b0d19 100644 --- a/scrub/phase5.c +++ b/scrub/phase5.c @@ -80,7 +80,7 @@ xfs_scrub_scan_dirents( struct scrub_ctx *ctx, const char *descr, int *fd, - struct xfs_bstat *bstat) + struct xfs_bulkstat *bstat) { struct unicrash *uc = NULL; DIR *dir; @@ -140,7 +140,7 @@ xfs_scrub_scan_fhandle_namespace_xattrs( struct scrub_ctx *ctx, const char *descr, struct xfs_handle *handle, - struct xfs_bstat *bstat, + struct xfs_bulkstat *bstat, const struct attrns_decode *attr_ns) { struct attrlist_cursor cur; @@ -200,7 +200,7 @@ xfs_scrub_scan_fhandle_xattrs( struct scrub_ctx *ctx, const char *descr, struct xfs_handle *handle, - struct xfs_bstat *bstat) + struct xfs_bulkstat *bstat) { const struct attrns_decode *ns; bool moveon = true; @@ -228,7 +228,7 @@ static int xfs_scrub_connections( struct scrub_ctx *ctx, struct xfs_handle *handle, - struct xfs_bstat *bstat, + struct xfs_bulkstat *bstat, void *arg) { bool *pmoveon = arg; diff --git a/scrub/phase6.c b/scrub/phase6.c index 630d15b0..3c1e7dc3 100644 --- a/scrub/phase6.c +++ b/scrub/phase6.c @@ -172,7 +172,7 @@ static int xfs_report_verify_inode( struct scrub_ctx *ctx, struct xfs_handle *handle, - struct xfs_bstat *bstat, + struct xfs_bulkstat *bstat, void *arg) { char descr[DESCR_BUFSZ]; diff --git a/scrub/unicrash.c b/scrub/unicrash.c index 824b10f0..3ae91327 100644 --- a/scrub/unicrash.c +++ b/scrub/unicrash.c @@ -432,7 +432,7 @@ unicrash_init( */ static bool is_only_root_writable( - struct xfs_bstat *bstat) + struct xfs_bulkstat *bstat) { if (bstat->bs_uid != 0 || bstat->bs_gid != 0) return false; @@ -444,7 +444,7 @@ bool unicrash_dir_init( struct unicrash **ucp, struct scrub_ctx *ctx, - struct xfs_bstat *bstat) + struct xfs_bulkstat *bstat) { /* * Assume 64 bytes per dentry, clamp buckets between 16 and 64k. 
@@ -459,7 +459,7 @@ bool unicrash_xattr_init( struct unicrash **ucp, struct scrub_ctx *ctx, - struct xfs_bstat *bstat) + struct xfs_bulkstat *bstat) { /* Assume 16 attributes per extent for lack of a better idea. */ return unicrash_init(ucp, ctx, false, 16 * (1 + bstat->bs_aextents), diff --git a/scrub/unicrash.h b/scrub/unicrash.h index fb8f5f72..feb9cc86 100644 --- a/scrub/unicrash.h +++ b/scrub/unicrash.h @@ -14,9 +14,9 @@ struct unicrash; struct dirent; bool unicrash_dir_init(struct unicrash **ucp, struct scrub_ctx *ctx, - struct xfs_bstat *bstat); + struct xfs_bulkstat *bstat); bool unicrash_xattr_init(struct unicrash **ucp, struct scrub_ctx *ctx, - struct xfs_bstat *bstat); + struct xfs_bulkstat *bstat); bool unicrash_fs_label_init(struct unicrash **ucp, struct scrub_ctx *ctx); void unicrash_free(struct unicrash *uc); bool unicrash_check_dir_name(struct unicrash *uc, const char *descr, diff --git a/spaceman/health.c b/spaceman/health.c index 6c9c75a1..e71c1e45 100644 --- a/spaceman/health.c +++ b/spaceman/health.c @@ -263,11 +263,10 @@ static int report_bulkstat_health( xfs_agnumber_t agno) { - struct xfs_bstat bstat[128]; + struct xfs_bulkstat_req *breq; char descr[256]; uint64_t startino = 0; uint64_t lastino = -1ULL; - uint32_t ocount; uint32_t i; int error; @@ -276,18 +275,27 @@ report_bulkstat_health( lastino = xfrog_agino_to_ino(&file->xfd, agno + 1, 0) - 1; } - while ((error = xfrog_bulkstat(&file->xfd, &startino, 128, bstat, - &ocount) == 0) && ocount > 0) { - for (i = 0; i < ocount; i++) { - if (bstat[i].bs_ino > lastino) + breq = xfrog_bulkstat_alloc_req(128, startino); + if (!breq) { + perror("bulk alloc req"); + exitcode = 1; + return 1; + } + + while ((error = xfrog_bulkstat(&file->xfd, breq) == 0) && + breq->hdr.ocount > 0) { + for (i = 0; i < breq->hdr.ocount; i++) { + if (breq->bulkstat[i].bs_ino > lastino) goto out; - snprintf(descr, sizeof(descr) - 1, _("inode %llu"), - bstat[i].bs_ino); - report_sick(descr, inode_flags, bstat[i].bs_sick, - 
bstat[i].bs_checked); + snprintf(descr, sizeof(descr) - 1, _("inode %"PRIu64), + breq->bulkstat[i].bs_ino); + report_sick(descr, inode_flags, + breq->bulkstat[i].bs_sick, + breq->bulkstat[i].bs_checked); } } out: + free(breq); return error; }

From patchwork Mon Aug 26 21:22:36 2019
X-Patchwork-Submitter: "Darrick J. Wong"
X-Patchwork-Id: 11115609
Subject: [PATCH 4/5] misc: convert to v5 bulkstat_single call
From: "Darrick J. Wong"
To: sandeen@sandeen.net, darrick.wong@oracle.com
Cc: linux-xfs@vger.kernel.org
Date: Mon, 26 Aug 2019 14:22:36 -0700
Message-ID: <156685455669.2840332.9371973449817694904.stgit@magnolia>
In-Reply-To: <156685453125.2840332.15645173323964762232.stgit@magnolia>
References: <156685453125.2840332.15645173323964762232.stgit@magnolia>

From: Darrick J. Wong

Signed-off-by: Darrick J.
Wong --- fsr/xfs_fsr.c | 8 +++- include/xfrog.h | 4 +- io/open.c | 6 ++- io/swapext.c | 4 ++ libfrog/bulkstat.c | 104 ++++++++++++++++++++++++++++++++++++++++++++++------ scrub/inodes.c | 8 +--- spaceman/health.c | 4 +- 7 files changed, 109 insertions(+), 29 deletions(-) diff --git a/fsr/xfs_fsr.c b/fsr/xfs_fsr.c index 207dafc2..9238f93c 100644 --- a/fsr/xfs_fsr.c +++ b/fsr/xfs_fsr.c @@ -731,6 +731,7 @@ fsrfile( xfs_ino_t ino) { struct xfs_fd fsxfd = XFS_FD_INIT_EMPTY; + struct xfs_bulkstat bulkstat; struct xfs_bstat statbuf; jdm_fshandle_t *fshandlep; int fd = -1; @@ -761,12 +762,13 @@ fsrfile( goto out; } - error = xfrog_bulkstat_single(&fsxfd, ino, &statbuf); + error = xfrog_bulkstat_single(&fsxfd, ino, 0, &bulkstat); if (error < 0) { fsrprintf(_("unable to get bstat on %s: %s\n"), fname, strerror(errno)); goto out; } + xfrog_bulkstat_to_bstat(&fsxfd, &statbuf, &bulkstat); fd = jdm_open(fshandlep, &statbuf, O_RDWR|O_DIRECT); if (fd < 0) { @@ -987,7 +989,7 @@ fsr_setup_attr_fork( i = 0; do { - struct xfs_bstat tbstat; + struct xfs_bulkstat tbstat; char name[64]; int ret; @@ -996,7 +998,7 @@ fsr_setup_attr_fork( * this to compare against the target and determine what we * need to do. 
*/ - ret = xfrog_bulkstat_single(&txfd, tstatbuf.st_ino, &tbstat); + ret = xfrog_bulkstat_single(&txfd, tstatbuf.st_ino, 0, &tbstat); if (ret < 0) { fsrprintf(_("unable to get bstat on temp file: %s\n"), strerror(errno)); diff --git a/include/xfrog.h b/include/xfrog.h index f71a7786..de87f97c 100644 --- a/include/xfrog.h +++ b/include/xfrog.h @@ -177,8 +177,8 @@ xfrog_daddr_to_agbno( /* Bulkstat wrappers */ struct xfs_bstat; -int xfrog_bulkstat_single(struct xfs_fd *xfd, uint64_t ino, - struct xfs_bstat *ubuffer); +int xfrog_bulkstat_single(struct xfs_fd *xfd, uint64_t ino, unsigned int flags, + struct xfs_bulkstat *bulkstat); int xfrog_bulkstat(struct xfs_fd *xfd, struct xfs_bulkstat_req *req); struct xfs_bulkstat_req *xfrog_bulkstat_alloc_req(uint32_t nr, diff --git a/io/open.c b/io/open.c index 6cbce594..015f88ec 100644 --- a/io/open.c +++ b/io/open.c @@ -712,7 +712,7 @@ inode_f( int argc, char **argv) { - struct xfs_bstat bstat; + struct xfs_bulkstat bulkstat; uint64_t result_ino = 0; uint64_t userino = NULLFSINO; char *p; @@ -791,7 +791,7 @@ inode_f( struct xfs_fd xfd = XFS_FD_INIT(file->fd); /* get this inode */ - ret = xfrog_bulkstat_single(&xfd, userino, &bstat); + ret = xfrog_bulkstat_single(&xfd, userino, 0, &bulkstat); if (ret && errno == EINVAL) { /* Not in use */ result_ino = 0; @@ -800,7 +800,7 @@ inode_f( exitcode = 1; return 0; } else { - result_ino = bstat.bs_ino; + result_ino = bulkstat.bs_ino; } } diff --git a/io/swapext.c b/io/swapext.c index e8432e7d..196d4744 100644 --- a/io/swapext.c +++ b/io/swapext.c @@ -27,6 +27,7 @@ swapext_f( char **argv) { struct xfs_fd fxfd = XFS_FD_INIT(file->fd); + struct xfs_bulkstat bulkstat; int fd; int error; struct xfs_swapext sx; @@ -47,11 +48,12 @@ swapext_f( goto out; } - error = xfrog_bulkstat_single(&fxfd, stat.st_ino, &sx.sx_stat); + error = xfrog_bulkstat_single(&fxfd, stat.st_ino, 0, &bulkstat); if (error) { perror("bulkstat"); goto out; } + xfrog_bulkstat_to_bstat(&fxfd, &sx.sx_stat, &bulkstat); 
sx.sx_version = XFS_SX_VERSION; sx.sx_fdtarget = file->fd; sx.sx_fdtmp = fd; diff --git a/libfrog/bulkstat.c b/libfrog/bulkstat.c index ab27607c..70d9c42a 100644 --- a/libfrog/bulkstat.c +++ b/libfrog/bulkstat.c @@ -20,22 +20,102 @@ xfrog_bulkstat_prep_v1_emulation( return 0; } -/* Bulkstat a single inode. */ +/* Bulkstat a single inode using v5 ioctl. */ +static int +xfrog_bulkstat_single5( + struct xfs_fd *xfd, + uint64_t ino, + unsigned int flags, + struct xfs_bulkstat *bulkstat) +{ + struct xfs_bulkstat_req *req; + int ret; + + if (flags & ~(XFS_BULK_IREQ_SPECIAL)) { + errno = EINVAL; + return -1; + } + + req = xfrog_bulkstat_alloc_req(1, ino); + if (!req) + return -1; + + req->hdr.flags = flags; + ret = ioctl(xfd->fd, XFS_IOC_BULKSTAT, req); + if (ret) + goto free; + + if (req->hdr.ocount == 0) { + errno = ENOENT; + ret = -1; + goto free; + } + + memcpy(bulkstat, req->bulkstat, sizeof(struct xfs_bulkstat)); +free: + free(req); + return ret; +} + +/* Bulkstat a single inode using v1 ioctl. */ +static int +xfrog_bulkstat_single1( + struct xfs_fd *xfd, + uint64_t ino, + unsigned int flags, + struct xfs_bulkstat *bulkstat) +{ + struct xfs_bstat bstat; + struct xfs_fsop_bulkreq bulkreq = { 0 }; + int error; + + if (flags) { + errno = EINVAL; + return -1; + } + + error = xfrog_bulkstat_prep_v1_emulation(xfd); + if (error) + return error; + + bulkreq.lastip = (__u64 *)&ino; + bulkreq.icount = 1; + bulkreq.ubuffer = &bstat; + error = ioctl(xfd->fd, XFS_IOC_FSBULKSTAT_SINGLE, &bulkreq); + if (error) + return error; + + xfrog_bstat_to_bulkstat(xfd, bulkstat, &bstat); + return 0; +} + +/* Bulkstat a single inode; tries the v5 ioctl first, falling back to v1. 
*/ int xfrog_bulkstat_single( - struct xfs_fd *xfd, - uint64_t ino, - struct xfs_bstat *ubuffer) + struct xfs_fd *xfd, + uint64_t ino, + unsigned int flags, + struct xfs_bulkstat *bulkstat) { - __u64 i = ino; - struct xfs_fsop_bulkreq bulkreq = { - .lastip = &i, - .icount = 1, - .ubuffer = ubuffer, - .ocount = NULL, - }; + int error; + + if (xfd->flags & XFROG_FLAG_BULKSTAT_FORCE_V1) + goto try_v1; + + error = xfrog_bulkstat_single5(xfd, ino, flags, bulkstat); + if (error == 0 || (xfd->flags & XFROG_FLAG_BULKSTAT_FORCE_V5)) + return error; - return ioctl(xfd->fd, XFS_IOC_FSBULKSTAT_SINGLE, &bulkreq); + /* If the v5 ioctl wasn't found, we punt to v1. */ + switch (errno) { + case EOPNOTSUPP: + case ENOTTY: + xfd->flags |= XFROG_FLAG_BULKSTAT_FORCE_V1; + break; + } + +try_v1: + return xfrog_bulkstat_single1(xfd, ino, flags, bulkstat); } /* diff --git a/scrub/inodes.c b/scrub/inodes.c index da3bd82b..49ab74e3 100644 --- a/scrub/inodes.c +++ b/scrub/inodes.c @@ -56,8 +56,6 @@ xfs_iterate_inodes_range_check( int error; for (i = 0, bs = bstat; i < XFS_INODES_PER_CHUNK; i++) { - struct xfs_bstat bs1; - if (!(inogrp->xi_allocmask & (1ULL << i))) continue; if (bs->bs_ino == inogrp->xi_startino + i) { @@ -67,13 +65,11 @@ xfs_iterate_inodes_range_check( /* Load the one inode. 
*/ error = xfrog_bulkstat_single(&ctx->mnt, - inogrp->xi_startino + i, &bs1); - if (error || bs1.bs_ino != inogrp->xi_startino + i) { + inogrp->xi_startino + i, 0, bs); + if (error || bs->bs_ino != inogrp->xi_startino + i) { memset(bs, 0, sizeof(struct xfs_bulkstat)); bs->bs_ino = inogrp->xi_startino + i; bs->bs_blksize = ctx->mnt_sv.f_frsize; - } else { - xfrog_bstat_to_bulkstat(&ctx->mnt, bs, &bs1); } bs++; } diff --git a/spaceman/health.c b/spaceman/health.c index e71c1e45..ff03d074 100644 --- a/spaceman/health.c +++ b/spaceman/health.c @@ -208,7 +208,7 @@ report_inode_health( unsigned long long ino, const char *descr) { - struct xfs_bstat bs; + struct xfs_bulkstat bs; char d[256]; int ret; @@ -217,7 +217,7 @@ report_inode_health( descr = d; } - ret = xfrog_bulkstat_single(&file->xfd, ino, &bs); + ret = xfrog_bulkstat_single(&file->xfd, ino, 0, &bs); if (ret) { perror(descr); return 1; From patchwork Mon Aug 26 21:22:43 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: "Darrick J. 
Wong"
X-Patchwork-Id: 11115615

Subject: [PATCH 5/5] misc: convert from XFS_IOC_FSINUMBERS to XFS_IOC_INUMBERS
From: "Darrick J. Wong"
To: sandeen@sandeen.net, darrick.wong@oracle.com
Cc: linux-xfs@vger.kernel.org
Date: Mon, 26 Aug 2019 14:22:43 -0700
Message-ID: <156685456301.2840332.3019220306362297470.stgit@magnolia>
In-Reply-To: <156685453125.2840332.15645173323964762232.stgit@magnolia>
References: <156685453125.2840332.15645173323964762232.stgit@magnolia>

From:
Darrick J. Wong

Convert all programs to use the v5 inumbers ioctl.

Signed-off-by: Darrick J. Wong
---
 include/xfrog.h    |   10 +++-
 io/imap.c          |   26 +++++------
 io/open.c          |   27 +++++++----
 libfrog/bulkstat.c |  123 +++++++++++++++++++++++++++++++++++++++++++++++-----
 scrub/fscounters.c |   21 +++++----
 scrub/inodes.c     |   36 +++++++++------
 6 files changed, 183 insertions(+), 60 deletions(-)

diff --git a/include/xfrog.h b/include/xfrog.h
index de87f97c..0dcee510 100644
--- a/include/xfrog.h
+++ b/include/xfrog.h
@@ -189,8 +189,14 @@ void xfrog_bstat_to_bulkstat(struct xfs_fd *xfd,
 		struct xfs_bulkstat *bstat, const struct xfs_bstat *bs1);
 
 struct xfs_inogrp;
-int xfrog_inumbers(struct xfs_fd *xfd, uint64_t *lastino, uint32_t icount,
-		struct xfs_inogrp *ubuffer, uint32_t *ocount);
+int xfrog_inumbers(struct xfs_fd *xfd, struct xfs_inumbers_req *req);
+
+struct xfs_inumbers_req *xfrog_inumbers_alloc_req(uint32_t nr,
+		uint64_t startino);
+void xfrog_inumbers_to_inogrp(struct xfs_inogrp *ig1,
+		const struct xfs_inumbers *ig);
+void xfrog_inogrp_to_inumbers(struct xfs_inumbers *ig,
+		const struct xfs_inogrp *ig1);
 
 int xfrog_ag_geometry(int fd, unsigned int agno, struct xfs_ag_geometry *ageo);
 
diff --git a/io/imap.c b/io/imap.c
index 9a3d8965..6b6cc0f9 100644
--- a/io/imap.c
+++ b/io/imap.c
@@ -16,9 +16,7 @@ static int
 imap_f(int argc, char **argv)
 {
 	struct xfs_fd		xfd = XFS_FD_INIT(file->fd);
-	struct xfs_inogrp	*t;
-	uint64_t		last = 0;
-	uint32_t		count;
+	struct xfs_inumbers_req	*ireq;
 	uint32_t		nent;
 	int			i;
 	int			error;
@@ -28,17 +26,19 @@ imap_f(int argc, char **argv)
 	else
 		nent = atoi(argv[1]);
 
-	t = malloc(nent * sizeof(*t));
-	if (!t)
+	ireq = xfrog_inumbers_alloc_req(nent, 0);
+	if (!ireq) {
+		perror("alloc req");
 		return 0;
+	}
 
-	while ((error = xfrog_inumbers(&xfd, &last, nent, t, &count)) == 0 &&
-			count > 0) {
-		for (i = 0; i < count; i++) {
-			printf(_("ino %10llu count %2d mask %016llx\n"),
-				(unsigned long long)t[i].xi_startino,
-				t[i].xi_alloccount,
-				(unsigned long long)t[i].xi_allocmask);
+	while ((error = xfrog_inumbers(&xfd, ireq)) == 0 &&
+	       ireq->hdr.ocount > 0) {
+		for (i = 0; i < ireq->hdr.ocount; i++) {
+			printf(_("ino %10"PRIu64" count %2d mask %016"PRIx64"\n"),
+				ireq->inumbers[i].xi_startino,
+				ireq->inumbers[i].xi_alloccount,
+				ireq->inumbers[i].xi_allocmask);
 		}
 	}
 
@@ -46,7 +46,7 @@ imap_f(int argc, char **argv)
 		perror("xfsctl(XFS_IOC_FSINUMBERS)");
 		exitcode = 1;
 	}
-	free(t);
+	free(ireq);
 	return 0;
 }
 
diff --git a/io/open.c b/io/open.c
index 015f88ec..8322e147 100644
--- a/io/open.c
+++ b/io/open.c
@@ -674,35 +674,42 @@ static __u64
 get_last_inode(void)
 {
 	struct xfs_fd		xfd = XFS_FD_INIT(file->fd);
-	uint64_t		lastip = 0;
+	struct xfs_inumbers_req	*ireq;
 	uint32_t		lastgrp = 0;
-	uint32_t		ocount = 0;
 	__u64			last_ino;
-	struct xfs_inogrp	igroup[IGROUP_NR];
+
+	ireq = xfrog_inumbers_alloc_req(IGROUP_NR, 0);
+	if (!ireq) {
+		perror("alloc req");
+		return 0;
+	}
 
 	for (;;) {
-		if (xfrog_inumbers(&xfd, &lastip, IGROUP_NR, igroup,
-				&ocount)) {
+		if (xfrog_inumbers(&xfd, ireq)) {
 			perror("XFS_IOC_FSINUMBERS");
+			free(ireq);
 			return 0;
 		}
 		/* Did we reach the last inode? */
-		if (ocount == 0)
+		if (ireq->hdr.ocount == 0)
 			break;
 		/* last inode in igroup table */
-		lastgrp = ocount;
+		lastgrp = ireq->hdr.ocount;
 	}
 
-	if (lastgrp == 0)
+	if (lastgrp == 0) {
+		free(ireq);
 		return 0;
+	}
 
 	lastgrp--;
 
 	/* The last inode number in use */
-	last_ino = igroup[lastgrp].xi_startino +
-		libxfs_highbit64(igroup[lastgrp].xi_allocmask);
+	last_ino = ireq->inumbers[lastgrp].xi_startino +
+		libxfs_highbit64(ireq->inumbers[lastgrp].xi_allocmask);
+	free(ireq);
 	return last_ino;
 }
 
diff --git a/libfrog/bulkstat.c b/libfrog/bulkstat.c
index 70d9c42a..26db9af2 100644
--- a/libfrog/bulkstat.c
+++ b/libfrog/bulkstat.c
@@ -384,21 +384,120 @@ xfrog_bulkstat_alloc_req(
 	return breq;
 }
 
+/* Convert an inumbers (v5) struct to an inogrp (v1) struct. */
+void
+xfrog_inumbers_to_inogrp(
+	struct xfs_inogrp		*ig1,
+	const struct xfs_inumbers	*ig)
+{
+	ig1->xi_startino = ig->xi_startino;
+	ig1->xi_alloccount = ig->xi_alloccount;
+	ig1->xi_allocmask = ig->xi_allocmask;
+}
+
+/* Convert an inogrp (v1) struct to an inumbers (v5) struct. */
+void
+xfrog_inogrp_to_inumbers(
+	struct xfs_inumbers	*ig,
+	const struct xfs_inogrp	*ig1)
+{
+	memset(ig, 0, sizeof(*ig));
+	ig->xi_version = XFS_INUMBERS_VERSION_V1;
+
+	ig->xi_startino = ig1->xi_startino;
+	ig->xi_alloccount = ig1->xi_alloccount;
+	ig->xi_allocmask = ig1->xi_allocmask;
+}
+
+static uint64_t xfrog_inum_ino(void *v1_rec)
+{
+	return ((struct xfs_inogrp *)v1_rec)->xi_startino;
+}
+
+static void xfrog_inum_cvt(struct xfs_fd *xfd, void *v5, void *v1)
+{
+	xfrog_inogrp_to_inumbers(v5, v1);
+}
+
+/* Query inode allocation bitmask information using v5 ioctl. */
+static int
+xfrog_inumbers5(
+	struct xfs_fd		*xfd,
+	struct xfs_inumbers_req	*req)
+{
+	return ioctl(xfd->fd, XFS_IOC_INUMBERS, req);
+}
+
+/* Query inode allocation bitmask information using v1 ioctl. */
+static int
+xfrog_inumbers1(
+	struct xfs_fd		*xfd,
+	struct xfs_inumbers_req	*req)
+{
+	struct xfs_fsop_bulkreq	bulkreq = { 0 };
+	int			error;
+
+	error = xfrog_bulkstat_prep_v1_emulation(xfd);
+	if (error)
+		return error;
+
+	error = xfrog_bulk_req_setup(xfd, &req->hdr, &bulkreq,
+			sizeof(struct xfs_inogrp));
+	if (error == XFROG_ITER_ABORT)
+		goto out_teardown;
+	if (error < 0)
+		return error;
+
+	error = ioctl(xfd->fd, XFS_IOC_FSINUMBERS, &bulkreq);
+
+out_teardown:
+	return xfrog_bulk_req_teardown(xfd, &req->hdr, &bulkreq,
+			sizeof(struct xfs_inogrp), xfrog_inum_ino,
+			&req->inumbers, sizeof(struct xfs_inumbers),
+			xfrog_inum_cvt, 64, error);
+}
+
 /* Query inode allocation bitmask information. */
 int
 xfrog_inumbers(
 	struct xfs_fd		*xfd,
-	uint64_t		*lastino,
-	uint32_t		icount,
-	struct xfs_inogrp	*ubuffer,
-	uint32_t		*ocount)
+	struct xfs_inumbers_req	*req)
 {
-	struct xfs_fsop_bulkreq	bulkreq = {
-		.lastip = (__u64 *)lastino,
-		.icount = icount,
-		.ubuffer = ubuffer,
-		.ocount = (__s32 *)ocount,
-	};
-
-	return ioctl(xfd->fd, XFS_IOC_FSINUMBERS, &bulkreq);
+	int			error;
+
+	if (xfd->flags & XFROG_FLAG_BULKSTAT_FORCE_V1)
+		goto try_v1;
+
+	error = xfrog_inumbers5(xfd, req);
+	if (error == 0 || (xfd->flags & XFROG_FLAG_BULKSTAT_FORCE_V5))
+		return error;
+
+	/* If the v5 ioctl wasn't found, we punt to v1. */
+	switch (errno) {
+	case EOPNOTSUPP:
+	case ENOTTY:
+		xfd->flags |= XFROG_FLAG_BULKSTAT_FORCE_V1;
+		break;
+	}
+
+try_v1:
+	return xfrog_inumbers1(xfd, req);
+}
+
+/* Allocate an inumbers request. */
+struct xfs_inumbers_req *
+xfrog_inumbers_alloc_req(
+	uint32_t		nr,
+	uint64_t		startino)
+{
+	struct xfs_inumbers_req	*ireq;
+
+	ireq = calloc(1, XFS_INUMBERS_REQ_SIZE(nr));
+	if (!ireq)
+		return NULL;
+
+	ireq->hdr.icount = nr;
+	ireq->hdr.ino = startino;
+
+	return ireq;
+}
 
diff --git a/scrub/fscounters.c b/scrub/fscounters.c
index cd216b30..d95418ba 100644
--- a/scrub/fscounters.c
+++ b/scrub/fscounters.c
@@ -42,23 +42,28 @@ xfs_count_inodes_range(
 	uint64_t		last_ino,
 	uint64_t		*count)
 {
-	struct xfs_inogrp	inogrp;
-	uint64_t		igrp_ino;
+	struct xfs_inumbers_req	*ireq;
 	uint64_t		nr = 0;
-	uint32_t		igrplen = 0;
 	int			error;
 
 	ASSERT(!(first_ino & (XFS_INODES_PER_CHUNK - 1)));
 	ASSERT((last_ino & (XFS_INODES_PER_CHUNK - 1)));
 
-	igrp_ino = first_ino;
-	while (!(error = xfrog_inumbers(&ctx->mnt, &igrp_ino, 1, &inogrp,
-			&igrplen))) {
-		if (igrplen == 0 || inogrp.xi_startino >= last_ino)
+	ireq = xfrog_inumbers_alloc_req(1, first_ino);
+	if (!ireq) {
+		str_info(ctx, descr, _("Insufficient memory; giving up."));
+		return false;
+	}
+
+	while (!(error = xfrog_inumbers(&ctx->mnt, ireq))) {
+		if (ireq->hdr.ocount == 0 ||
+		    ireq->inumbers[0].xi_startino >= last_ino)
 			break;
-		nr += inogrp.xi_alloccount;
+		nr += ireq->inumbers[0].xi_alloccount;
 	}
 
+	free(ireq);
+
 	if (error) {
 		str_errno(ctx, descr);
 		return false;
diff --git a/scrub/inodes.c b/scrub/inodes.c
index 49ab74e3..bcdc60b9 100644
--- a/scrub/inodes.c
+++ b/scrub/inodes.c
@@ -48,7 +48,7 @@ static void
 xfs_iterate_inodes_range_check(
 	struct scrub_ctx	*ctx,
-	struct xfs_inogrp	*inogrp,
+	struct xfs_inumbers	*inogrp,
 	struct xfs_bulkstat	*bstat)
 {
 	struct xfs_bulkstat	*bs;
@@ -91,13 +91,12 @@ xfs_iterate_inodes_range(
 	void			*arg)
 {
 	struct xfs_handle	handle;
-	struct xfs_inogrp	inogrp;
+	struct xfs_inumbers_req	*ireq;
 	struct xfs_bulkstat_req	*breq;
 	char			idescr[DESCR_BUFSZ];
 	char			buf[DESCR_BUFSZ];
 	struct xfs_bulkstat	*bs;
-	uint64_t		igrp_ino;
-	uint32_t		igrplen = 0;
+	struct xfs_inumbers	*inogrp;
 	bool			moveon = true;
 	int			i;
 	int			error;
@@ -114,29 +113,36 @@ xfs_iterate_inodes_range(
 		return false;
 	}
 
+	ireq = xfrog_inumbers_alloc_req(1, first_ino);
+	if (!ireq) {
+		str_info(ctx, descr, _("Insufficient memory; giving up."));
+		free(breq);
+		return false;
+	}
+	inogrp = &ireq->inumbers[0];
+
 	/* Find the inode chunk & alloc mask */
-	igrp_ino = first_ino;
-	error = xfrog_inumbers(&ctx->mnt, &igrp_ino, 1, &inogrp, &igrplen);
-	while (!error && igrplen) {
+	error = xfrog_inumbers(&ctx->mnt, ireq);
+	while (!error && ireq->hdr.ocount > 0) {
 		/*
 		 * We can have totally empty inode chunks on filesystems where
 		 * there are more than 64 inodes per block.  Skip these.
 		 */
-		if (inogrp.xi_alloccount == 0)
+		if (inogrp->xi_alloccount == 0)
 			goto igrp_retry;
-		breq->hdr.ino = inogrp.xi_startino;
-		breq->hdr.icount = inogrp.xi_alloccount;
+		breq->hdr.ino = inogrp->xi_startino;
+		breq->hdr.icount = inogrp->xi_alloccount;
 		error = xfrog_bulkstat(&ctx->mnt, breq);
 		if (error)
 			str_info(ctx, descr, "%s", strerror_r(errno, buf,
						DESCR_BUFSZ));
 
-		xfs_iterate_inodes_range_check(ctx, &inogrp, breq->bulkstat);
+		xfs_iterate_inodes_range_check(ctx, inogrp, breq->bulkstat);
 
 		/* Iterate all the inodes. */
 		for (i = 0, bs = breq->bulkstat;
-		     i < inogrp.xi_alloccount;
+		     i < inogrp->xi_alloccount;
 		     i++, bs++) {
 			if (bs->bs_ino > last_ino)
 				goto out;
@@ -150,7 +156,7 @@ xfs_iterate_inodes_range(
 			case ESTALE:
 				stale_count++;
 				if (stale_count < 30) {
-					igrp_ino = inogrp.xi_startino;
+					ireq->hdr.ino = inogrp->xi_startino;
 					goto igrp_retry;
 				}
 				snprintf(idescr, DESCR_BUFSZ, "inode %"PRIu64,
@@ -174,8 +180,7 @@ _("Changed too many times during scan; giving up."));
 		stale_count = 0;
 igrp_retry:
-		error = xfrog_inumbers(&ctx->mnt, &igrp_ino, 1, &inogrp,
-				&igrplen);
+		error = xfrog_inumbers(&ctx->mnt, ireq);
 	}
 
 err:
@@ -183,6 +188,7 @@ _("Changed too many times during scan; giving up."));
 		str_errno(ctx, descr);
 		moveon = false;
 	}
+	free(ireq);
 	free(breq);
 out:
 	return moveon;