From patchwork Fri Sep 6 03:34:56 2019
X-Patchwork-Submitter: "Darrick J. Wong"
X-Patchwork-Id: 11134313
Subject: [PATCH 1/6] man: add documentation for v5 bulkstat ioctl
From: "Darrick J. Wong"
To: sandeen@sandeen.net, darrick.wong@oracle.com
Cc: linux-xfs@vger.kernel.org
Date: Thu, 05 Sep 2019 20:34:56 -0700
Message-ID: <156774089663.2643497.7520759665881798589.stgit@magnolia>
In-Reply-To: <156774089024.2643497.2754524603021685770.stgit@magnolia>
References: <156774089024.2643497.2754524603021685770.stgit@magnolia>
List-ID: linux-xfs@vger.kernel.org
From: Darrick J. Wong

Add a new manpage describing the V5 XFS_IOC_BULKSTAT ioctl.

Signed-off-by: Darrick J. Wong
---
 man/man2/ioctl_xfs_bulkstat.2   | 330 +++++++++++++++++++++++++++++++++++++++
 man/man2/ioctl_xfs_fsbulkstat.2 |   6 +
 2 files changed, 336 insertions(+)
 create mode 100644 man/man2/ioctl_xfs_bulkstat.2

diff --git a/man/man2/ioctl_xfs_bulkstat.2 b/man/man2/ioctl_xfs_bulkstat.2
new file mode 100644
index 00000000..f687cfe8
--- /dev/null
+++ b/man/man2/ioctl_xfs_bulkstat.2
@@ -0,0 +1,330 @@
+.\" Copyright (c) 2019, Oracle. All rights reserved.
+.\"
+.\" %%%LICENSE_START(GPLv2+_DOC_FULL)
+.\" SPDX-License-Identifier: GPL-2.0+
+.\" %%%LICENSE_END
+.TH IOCTL-XFS-BULKSTAT 2 2019-05-23 "XFS"
+.SH NAME
+ioctl_xfs_bulkstat \- query information for a batch of XFS inodes
+.SH SYNOPSIS
+.br
+.B #include <xfs/xfs_fs.h>
+.PP
+.BI "int ioctl(int " fd ", XFS_IOC_BULKSTAT, struct xfs_bulkstat_req *" arg );
+.SH DESCRIPTION
+Query stat information for a group of XFS inodes.
+This ioctl uses
+.B struct xfs_bulkstat_req
+to set up a bulk transfer with the kernel:
+.PP
+.in +4n
+.nf
+struct xfs_bulkstat_req {
+	struct xfs_bulk_ireq    hdr;
+	struct xfs_bulkstat     bulkstat[];
+};
+
+struct xfs_bulk_ireq {
+	uint64_t                ino;
+	uint32_t                flags;
+	uint32_t                icount;
+	uint32_t                ocount;
+	uint32_t                agno;
+	uint64_t                reserved[5];
+};
+.fi
+.in
+.PP
+.I hdr.ino
+should be set to the number of the first inode for which the caller wants
+information, or zero to start with the first inode in the filesystem.
+Note that this is a different semantic than the
+.B lastip
+in the old
+.B FSBULKSTAT
+ioctl.
+After the call, this value will be set to the number of the next inode for
+which information could be supplied.
+This sets up the next call for an iteration loop.
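The iteration contract described above (seed hdr.ino, reissue the call, stop when ocount is zero) can be sketched as a standalone C program. This is an illustration only: the structures are abbreviated and a stub function stands in for ioctl(fd, XFS_IOC_BULKSTAT, ...), pretending the filesystem holds inodes 1 through 10, so the loop logic can run without an XFS filesystem.

```c
#include <stdint.h>

/* Abbreviated version of struct xfs_bulk_ireq, for illustration only. */
struct xfs_bulk_ireq_stub {
	uint64_t ino;     /* in: first inode wanted; out: next inode to query */
	uint32_t icount;  /* in: capacity of the result array */
	uint32_t ocount;  /* out: records actually returned */
};

struct fake_rec { uint64_t bs_ino; };

#define TOTAL_INODES 10

/* Stand-in for the ioctl: serves inodes 1..TOTAL_INODES in batches. */
static int fake_bulkstat(struct xfs_bulk_ireq_stub *hdr, struct fake_rec *recs)
{
	uint32_t n = 0;
	uint64_t ino = hdr->ino == 0 ? 1 : hdr->ino;

	while (n < hdr->icount && ino <= TOTAL_INODES)
		recs[n++].bs_ino = ino++;
	hdr->ocount = n;
	hdr->ino = ino;   /* next inode, ready for the following call */
	return 0;
}

/* The iteration loop the DESCRIPTION sets up: reissue until ocount is 0. */
int count_all_inodes(void)
{
	struct xfs_bulk_ireq_stub hdr = { .ino = 0, .icount = 4 };
	struct fake_rec recs[4];
	int total = 0;

	for (;;) {
		if (fake_bulkstat(&hdr, recs))
			return -1;
		if (hdr.ocount == 0)
			break;
		total += hdr.ocount;
	}
	return total;
}
```

With a four-record buffer the loop makes three productive passes (4 + 4 + 2 records) and a final empty one that terminates it.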
+.PP
+If the
+.B XFS_BULK_IREQ_SPECIAL
+flag is set, this field is interpreted as follows:
+.RS 0.4i
+.TP
+.B XFS_BULK_IREQ_SPECIAL_ROOT
+Return stat information for the root directory inode.
+.RE
+.PP
+.I hdr.flags
+is a bit set of operational flags:
+.RS 0.4i
+.TP
+.B XFS_BULK_IREQ_AGNO
+If this is set, the call will only return results for the allocation group (AG)
+set in
+.BR hdr.agno .
+If
+.B hdr.ino
+is set to zero, results will be returned starting with the first inode in the
+AG.
+This flag may not be set at the same time as the
+.B XFS_BULK_IREQ_SPECIAL
+flag.
+.TP
+.B XFS_BULK_IREQ_SPECIAL
+If this is set, results will be returned for only the special inode
+specified in the
+.B hdr.ino
+field.
+This flag may not be set at the same time as the
+.B XFS_BULK_IREQ_AGNO
+flag.
+.RE
+.PP
+.I hdr.icount
+is the number of inodes to examine.
+.PP
+.I hdr.ocount
+will be set to the number of records returned.
+.PP
+.I hdr.agno
+is the number of the allocation group (AG) for which we want results.
+If the
+.B XFS_BULK_IREQ_AGNO
+flag is not set, this field is ignored.
+.PP
+.I hdr.reserved
+must be set to zero.
+.PP
+.I bulkstat
+is an array of
+.B struct xfs_bulkstat
+which is described below.
+The array must have at least
+.I icount
+elements.
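Because the result array is a flexible array member trailing the header, callers allocate the header and the records in one block sized from icount. A minimal sketch, with struct layouts abbreviated; alloc_req is an illustrative helper, not part of the kernel API:

```c
#include <stdint.h>
#include <stdlib.h>

/* Abbreviated layouts for illustration; the real ones live in xfs_fs.h. */
struct xfs_bulk_ireq_stub {
	uint64_t ino;
	uint32_t flags;
	uint32_t icount;
	uint32_t ocount;
	uint32_t agno;
	uint64_t reserved[5];
};

struct xfs_bulkstat_stub { uint64_t bs_ino; uint64_t bs_size; };

struct req_stub {
	struct xfs_bulk_ireq_stub hdr;
	struct xfs_bulkstat_stub  bulkstat[];   /* at least icount elements */
};

/* One allocation covers header plus icount records; calloc zeroes the
 * reserved fields and flags, as the ioctl requires. */
static struct req_stub *alloc_req(uint32_t icount, uint64_t startino)
{
	struct req_stub *req;

	req = calloc(1, sizeof(*req) + icount * sizeof(req->bulkstat[0]));
	if (!req)
		return NULL;
	req->hdr.ino = startino;   /* first inode we want back */
	req->hdr.icount = icount;  /* capacity of the trailing array */
	return req;
}
```

The same allocation pattern applies to struct xfs_inumbers_req later in this series.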
+.PP
+.in +4n
+.nf
+struct xfs_bulkstat {
+	uint64_t                bs_ino;
+	uint64_t                bs_size;
+
+	uint64_t                bs_blocks;
+	uint64_t                bs_xflags;
+
+	uint64_t                bs_atime;
+	uint64_t                bs_mtime;
+
+	uint64_t                bs_ctime;
+	uint64_t                bs_btime;
+
+	uint32_t                bs_gen;
+	uint32_t                bs_uid;
+	uint32_t                bs_gid;
+	uint32_t                bs_projectid;
+
+	uint32_t                bs_atime_nsec;
+	uint32_t                bs_mtime_nsec;
+	uint32_t                bs_ctime_nsec;
+	uint32_t                bs_btime_nsec;
+
+	uint32_t                bs_blksize;
+	uint32_t                bs_rdev;
+	uint32_t                bs_cowextsize_blks;
+	uint32_t                bs_extsize_blks;
+
+	uint32_t                bs_nlink;
+	uint32_t                bs_extents;
+	uint32_t                bs_aextents;
+	uint16_t                bs_version;
+	uint16_t                bs_forkoff;
+
+	uint16_t                bs_sick;
+	uint16_t                bs_checked;
+	uint16_t                bs_mode;
+	uint16_t                bs_pad2;
+
+	uint64_t                bs_pad[7];
+};
+.fi
+.in
+.PP
+.I bs_ino
+is the inode number of this record.
+.PP
+.I bs_size
+is the size of the file, in bytes.
+.PP
+.I bs_blocks
+is the number of filesystem blocks allocated to this file, including metadata.
+.PP
+.I bs_xflags
+tells us which extended flags are set on this inode.
+These flags are the same values as those defined in the
+.B XFS INODE FLAGS
+section of the
+.BR ioctl_xfs_fsgetxattr (2)
+manpage.
+.PP
+.I bs_atime
+is the last time this file was accessed, in seconds.
+.PP
+.I bs_mtime
+is the last time the contents of this file were modified, in seconds.
+.PP
+.I bs_ctime
+is the last time this inode record was modified, in seconds.
+.PP
+.I bs_btime
+is the time this inode record was created, in seconds.
+.PP
+.I bs_gen
+is the generation number of the inode record.
+.PP
+.I bs_uid
+is the user id.
+.PP
+.I bs_gid
+is the group id.
+.PP
+.I bs_projectid
+is the project id.
+.PP
+.I bs_atime_nsec
+is the nanoseconds component of the last time this file was accessed.
+.PP
+.I bs_mtime_nsec
+is the nanoseconds component of the last time the contents of this file were
+modified.
+.PP
+.I bs_ctime_nsec
+is the nanoseconds component of the last time this inode record was modified.
+.PP
+.I bs_btime_nsec
+is the nanoseconds component of the time this inode record was created.
+.PP
+.I bs_blksize
+is the size of a data block for this file, in units of bytes.
+.PP
+.I bs_rdev
+is the encoded device id if this is a special file.
+.PP
+.I bs_cowextsize_blks
+is the Copy on Write extent size hint for this file, in units of data blocks.
+.PP
+.I bs_extsize_blks
+is the extent size hint for this file, in units of data blocks.
+.PP
+.I bs_nlink
+is the number of hard links to this inode.
+.PP
+.I bs_extents
+is the number of storage mappings associated with this file's data.
+.PP
+.I bs_aextents
+is the number of storage mappings associated with this file's extended
+attributes.
+.PP
+.I bs_version
+is the version of this data structure.
+Currently, only 1 or 5 are supported.
+.PP
+.I bs_forkoff
+is the offset of the attribute fork in the inode record, in bytes.
+.PP
+The fields
+.IR bs_sick " and " bs_checked
+indicate the relative health of various inode metadata.
+Please see the section
+.B XFS INODE METADATA HEALTH REPORTING
+for more information.
+.PP
+.I bs_mode
+is the file type and mode.
+.PP
+.I bs_pad[7]
+is zeroed.
+.SH RETURN VALUE
+On error, \-1 is returned, and
+.I errno
+is set to indicate the error.
+.PP
+.SH XFS INODE METADATA HEALTH REPORTING
+.PP
+The online filesystem checking utility scans inode metadata and records what it
+finds in the kernel incore state.
+The following scheme is used for userspace to read the incore health status of
+an inode:
+.IP \[bu] 2
+If a given sick flag is set in
+.IR bs_sick ,
+then that piece of metadata has been observed to be damaged.
+The same bit should be set in
+.IR bs_checked .
+.IP \[bu]
+If a given sick flag is set in
+.I bs_checked
+but is not set in
+.IR bs_sick ,
+then that piece of metadata has been checked and is not faulty.
+.IP \[bu]
+If a given sick flag is not set in
+.IR bs_checked ,
+then no conclusion can be made.
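The three rules above amount to a tri-state decision per flag bit. A small helper makes the decision table explicit; the flag value used in the usage comment is illustrative, the real values are the XFS_BS_SICK_* constants:

```c
#include <stdint.h>

enum health { HEALTH_UNKNOWN, HEALTH_OK, HEALTH_SICK };

/*
 * Apply the bs_sick/bs_checked scheme to a single flag bit:
 *   sick set            -> damage was observed
 *   checked set only    -> examined and found not faulty
 *   neither set         -> never examined, no conclusion
 */
static enum health classify(uint16_t bs_sick, uint16_t bs_checked,
			    uint16_t flag)
{
	if (bs_sick & flag)
		return HEALTH_SICK;
	if (bs_checked & flag)
		return HEALTH_OK;
	return HEALTH_UNKNOWN;
}
```

For example, calling classify() with both fields having the same bit set reports damage, while a bit set only in bs_checked reports a clean check.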
+.PP
+The following flags apply to these fields:
+.RS 0.4i
+.TP
+.B XFS_BS_SICK_INODE
+The inode's record itself.
+.TP
+.B XFS_BS_SICK_BMBTD
+File data extent mappings.
+.TP
+.B XFS_BS_SICK_BMBTA
+Extended attribute extent mappings.
+.TP
+.B XFS_BS_SICK_BMBTC
+Copy on Write staging extent mappings.
+.TP
+.B XFS_BS_SICK_DIR
+Directory information.
+.TP
+.B XFS_BS_SICK_XATTR
+Extended attribute data.
+.TP
+.B XFS_BS_SICK_SYMLINK
+Symbolic link target.
+.TP
+.B XFS_BS_SICK_PARENT
+Parent pointers.
+.RE
+.SH ERRORS
+Error codes can be one of, but are not limited to, the following:
+.TP
+.B EFAULT
+The kernel was not able to copy into the userspace buffer.
+.TP
+.B EFSBADCRC
+Metadata checksum validation failed while performing the query.
+.TP
+.B EFSCORRUPTED
+Metadata corruption was encountered while performing the query.
+.TP
+.B EINVAL
+One of the arguments was not valid.
+.TP
+.B EIO
+An I/O error was encountered while performing the query.
+.TP
+.B ENOMEM
+There was insufficient memory to perform the query.
+.SH CONFORMING TO
+This API is specific to the XFS filesystem on the Linux kernel.
+.SH SEE ALSO
+.BR ioctl (2),
+.BR ioctl_xfs_fsgetxattr (2)

diff --git a/man/man2/ioctl_xfs_fsbulkstat.2 b/man/man2/ioctl_xfs_fsbulkstat.2
index 3e13cfa8..81f9d9bf 100644
--- a/man/man2/ioctl_xfs_fsbulkstat.2
+++ b/man/man2/ioctl_xfs_fsbulkstat.2
@@ -15,6 +15,12 @@ ioctl_xfs_fsbulkstat \- query information for a batch of XFS inodes
 .BI "int ioctl(int " fd ", XFS_IOC_FSBULKSTAT_SINGLE, struct xfs_fsop_bulkreq *" arg );
 .SH DESCRIPTION
 Query stat information for a group of XFS inodes.
+.PP
+NOTE: This ioctl has been superseded.
+Please see the
+.BR ioctl_xfs_bulkstat (2)
+manpage for information about its replacement.
+.PP
 These ioctls use
 .B struct xfs_fsop_bulkreq
 to set up a bulk transfer with the kernel:

From patchwork Fri Sep 6 03:35:09 2019
X-Patchwork-Submitter: "Darrick J.
Wong"
X-Patchwork-Id: 11134315
Subject: [PATCH 2/6] man: add documentation for v5 inumbers ioctl
From: "Darrick J. Wong"
To: sandeen@sandeen.net, darrick.wong@oracle.com
Cc: linux-xfs@vger.kernel.org
Date: Thu, 05 Sep 2019 20:35:09 -0700
Message-ID: <156774090939.2643497.6505275402139227224.stgit@magnolia>
In-Reply-To: <156774089024.2643497.2754524603021685770.stgit@magnolia>
References: <156774089024.2643497.2754524603021685770.stgit@magnolia>
List-ID: linux-xfs@vger.kernel.org

From: Darrick J.
Wong

Add a manpage describing the new v5 XFS_IOC_INUMBERS ioctl.

Signed-off-by: Darrick J. Wong
---
 man/man2/ioctl_xfs_inumbers.2 | 118 +++++++++++++++++++++++++++++++++++++++++
 1 file changed, 118 insertions(+)
 create mode 100644 man/man2/ioctl_xfs_inumbers.2

diff --git a/man/man2/ioctl_xfs_inumbers.2 b/man/man2/ioctl_xfs_inumbers.2
new file mode 100644
index 00000000..b1e854d3
--- /dev/null
+++ b/man/man2/ioctl_xfs_inumbers.2
@@ -0,0 +1,118 @@
+.\" Copyright (c) 2019, Oracle. All rights reserved.
+.\"
+.\" %%%LICENSE_START(GPLv2+_DOC_FULL)
+.\" SPDX-License-Identifier: GPL-2.0+
+.\" %%%LICENSE_END
+.TH IOCTL-XFS-INUMBERS 2 2019-05-23 "XFS"
+.SH NAME
+ioctl_xfs_inumbers \- query allocation information for groups of XFS inodes
+.SH SYNOPSIS
+.br
+.B #include <xfs/xfs_fs.h>
+.PP
+.BI "int ioctl(int " fd ", XFS_IOC_INUMBERS, struct xfs_inumbers_req *" arg );
+.SH DESCRIPTION
+Query inode allocation information for groups of XFS inodes.
+This ioctl uses
+.B struct xfs_inumbers_req
+to set up a bulk transfer with the kernel:
+.PP
+.in +4n
+.nf
+struct xfs_inumbers_req {
+	struct xfs_bulk_ireq    hdr;
+	struct xfs_inumbers     inumbers[];
+};
+
+struct xfs_bulk_ireq {
+	uint64_t                ino;
+	uint32_t                flags;
+	uint32_t                icount;
+	uint32_t                ocount;
+	uint32_t                agno;
+	uint64_t                reserved[5];
+};
+.fi
+.in
+.PP
+.I hdr
+describes the information to query.
+The layout and behavior are documented in the
+.BR ioctl_xfs_bulkstat (2)
+manpage and will not be discussed further here.
+.PP
+.I inumbers
+is an array of
+.B struct xfs_inumbers
+which is described below.
+The array must have at least
+.I icount
+elements.
+.PP
+.in +4n
+.nf
+struct xfs_inumbers {
+	uint64_t                xi_startino;
+	uint64_t                xi_allocmask;
+	uint8_t                 xi_alloccount;
+	uint8_t                 xi_version;
+	uint8_t                 xi_padding[6];
+};
+.fi
+.in
+.PP
+This structure describes inode usage information for a group of 64 consecutive
+inode numbers.
+.PP
+.I xi_startino
+is the first inode number of this group.
+.PP
+.I xi_allocmask
+is a bitmask telling which inodes in this group are allocated.
+To clarify, bit
+.B N
+is set if inode
+.BR xi_startino + N
+is allocated.
+.PP
+.I xi_alloccount
+is the number of inodes in this group that are allocated.
+This should be equal to popcnt(xi_allocmask).
+.PP
+.I xi_version
+is the version of this data structure.
+Currently, only 1 or 5 are supported.
+.PP
+.I xi_padding[6]
+is zeroed.
+.SH RETURN VALUE
+On error, \-1 is returned, and
+.I errno
+is set to indicate the error.
+.PP
+.SH ERRORS
+Error codes can be one of, but are not limited to, the following:
+.TP
+.B EFAULT
+The kernel was not able to copy into the userspace buffer.
+.TP
+.B EFSBADCRC
+Metadata checksum validation failed while performing the query.
+.TP
+.B EFSCORRUPTED
+Metadata corruption was encountered while performing the query.
+.TP
+.B EINVAL
+One of the arguments was not valid.
+.TP
+.B EIO
+An I/O error was encountered while performing the query.
+.TP
+.B ENOMEM
+There was insufficient memory to perform the query.
+.SH CONFORMING TO
+This API is specific to the XFS filesystem on the Linux kernel.
+.SH SEE ALSO
+.BR ioctl (2),
+.BR ioctl_xfs_bulkstat (2).

From patchwork Fri Sep 6 03:35:15 2019
X-Patchwork-Submitter: "Darrick J.
Wong"
X-Patchwork-Id: 11134349
Subject: [PATCH 3/6] misc: convert XFS_IOC_FSBULKSTAT to XFS_IOC_BULKSTAT
From: "Darrick J. Wong"
To: sandeen@sandeen.net, darrick.wong@oracle.com
Cc: linux-xfs@vger.kernel.org
Date: Thu, 05 Sep 2019 20:35:15 -0700
Message-ID: <156774091553.2643497.13127754211857633238.stgit@magnolia>
In-Reply-To: <156774089024.2643497.2754524603021685770.stgit@magnolia>
References: <156774089024.2643497.2754524603021685770.stgit@magnolia>
List-ID: linux-xfs@vger.kernel.org

From: Darrick J.
Wong

Convert the v1 calls to v5 calls.

Signed-off-by: Darrick J. Wong
---
 fsr/xfs_fsr.c      |  45 ++++++--
 io/open.c          |  17 ++-
 libfrog/bulkstat.c | 290 +++++++++++++++++++++++++++++++++++++++++++++++++---
 libfrog/bulkstat.h |  10 +-
 libfrog/fsgeom.h   |   9 ++
 quota/quot.c       |  29 ++---
 scrub/inodes.c     |  45 +++++---
 scrub/inodes.h     |   2 
 scrub/phase3.c     |   6 +
 scrub/phase5.c     |   8 +
 scrub/phase6.c     |   2 
 scrub/unicrash.c   |   6 +
 scrub/unicrash.h   |   4 -
 spaceman/health.c  |  28 +++--
 14 files changed, 411 insertions(+), 90 deletions(-)

diff --git a/fsr/xfs_fsr.c b/fsr/xfs_fsr.c
index a53eb924..cc3cc93a 100644
--- a/fsr/xfs_fsr.c
+++ b/fsr/xfs_fsr.c
@@ -466,6 +466,17 @@ fsrallfs(char *mtab, int howlong, char *leftofffile)
 			ptr = strchr(ptr, ' ');
 			if (ptr) {
 				startino = strtoull(++ptr, NULL, 10);
+				/*
+				 * NOTE: The inode number read in from
+				 * the leftoff file is the last inode
+				 * to have been fsr'd.  Since the new
+				 * xfrog_bulkstat function wants to be
+				 * passed the first inode that we want
+				 * to examine, increment the value that
+				 * we read in.  The debug message below
+				 * prints the lastoff value.
+				 */
+				startino++;
 			}
 		}
 		if (startpass < 0)
@@ -484,7 +495,7 @@ fsrallfs(char *mtab, int howlong, char *leftofffile)
 
 	if (vflag) {
 		fsrprintf(_("START: pass=%d ino=%llu %s %s\n"),
-			  fs->npass, (unsigned long long)startino,
+			  fs->npass, (unsigned long long)startino - 1,
 			  fs->dev, fs->mnt);
 	}
 
@@ -576,12 +587,10 @@ fsrfs(char *mntdir, xfs_ino_t startino, int targetrange)
 	int	fd;
 	int	count = 0;
 	int	ret;
-	uint32_t	buflenout;
-	struct xfs_bstat	buf[GRABSZ];
 	char	fname[64];
 	char	*tname;
 	jdm_fshandle_t	*fshandlep;
-	xfs_ino_t	lastino = startino;
+	struct xfs_bulkstat_req	*breq;
 
 	fsrprintf(_("%s start inode=%llu\n"), mntdir,
 		(unsigned long long)startino);
@@ -604,10 +613,21 @@ fsrfs(char *mntdir, xfs_ino_t startino, int targetrange)
 
 	tmp_init(mntdir);
 
-	while ((ret = xfrog_bulkstat(&fsxfd, &lastino, GRABSZ, &buf[0],
-				&buflenout)) == 0) {
-		struct xfs_bstat *p;
-		struct xfs_bstat *endp;
+	breq = xfrog_bulkstat_alloc_req(GRABSZ, startino);
+	if (!breq) {
+		fsrprintf(_("Skipping %s: not enough memory\n"),
+			  mntdir);
+		xfd_close(&fsxfd);
+		free(fshandlep);
+		return -1;
+	}
+
+	while ((ret = xfrog_bulkstat(&fsxfd, breq)) == 0) {
+		struct xfs_bstat	bs1;
+		struct xfs_bulkstat	*buf = breq->bulkstat;
+		struct xfs_bulkstat	*p;
+		struct xfs_bulkstat	*endp;
+		uint32_t	buflenout = breq->hdr.ocount;
 
 		if (buflenout == 0)
 			goto out0;
@@ -615,7 +635,7 @@ fsrfs(char *mntdir, xfs_ino_t startino, int targetrange)
 
 		/* Each loop through, defrag targetrange percent of the files */
 		count = (buflenout * targetrange) / 100;
 
-		qsort((char *)buf, buflenout, sizeof(struct xfs_bstat), cmp);
+		qsort((char *)buf, buflenout, sizeof(struct xfs_bulkstat), cmp);
 
 		for (p = buf, endp = (buf + buflenout); p < endp ; p++) {
 			/* Do some obvious checks now */
@@ -623,7 +643,8 @@ fsrfs(char *mntdir, xfs_ino_t startino, int targetrange)
 			    (p->bs_extents < 2))
 				continue;
 
-			fd = jdm_open(fshandlep, p, O_RDWR|O_DIRECT);
+			xfrog_bulkstat_to_bstat(&fsxfd, &bs1, p);
+			fd = jdm_open(fshandlep, &bs1, O_RDWR | O_DIRECT);
 			if (fd < 0) {
 				/*
 				 * This probably means the file was
 				 * removed while in progress of handling
@@ -641,7 +662,7 @@ fsrfs(char *mntdir, xfs_ino_t startino, int targetrange)
 
 			/* Get a tmp file name */
 			tname = tmp_next(mntdir);
 
-			ret = fsrfile_common(fname, tname, mntdir, fd, p);
+			ret = fsrfile_common(fname, tname, mntdir, fd, &bs1);
 
 			leftoffino = p->bs_ino;
 
@@ -653,6 +674,7 @@ fsrfs(char *mntdir, xfs_ino_t startino, int targetrange)
 			}
 		}
 		if (endtime && endtime < time(NULL)) {
+			free(breq);
 			tmp_close(mntdir);
 			xfd_close(&fsxfd);
 			fsrall_cleanup(1);
@@ -662,6 +684,7 @@ fsrfs(char *mntdir, xfs_ino_t startino, int targetrange)
 	if (ret)
 		fsrprintf(_("%s: bulkstat: %s\n"), progname, strerror(ret));
 out0:
+	free(breq);
 	tmp_close(mntdir);
 	xfd_close(&fsxfd);
 	free(fshandlep);
diff --git a/io/open.c b/io/open.c
index 99ca0dd3..e1aac7d1 100644
--- a/io/open.c
+++ b/io/open.c
@@ -724,7 +724,6 @@ inode_f(
 	char			**argv)
 {
 	struct xfs_bstat	bstat;
-	uint32_t		count = 0;
 	uint64_t		result_ino = 0;
 	uint64_t		userino = NULLFSINO;
 	char			*p;
@@ -775,21 +774,31 @@ inode_f(
 		}
 	} else if (ret_next) {
 		struct xfs_fd	xfd = XFS_FD_INIT(file->fd);
+		struct xfs_bulkstat_req	*breq;
+
+		breq = xfrog_bulkstat_alloc_req(1, userino + 1);
+		if (!breq) {
+			perror("alloc bulkstat");
+			exitcode = 1;
+			return 0;
+		}
 
 		/* get next inode */
-		ret = xfrog_bulkstat(&xfd, &userino, 1, &bstat, &count);
+		ret = xfrog_bulkstat(&xfd, breq);
 		if (ret) {
 			errno = ret;
 			perror("bulkstat");
+			free(breq);
 			exitcode = 1;
 			return 0;
 		}
 
 		/* The next inode in use, or 0 if none */
-		if (count)
-			result_ino = bstat.bs_ino;
+		if (breq->hdr.ocount)
+			result_ino = breq->bulkstat[0].bs_ino;
 		else
 			result_ino = 0;
+		free(breq);
 	} else {
 		struct xfs_fd	xfd = XFS_FD_INIT(file->fd);
diff --git a/libfrog/bulkstat.c b/libfrog/bulkstat.c
index fa10f298..b4468243 100644
--- a/libfrog/bulkstat.c
+++ b/libfrog/bulkstat.c
@@ -3,10 +3,23 @@
  * Copyright (C) 2019 Oracle.  All Rights Reserved.
  * Author: Darrick J.
 Wong
  */
+#include
+#include
 #include "xfs.h"
 #include "fsgeom.h"
 #include "bulkstat.h"
 
+/* Grab fs geometry needed to degrade to v1 bulkstat/inumbers ioctls. */
+static inline int
+xfrog_bulkstat_prep_v1_emulation(
+	struct xfs_fd		*xfd)
+{
+	if (xfd->fsgeom.blocksize > 0)
+		return 0;
+
+	return xfd_prepare_geometry(xfd);
+}
+
 /* Bulkstat a single inode.  Returns zero or a positive error code. */
 int
 xfrog_bulkstat_single(
@@ -29,29 +42,278 @@ xfrog_bulkstat_single(
 	return 0;
 }
 
-/* Bulkstat a bunch of inodes.  Returns zero or a positive error code. */
-int
-xfrog_bulkstat(
+/*
+ * Set up emulation of a v5 bulk request ioctl with a v1 bulk request ioctl.
+ * Returns 0 if the emulation should proceed; ECANCELED if there are no
+ * records; or a positive error code.
+ */
+static int
+xfrog_bulk_req_setup(
 	struct xfs_fd		*xfd,
-	uint64_t		*lastino,
-	uint32_t		icount,
-	struct xfs_bstat	*ubuffer,
-	uint32_t		*ocount)
+	struct xfs_bulk_ireq	*hdr,
+	struct xfs_fsop_bulkreq	*bulkreq,
+	size_t			rec_size)
+{
+	void			*buf;
+
+	if (hdr->flags & XFS_BULK_IREQ_AGNO) {
+		uint32_t	agno = cvt_ino_to_agno(xfd, hdr->ino);
+
+		if (hdr->ino == 0)
+			hdr->ino = cvt_agino_to_ino(xfd, hdr->agno, 0);
+		else if (agno < hdr->agno)
+			return EINVAL;
+		else if (agno > hdr->agno)
+			goto no_results;
+	}
+
+	if (cvt_ino_to_agno(xfd, hdr->ino) > xfd->fsgeom.agcount)
+		goto no_results;
+
+	buf = malloc(hdr->icount * rec_size);
+	if (!buf)
+		return errno;
+
+	if (hdr->ino)
+		hdr->ino--;
+	bulkreq->lastip = (__u64 *)&hdr->ino,
+	bulkreq->icount = hdr->icount,
+	bulkreq->ocount = (__s32 *)&hdr->ocount,
+	bulkreq->ubuffer = buf;
+	return 0;
+
+no_results:
+	hdr->ocount = 0;
+	return ECANCELED;
+}
+
+/*
+ * Convert records and free resources used to do a v1 emulation of v5 bulk
+ * request.
+ */
+static int
+xfrog_bulk_req_teardown(
+	struct xfs_fd		*xfd,
+	struct xfs_bulk_ireq	*hdr,
+	struct xfs_fsop_bulkreq	*bulkreq,
+	size_t			v1_rec_size,
+	uint64_t		(*v1_ino)(void *v1_rec),
+	void			*v5_records,
+	size_t			v5_rec_size,
+	void			(*cvt)(struct xfs_fd *xfd, void *v5, void *v1),
+	unsigned int		startino_adj,
+	int			error)
+{
+	void			*v1_rec = bulkreq->ubuffer;
+	void			*v5_rec = v5_records;
+	unsigned int		i;
+
+	if (error == ECANCELED) {
+		error = 0;
+		goto free;
+	}
+	if (error)
+		goto free;
+
+	/*
+	 * Convert each record from v1 to v5 format, keeping the startino
+	 * value up to date and (if desired) stopping at the end of the
+	 * AG.
+	 */
+	for (i = 0;
+	     i < hdr->ocount;
+	     i++, v1_rec += v1_rec_size, v5_rec += v5_rec_size) {
+		uint64_t	ino = v1_ino(v1_rec);
+
+		/* Stop if we hit a different AG. */
+		if ((hdr->flags & XFS_BULK_IREQ_AGNO) &&
+		    cvt_ino_to_agno(xfd, ino) != hdr->agno) {
+			hdr->ocount = i;
+			break;
+		}
+		cvt(xfd, v5_rec, v1_rec);
+		hdr->ino = ino + startino_adj;
+	}
+
+free:
+	free(bulkreq->ubuffer);
+	return error;
+}
+
+static uint64_t xfrog_bstat_ino(void *v1_rec)
+{
+	return ((struct xfs_bstat *)v1_rec)->bs_ino;
+}
+
+static void xfrog_bstat_cvt(struct xfs_fd *xfd, void *v5, void *v1)
+{
+	xfrog_bstat_to_bulkstat(xfd, v5, v1);
+}
+
+/* Bulkstat a bunch of inodes using the v5 interface. */
+static int
+xfrog_bulkstat5(
+	struct xfs_fd		*xfd,
+	struct xfs_bulkstat_req	*req)
 {
-	struct xfs_fsop_bulkreq	bulkreq = {
-		.lastip = (__u64 *)lastino,
-		.icount = icount,
-		.ubuffer = ubuffer,
-		.ocount = (__s32 *)ocount,
-	};
 	int			ret;
 
-	ret = ioctl(xfd->fd, XFS_IOC_FSBULKSTAT, &bulkreq);
+	ret = ioctl(xfd->fd, XFS_IOC_BULKSTAT, req);
 	if (ret)
 		return errno;
 	return 0;
 }
 
+/* Bulkstat a bunch of inodes using the v1 interface.
 */
+static int
+xfrog_bulkstat1(
+	struct xfs_fd		*xfd,
+	struct xfs_bulkstat_req	*req)
+{
+	struct xfs_fsop_bulkreq	bulkreq = { 0 };
+	int			error;
+
+	error = xfrog_bulkstat_prep_v1_emulation(xfd);
+	if (error)
+		return error;
+
+	error = xfrog_bulk_req_setup(xfd, &req->hdr, &bulkreq,
+			sizeof(struct xfs_bstat));
+	if (error == ECANCELED)
+		goto out_teardown;
+	if (error)
+		return error;
+
+	error = ioctl(xfd->fd, XFS_IOC_FSBULKSTAT, &bulkreq);
+	if (error)
+		error = errno;
+
+out_teardown:
+	return xfrog_bulk_req_teardown(xfd, &req->hdr, &bulkreq,
+			sizeof(struct xfs_bstat), xfrog_bstat_ino,
+			&req->bulkstat, sizeof(struct xfs_bulkstat),
+			xfrog_bstat_cvt, 1, error);
+}
+
+/* Bulkstat a bunch of inodes.  Returns zero or a positive error code. */
+int
+xfrog_bulkstat(
+	struct xfs_fd		*xfd,
+	struct xfs_bulkstat_req	*req)
+{
+	int			error;
+
+	if (xfd->flags & XFROG_FLAG_BULKSTAT_FORCE_V1)
+		goto try_v1;
+
+	error = xfrog_bulkstat5(xfd, req);
+	if (error == 0 || (xfd->flags & XFROG_FLAG_BULKSTAT_FORCE_V5))
+		return error;
+
+	/* If the v5 ioctl wasn't found, we punt to v1. */
+	switch (error) {
+	case EOPNOTSUPP:
+	case ENOTTY:
+		xfd->flags |= XFROG_FLAG_BULKSTAT_FORCE_V1;
+		break;
+	}
+
+try_v1:
+	return xfrog_bulkstat1(xfd, req);
+}
+
+/* Convert bulkstat (v5) to bstat (v1.
 */
+void
+xfrog_bulkstat_to_bstat(
+	struct xfs_fd		*xfd,
+	struct xfs_bstat	*bs1,
+	const struct xfs_bulkstat *bstat)
+{
+	bs1->bs_ino = bstat->bs_ino;
+	bs1->bs_mode = bstat->bs_mode;
+	bs1->bs_nlink = bstat->bs_nlink;
+	bs1->bs_uid = bstat->bs_uid;
+	bs1->bs_gid = bstat->bs_gid;
+	bs1->bs_rdev = bstat->bs_rdev;
+	bs1->bs_blksize = bstat->bs_blksize;
+	bs1->bs_size = bstat->bs_size;
+	bs1->bs_atime.tv_sec = bstat->bs_atime;
+	bs1->bs_mtime.tv_sec = bstat->bs_mtime;
+	bs1->bs_ctime.tv_sec = bstat->bs_ctime;
+	bs1->bs_atime.tv_nsec = bstat->bs_atime_nsec;
+	bs1->bs_mtime.tv_nsec = bstat->bs_mtime_nsec;
+	bs1->bs_ctime.tv_nsec = bstat->bs_ctime_nsec;
+	bs1->bs_blocks = bstat->bs_blocks;
+	bs1->bs_xflags = bstat->bs_xflags;
+	bs1->bs_extsize = cvt_off_fsb_to_b(xfd, bstat->bs_extsize_blks);
+	bs1->bs_extents = bstat->bs_extents;
+	bs1->bs_gen = bstat->bs_gen;
+	bs1->bs_projid_lo = bstat->bs_projectid & 0xFFFF;
+	bs1->bs_forkoff = bstat->bs_forkoff;
+	bs1->bs_projid_hi = bstat->bs_projectid >> 16;
+	bs1->bs_sick = bstat->bs_sick;
+	bs1->bs_checked = bstat->bs_checked;
+	bs1->bs_cowextsize = cvt_off_fsb_to_b(xfd, bstat->bs_cowextsize_blks);
+	bs1->bs_dmevmask = 0;
+	bs1->bs_dmstate = 0;
+	bs1->bs_aextents = bstat->bs_aextents;
+}
+
+/* Convert bstat (v1) to bulkstat (v5.
 */
+void
+xfrog_bstat_to_bulkstat(
+	struct xfs_fd		*xfd,
+	struct xfs_bulkstat	*bstat,
+	const struct xfs_bstat	*bs1)
+{
+	memset(bstat, 0, sizeof(*bstat));
+	bstat->bs_version = XFS_BULKSTAT_VERSION_V1;
+
+	bstat->bs_ino = bs1->bs_ino;
+	bstat->bs_mode = bs1->bs_mode;
+	bstat->bs_nlink = bs1->bs_nlink;
+	bstat->bs_uid = bs1->bs_uid;
+	bstat->bs_gid = bs1->bs_gid;
+	bstat->bs_rdev = bs1->bs_rdev;
+	bstat->bs_blksize = bs1->bs_blksize;
+	bstat->bs_size = bs1->bs_size;
+	bstat->bs_atime = bs1->bs_atime.tv_sec;
+	bstat->bs_mtime = bs1->bs_mtime.tv_sec;
+	bstat->bs_ctime = bs1->bs_ctime.tv_sec;
+	bstat->bs_atime_nsec = bs1->bs_atime.tv_nsec;
+	bstat->bs_mtime_nsec = bs1->bs_mtime.tv_nsec;
+	bstat->bs_ctime_nsec = bs1->bs_ctime.tv_nsec;
+	bstat->bs_blocks = bs1->bs_blocks;
+	bstat->bs_xflags = bs1->bs_xflags;
+	bstat->bs_extsize_blks = cvt_b_to_off_fsbt(xfd, bs1->bs_extsize);
+	bstat->bs_extents = bs1->bs_extents;
+	bstat->bs_gen = bs1->bs_gen;
+	bstat->bs_projectid = bstat_get_projid(bs1);
+	bstat->bs_forkoff = bs1->bs_forkoff;
+	bstat->bs_sick = bs1->bs_sick;
+	bstat->bs_checked = bs1->bs_checked;
+	bstat->bs_cowextsize_blks = cvt_b_to_off_fsbt(xfd, bs1->bs_cowextsize);
+	bstat->bs_aextents = bs1->bs_aextents;
+}
+
+/* Allocate a bulkstat request.  On error returns NULL and sets errno. */
+struct xfs_bulkstat_req *
+xfrog_bulkstat_alloc_req(
+	uint32_t		nr,
+	uint64_t		startino)
+{
+	struct xfs_bulkstat_req	*breq;
+
+	breq = calloc(1, XFS_BULKSTAT_REQ_SIZE(nr));
+	if (!breq)
+		return NULL;
+
+	breq->hdr.icount = nr;
+	breq->hdr.ino = startino;
+
+	return breq;
+}
+
 /*
  * Query inode allocation bitmask information.  Returns zero or a positive
  * error code.
diff --git a/libfrog/bulkstat.h b/libfrog/bulkstat.h index 83ac0e37..6f51c167 100644 --- a/libfrog/bulkstat.h +++ b/libfrog/bulkstat.h @@ -10,8 +10,14 @@ struct xfs_bstat; int xfrog_bulkstat_single(struct xfs_fd *xfd, uint64_t ino, struct xfs_bstat *ubuffer); -int xfrog_bulkstat(struct xfs_fd *xfd, uint64_t *lastino, uint32_t icount, - struct xfs_bstat *ubuffer, uint32_t *ocount); +int xfrog_bulkstat(struct xfs_fd *xfd, struct xfs_bulkstat_req *req); + +struct xfs_bulkstat_req *xfrog_bulkstat_alloc_req(uint32_t nr, + uint64_t startino); +void xfrog_bulkstat_to_bstat(struct xfs_fd *xfd, struct xfs_bstat *bs1, + const struct xfs_bulkstat *bstat); +void xfrog_bstat_to_bulkstat(struct xfs_fd *xfd, struct xfs_bulkstat *bstat, + const struct xfs_bstat *bs1); struct xfs_inogrp; int xfrog_inumbers(struct xfs_fd *xfd, uint64_t *lastino, uint32_t icount, diff --git a/libfrog/fsgeom.h b/libfrog/fsgeom.h index 55b14c2b..ca38324e 100644 --- a/libfrog/fsgeom.h +++ b/libfrog/fsgeom.h @@ -39,8 +39,17 @@ struct xfs_fd { /* log2 of sb_blocksize / sb_sectsize */ unsigned int blkbb_log; + + /* XFROG_FLAG_* state flags */ + unsigned int flags; }; +/* Only use v1 bulkstat/inumbers ioctls. */ +#define XFROG_FLAG_BULKSTAT_FORCE_V1 (1 << 0) + +/* Only use v5 bulkstat/inumbers ioctls. */ +#define XFROG_FLAG_BULKSTAT_FORCE_V5 (1 << 1) + /* Static initializers */ #define XFS_FD_INIT(_fd) { .fd = (_fd), } #define XFS_FD_INIT_EMPTY XFS_FD_INIT(-1) diff --git a/quota/quot.c b/quota/quot.c index 686b2726..7edfad16 100644 --- a/quota/quot.c +++ b/quota/quot.c @@ -69,7 +69,7 @@ quot_help(void) static void quot_bulkstat_add( - struct xfs_bstat *p, + struct xfs_bulkstat *p, uint flags) { du_t *dp; @@ -93,7 +93,7 @@ quot_bulkstat_add( } for (i = 0; i < 3; i++) { id = (i == 0) ? p->bs_uid : ((i == 1) ? 
- p->bs_gid : bstat_get_projid(p)); + p->bs_gid : p->bs_projectid); hp = &duhash[i][id % DUHASH]; for (dp = *hp; dp; dp = dp->next) if (dp->id == id) @@ -113,11 +113,11 @@ quot_bulkstat_add( } dp->blocks += size; - if (now - p->bs_atime.tv_sec > 30 * (60*60*24)) + if (now - p->bs_atime > 30 * (60*60*24)) dp->blocks30 += size; - if (now - p->bs_atime.tv_sec > 60 * (60*60*24)) + if (now - p->bs_atime > 60 * (60*60*24)) dp->blocks60 += size; - if (now - p->bs_atime.tv_sec > 90 * (60*60*24)) + if (now - p->bs_atime > 90 * (60*60*24)) dp->blocks90 += size; dp->nfiles++; } @@ -129,9 +129,7 @@ quot_bulkstat_mount( unsigned int flags) { struct xfs_fd fsxfd = XFS_FD_INIT_EMPTY; - struct xfs_bstat *buf; - uint64_t last = 0; - uint32_t count; + struct xfs_bulkstat_req *breq; int i, sts, ret; du_t **dp; @@ -154,25 +152,24 @@ quot_bulkstat_mount( return; } - buf = (struct xfs_bstat *)calloc(NBSTAT, sizeof(struct xfs_bstat)); - if (!buf) { + breq = xfrog_bulkstat_alloc_req(NBSTAT, 0); + if (!breq) { perror("calloc"); xfd_close(&fsxfd); return; } - while ((sts = xfrog_bulkstat(&fsxfd, &last, NBSTAT, buf, - &count)) == 0) { - if (count == 0) + while ((sts = xfrog_bulkstat(&fsxfd, breq)) == 0) { + if (breq->hdr.ocount == 0) break; - for (i = 0; i < count; i++) - quot_bulkstat_add(&buf[i], flags); + for (i = 0; i < breq->hdr.ocount; i++) + quot_bulkstat_add(&breq->bulkstat[i], flags); } if (sts < 0) { errno = sts; perror("XFS_IOC_FSBULKSTAT"); } - free(buf); + free(breq); xfd_close(&fsxfd); } diff --git a/scrub/inodes.c b/scrub/inodes.c index 580a845e..851c24bd 100644 --- a/scrub/inodes.c +++ b/scrub/inodes.c @@ -50,13 +50,15 @@ static void xfs_iterate_inodes_range_check( struct scrub_ctx *ctx, struct xfs_inogrp *inogrp, - struct xfs_bstat *bstat) + struct xfs_bulkstat *bstat) { - struct xfs_bstat *bs; + struct xfs_bulkstat *bs; int i; int error; for (i = 0, bs = bstat; i < XFS_INODES_PER_CHUNK; i++) { + struct xfs_bstat bs1; + if (!(inogrp->xi_allocmask & (1ULL << i))) continue; if 
(bs->bs_ino == inogrp->xi_startino + i) { @@ -66,11 +68,13 @@ xfs_iterate_inodes_range_check( /* Load the one inode. */ error = xfrog_bulkstat_single(&ctx->mnt, - inogrp->xi_startino + i, bs); - if (error || bs->bs_ino != inogrp->xi_startino + i) { - memset(bs, 0, sizeof(struct xfs_bstat)); + inogrp->xi_startino + i, &bs1); + if (error || bs1.bs_ino != inogrp->xi_startino + i) { + memset(bs, 0, sizeof(struct xfs_bulkstat)); bs->bs_ino = inogrp->xi_startino + i; bs->bs_blksize = ctx->mnt_sv.f_frsize; + } else { + xfrog_bstat_to_bulkstat(&ctx->mnt, bs, &bs1); } bs++; } @@ -93,41 +97,41 @@ xfs_iterate_inodes_range( { struct xfs_handle handle; struct xfs_inogrp inogrp; - struct xfs_bstat bstat[XFS_INODES_PER_CHUNK]; + struct xfs_bulkstat_req *breq; char idescr[DESCR_BUFSZ]; - struct xfs_bstat *bs; + struct xfs_bulkstat *bs; uint64_t igrp_ino; - uint64_t ino; - uint32_t bulklen = 0; uint32_t igrplen = 0; bool moveon = true; int i; int error; int stale_count = 0; - - memset(bstat, 0, XFS_INODES_PER_CHUNK * sizeof(struct xfs_bstat)); - memcpy(&handle.ha_fsid, fshandle, sizeof(handle.ha_fsid)); handle.ha_fid.fid_len = sizeof(xfs_fid_t) - sizeof(handle.ha_fid.fid_len); handle.ha_fid.fid_pad = 0; + breq = xfrog_bulkstat_alloc_req(XFS_INODES_PER_CHUNK, 0); + if (!breq) { + str_info(ctx, descr, _("Insufficient memory; giving up.")); + return false; + } + /* Find the inode chunk & alloc mask */ igrp_ino = first_ino; error = xfrog_inumbers(&ctx->mnt, &igrp_ino, 1, &inogrp, &igrplen); while (!error && igrplen) { - /* Load the inodes. */ - ino = inogrp.xi_startino - 1; - /* * We can have totally empty inode chunks on filesystems where * there are more than 64 inodes per block. Skip these. 
*/ if (inogrp.xi_alloccount == 0) goto igrp_retry; - error = xfrog_bulkstat(&ctx->mnt, &ino, inogrp.xi_alloccount, - bstat, &bulklen); + + breq->hdr.ino = inogrp.xi_startino; + breq->hdr.icount = inogrp.xi_alloccount; + error = xfrog_bulkstat(&ctx->mnt, breq); if (error) { char errbuf[DESCR_BUFSZ]; @@ -135,10 +139,12 @@ xfs_iterate_inodes_range( errbuf, DESCR_BUFSZ)); } - xfs_iterate_inodes_range_check(ctx, &inogrp, bstat); + xfs_iterate_inodes_range_check(ctx, &inogrp, breq->bulkstat); /* Iterate all the inodes. */ - for (i = 0, bs = bstat; i < inogrp.xi_alloccount; i++, bs++) { + for (i = 0, bs = breq->bulkstat; + i < inogrp.xi_alloccount; + i++, bs++) { if (bs->bs_ino > last_ino) goto out; @@ -184,6 +190,7 @@ _("Changed too many times during scan; giving up.")); str_liberror(ctx, error, descr); moveon = false; } + free(breq); out: return moveon; } diff --git a/scrub/inodes.h b/scrub/inodes.h index 631848c3..3341c6d9 100644 --- a/scrub/inodes.h +++ b/scrub/inodes.h @@ -7,7 +7,7 @@ #define XFS_SCRUB_INODES_H_ typedef int (*xfs_inode_iter_fn)(struct scrub_ctx *ctx, - struct xfs_handle *handle, struct xfs_bstat *bs, void *arg); + struct xfs_handle *handle, struct xfs_bulkstat *bs, void *arg); #define XFS_ITERATE_INODES_ABORT (-1) bool xfs_scan_all_inodes(struct scrub_ctx *ctx, xfs_inode_iter_fn fn, diff --git a/scrub/phase3.c b/scrub/phase3.c index 81c64cd1..a32d1ced 100644 --- a/scrub/phase3.c +++ b/scrub/phase3.c @@ -30,7 +30,7 @@ xfs_scrub_fd( struct scrub_ctx *ctx, bool (*fn)(struct scrub_ctx *ctx, uint64_t ino, uint32_t gen, struct xfs_action_list *a), - struct xfs_bstat *bs, + struct xfs_bulkstat *bs, struct xfs_action_list *alist) { return fn(ctx, bs->bs_ino, bs->bs_gen, alist); @@ -45,7 +45,7 @@ struct scrub_inode_ctx { static void xfs_scrub_inode_vfs_error( struct scrub_ctx *ctx, - struct xfs_bstat *bstat) + struct xfs_bulkstat *bstat) { char descr[DESCR_BUFSZ]; xfs_agnumber_t agno; @@ -65,7 +65,7 @@ static int xfs_scrub_inode( struct scrub_ctx *ctx, struct 
xfs_handle *handle, - struct xfs_bstat *bstat, + struct xfs_bulkstat *bstat, void *arg) { struct xfs_action_list alist; diff --git a/scrub/phase5.c b/scrub/phase5.c index 3ff34251..99cd51b2 100644 --- a/scrub/phase5.c +++ b/scrub/phase5.c @@ -80,7 +80,7 @@ xfs_scrub_scan_dirents( struct scrub_ctx *ctx, const char *descr, int *fd, - struct xfs_bstat *bstat) + struct xfs_bulkstat *bstat) { struct unicrash *uc = NULL; DIR *dir; @@ -140,7 +140,7 @@ xfs_scrub_scan_fhandle_namespace_xattrs( struct scrub_ctx *ctx, const char *descr, struct xfs_handle *handle, - struct xfs_bstat *bstat, + struct xfs_bulkstat *bstat, const struct attrns_decode *attr_ns) { struct attrlist_cursor cur; @@ -200,7 +200,7 @@ xfs_scrub_scan_fhandle_xattrs( struct scrub_ctx *ctx, const char *descr, struct xfs_handle *handle, - struct xfs_bstat *bstat) + struct xfs_bulkstat *bstat) { const struct attrns_decode *ns; bool moveon = true; @@ -228,7 +228,7 @@ static int xfs_scrub_connections( struct scrub_ctx *ctx, struct xfs_handle *handle, - struct xfs_bstat *bstat, + struct xfs_bulkstat *bstat, void *arg) { bool *pmoveon = arg; diff --git a/scrub/phase6.c b/scrub/phase6.c index 506e75d2..b41f90e0 100644 --- a/scrub/phase6.c +++ b/scrub/phase6.c @@ -172,7 +172,7 @@ static int xfs_report_verify_inode( struct scrub_ctx *ctx, struct xfs_handle *handle, - struct xfs_bstat *bstat, + struct xfs_bulkstat *bstat, void *arg) { char descr[DESCR_BUFSZ]; diff --git a/scrub/unicrash.c b/scrub/unicrash.c index 17e8f34f..b02c5658 100644 --- a/scrub/unicrash.c +++ b/scrub/unicrash.c @@ -432,7 +432,7 @@ unicrash_init( */ static bool is_only_root_writable( - struct xfs_bstat *bstat) + struct xfs_bulkstat *bstat) { if (bstat->bs_uid != 0 || bstat->bs_gid != 0) return false; @@ -444,7 +444,7 @@ bool unicrash_dir_init( struct unicrash **ucp, struct scrub_ctx *ctx, - struct xfs_bstat *bstat) + struct xfs_bulkstat *bstat) { /* * Assume 64 bytes per dentry, clamp buckets between 16 and 64k. 
@@ -459,7 +459,7 @@ bool unicrash_xattr_init( struct unicrash **ucp, struct scrub_ctx *ctx, - struct xfs_bstat *bstat) + struct xfs_bulkstat *bstat) { /* Assume 16 attributes per extent for lack of a better idea. */ return unicrash_init(ucp, ctx, false, 16 * (1 + bstat->bs_aextents), diff --git a/scrub/unicrash.h b/scrub/unicrash.h index fb8f5f72..feb9cc86 100644 --- a/scrub/unicrash.h +++ b/scrub/unicrash.h @@ -14,9 +14,9 @@ struct unicrash; struct dirent; bool unicrash_dir_init(struct unicrash **ucp, struct scrub_ctx *ctx, - struct xfs_bstat *bstat); + struct xfs_bulkstat *bstat); bool unicrash_xattr_init(struct unicrash **ucp, struct scrub_ctx *ctx, - struct xfs_bstat *bstat); + struct xfs_bulkstat *bstat); bool unicrash_fs_label_init(struct unicrash **ucp, struct scrub_ctx *ctx); void unicrash_free(struct unicrash *uc); bool unicrash_check_dir_name(struct unicrash *uc, const char *descr, diff --git a/spaceman/health.c b/spaceman/health.c index c3575b8e..9bed7fdf 100644 --- a/spaceman/health.c +++ b/spaceman/health.c @@ -266,11 +266,10 @@ static int report_bulkstat_health( xfs_agnumber_t agno) { - struct xfs_bstat bstat[BULKSTAT_NR]; + struct xfs_bulkstat_req *breq; char descr[256]; uint64_t startino = 0; uint64_t lastino = -1ULL; - uint32_t ocount; uint32_t i; int error; @@ -279,15 +278,23 @@ report_bulkstat_health( lastino = cvt_agino_to_ino(&file->xfd, agno + 1, 0) - 1; } - while ((error = xfrog_bulkstat(&file->xfd, &startino, BULKSTAT_NR, - bstat, &ocount) == 0) && ocount > 0) { - for (i = 0; i < ocount; i++) { - if (bstat[i].bs_ino > lastino) + breq = xfrog_bulkstat_alloc_req(BULKSTAT_NR, startino); + if (!breq) { + perror("bulk alloc req"); + exitcode = 1; + return 1; + } + + while ((error = xfrog_bulkstat(&file->xfd, breq) == 0) && + breq->hdr.ocount > 0) { + for (i = 0; i < breq->hdr.ocount; i++) { + if (breq->bulkstat[i].bs_ino > lastino) goto out; - snprintf(descr, sizeof(descr) - 1, _("inode %llu"), - bstat[i].bs_ino); - report_sick(descr, 
inode_flags, bstat[i].bs_sick,
-				bstat[i].bs_checked);
+			snprintf(descr, sizeof(descr) - 1, _("inode %"PRIu64),
+					breq->bulkstat[i].bs_ino);
+			report_sick(descr, inode_flags,
+					breq->bulkstat[i].bs_sick,
+					breq->bulkstat[i].bs_checked);
 		}
 	}
 	if (error) {
@@ -295,6 +302,7 @@ report_bulkstat_health(
 		perror("bulkstat");
 	}
 out:
+	free(breq);
 	return error;
 }

From patchwork Fri Sep 6 03:35:22 2019
X-Patchwork-Submitter: "Darrick J. Wong"
X-Patchwork-Id: 11134317
Subject: [PATCH 4/6] misc: convert to v5 bulkstat_single call
From: "Darrick J. Wong"
To: sandeen@sandeen.net, darrick.wong@oracle.com
Cc: linux-xfs@vger.kernel.org
Date: Thu, 05 Sep 2019 20:35:22 -0700
Message-ID: <156774092210.2643497.7118033849671297049.stgit@magnolia>
In-Reply-To: <156774089024.2643497.2754524603021685770.stgit@magnolia>
References: <156774089024.2643497.2754524603021685770.stgit@magnolia>

From: Darrick J. Wong

Signed-off-by: Darrick J.
Wong --- fsr/xfs_fsr.c | 8 +++- io/open.c | 6 ++- io/swapext.c | 4 ++ libfrog/bulkstat.c | 103 ++++++++++++++++++++++++++++++++++++++++++++-------- libfrog/bulkstat.h | 4 +- scrub/inodes.c | 8 +--- spaceman/health.c | 4 +- 7 files changed, 105 insertions(+), 32 deletions(-) diff --git a/fsr/xfs_fsr.c b/fsr/xfs_fsr.c index cc3cc93a..e8fa39ab 100644 --- a/fsr/xfs_fsr.c +++ b/fsr/xfs_fsr.c @@ -724,6 +724,7 @@ fsrfile( xfs_ino_t ino) { struct xfs_fd fsxfd = XFS_FD_INIT_EMPTY; + struct xfs_bulkstat bulkstat; struct xfs_bstat statbuf; jdm_fshandle_t *fshandlep; int fd = -1; @@ -748,12 +749,13 @@ fsrfile( goto out; } - error = xfrog_bulkstat_single(&fsxfd, ino, &statbuf); + error = xfrog_bulkstat_single(&fsxfd, ino, 0, &bulkstat); if (error) { fsrprintf(_("unable to get bstat on %s: %s\n"), fname, strerror(error)); goto out; } + xfrog_bulkstat_to_bstat(&fsxfd, &statbuf, &bulkstat); fd = jdm_open(fshandlep, &statbuf, O_RDWR|O_DIRECT); if (fd < 0) { @@ -974,7 +976,7 @@ fsr_setup_attr_fork( i = 0; do { - struct xfs_bstat tbstat; + struct xfs_bulkstat tbstat; char name[64]; int ret; @@ -983,7 +985,7 @@ fsr_setup_attr_fork( * this to compare against the target and determine what we * need to do. 
*/ - ret = xfrog_bulkstat_single(&txfd, tstatbuf.st_ino, &tbstat); + ret = xfrog_bulkstat_single(&txfd, tstatbuf.st_ino, 0, &tbstat); if (ret) { fsrprintf(_("unable to get bstat on temp file: %s\n"), strerror(ret)); diff --git a/io/open.c b/io/open.c index e1aac7d1..e1979501 100644 --- a/io/open.c +++ b/io/open.c @@ -723,7 +723,7 @@ inode_f( int argc, char **argv) { - struct xfs_bstat bstat; + struct xfs_bulkstat bulkstat; uint64_t result_ino = 0; uint64_t userino = NULLFSINO; char *p; @@ -803,7 +803,7 @@ inode_f( struct xfs_fd xfd = XFS_FD_INIT(file->fd); /* get this inode */ - ret = xfrog_bulkstat_single(&xfd, userino, &bstat); + ret = xfrog_bulkstat_single(&xfd, userino, 0, &bulkstat); if (ret == EINVAL) { /* Not in use */ result_ino = 0; @@ -813,7 +813,7 @@ inode_f( exitcode = 1; return 0; } else { - result_ino = bstat.bs_ino; + result_ino = bulkstat.bs_ino; } } diff --git a/io/swapext.c b/io/swapext.c index 2b4918f8..ca024b93 100644 --- a/io/swapext.c +++ b/io/swapext.c @@ -28,6 +28,7 @@ swapext_f( char **argv) { struct xfs_fd fxfd = XFS_FD_INIT(file->fd); + struct xfs_bulkstat bulkstat; int fd; int error; struct xfs_swapext sx; @@ -48,12 +49,13 @@ swapext_f( goto out; } - error = xfrog_bulkstat_single(&fxfd, stat.st_ino, &sx.sx_stat); + error = xfrog_bulkstat_single(&fxfd, stat.st_ino, 0, &bulkstat); if (error) { errno = error; perror("bulkstat"); goto out; } + xfrog_bulkstat_to_bstat(&fxfd, &sx.sx_stat, &bulkstat); sx.sx_version = XFS_SX_VERSION; sx.sx_fdtarget = file->fd; sx.sx_fdtmp = fd; diff --git a/libfrog/bulkstat.c b/libfrog/bulkstat.c index b4468243..2a70824e 100644 --- a/libfrog/bulkstat.c +++ b/libfrog/bulkstat.c @@ -20,26 +20,99 @@ xfrog_bulkstat_prep_v1_emulation( return xfd_prepare_geometry(xfd); } +/* Bulkstat a single inode using v5 ioctl. 
*/ +static int +xfrog_bulkstat_single5( + struct xfs_fd *xfd, + uint64_t ino, + unsigned int flags, + struct xfs_bulkstat *bulkstat) +{ + struct xfs_bulkstat_req *req; + int ret; + + if (flags & ~(XFS_BULK_IREQ_SPECIAL)) + return EINVAL; + + req = xfrog_bulkstat_alloc_req(1, ino); + if (!req) + return ENOMEM; + + req->hdr.flags = flags; + ret = ioctl(xfd->fd, XFS_IOC_BULKSTAT, req); + if (ret) { + ret = errno; + goto free; + } + + if (req->hdr.ocount == 0) { + ret = ENOENT; + goto free; + } + + memcpy(bulkstat, req->bulkstat, sizeof(struct xfs_bulkstat)); +free: + free(req); + return ret; +} + +/* Bulkstat a single inode using v1 ioctl. */ +static int +xfrog_bulkstat_single1( + struct xfs_fd *xfd, + uint64_t ino, + unsigned int flags, + struct xfs_bulkstat *bulkstat) +{ + struct xfs_bstat bstat; + struct xfs_fsop_bulkreq bulkreq = { 0 }; + int error; + + if (flags) + return EINVAL; + + error = xfrog_bulkstat_prep_v1_emulation(xfd); + if (error) + return error; + + bulkreq.lastip = (__u64 *)&ino; + bulkreq.icount = 1; + bulkreq.ubuffer = &bstat; + error = ioctl(xfd->fd, XFS_IOC_FSBULKSTAT_SINGLE, &bulkreq); + if (error) + return errno; + + xfrog_bstat_to_bulkstat(xfd, bulkstat, &bstat); + return 0; +} + /* Bulkstat a single inode. Returns zero or a positive error code. */ int xfrog_bulkstat_single( - struct xfs_fd *xfd, - uint64_t ino, - struct xfs_bstat *ubuffer) + struct xfs_fd *xfd, + uint64_t ino, + unsigned int flags, + struct xfs_bulkstat *bulkstat) { - __u64 i = ino; - struct xfs_fsop_bulkreq bulkreq = { - .lastip = &i, - .icount = 1, - .ubuffer = ubuffer, - .ocount = NULL, - }; - int ret; + int error; - ret = ioctl(xfd->fd, XFS_IOC_FSBULKSTAT_SINGLE, &bulkreq); - if (ret) - return errno; - return 0; + if (xfd->flags & XFROG_FLAG_BULKSTAT_FORCE_V1) + goto try_v1; + + error = xfrog_bulkstat_single5(xfd, ino, flags, bulkstat); + if (error == 0 || (xfd->flags & XFROG_FLAG_BULKSTAT_FORCE_V5)) + return error; + + /* If the v5 ioctl wasn't found, we punt to v1. 
*/ + switch (error) { + case EOPNOTSUPP: + case ENOTTY: + xfd->flags |= XFROG_FLAG_BULKSTAT_FORCE_V1; + break; + } + +try_v1: + return xfrog_bulkstat_single1(xfd, ino, flags, bulkstat); } /* diff --git a/libfrog/bulkstat.h b/libfrog/bulkstat.h index 6f51c167..3135e752 100644 --- a/libfrog/bulkstat.h +++ b/libfrog/bulkstat.h @@ -8,8 +8,8 @@ /* Bulkstat wrappers */ struct xfs_bstat; -int xfrog_bulkstat_single(struct xfs_fd *xfd, uint64_t ino, - struct xfs_bstat *ubuffer); +int xfrog_bulkstat_single(struct xfs_fd *xfd, uint64_t ino, unsigned int flags, + struct xfs_bulkstat *bulkstat); int xfrog_bulkstat(struct xfs_fd *xfd, struct xfs_bulkstat_req *req); struct xfs_bulkstat_req *xfrog_bulkstat_alloc_req(uint32_t nr, diff --git a/scrub/inodes.c b/scrub/inodes.c index 851c24bd..2112c9d1 100644 --- a/scrub/inodes.c +++ b/scrub/inodes.c @@ -57,8 +57,6 @@ xfs_iterate_inodes_range_check( int error; for (i = 0, bs = bstat; i < XFS_INODES_PER_CHUNK; i++) { - struct xfs_bstat bs1; - if (!(inogrp->xi_allocmask & (1ULL << i))) continue; if (bs->bs_ino == inogrp->xi_startino + i) { @@ -68,13 +66,11 @@ xfs_iterate_inodes_range_check( /* Load the one inode. 
*/ error = xfrog_bulkstat_single(&ctx->mnt, - inogrp->xi_startino + i, &bs1); - if (error || bs1.bs_ino != inogrp->xi_startino + i) { + inogrp->xi_startino + i, 0, bs); + if (error || bs->bs_ino != inogrp->xi_startino + i) { memset(bs, 0, sizeof(struct xfs_bulkstat)); bs->bs_ino = inogrp->xi_startino + i; bs->bs_blksize = ctx->mnt_sv.f_frsize; - } else { - xfrog_bstat_to_bulkstat(&ctx->mnt, bs, &bs1); } bs++; } diff --git a/spaceman/health.c b/spaceman/health.c index 9bed7fdf..b6e1fcd9 100644 --- a/spaceman/health.c +++ b/spaceman/health.c @@ -208,7 +208,7 @@ report_inode_health( unsigned long long ino, const char *descr) { - struct xfs_bstat bs; + struct xfs_bulkstat bs; char d[256]; int ret; @@ -217,7 +217,7 @@ report_inode_health( descr = d; } - ret = xfrog_bulkstat_single(&file->xfd, ino, &bs); + ret = xfrog_bulkstat_single(&file->xfd, ino, 0, &bs); if (ret) { errno = ret; perror(descr); From patchwork Fri Sep 6 03:35:28 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: "Darrick J. 
Wong"
X-Patchwork-Id: 11134353
Subject: [PATCH 5/6] misc: convert from XFS_IOC_FSINUMBERS to XFS_IOC_INUMBERS
From: "Darrick J. Wong"
To: sandeen@sandeen.net, darrick.wong@oracle.com
Cc: linux-xfs@vger.kernel.org
Date: Thu, 05 Sep 2019 20:35:28 -0700
Message-ID: <156774092832.2643497.11735239040494298471.stgit@magnolia>
In-Reply-To: <156774089024.2643497.2754524603021685770.stgit@magnolia>
References: <156774089024.2643497.2754524603021685770.stgit@magnolia>

From: Darrick J.
Wong Convert all programs to use the v5 inumbers ioctl. Signed-off-by: Darrick J. Wong --- io/imap.c | 26 +++++----- io/open.c | 27 +++++++---- libfrog/bulkstat.c | 132 ++++++++++++++++++++++++++++++++++++++++++++++------ libfrog/bulkstat.h | 10 +++- scrub/fscounters.c | 21 +++++--- scrub/inodes.c | 36 ++++++++------ 6 files changed, 189 insertions(+), 63 deletions(-) diff --git a/io/imap.c b/io/imap.c index 472c1fda..fa69676e 100644 --- a/io/imap.c +++ b/io/imap.c @@ -17,9 +17,7 @@ static int imap_f(int argc, char **argv) { struct xfs_fd xfd = XFS_FD_INIT(file->fd); - struct xfs_inogrp *t; - uint64_t last = 0; - uint32_t count; + struct xfs_inumbers_req *ireq; uint32_t nent; int i; int error; @@ -29,17 +27,19 @@ imap_f(int argc, char **argv) else nent = atoi(argv[1]); - t = malloc(nent * sizeof(*t)); - if (!t) + ireq = xfrog_inumbers_alloc_req(nent, 0); + if (!ireq) { + perror("alloc req"); return 0; + } - while ((error = xfrog_inumbers(&xfd, &last, nent, t, &count)) == 0 && - count > 0) { - for (i = 0; i < count; i++) { - printf(_("ino %10llu count %2d mask %016llx\n"), - (unsigned long long)t[i].xi_startino, - t[i].xi_alloccount, - (unsigned long long)t[i].xi_allocmask); + while ((error = xfrog_inumbers(&xfd, ireq)) == 0 && + ireq->hdr.ocount > 0) { + for (i = 0; i < ireq->hdr.ocount; i++) { + printf(_("ino %10"PRIu64" count %2d mask %016"PRIx64"\n"), + ireq->inumbers[i].xi_startino, + ireq->inumbers[i].xi_alloccount, + ireq->inumbers[i].xi_allocmask); } } @@ -48,7 +48,7 @@ imap_f(int argc, char **argv) perror("xfsctl(XFS_IOC_FSINUMBERS)"); exitcode = 1; } - free(t); + free(ireq); return 0; } diff --git a/io/open.c b/io/open.c index e1979501..e198bcd8 100644 --- a/io/open.c +++ b/io/open.c @@ -681,39 +681,46 @@ static __u64 get_last_inode(void) { struct xfs_fd xfd = XFS_FD_INIT(file->fd); - uint64_t lastip = 0; + struct xfs_inumbers_req *ireq; uint32_t lastgrp = 0; - uint32_t ocount = 0; __u64 last_ino; - struct xfs_inogrp igroup[IGROUP_NR]; + + ireq = 
xfrog_inumbers_alloc_req(IGROUP_NR, 0); + if (!ireq) { + perror("alloc req"); + return 0; + } for (;;) { int ret; - ret = xfrog_inumbers(&xfd, &lastip, IGROUP_NR, igroup, - &ocount); + ret = xfrog_inumbers(&xfd, ireq); if (ret) { errno = ret; perror("XFS_IOC_FSINUMBERS"); + free(ireq); return 0; } /* Did we reach the last inode? */ - if (ocount == 0) + if (ireq->hdr.ocount == 0) break; /* last inode in igroup table */ - lastgrp = ocount; + lastgrp = ireq->hdr.ocount; } - if (lastgrp == 0) + if (lastgrp == 0) { + free(ireq); return 0; + } lastgrp--; /* The last inode number in use */ - last_ino = igroup[lastgrp].xi_startino + - libxfs_highbit64(igroup[lastgrp].xi_allocmask); + last_ino = ireq->inumbers[lastgrp].xi_startino + + libxfs_highbit64(ireq->inumbers[lastgrp].xi_allocmask); + free(ireq); return last_ino; } diff --git a/libfrog/bulkstat.c b/libfrog/bulkstat.c index 2a70824e..748d0f32 100644 --- a/libfrog/bulkstat.c +++ b/libfrog/bulkstat.c @@ -387,6 +387,86 @@ xfrog_bulkstat_alloc_req( return breq; } +/* Convert an inumbers (v5) struct to a inogrp (v1) struct. */ +void +xfrog_inumbers_to_inogrp( + struct xfs_inogrp *ig1, + const struct xfs_inumbers *ig) +{ + ig1->xi_startino = ig->xi_startino; + ig1->xi_alloccount = ig->xi_alloccount; + ig1->xi_allocmask = ig->xi_allocmask; +} + +/* Convert an inogrp (v1) struct to a inumbers (v5) struct. */ +void +xfrog_inogrp_to_inumbers( + struct xfs_inumbers *ig, + const struct xfs_inogrp *ig1) +{ + memset(ig, 0, sizeof(*ig)); + ig->xi_version = XFS_INUMBERS_VERSION_V1; + + ig->xi_startino = ig1->xi_startino; + ig->xi_alloccount = ig1->xi_alloccount; + ig->xi_allocmask = ig1->xi_allocmask; +} + +static uint64_t xfrog_inum_ino(void *v1_rec) +{ + return ((struct xfs_inogrp *)v1_rec)->xi_startino; +} + +static void xfrog_inum_cvt(struct xfs_fd *xfd, void *v5, void *v1) +{ + xfrog_inogrp_to_inumbers(v5, v1); +} + +/* Query inode allocation bitmask information using v5 ioctl. 
*/ +static int +xfrog_inumbers5( + struct xfs_fd *xfd, + struct xfs_inumbers_req *req) +{ + int ret; + + ret = ioctl(xfd->fd, XFS_IOC_INUMBERS, req); + if (ret) + return errno; + return 0; +} + +/* Query inode allocation bitmask information using v1 ioctl. */ +static int +xfrog_inumbers1( + struct xfs_fd *xfd, + struct xfs_inumbers_req *req) +{ + struct xfs_fsop_bulkreq bulkreq = { 0 }; + int error; + + error = xfrog_bulkstat_prep_v1_emulation(xfd); + if (error) + return error; + + error = xfrog_bulk_req_setup(xfd, &req->hdr, &bulkreq, + sizeof(struct xfs_inogrp)); + if (error == ECANCELED) + goto out_teardown; + if (error) + return error; + + error = ioctl(xfd->fd, XFS_IOC_FSINUMBERS, &bulkreq); + if (error) + error = errno; + +out_teardown: + return xfrog_bulk_req_teardown(xfd, &req->hdr, &bulkreq, + sizeof(struct xfs_inogrp), xfrog_inum_ino, + &req->inumbers, sizeof(struct xfs_inumbers), + xfrog_inum_cvt, 64, error); +} + /* * Query inode allocation bitmask information. Returns zero or a positive * error code. @@ -394,21 +474,43 @@ xfrog_bulkstat_alloc_req( int xfrog_inumbers( struct xfs_fd *xfd, - uint64_t *lastino, - uint32_t icount, - struct xfs_inogrp *ubuffer, - uint32_t *ocount) + struct xfs_inumbers_req *req) { - struct xfs_fsop_bulkreq bulkreq = { - .lastip = (__u64 *)lastino, - .icount = icount, - .ubuffer = ubuffer, - .ocount = (__s32 *)ocount, - }; - int ret; + int error; - ret = ioctl(xfd->fd, XFS_IOC_FSINUMBERS, &bulkreq); - if (ret) - return errno; - return 0; + if (xfd->flags & XFROG_FLAG_BULKSTAT_FORCE_V1) + goto try_v1; + + error = xfrog_inumbers5(xfd, req); + if (error == 0 || (xfd->flags & XFROG_FLAG_BULKSTAT_FORCE_V5)) + return error; + + /* If the v5 ioctl wasn't found, we punt to v1. */ + switch (error) { + case EOPNOTSUPP: + case ENOTTY: + xfd->flags |= XFROG_FLAG_BULKSTAT_FORCE_V1; + break; + } + +try_v1: + return xfrog_inumbers1(xfd, req); +} + +/* Allocate a inumbers request. On error returns NULL and sets errno. 
*/ +struct xfs_inumbers_req * +xfrog_inumbers_alloc_req( + uint32_t nr, + uint64_t startino) +{ + struct xfs_inumbers_req *ireq; + + ireq = calloc(1, XFS_INUMBERS_REQ_SIZE(nr)); + if (!ireq) + return NULL; + + ireq->hdr.icount = nr; + ireq->hdr.ino = startino; + + return ireq; } diff --git a/libfrog/bulkstat.h b/libfrog/bulkstat.h index 3135e752..5da7d3f5 100644 --- a/libfrog/bulkstat.h +++ b/libfrog/bulkstat.h @@ -20,7 +20,13 @@ void xfrog_bstat_to_bulkstat(struct xfs_fd *xfd, struct xfs_bulkstat *bstat, const struct xfs_bstat *bs1); struct xfs_inogrp; -int xfrog_inumbers(struct xfs_fd *xfd, uint64_t *lastino, uint32_t icount, - struct xfs_inogrp *ubuffer, uint32_t *ocount); +int xfrog_inumbers(struct xfs_fd *xfd, struct xfs_inumbers_req *req); + +struct xfs_inumbers_req *xfrog_inumbers_alloc_req(uint32_t nr, + uint64_t startino); +void xfrog_inumbers_to_inogrp(struct xfs_inogrp *ig1, + const struct xfs_inumbers *ig); +void xfrog_inogrp_to_inumbers(struct xfs_inumbers *ig, + const struct xfs_inogrp *ig1); #endif /* __LIBFROG_BULKSTAT_H__ */ diff --git a/scrub/fscounters.c b/scrub/fscounters.c index 8e4b3467..2fdf658a 100644 --- a/scrub/fscounters.c +++ b/scrub/fscounters.c @@ -42,23 +42,28 @@ xfs_count_inodes_range( uint64_t last_ino, uint64_t *count) { - struct xfs_inogrp inogrp; - uint64_t igrp_ino; + struct xfs_inumbers_req *ireq; uint64_t nr = 0; - uint32_t igrplen = 0; int error; ASSERT(!(first_ino & (XFS_INODES_PER_CHUNK - 1))); ASSERT((last_ino & (XFS_INODES_PER_CHUNK - 1))); - igrp_ino = first_ino; - while (!(error = xfrog_inumbers(&ctx->mnt, &igrp_ino, 1, &inogrp, - &igrplen))) { - if (igrplen == 0 || inogrp.xi_startino >= last_ino) + ireq = xfrog_inumbers_alloc_req(1, first_ino); + if (!ireq) { + str_info(ctx, descr, _("Insufficient memory; giving up.")); + return false; + } + + while (!(error = xfrog_inumbers(&ctx->mnt, ireq))) { + if (ireq->hdr.ocount == 0 || + ireq->inumbers[0].xi_startino >= last_ino) break; - nr += inogrp.xi_alloccount; + nr += 
ireq->inumbers[0].xi_alloccount; } + free(ireq); + if (error) { str_liberror(ctx, error, descr); return false; diff --git a/scrub/inodes.c b/scrub/inodes.c index 2112c9d1..65c404ab 100644 --- a/scrub/inodes.c +++ b/scrub/inodes.c @@ -49,7 +49,7 @@ static void xfs_iterate_inodes_range_check( struct scrub_ctx *ctx, - struct xfs_inogrp *inogrp, + struct xfs_inumbers *inogrp, struct xfs_bulkstat *bstat) { struct xfs_bulkstat *bs; @@ -92,12 +92,11 @@ xfs_iterate_inodes_range( void *arg) { struct xfs_handle handle; - struct xfs_inogrp inogrp; + struct xfs_inumbers_req *ireq; struct xfs_bulkstat_req *breq; char idescr[DESCR_BUFSZ]; struct xfs_bulkstat *bs; - uint64_t igrp_ino; - uint32_t igrplen = 0; + struct xfs_inumbers *inogrp; bool moveon = true; int i; int error; @@ -114,19 +113,26 @@ xfs_iterate_inodes_range( return false; } + ireq = xfrog_inumbers_alloc_req(1, first_ino); + if (!ireq) { + str_info(ctx, descr, _("Insufficient memory; giving up.")); + free(breq); + return false; + } + inogrp = &ireq->inumbers[0]; + /* Find the inode chunk & alloc mask */ - igrp_ino = first_ino; - error = xfrog_inumbers(&ctx->mnt, &igrp_ino, 1, &inogrp, &igrplen); - while (!error && igrplen) { + error = xfrog_inumbers(&ctx->mnt, ireq); + while (!error && ireq->hdr.ocount > 0) { /* * We can have totally empty inode chunks on filesystems where * there are more than 64 inodes per block. Skip these. */ - if (inogrp.xi_alloccount == 0) + if (inogrp->xi_alloccount == 0) goto igrp_retry; - breq->hdr.ino = inogrp.xi_startino; - breq->hdr.icount = inogrp.xi_alloccount; + breq->hdr.ino = inogrp->xi_startino; + breq->hdr.icount = inogrp->xi_alloccount; error = xfrog_bulkstat(&ctx->mnt, breq); if (error) { char errbuf[DESCR_BUFSZ]; @@ -135,11 +141,11 @@ xfs_iterate_inodes_range( errbuf, DESCR_BUFSZ)); } - xfs_iterate_inodes_range_check(ctx, &inogrp, breq->bulkstat); + xfs_iterate_inodes_range_check(ctx, inogrp, breq->bulkstat); /* Iterate all the inodes. 
*/ for (i = 0, bs = breq->bulkstat; - i < inogrp.xi_alloccount; + i < inogrp->xi_alloccount; i++, bs++) { if (bs->bs_ino > last_ino) goto out; @@ -153,7 +159,7 @@ xfs_iterate_inodes_range( case ESTALE: stale_count++; if (stale_count < 30) { - igrp_ino = inogrp.xi_startino; + ireq->hdr.ino = inogrp->xi_startino; goto igrp_retry; } snprintf(idescr, DESCR_BUFSZ, "inode %"PRIu64, @@ -177,8 +183,7 @@ _("Changed too many times during scan; giving up.")); stale_count = 0; igrp_retry: - error = xfrog_inumbers(&ctx->mnt, &igrp_ino, 1, &inogrp, - &igrplen); + error = xfrog_inumbers(&ctx->mnt, ireq); } err: @@ -186,6 +191,7 @@ _("Changed too many times during scan; giving up.")); str_liberror(ctx, error, descr); moveon = false; } + free(ireq); free(breq); out: return moveon; From patchwork Fri Sep 6 03:35:34 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: "Darrick J. Wong" X-Patchwork-Id: 11134369 Return-Path: Received: from mail.kernel.org (pdx-korg-mail-1.web.codeaurora.org [172.30.200.123]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id ED1F214B4 for ; Fri, 6 Sep 2019 03:38:11 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [209.132.180.67]) by mail.kernel.org (Postfix) with ESMTP id DA46D206B8 for ; Fri, 6 Sep 2019 03:38:11 +0000 (UTC) Authentication-Results: mail.kernel.org; dkim=pass (2048-bit key) header.d=oracle.com header.i=@oracle.com header.b="NuGwveB3" Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S2392148AbfIFDiL (ORCPT ); Thu, 5 Sep 2019 23:38:11 -0400 Received: from userp2120.oracle.com ([156.151.31.85]:36994 "EHLO userp2120.oracle.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S2392144AbfIFDiL (ORCPT ); Thu, 5 Sep 2019 23:38:11 -0400 Received: from pps.filterd (userp2120.oracle.com [127.0.0.1]) by userp2120.oracle.com (8.16.0.27/8.16.0.27) with SMTP id x863Y3K2105155; Fri, 6 Sep 2019 03:38:08 GMT DKIM-Signature: 
v=1; a=rsa-sha256; c=relaxed/relaxed; d=oracle.com; h=subject : from : to : cc : date : message-id : in-reply-to : references : mime-version : content-type : content-transfer-encoding; s=corp-2019-08-05; bh=g9CeuiXObq/136k02ahrOuOSBvKuBfElRCe/7coFubg=; b=NuGwveB3w1sP+B8awpuEYB839P94gQeFHc2tgNUFR3lxDeaiP/g8KE4GDUFzqGIcoIap iUlHdlO1KU2gI3aXWOiecJGlHfHcPqwOaNEaXG7u2Tur3ucc/jmyMlbFG76sPA2Lnz7a 4rRrTAslXQetG7OY7GUh9iuYWN691wl2zj+mpuTR+QrbYH2uPSqu0Fw8Pd7dvXCbRDiR xvfrDV9H4X/tWWZwvxfMHFVmnNFSoWkiMJDU4py3MBuaHmBu8GcxK77py28rhiNASEkM J+S5MJMdJu6bdhEcP1Zk/a7JaHCBXCfOGeQXoZBgI8fBulrHTDeSglZ444f0tcwJ+gHc nQ== Received: from userp3020.oracle.com (userp3020.oracle.com [156.151.31.79]) by userp2120.oracle.com with ESMTP id 2uuf5f834u-1 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=OK); Fri, 06 Sep 2019 03:38:08 +0000 Received: from pps.filterd (userp3020.oracle.com [127.0.0.1]) by userp3020.oracle.com (8.16.0.27/8.16.0.27) with SMTP id x863XamR069081; Fri, 6 Sep 2019 03:35:36 GMT Received: from aserv0121.oracle.com (aserv0121.oracle.com [141.146.126.235]) by userp3020.oracle.com with ESMTP id 2utvr4juhp-1 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=OK); Fri, 06 Sep 2019 03:35:36 +0000 Received: from abhmp0009.oracle.com (abhmp0009.oracle.com [141.146.116.15]) by aserv0121.oracle.com (8.14.4/8.13.8) with ESMTP id x863ZZsL004331; Fri, 6 Sep 2019 03:35:35 GMT Received: from localhost (/10.159.148.70) by default (Oracle Beehive Gateway v4.0) with ESMTP ; Thu, 05 Sep 2019 20:35:35 -0700 Subject: [PATCH 6/6] libxfs: revert FSGEOMETRY v5 -> v4 hack From: "Darrick J. 
Wong" To: sandeen@sandeen.net, darrick.wong@oracle.com Cc: linux-xfs@vger.kernel.org Date: Thu, 05 Sep 2019 20:35:34 -0700 Message-ID: <156774093481.2643497.5230418343512898938.stgit@magnolia> In-Reply-To: <156774089024.2643497.2754524603021685770.stgit@magnolia> References: <156774089024.2643497.2754524603021685770.stgit@magnolia> User-Agent: StGit/0.17.1-dirty MIME-Version: 1.0 X-Proofpoint-Virus-Version: vendor=nai engine=6000 definitions=9371 signatures=668685 X-Proofpoint-Spam-Details: rule=notspam policy=default score=0 suspectscore=0 malwarescore=0 phishscore=0 bulkscore=0 spamscore=0 mlxscore=0 mlxlogscore=999 adultscore=0 classifier=spam adjust=0 reason=mlx scancount=1 engine=8.0.1-1906280000 definitions=main-1909060039 X-Proofpoint-Virus-Version: vendor=nai engine=6000 definitions=9371 signatures=668685 X-Proofpoint-Spam-Details: rule=notspam policy=default score=0 priorityscore=1501 malwarescore=0 suspectscore=0 phishscore=0 bulkscore=0 spamscore=0 clxscore=1015 lowpriorityscore=0 mlxscore=0 impostorscore=0 mlxlogscore=999 adultscore=0 classifier=spam adjust=0 reason=mlx scancount=1 engine=8.0.1-1906280000 definitions=main-1909060039 Sender: linux-xfs-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: linux-xfs@vger.kernel.org From: Darrick J. Wong Revert the #define redirection of XFS_IOC_FSGEOMETRY to the old V4 ioctl. Signed-off-by: Darrick J. 
Wong --- libxfs/xfs_fs.h | 4 +--- 1 file changed, 1 insertion(+), 3 deletions(-) diff --git a/libxfs/xfs_fs.h b/libxfs/xfs_fs.h index 67fceffc..31ac6323 100644 --- a/libxfs/xfs_fs.h +++ b/libxfs/xfs_fs.h @@ -822,9 +822,7 @@ struct xfs_scrub_metadata { #define XFS_IOC_ATTRMULTI_BY_HANDLE _IOW ('X', 123, struct xfs_fsop_attrmulti_handlereq) #define XFS_IOC_FSGEOMETRY_V4 _IOR ('X', 124, struct xfs_fsop_geom_v4) #define XFS_IOC_GOINGDOWN _IOR ('X', 125, uint32_t) -/* For compatibility, for now */ -/* #define XFS_IOC_FSGEOMETRY _IOR ('X', 126, struct xfs_fsop_geom_v5) */ -#define XFS_IOC_FSGEOMETRY XFS_IOC_FSGEOMETRY_V4 +#define XFS_IOC_FSGEOMETRY _IOR ('X', 126, struct xfs_fsop_geom) #define XFS_IOC_BULKSTAT _IOR ('X', 127, struct xfs_bulkstat_req) #define XFS_IOC_INUMBERS _IOR ('X', 128, struct xfs_inumbers_req)