From patchwork Fri Aug 21 16:24:57 2020
X-Patchwork-Submitter: Konstantin Komarov
X-Patchwork-Id: 11729961
From: Konstantin Komarov
To: "viro@zeniv.linux.org.uk" , "linux-kernel@vger.kernel.org" , "linux-fsdevel@vger.kernel.org"
CC: Pali Rohár
Subject: [PATCH v2 01/10] fs/ntfs3: Add headers and misc files
Date: Fri, 21 Aug 2020 16:24:57 +0000
Message-ID: <0911041fee4649f5bbd76cca7cb225cc@paragon-software.com>
X-Mailing-List: linux-fsdevel@vger.kernel.org

This adds the fs/ntfs3 headers and misc files.

Signed-off-by: Konstantin Komarov
---
 fs/ntfs3/debug.h   |   77 +++
 fs/ntfs3/ntfs.h    | 1253 ++++++++++++++++++++++++++++++++++++++++++++
 fs/ntfs3/ntfs_fs.h |  965 ++++++++++++++++++++++++++++++++++
 fs/ntfs3/upcase.c  |   78 +++
 4 files changed, 2373 insertions(+)
 create mode 100644 fs/ntfs3/debug.h
 create mode 100644 fs/ntfs3/ntfs.h
 create mode 100644 fs/ntfs3/ntfs_fs.h
 create mode 100644 fs/ntfs3/upcase.c

diff --git a/fs/ntfs3/debug.h b/fs/ntfs3/debug.h
new file mode 100644
index 000000000000..ee348daeb7a9
--- /dev/null
+++ b/fs/ntfs3/debug.h
@@ -0,0 +1,77 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * linux/fs/ntfs3/debug.h
+ *
+ * Copyright (C) 2019-2020 Paragon Software GmbH, All rights reserved.
+ *
+ * useful functions for debugging
+ */
+
+#ifndef Add2Ptr
+#define Add2Ptr(P, I) (void *)((u8 *)(P) + (I))
+#define PtrOffset(B, O) ((size_t)((size_t)(O) - (size_t)(B)))
+#endif
+
+#define QuadAlign(n) (((n) + 7u) & (~7u))
+#define IsQuadAligned(n) (!((size_t)(n)&7u))
+#define Quad2Align(n) (((n) + 15u) & (~15u))
+#define IsQuad2Aligned(n) (!((size_t)(n)&15u))
+#define Quad4Align(n) (((n) + 31u) & (~31u))
+#define IsSizeTAligned(n) (!((size_t)(n) & (sizeof(size_t) - 1)))
+#define DwordAlign(n) (((n) + 3u) & (~3u))
+#define IsDwordAligned(n) (!((size_t)(n)&3u))
+#define WordAlign(n) (((n) + 1u) & (~1u))
+#define IsWordAligned(n) (!((size_t)(n)&1u))
+
+__printf(3, 4) void __ntfs_trace(const struct super_block *sb,
+				 const char *level, const char *fmt, ...);
+__printf(3, 4) void __ntfs_fs_error(struct super_block *sb, int report,
+				    const char *fmt, ...);
+__printf(3, 4) void __ntfs_inode_trace(struct inode *inode, const char *level,
+				       const char *fmt, ...);
+
+#define ntfs_trace(sb, fmt, args...) __ntfs_trace(sb, KERN_NOTICE, fmt, ##args)
+#define ntfs_error(sb, fmt, args...) __ntfs_trace(sb, KERN_ERR, fmt, ##args)
+#define ntfs_warning(sb, fmt, args...) \
+	__ntfs_trace(sb, KERN_WARNING, fmt, ##args)
+
+#define ntfs_fs_error(sb, fmt, args...) __ntfs_fs_error(sb, 1, fmt, ##args)
+#define ntfs_inode_error(inode, fmt, args...) \
+	__ntfs_inode_trace(inode, KERN_ERR, fmt, ##args)
+#define ntfs_inode_warning(inode, fmt, args...) \
+	__ntfs_inode_trace(inode, KERN_WARNING, fmt, ##args)
+
+static inline void *ntfs_alloc(size_t size, int zero)
+{
+	void *p = kmalloc(size, zero ?
(GFP_NOFS | __GFP_ZERO) : GFP_NOFS); + + return p; +} + +static inline void ntfs_free(void *p) +{ + if (!p) + return; + kfree(p); +} + +static inline void trace_mem_report(int on_exit) +{ +} + +static inline void ntfs_init_trace_file(void) +{ +} + +static inline void ntfs_close_trace_file(void) +{ +} + +static inline void *ntfs_memdup(const void *src, size_t len) +{ + void *p = ntfs_alloc(len, 0); + + if (p) + memcpy(p, src, len); + return p; +} diff --git a/fs/ntfs3/ntfs.h b/fs/ntfs3/ntfs.h new file mode 100644 index 000000000000..b9735129f45b --- /dev/null +++ b/fs/ntfs3/ntfs.h @@ -0,0 +1,1253 @@ +/* SPDX-License-Identifier: GPL-2.0 */ +/* + * linux/fs/ntfs3/ntfs.h + * + * Copyright (C) 2019-2020 Paragon Software GmbH, All rights reserved. + * + * on-disk ntfs structs + */ + +/* TODO: + * - Check 4K mft record and 512 bytes cluster + */ + +/* + * Activate this define to use binary search in indexes + */ +#define NTFS3_INDEX_BINARY_SEARCH + +/* + * Check each run for marked clusters + */ +#define NTFS3_CHECK_FREE_CLST + +/* + * Activate this define to activate preallocate + */ +//#define NTFS3_PREALLOCATE + +#define NTFS_NAME_LEN 255 + +/* + * ntfs.sys used 500 maximum links + * on-disk struct allows up to 0xffff + */ +#define NTFS_LINK_MAX 0x400 +//#define NTFS_LINK_MAX 0xffff + +/* + * Activate to use 64 bit clusters instead of 32 bits in ntfs.sys + * Logical and virtual cluster number + * If needed, may be redefined to use 64 bit value + */ +//#define NTFS3_64BIT_CLUSTER + +#define NTFS_LZNT_MAX_CLUSTER 4096 +#define NTFS_LZNT_CUNIT 4 + +#ifndef GUID_DEFINED +#define GUID_DEFINED +typedef struct { + __le32 Data1; + __le16 Data2; + __le16 Data3; + u8 Data4[8]; +} GUID; +#endif + +struct le_str { + u8 len; + u8 unused; + __le16 name[20]; +}; + +struct cpu_str { + u8 len; + u8 unused; + u16 name[20]; +}; + +static_assert(SECTOR_SHIFT == 9); + +#ifdef NTFS3_64BIT_CLUSTER +typedef u64 CLST; +static_assert(sizeof(size_t) == 8); +#else +typedef u32 CLST; +#endif + 
+#define SPARSE_LCN ((CLST)-1) +#define RESIDENT_LCN ((CLST)-2) +#define COMPRESSED_LCN ((CLST)-3) + +#define COMPRESSION_UNIT 4 +#define COMPRESS_MAX_CLUSTER 0x1000 +#define MFT_INCREASE_CHUNK 1024 + +enum RECORD_NUM { + MFT_REC_MFT = 0, + MFT_REC_MIRR = 1, + MFT_REC_LOG = 2, + MFT_REC_VOL = 3, + MFT_REC_ATTR = 4, + MFT_REC_ROOT = 5, + MFT_REC_BITMAP = 6, + MFT_REC_BOOT = 7, + MFT_REC_BADCLUST = 8, + MFT_REC_QUOTA = 9, + MFT_REC_SECURE = 9, // NTFS 3.0 + MFT_REC_UPCASE = 10, + MFT_REC_EXTEND = 11, // NTFS 3.0 + MFT_REC_RESERVED = 11, + MFT_REC_FREE = 16, + MFT_REC_USER = 24, +}; + +typedef enum { + ATTR_ZERO = cpu_to_le32(0x00), + ATTR_STD = cpu_to_le32(0x10), + ATTR_LIST = cpu_to_le32(0x20), + ATTR_NAME = cpu_to_le32(0x30), + // ATTR_VOLUME_VERSION on Nt4 + ATTR_ID = cpu_to_le32(0x40), + ATTR_SECURE = cpu_to_le32(0x50), + ATTR_LABEL = cpu_to_le32(0x60), + ATTR_VOL_INFO = cpu_to_le32(0x70), + ATTR_DATA = cpu_to_le32(0x80), + ATTR_ROOT = cpu_to_le32(0x90), + ATTR_ALLOC = cpu_to_le32(0xA0), + ATTR_BITMAP = cpu_to_le32(0xB0), + // ATTR_SYMLINK on Nt4 + ATTR_REPARSE = cpu_to_le32(0xC0), + ATTR_EA_INFO = cpu_to_le32(0xD0), + ATTR_EA = cpu_to_le32(0xE0), + ATTR_PROPERTYSET = cpu_to_le32(0xF0), + ATTR_LOGGED_UTILITY_STREAM = cpu_to_le32(0x100), + ATTR_END = cpu_to_le32(0xFFFFFFFF) +} ATTR_TYPE; + +static_assert(sizeof(ATTR_TYPE) == 4); + +typedef enum { + FILE_ATTRIBUTE_READONLY = cpu_to_le32(0x00000001), + FILE_ATTRIBUTE_HIDDEN = cpu_to_le32(0x00000002), + FILE_ATTRIBUTE_SYSTEM = cpu_to_le32(0x00000004), + FILE_ATTRIBUTE_ARCHIVE = cpu_to_le32(0x00000020), + FILE_ATTRIBUTE_DEVICE = cpu_to_le32(0x00000040), + + FILE_ATTRIBUTE_TEMPORARY = cpu_to_le32(0x00000100), + FILE_ATTRIBUTE_SPARSE_FILE = cpu_to_le32(0x00000200), + FILE_ATTRIBUTE_REPARSE_POINT = cpu_to_le32(0x00000400), + FILE_ATTRIBUTE_COMPRESSED = cpu_to_le32(0x00000800), + + FILE_ATTRIBUTE_OFFLINE = cpu_to_le32(0x00001000), + FILE_ATTRIBUTE_NOT_CONTENT_INDEXED = cpu_to_le32(0x00002000), + FILE_ATTRIBUTE_ENCRYPTED = 
cpu_to_le32(0x00004000), + + FILE_ATTRIBUTE_VALID_FLAGS = cpu_to_le32(0x00007fb7), + + FILE_ATTRIBUTE_DIRECTORY = cpu_to_le32(0x10000000), +} FILE_ATTRIBUTE; + +static_assert(sizeof(FILE_ATTRIBUTE) == 4); + +extern const struct cpu_str NAME_MFT; // L"$MFT" +extern const struct cpu_str NAME_MIRROR; // L"$MFTMirr" +extern const struct cpu_str NAME_LOGFILE; // L"$LogFile" +extern const struct cpu_str NAME_VOLUME; // L"$Volume" +extern const struct cpu_str NAME_ATTRDEF; // L"$AttrDef" +extern const struct cpu_str NAME_ROOT; // L"." +extern const struct cpu_str NAME_BITMAP; // L"$Bitmap" +extern const struct cpu_str NAME_BOOT; // L"$Boot" +extern const struct cpu_str NAME_BADCLUS; // L"$BadClus" +extern const struct cpu_str NAME_QUOTA; // L"$Quota" +extern const struct cpu_str NAME_SECURE; // L"$Secure" +extern const struct cpu_str NAME_UPCASE; // L"$UpCase" +extern const struct cpu_str NAME_EXTEND; // L"$Extend" +extern const struct cpu_str NAME_OBJID; // L"$ObjId" +extern const struct cpu_str NAME_REPARSE; // L"$Reparse" +extern const struct cpu_str NAME_USNJRNL; // L"$UsnJrnl" +extern const struct cpu_str NAME_UGM; // L"$UGM" + +extern const __le16 I30_NAME[4]; // L"$I30" +extern const __le16 SII_NAME[4]; // L"$SII" +extern const __le16 SDH_NAME[4]; // L"$SDH" +extern const __le16 SO_NAME[2]; // L"$O" +extern const __le16 SQ_NAME[2]; // L"$Q" +extern const __le16 SR_NAME[2]; // L"$R" + +extern const __le16 BAD_NAME[4]; // L"$Bad" +extern const __le16 SDS_NAME[4]; // L"$SDS" +extern const __le16 EFS_NAME[4]; // L"$EFS" +extern const __le16 WOF_NAME[17]; // L"WofCompressedData" +extern const __le16 J_NAME[2]; // L"$J" +extern const __le16 MAX_NAME[4]; // L"$Max" + +/* MFT record number structure */ +typedef struct { + __le32 low; // The low part of the number + __le16 high; // The high part of the number + __le16 seq; // The sequence number of MFT record +} MFT_REF; + +static_assert(sizeof(__le64) == sizeof(MFT_REF)); + +static inline CLST ino_get(const MFT_REF *ref) 
+{
+#ifdef NTFS3_64BIT_CLUSTER
+	return le32_to_cpu(ref->low) | ((u64)le16_to_cpu(ref->high) << 32);
+#else
+	return le32_to_cpu(ref->low);
+#endif
+}
+
+struct NTFS_BOOT {
+	u8 jump_code[3];	// 0x00: Jump to boot code
+	u8 system_id[8];	// 0x03: System ID, equals "NTFS    "
+
+	// NOTE: this member is not aligned(!)
+	// bytes_per_sector[0] must be 0
+	// bytes_per_sector[1] must be multiplied by 256
+	u8 bytes_per_sector[2];	// 0x0B: Bytes per sector
+
+	u8 sectors_per_clusters; // 0x0D: Sectors per cluster
+	u8 unused1[7];
+	u8 media_type;		// 0x15: Media type (0xF8 - hard disk)
+	u8 unused2[2];
+	__le16 sectors_per_track; // 0x18: number of sectors per track
+	__le16 heads;		// 0x1A: number of heads per cylinder
+	__le32 hidden_sectors;	// 0x1C: number of 'hidden' sectors
+	u8 unused3[4];
+	u8 bios_drive_num;	// 0x24: BIOS drive number =0x80
+	u8 unused4;
+	u8 signature_ex;	// 0x26: Extended BOOT signature =0x80
+	u8 unused5;
+	__le64 sectors_per_volume; // 0x28: size of volume in sectors
+	__le64 mft_clst;	// 0x30: first cluster of $MFT
+	__le64 mft2_clst;	// 0x38: first cluster of $MFTMirr
+	s8 record_size;		// 0x40: size of MFT record in clusters (sectors)
+	u8 unused6[3];
+	s8 index_size;		// 0x44: size of INDX record in clusters (sectors)
+	u8 unused7[3];
+	__le64 serial_num;	// 0x48: Volume serial number
+	__le32 check_sum;	// 0x50: Simple additive checksum of all of the
+				// u32's which precede the 'check_sum'
+
+	u8 boot_code[0x200 - 0x50 - 2 - 4]; // 0x54:
+	u8 boot_magic[2];	// 0x1FE: Boot signature =0x55 + 0xAA
+};
+
+static_assert(sizeof(struct NTFS_BOOT) == 0x200);
+
+typedef enum {
+	NTFS_FILE_SIGNATURE = cpu_to_le32(0x454C4946), // 'FILE'
+	NTFS_INDX_SIGNATURE = cpu_to_le32(0x58444E49), // 'INDX'
+	NTFS_CHKD_SIGNATURE = cpu_to_le32(0x444B4843), // 'CHKD'
+	NTFS_RSTR_SIGNATURE = cpu_to_le32(0x52545352), // 'RSTR'
+	NTFS_RCRD_SIGNATURE = cpu_to_le32(0x44524352), // 'RCRD'
+	NTFS_BAAD_SIGNATURE = cpu_to_le32(0x44414142), // 'BAAD'
+	NTFS_HOLE_SIGNATURE =
cpu_to_le32(0x454C4F48), // 'HOLE' + NTFS_FFFF_SIGNATURE = cpu_to_le32(0xffffffff), +} NTFS_SIGNATURE; + +static_assert(sizeof(NTFS_SIGNATURE) == 4); + +/* MFT Record header structure */ +typedef struct { + /* Record magic number, equals 'FILE'/'INDX'/'RSTR'/'RCRD' */ + __le32 sign; // 0x00: + __le16 fix_off; // 0x04: + __le16 fix_num; // 0x06: + __le64 lsn; // 0x08: Log file sequence number +} NTFS_RECORD_HEADER; + +static_assert(sizeof(NTFS_RECORD_HEADER) == 0x10); + +static inline int is_baad(const NTFS_RECORD_HEADER *hdr) +{ + return hdr->sign == NTFS_BAAD_SIGNATURE; +} + +/* Possible bits in MFT_REC.flags */ +typedef enum { + RECORD_FLAG_IN_USE = cpu_to_le16(0x0001), + RECORD_FLAG_DIR = cpu_to_le16(0x0002), + RECORD_FLAG_SYSTEM = cpu_to_le16(0x0004), + RECORD_FLAG_UNKNOWN = cpu_to_le16(0x0008), +} RECORD_FLAG; + +/* MFT Record structure */ +typedef struct { + NTFS_RECORD_HEADER rhdr; // 'FILE' + + __le16 seq; // 0x10: Sequence number for this record + __le16 hard_links; // 0x12: The number of hard links to record + __le16 attr_off; // 0x14: Offset to attributes + __le16 flags; // 0x16: 1=non-resident, 2=dir. See RECORD_FLAG_XXX + __le32 used; // 0x18: The size of used part + __le32 total; // 0x1C: Total record size + + MFT_REF parent_ref; // 0x20: Parent MFT record + __le16 next_attr_id; // 0x28: The next attribute Id + + // + // NTFS of version 3.1 uses this record header + // if fix_off >= 0x30 + + __le16 Res; // 0x2A: ? 
High part of MftRecord + __le32 MftRecord; // 0x2C: Current record number + __le16 Fixups[1]; // 0x30: + +} MFT_REC; + +#define MFTRECORD_FIXUP_OFFSET_1 offsetof(MFT_REC, Res) +#define MFTRECORD_FIXUP_OFFSET_3 offsetof(MFT_REC, Fixups) + +static_assert(MFTRECORD_FIXUP_OFFSET_1 == 0x2A); +static_assert(MFTRECORD_FIXUP_OFFSET_3 == 0x30); + +static inline bool is_rec_base(const MFT_REC *rec) +{ + const MFT_REF *r = &rec->parent_ref; + + return !r->low && !r->high && !r->seq; +} + +static inline bool is_mft_rec5(const MFT_REC *rec) +{ + return le16_to_cpu(rec->rhdr.fix_off) >= offsetof(MFT_REC, Fixups); +} + +static inline bool is_rec_inuse(const MFT_REC *rec) +{ + return rec->flags & RECORD_FLAG_IN_USE; +} + +static inline bool clear_rec_inuse(MFT_REC *rec) +{ + return rec->flags &= ~RECORD_FLAG_IN_USE; +} + +/* Possible values of ATTR_RESIDENT.flags */ +#define RESIDENT_FLAG_INDEXED 0x01 + +typedef struct { + __le32 data_size; // 0x10: The size of data + __le16 data_off; // 0x14: Offset to data + u8 flags; // 0x16: resident flags ( 1 - indexed ) + u8 res; // 0x17: +} ATTR_RESIDENT; // sizeof() = 0x18 + +typedef struct { + __le64 svcn; // 0x10: Starting VCN of this segment + __le64 evcn; // 0x18: End VCN of this segment + __le16 run_off; // 0x20: Offset to packed runs + // Unit of Compression size for this stream, expressed + // as a log of the cluster size. + // + // 0 means file is not compressed + // 1, 2, 3, and 4 are potentially legal values if the + // stream is compressed, however the implementation + // may only choose to use 4, or possibly 3. Note + // that 4 means cluster size time 16. If convenient + // the implementation may wish to accept a + // reasonable range of legal values here (1-5?), + // even if the implementation only generates + // a smaller set of values itself. 
+ u8 c_unit; // 0x22 + u8 res1[5]; // 0x23: + __le64 alloc_size; // 0x28: The allocated size of attribute in bytes + // (multiple of cluster size) + __le64 data_size; // 0x30: The size of attribute in bytes <= alloc_size + __le64 valid_size; // 0x38: The size of valid part in bytes <= data_size + __le64 total_size; // 0x40: The sum of the allocated clusters for a file + // (present only for the first segment (0 == vcn) + // of compressed attribute) + +} ATTR_NONRESIDENT; // sizeof()=0x40 or 0x48 (if compressed) + +/* Possible values of ATTRIB.flags: */ +#define ATTR_FLAG_COMPRESSED cpu_to_le16(0x0001) +#define ATTR_FLAG_COMPRESSED_MASK cpu_to_le16(0x00FF) +#define ATTR_FLAG_ENCRYPTED cpu_to_le16(0x4000) +#define ATTR_FLAG_SPARSED cpu_to_le16(0x8000) + +typedef struct { + ATTR_TYPE type; // 0x00: The type of this attribute + __le32 size; // 0x04: The size of this attribute + u8 non_res; // 0x08: Is this attribute non-resident ? + u8 name_len; // 0x09: This attribute name length + __le16 name_off; // 0x0A: Offset to the attribute name + __le16 flags; // 0x0C: See ATTR_FLAG_XXX + __le16 id; // 0x0E: unique id (per record) + + union { + ATTR_RESIDENT res; // 0x10 + ATTR_NONRESIDENT nres; // 0x10 + }; +} ATTRIB; + +/* Define attribute sizes */ +#define SIZEOF_RESIDENT 0x18 +#define SIZEOF_NONRESIDENT_EX 0x48 +#define SIZEOF_NONRESIDENT 0x40 + +#define SIZEOF_RESIDENT_LE cpu_to_le16(0x18) +#define SIZEOF_NONRESIDENT_EX_LE cpu_to_le16(0x48) +#define SIZEOF_NONRESIDENT_LE cpu_to_le16(0x40) + +static inline u64 attr_ondisk_size(const ATTRIB *attr) +{ + return attr->non_res ? ((attr->flags & + (ATTR_FLAG_COMPRESSED | ATTR_FLAG_SPARSED)) ? + le64_to_cpu(attr->nres.total_size) : + le64_to_cpu(attr->nres.alloc_size)) : + QuadAlign(le32_to_cpu(attr->res.data_size)); +} + +static inline u64 attr_size(const ATTRIB *attr) +{ + return attr->non_res ? 
le64_to_cpu(attr->nres.data_size) :
+			       le32_to_cpu(attr->res.data_size);
+}
+
+static inline bool is_attr_encrypted(const ATTRIB *attr)
+{
+	return attr->flags & ATTR_FLAG_ENCRYPTED;
+}
+
+static inline bool is_attr_sparsed(const ATTRIB *attr)
+{
+	return attr->flags & ATTR_FLAG_SPARSED;
+}
+
+static inline bool is_attr_compressed(const ATTRIB *attr)
+{
+	return attr->flags & ATTR_FLAG_COMPRESSED;
+}
+
+static inline bool is_attr_ext(const ATTRIB *attr)
+{
+	return attr->flags & (ATTR_FLAG_SPARSED | ATTR_FLAG_COMPRESSED);
+}
+
+static inline bool is_attr_indexed(const ATTRIB *attr)
+{
+	return !attr->non_res && (attr->res.flags & RESIDENT_FLAG_INDEXED);
+}
+
+static inline const __le16 *attr_name(const ATTRIB *attr)
+{
+	return Add2Ptr(attr, le16_to_cpu(attr->name_off));
+}
+
+static inline u64 attr_svcn(const ATTRIB *attr)
+{
+	return attr->non_res ? le64_to_cpu(attr->nres.svcn) : 0;
+}
+
+/* the full on-record size of a resident attribute, given its data size */
+#define BYTES_PER_RESIDENT(b) (0x18 + (b))
+
+static_assert(sizeof(ATTRIB) == 0x48);
+static_assert(sizeof(((ATTRIB *)NULL)->res) == 0x08);
+static_assert(sizeof(((ATTRIB *)NULL)->nres) == 0x38);
+
+static inline void *resident_data_ex(const ATTRIB *attr, u32 datasize)
+{
+	u32 asize, rsize;
+	u16 off;
+
+	if (attr->non_res)
+		return NULL;
+
+	asize = le32_to_cpu(attr->size);
+	off = le16_to_cpu(attr->res.data_off);
+
+	if (asize < datasize + off)
+		return NULL;
+
+	rsize = le32_to_cpu(attr->res.data_size);
+	if (rsize < datasize)
+		return NULL;
+
+	return Add2Ptr(attr, off);
+}
+
+static inline void *resident_data(const ATTRIB *attr)
+{
+	return Add2Ptr(attr, le16_to_cpu(attr->res.data_off));
+}
+
+static inline void *attr_run(const ATTRIB *attr)
+{
+	return Add2Ptr(attr, le16_to_cpu(attr->nres.run_off));
+}
+
+/* Standard information attribute (0x10) */
+typedef struct {
+	__le64 cr_time;		// 0x00: File creation time
+	__le64 m_time;		// 0x08: File modification time
+	__le64 c_time;		// 0x10: Last time any attribute was
modified.
+	__le64 a_time;		// 0x18: File last access time
+	FILE_ATTRIBUTE fa;	// 0x20: Standard DOS attributes & more
+	__le32 max_ver_num;	// 0x24: Maximum Number of Versions
+	__le32 ver_num;		// 0x28: Version Number
+	__le32 class_id;	// 0x2C: Class Id from bidirectional Class Id index
+} ATTR_STD_INFO;
+
+static_assert(sizeof(ATTR_STD_INFO) == 0x30);
+
+#define SECURITY_ID_INVALID 0x00000000
+#define SECURITY_ID_FIRST 0x00000100
+
+typedef struct {
+	__le64 cr_time;		// 0x00: File creation time
+	__le64 m_time;		// 0x08: File modification time
+	__le64 c_time;		// 0x10: Last time any attribute was modified.
+	__le64 a_time;		// 0x18: File last access time
+	FILE_ATTRIBUTE fa;	// 0x20: Standard DOS attributes & more
+	__le32 max_ver_num;	// 0x24: Maximum Number of Versions
+	__le32 ver_num;		// 0x28: Version Number
+	__le32 class_id;	// 0x2C: Class Id from bidirectional Class Id index
+
+	__le32 owner_id;	// 0x30: Owner Id of the user owning the file. This
+				// Id is a key in the $O and $Q Indexes of the file
+				// $Quota. If zero, then quotas are disabled
+	__le32 security_id;	// 0x34: The Security Id is a key in the $SII Index
+				// and $SDS Data Stream in the file $Secure.
+	__le64 quota_charge;	// 0x38: The number of bytes this file uses from the
+				// user's quota. This should be the total data size
+				// of all streams. If zero, then quotas are disabled.
+	__le64 usn;		// 0x40: Last Update Sequence Number of the file. This
+				// is a direct index into the file $UsnJrnl. If zero,
+				// the USN Journal is disabled.
+} ATTR_STD_INFO5;
+
+static_assert(sizeof(ATTR_STD_INFO5) == 0x48);
+
+/* attribute list entry structure (0x20) */
+typedef struct {
+	ATTR_TYPE type;		// 0x00: The type of attribute
+	__le16 size;		// 0x04: The size of this record
+	u8 name_len;		// 0x06: The length of attribute name
+	u8 name_off;		// 0x07: The offset to attribute name
+	__le64 vcn;		// 0x08: Starting VCN of this attribute
+	MFT_REF ref;		// 0x10: MFT record number with attribute
+	__le16 id;		// 0x18: ATTRIB ID
+	__le16 name[3];		// 0x1A: Just to align. To get the real name use name_off
+
+} ATTR_LIST_ENTRY; // sizeof(0x20)
+
+static_assert(sizeof(ATTR_LIST_ENTRY) == 0x20);
+
+static inline u32 le_size(u8 name_len)
+{
+	return QuadAlign(offsetof(ATTR_LIST_ENTRY, name) +
+			 name_len * sizeof(short));
+}
+
+/* returns 0 if 'attr' has the same type and name */
+static inline int le_cmp(const ATTR_LIST_ENTRY *le, const ATTRIB *attr)
+{
+	return le->type != attr->type || le->name_len != attr->name_len ||
+	       (le->name_len &&
+		memcmp(Add2Ptr(le, le->name_off),
+		       Add2Ptr(attr, le16_to_cpu(attr->name_off)),
+		       le->name_len * sizeof(short)));
+}
+
+static inline const __le16 *le_name(const ATTR_LIST_ENTRY *le)
+{
+	return Add2Ptr(le, le->name_off);
+}
+
+/* File name types (the field type in ATTR_FILE_NAME ) */
+#define FILE_NAME_POSIX 0
+#define FILE_NAME_UNICODE 1
+#define FILE_NAME_DOS 2
+#define FILE_NAME_UNICODE_AND_DOS (FILE_NAME_DOS | FILE_NAME_UNICODE)
+
+/* Filename attribute structure (0x30) */
+typedef struct {
+	__le64 cr_time;		// 0x00: File creation time
+	__le64 m_time;		// 0x08: File modification time
+	__le64 c_time;		// 0x10: Last time any attribute was modified
+	__le64 a_time;		// 0x18: File last access time
+	__le64 alloc_size;	// 0x20: Data attribute allocated size, multiple of cluster size
+	__le64 data_size;	// 0x28: Data attribute size <= alloc_size
+	FILE_ATTRIBUTE fa;	// 0x30: Standard DOS attributes & more
+	__le16 ea_size;		// 0x34: Packed EAs
+	__le16 reparse;		// 0x36: Used by
Reparse + +} NTFS_DUP_INFO; // 0x38 + +typedef struct { + MFT_REF home; // 0x00: MFT record for directory + NTFS_DUP_INFO dup; // 0x08 + u8 name_len; // 0x40: File name length in words + u8 type; // 0x41: File name type + __le16 name[1]; // 0x42: File name +} ATTR_FILE_NAME; + +static_assert(sizeof(((ATTR_FILE_NAME *)NULL)->dup) == 0x38); +static_assert(offsetof(ATTR_FILE_NAME, name) == 0x42); +#define SIZEOF_ATTRIBUTE_FILENAME 0x44 +#define SIZEOF_ATTRIBUTE_FILENAME_MAX (0x42 + 255 * 2) + +static inline ATTRIB *attr_from_name(ATTR_FILE_NAME *fname) +{ + return (ATTRIB *)((char *)fname - SIZEOF_RESIDENT); +} + +static inline u16 fname_full_size(const ATTR_FILE_NAME *fname) +{ + return offsetof(ATTR_FILE_NAME, name) + fname->name_len * sizeof(short); +} + +static inline u8 paired_name(u8 type) +{ + if (type == FILE_NAME_UNICODE) + return FILE_NAME_DOS; + if (type == FILE_NAME_DOS) + return FILE_NAME_UNICODE; + return FILE_NAME_POSIX; +} + +/* Index entry defines ( the field flags in NtfsDirEntry ) */ +#define NTFS_IE_HAS_SUBNODES cpu_to_le16(1) +#define NTFS_IE_LAST cpu_to_le16(2) + +/* Directory entry structure */ +typedef struct { + union { + MFT_REF ref; // 0x00: MFT record number with this file + struct { + __le16 data_off; // 0x00: + __le16 data_size; // 0x02: + __le32 Res; // 0x04: must be 0 + } View; + }; + __le16 size; // 0x08: The size of this entry + __le16 key_size; // 0x0A: The size of File name length in bytes + 0x42 + __le16 flags; // 0x0C: Entry flags, 1=subnodes, 2=last + __le16 Reserved; // 0x0E: + + // Here any indexed attribute can be placed + // One of them is: + // ATTR_FILE_NAME AttrFileName; + // + + // The last 8 bytes of this structure contains + // the VBN of subnode + // !!! Note !!! 
+ // This field is presented only if (flags & NTFS_IE_HAS_SUBNODES) + // __le64 vbn; + +} NTFS_DE; + +static_assert(sizeof(NTFS_DE) == 0x10); + +static inline void de_set_vbn_le(NTFS_DE *e, __le64 vcn) +{ + __le64 *v = Add2Ptr(e, le16_to_cpu(e->size) - sizeof(__le64)); + + *v = vcn; +} + +static inline void de_set_vbn(NTFS_DE *e, CLST vcn) +{ + __le64 *v = Add2Ptr(e, le16_to_cpu(e->size) - sizeof(__le64)); + + *v = cpu_to_le64(vcn); +} + +static inline __le64 de_get_vbn_le(const NTFS_DE *e) +{ + return *(__le64 *)Add2Ptr(e, le16_to_cpu(e->size) - sizeof(__le64)); +} + +static inline CLST de_get_vbn(const NTFS_DE *e) +{ + __le64 *v = Add2Ptr(e, le16_to_cpu(e->size) - sizeof(__le64)); + + return le64_to_cpu(*v); +} + +static inline NTFS_DE *de_get_next(const NTFS_DE *e) +{ + return Add2Ptr(e, le16_to_cpu(e->size)); +} + +static inline ATTR_FILE_NAME *de_get_fname(const NTFS_DE *e) +{ + return le16_to_cpu(e->key_size) >= SIZEOF_ATTRIBUTE_FILENAME ? + Add2Ptr(e, sizeof(NTFS_DE)) : + NULL; +} + +static inline bool de_is_last(const NTFS_DE *e) +{ + return e->flags & NTFS_IE_LAST; +} + +static inline bool de_has_vcn(const NTFS_DE *e) +{ + return e->flags & NTFS_IE_HAS_SUBNODES; +} + +static inline bool de_has_vcn_ex(const NTFS_DE *e) +{ + return (e->flags & NTFS_IE_HAS_SUBNODES) && + (u64)(-1) != *((u64 *)Add2Ptr(e, le16_to_cpu(e->size) - + sizeof(__le64))); +} + +#define MAX_BYTES_PER_NAME_ENTRY \ + QuadAlign(sizeof(NTFS_DE) + offsetof(ATTR_FILE_NAME, name) + \ + NTFS_NAME_LEN * sizeof(short)) + +typedef struct { + // The offset from the start of this structure to the first NtfsDirEntry + __le32 de_off; // 0x00: + // The size of this structure plus all entries (quad-word aligned) + __le32 used; // 0x04 + // The allocated size of for this structure plus all entries + __le32 total; // 0x08: + // 0x00 = Small directory, 0x01 = Large directory + u8 flags; // 0x0C + u8 res[3]; + + // + // de_off + used <= total + // + +} INDEX_HDR; + +static_assert(sizeof(INDEX_HDR) == 0x10); 
+ +static inline NTFS_DE *hdr_first_de(const INDEX_HDR *hdr) +{ + u32 de_off = le32_to_cpu(hdr->de_off); + u32 used = le32_to_cpu(hdr->used); + NTFS_DE *e = Add2Ptr(hdr, de_off); + u16 esize; + + if (de_off >= used || de_off >= le32_to_cpu(hdr->total)) + return NULL; + + esize = le16_to_cpu(e->size); + if (esize < sizeof(NTFS_DE) || de_off + esize > used) + return NULL; + + return e; +} + +static inline NTFS_DE *hdr_next_de(const INDEX_HDR *hdr, const NTFS_DE *e) +{ + size_t off = PtrOffset(hdr, e); + u32 used = le32_to_cpu(hdr->used); + u16 esize; + + if (off >= used) + return NULL; + + esize = le16_to_cpu(e->size); + + if (esize < sizeof(NTFS_DE) || off + esize + sizeof(NTFS_DE) > used) + return NULL; + + return Add2Ptr(e, esize); +} + +static inline bool hdr_has_subnode(const INDEX_HDR *hdr) +{ + return hdr->flags & 1; +} + +typedef struct { + NTFS_RECORD_HEADER rhdr; // 'INDX' + __le64 vbn; // 0x10: vcn if index >= cluster or vsn id index < cluster + INDEX_HDR ihdr; // 0x18: + +} INDEX_BUFFER; + +static_assert(sizeof(INDEX_BUFFER) == 0x28); + +static inline bool ib_is_empty(const INDEX_BUFFER *ib) +{ + const NTFS_DE *first = hdr_first_de(&ib->ihdr); + + return !first || de_is_last(first); +} + +static inline bool ib_is_leaf(const INDEX_BUFFER *ib) +{ + return !(ib->ihdr.flags & 1); +} + +/* Index root structure ( 0x90 ) */ +typedef enum { + NTFS_COLLATION_TYPE_BINARY = cpu_to_le32(0), + NTFS_COLLATION_TYPE_FILENAME = cpu_to_le32(0x01), + // $SII of $Secure / $Q of Quota + NTFS_COLLATION_TYPE_UINT = cpu_to_le32(0x10), + // $O of Quota + NTFS_COLLATION_TYPE_SID = cpu_to_le32(0x11), + // $SDH of $Secure + NTFS_COLLATION_TYPE_SECURITY_HASH = cpu_to_le32(0x12), + // $O of ObjId and "$R" for Reparse + NTFS_COLLATION_TYPE_UINTS = cpu_to_le32(0x13) +} COLLATION_RULE; + +static_assert(sizeof(COLLATION_RULE) == 4); + +// +typedef struct { + ATTR_TYPE type; // 0x00: The type of attribute to index on + COLLATION_RULE rule; // 0x04: The rule + __le32 index_block_size; // 
0x08: The size of index record + u8 index_block_clst; // 0x0C: The number of clusters per index + u8 res[3]; + INDEX_HDR ihdr; // 0x10: + +} INDEX_ROOT; + +static_assert(sizeof(INDEX_ROOT) == 0x20); +static_assert(offsetof(INDEX_ROOT, ihdr) == 0x10); + +#define VOLUME_FLAG_DIRTY cpu_to_le16(0x0001) +#define VOLUME_FLAG_RESIZE_LOG_FILE cpu_to_le16(0x0002) + +typedef struct { + __le64 res1; // 0x00 + u8 major_ver; // 0x08: NTFS major version number (before .) + u8 minor_ver; // 0x09: NTFS minor version number (after .) + __le16 flags; // 0x0A: Volume flags, see VOLUME_FLAG_XXX + +} VOLUME_INFO; // sizeof=0xC + +#define SIZEOF_ATTRIBUTE_VOLUME_INFO 0xc + +#define NTFS_LABEL_MAX_LENGTH (0x100 / sizeof(short)) +#define NTFS_ATTR_INDEXABLE cpu_to_le32(0x00000002) +#define NTFS_ATTR_DUPALLOWED cpu_to_le32(0x00000004) +#define NTFS_ATTR_MUST_BE_INDEXED cpu_to_le32(0x00000010) +#define NTFS_ATTR_MUST_BE_NAMED cpu_to_le32(0x00000020) +#define NTFS_ATTR_MUST_BE_RESIDENT cpu_to_le32(0x00000040) +#define NTFS_ATTR_LOG_ALWAYS cpu_to_le32(0x00000080) + +/* $AttrDef file entry */ +typedef struct { + __le16 name[0x40]; // 0x00: Attr name + ATTR_TYPE type; // 0x80: ATTRIB type + __le32 res; // 0x84: + COLLATION_RULE rule; // 0x88: + __le32 flags; // 0x8C: NTFS_ATTR_XXX (see above) + __le64 min_sz; // 0x90: Minimum attribute data size + __le64 max_sz; // 0x98: Maximum attribute data size +} ATTR_DEF_ENTRY; + +static_assert(sizeof(ATTR_DEF_ENTRY) == 0xa0); + +/* Object ID (0x40) */ +typedef struct { + GUID ObjId; // 0x00: Unique Id assigned to file + GUID BirthVolumeId; // 0x10: Birth Volume Id is the Object Id of the Volume on + // which the Object Id was allocated. It never changes + GUID BirthObjectId; // 0x20: Birth Object Id is the first Object Id that was + // ever assigned to this MFT Record. I.e. If the Object Id + // is changed for some reason, this field will reflect the + // original value of the Object Id. 
+ GUID DomainId; // 0x30: Domain Id is currently unused but it is intended to be + // used in a network environment where the local machine is + // part of a Windows 2000 Domain. This may be used in a Windows + // 2000 Advanced Server managed domain. +} OBJECT_ID; + +static_assert(sizeof(OBJECT_ID) == 0x40); + +/* O Directory entry structure ( rule = 0x13 ) */ +typedef struct { + NTFS_DE de; + // See OBJECT_ID (0x40) for details + GUID ObjId; // 0x10: Unique Id assigned to file + MFT_REF ref; // 0x20: MFT record number with this file + GUID BirthVolumeId; // 0x28: Birth Volume Id is the Object Id of the Volume on + // which the Object Id was allocated. It never changes + GUID BirthObjectId; // 0x38: Birth Object Id is the first Object Id that was + // ever assigned to this MFT Record. I.e. If the Object Id + // is changed for some reason, this field will reflect the + // original value of the Object Id. + // This field is valid if data_size == 0x48 + GUID BirthDomainId; // 0x48: Domain Id is currently unused but it is intended + // to be used in a network environment where the local + // machine is part of a Windows 2000 Domain. This may be + // used in a Windows 2000 Advanced Server managed domain. + + // The last 8 bytes of this structure contains + // the VCN of subnode + // !!! Note !!! 
+ // This field is presented only if (flags & 0x1) + // __le64 SubnodesVCN; +} NTFS_DE_O; + +static_assert(sizeof(NTFS_DE_O) == 0x58); + +#define NTFS_OBJECT_ENTRY_DATA_SIZE1 0x38 // NTFS_DE_O.BirthDomainId is not used +#define NTFS_OBJECT_ENTRY_DATA_SIZE2 0x48 // NTFS_DE_O.BirthDomainId is used + +/* Q Directory entry structure ( rule = 0x11 ) */ +typedef struct { + NTFS_DE de; + __le32 owner_id; // 0x10: Unique Id assigned to file + __le32 Version; // 0x14: 0x02 + __le32 flags2; // 0x18: Quota flags, see above + __le64 BytesUsed; // 0x1C: + __le64 ChangeTime; // 0x24: + __le64 WarningLimit; // 0x28: + __le64 HardLimit; // 0x34: + __le64 ExceededTime; // 0x3C: + + // SID is placed here + + // The last 8 bytes of this structure contains + // the VCN of subnode + // !!! Note !!! + // This field is presented only if (flags & 0x1) + // __le64 SubnodesVCN; + +} NTFS_DE_Q; // __attribute__ ((packed)); // sizeof() = 0x44 + +#define SIZEOF_NTFS_DE_Q 0x44 + +#define SecurityDescriptorsBlockSize 0x40000 // 256K +#define SecurityDescriptorMaxSize 0x20000 // 128K +#define Log2OfSecurityDescriptorsBlockSize 18 + +typedef struct { + __le32 hash; // Hash value for descriptor + __le32 sec_id; // Security Id (guaranteed unique) +} SECURITY_KEY; + +/* Security descriptors (the content of $Secure::SDS data stream) */ +typedef struct { + SECURITY_KEY key; // 0x00: Security Key + __le64 off; // 0x08: Offset of this entry in the file + __le32 size; // 0x10: Size of this entry, 8 byte aligned + // + // Security descriptor itself is placed here + // Total size is 16 byte aligned + // + +} __packed SECURITY_HDR; + +#define SIZEOF_SECURITY_HDR 0x14 + +/* SII Directory entry structure */ +typedef struct { + NTFS_DE de; + __le32 sec_id; // 0x10: Key: sizeof(security_id) = wKeySize + SECURITY_HDR sec_hdr; // 0x14: + +} __packed NTFS_DE_SII; + +#define SIZEOF_SII_DIRENTRY 0x28 + +/* SDH Directory entry structure */ +typedef struct { + NTFS_DE de; + SECURITY_KEY key; // 0x10: Key + SECURITY_HDR 
sec_hdr; // 0x18: Data + __le16 magic[2]; // 0x2C: 0x00490049 "I I" + +} NTFS_DE_SDH; // __attribute__ ((packed)); + +#define SIZEOF_SDH_DIRENTRY 0x30 + +typedef struct { + __le32 ReparseTag; // 0x00: Reparse Tag + MFT_REF ref; // 0x04: MFT record number with this file + +} REPARSE_KEY; // sizeof() = 0x0C + +static_assert(offsetof(REPARSE_KEY, ref) == 0x04); +#define SIZEOF_REPARSE_KEY 0x0C + +/* Reparse Directory entry structure */ +typedef struct { + NTFS_DE de; + REPARSE_KEY Key; // 0x10: Reparse Key (Tag + MFT_REF) + + // The last 8 bytes of this structure contains + // the VCN of subnode + // !!! Note !!! + // This field is presented only if (flags & 0x1) + // __le64 SubnodesVCN; + +} NTFS_DE_R; // sizeof() = 0x1C + +#define SIZEOF_R_DIRENTRY 0x1C + +/* CompressReparseBuffer.WofVersion */ +#define WOF_CURRENT_VERSION cpu_to_le32(1) +/* CompressReparseBuffer.WofProvider */ +#define WOF_PROVIDER_WIM cpu_to_le32(1) +/* CompressReparseBuffer.WofProvider */ +#define WOF_PROVIDER_SYSTEM cpu_to_le32(2) +/* CompressReparseBuffer.ProviderVer */ +#define WOF_PROVIDER_CURRENT_VERSION cpu_to_le32(1) + +#define WOF_COMPRESSION_XPRESS4K 0 // 4k +#define WOF_COMPRESSION_LZX 1 // 32k +#define WOF_COMPRESSION_XPRESS8K 2 // 8k +#define WOF_COMPRESSION_XPRESS16K 3 // 16k + +/* + * ATTR_REPARSE (0xC0) + * + * The reparse GUID structure is used by all 3rd party layered drivers to + * store data in a reparse point. For non-Microsoft tags, The GUID field + * cannot be GUID_NULL. + * The constraints on reparse tags are defined below. + * Microsoft tags can also be used with this format of the reparse point buffer. + */ +typedef struct { + __le32 ReparseTag; // 0x00: + __le16 ReparseDataLength; // 0x04: + __le16 Reserved; + + GUID Guid; // 0x08: + + // + // Here GenericReparseBuffer is placed + // +} REPARSE_POINT; + +static_assert(sizeof(REPARSE_POINT) == 0x18); + +// +// Maximum allowed size of the reparse data. 
+// +#define MAXIMUM_REPARSE_DATA_BUFFER_SIZE (16 * 1024) + +// +// The value of the following constant needs to satisfy the following +// conditions: +// (1) Be at least as large as the largest of the reserved tags. +// (2) Be strictly smaller than all the tags in use. +// +#define IO_REPARSE_TAG_RESERVED_RANGE 1 + +// +// The reparse tags are a ULONG. The 32 bits are laid out as follows: +// +// 3 3 2 2 2 2 2 2 2 2 2 2 1 1 1 1 1 1 1 1 1 1 +// 1 0 9 8 7 6 5 4 3 2 1 0 9 8 7 6 5 4 3 2 1 0 9 8 7 6 5 4 3 2 1 0 +// +-+-+-+-+-----------------------+-------------------------------+ +// |M|R|N|R| Reserved bits | Reparse Tag Value | +// +-+-+-+-+-----------------------+-------------------------------+ +// +// M is the Microsoft bit. When set to 1, it denotes a tag owned by Microsoft. +// All ISVs must use a tag with a 0 in this position. +// Note: If a Microsoft tag is used by non-Microsoft software, the +// behavior is not defined. +// +// R is reserved. Must be zero for non-Microsoft tags. +// +// N is name surrogate. When set to 1, the file represents another named +// entity in the system. +// +// The M and N bits are OR-able. +// The following macros check for the M and N bit values: +// + +// +// Macro to determine whether a reparse point tag corresponds to a tag +// owned by Microsoft. +// +#define IsReparseTagMicrosoft(_tag) (((_tag)&IO_REPARSE_TAG_MICROSOFT)) + +// +// Macro to determine whether a reparse point tag is a name surrogate +// +#define IsReparseTagNameSurrogate(_tag) (((_tag)&IO_REPARSE_TAG_NAME_SURROGATE)) + +// +// The following constant represents the bits that are valid to use in +// reparse tags. +// +#define IO_REPARSE_TAG_VALID_VALUES 0xF000FFFF + +// +// Macro to determine whether a reparse tag is a valid tag. +// +#define IsReparseTagValid(_tag) \ + (!((_tag) & ~IO_REPARSE_TAG_VALID_VALUES) && \ + ((_tag) > IO_REPARSE_TAG_RESERVED_RANGE)) + +// +// Microsoft tags for reparse points. 
+// + +typedef enum { + IO_REPARSE_TAG_SYMBOLIC_LINK = cpu_to_le32(0), + IO_REPARSE_TAG_NAME_SURROGATE = cpu_to_le32(0x20000000), + IO_REPARSE_TAG_MICROSOFT = cpu_to_le32(0x80000000), + IO_REPARSE_TAG_MOUNT_POINT = cpu_to_le32(0xA0000003), + IO_REPARSE_TAG_SYMLINK = cpu_to_le32(0xA000000C), + IO_REPARSE_TAG_HSM = cpu_to_le32(0xC0000004), + IO_REPARSE_TAG_SIS = cpu_to_le32(0x80000007), + IO_REPARSE_TAG_DEDUP = cpu_to_le32(0x80000013), + IO_REPARSE_TAG_COMPRESS = cpu_to_le32(0x80000017), + + // + // The reparse tag 0x80000008 is reserved for Microsoft internal use + // (may be published in the future) + // + + // + // Microsoft reparse tag reserved for DFS + // + IO_REPARSE_TAG_DFS = cpu_to_le32(0x8000000A), + + // + // Microsoft reparse tag reserved for the file system filter manager + // + IO_REPARSE_TAG_FILTER_MANAGER = cpu_to_le32(0x8000000B), + + // + // Non-Microsoft tags for reparse points + // + + // + // Tag allocated to CONGRUENT, May 2000. Used by IFSTEST + // + IO_REPARSE_TAG_IFSTEST_CONGRUENT = cpu_to_le32(0x00000009), + + // + // Tag allocated to ARKIVIO + // + IO_REPARSE_TAG_ARKIVIO = cpu_to_le32(0x0000000C), + + // + // Tag allocated to SOLUTIONSOFT + // + IO_REPARSE_TAG_SOLUTIONSOFT = cpu_to_le32(0x2000000D), + + // + // Tag allocated to COMMVAULT + // + IO_REPARSE_TAG_COMMVAULT = cpu_to_le32(0x0000000E), + + // OneDrive?? 
+ IO_REPARSE_TAG_CLOUD = cpu_to_le32(0x9000001A), + IO_REPARSE_TAG_CLOUD_1 = cpu_to_le32(0x9000101A), + IO_REPARSE_TAG_CLOUD_2 = cpu_to_le32(0x9000201A), + IO_REPARSE_TAG_CLOUD_3 = cpu_to_le32(0x9000301A), + IO_REPARSE_TAG_CLOUD_4 = cpu_to_le32(0x9000401A), + IO_REPARSE_TAG_CLOUD_5 = cpu_to_le32(0x9000501A), + IO_REPARSE_TAG_CLOUD_6 = cpu_to_le32(0x9000601A), + IO_REPARSE_TAG_CLOUD_7 = cpu_to_le32(0x9000701A), + IO_REPARSE_TAG_CLOUD_8 = cpu_to_le32(0x9000801A), + IO_REPARSE_TAG_CLOUD_9 = cpu_to_le32(0x9000901A), + IO_REPARSE_TAG_CLOUD_A = cpu_to_le32(0x9000A01A), + IO_REPARSE_TAG_CLOUD_B = cpu_to_le32(0x9000B01A), + IO_REPARSE_TAG_CLOUD_C = cpu_to_le32(0x9000C01A), + IO_REPARSE_TAG_CLOUD_D = cpu_to_le32(0x9000D01A), + IO_REPARSE_TAG_CLOUD_E = cpu_to_le32(0x9000E01A), + IO_REPARSE_TAG_CLOUD_F = cpu_to_le32(0x9000F01A), + +} NTFS_IO_REPARSE_TAG; + +static_assert(sizeof(NTFS_IO_REPARSE_TAG) == 4); + +/* Microsoft reparse buffer. (see DDK for details) */ +typedef struct { + __le32 ReparseTag; // 0x00: + __le16 ReparseDataLength; // 0x04: + __le16 Reserved; + + union { + // If ReparseTag == 0 + struct { + __le16 SubstituteNameOffset; // 0x08 + __le16 SubstituteNameLength; // 0x0A + __le16 PrintNameOffset; // 0x0C + __le16 PrintNameLength; // 0x0E + __le16 PathBuffer[1]; // 0x10 + } SymbolicLinkReparseBuffer; + + // If ReparseTag == 0xA0000003U + struct { + __le16 SubstituteNameOffset; // 0x08 + __le16 SubstituteNameLength; // 0x0A + __le16 PrintNameOffset; // 0x0C + __le16 PrintNameLength; // 0x0E + __le16 PathBuffer[1]; // 0x10 + } MountPointReparseBuffer; + + // If ReparseTag == IO_REPARSE_TAG_SYMLINK2 (0xA000000CU) + // https://msdn.microsoft.com/en-us/library/cc232006.aspx + struct { + __le16 SubstituteNameOffset; // 0x08 + __le16 SubstituteNameLength; // 0x0A + __le16 PrintNameOffset; // 0x0C + __le16 PrintNameLength; // 0x0E + // 0-absolute path 1- relative path + __le32 Flags; // 0x10 + __le16 PathBuffer[1]; // 0x14 + } SymbolicLink2ReparseBuffer; + + // If 
ReparseTag == 0x80000017U + struct { + __le32 WofVersion; // 0x08 == 1 + /* 1 - WIM backing provider ("WIMBoot"), + * 2 - System compressed file provider + */ + __le32 WofProvider; // 0x0C + __le32 ProviderVer; // 0x10: == 1 WOF_FILE_PROVIDER_CURRENT_VERSION == 1 + __le32 CompressionFormat; // 0x14: 0, 1, 2, 3. See WOF_COMPRESSION_XXX + } CompressReparseBuffer; + + struct { + u8 DataBuffer[1]; // 0x08 + } GenericReparseBuffer; + }; + +} REPARSE_DATA_BUFFER; + +static inline u32 ntfs_reparse_bytes(u32 uni_len) +{ + return sizeof(short) * (2 * uni_len + 4) + + offsetof(REPARSE_DATA_BUFFER, + SymbolicLink2ReparseBuffer.PathBuffer); +} + +/* ATTR_EA_INFO (0xD0) */ + +#define FILE_NEED_EA 0x80 // See ntifs.h +/* FILE_NEED_EA, indicates that the file to which the EA belongs cannot be + * interpreted without understanding the associated extended attributes. + */ +typedef struct { + __le16 size_pack; // 0x00: Size of buffer to hold in packed form + __le16 count; // 0x02: Count of EA's with FILE_NEED_EA bit set + __le32 size; // 0x04: Size of buffer to hold in unpacked form +} EA_INFO; + +static_assert(sizeof(EA_INFO) == 8); + +/* ATTR_EA (0xE0) */ +typedef struct { + __le32 size; // 0x00: (not in packed) + u8 flags; // 0x04 + u8 name_len; // 0x05 + __le16 elength; // 0x06 + u8 name[1]; // 0x08 + +} EA_FULL; + +static_assert(offsetof(EA_FULL, name) == 8); + +#define MAX_EA_DATA_SIZE (256 * 1024) diff --git a/fs/ntfs3/ntfs_fs.h b/fs/ntfs3/ntfs_fs.h new file mode 100644 index 000000000000..0024fcad3b89 --- /dev/null +++ b/fs/ntfs3/ntfs_fs.h @@ -0,0 +1,965 @@ +/* SPDX-License-Identifier: GPL-2.0 */ +/* + * linux/fs/ntfs3/ntfs_fs.h + * + * Copyright (C) 2019-2020 Paragon Software GmbH, All rights reserved. 
+ * + */ + +/* "true" when [s,s+c) intersects with [l,l+w) */ +#define IS_IN_RANGE(s, c, l, w) \ + (((c) > 0 && (w) > 0) && \ + (((l) <= (s) && (s) < ((l) + (w))) || \ + ((s) <= (l) && ((s) + (c)) >= ((l) + (w))) || \ + ((l) < ((s) + (c)) && ((s) + (c)) < ((l) + (w))))) + +/* "true" when [s,se) intersects with [l,le) */ +#define IS_IN_RANGE2(s, se, l, le) \ + (((se) > (s) && (le) > (l)) && \ + (((l) <= (s) && (s) < (le)) || ((s) <= (l) && (se) >= (le)) || \ + ((l) < (se) && (se) < (le)))) + +#define MINUS_ONE_T ((size_t)(-1)) +/* Biggest MFT / smallest cluster */ +#define MAXIMUM_BYTES_PER_MFT 4096 // ?? +#define NTFS_BLOCKS_PER_MFT_RECORD (MAXIMUM_BYTES_PER_MFT / 512) + +#define MAXIMUM_BYTES_PER_INDEX 4096 // ?? +#define NTFS_BLOCKS_PER_INODE (MAXIMUM_BYTES_PER_INDEX / 512) + +typedef struct ntfs_inode ntfs_inode; +typedef struct ntfs_sb_info ntfs_sb_info; +struct lznt; + +struct mount_options { + kuid_t fs_uid; + kgid_t fs_gid; + u16 fs_fmask; + u16 fs_dmask; + unsigned quiet : 1, /* set = fake successful chmods and chowns */ + sys_immutable : 1, /* set = system files are immutable */ + discard : 1, /* Issue discard requests on deletions */ + uid : 1, /* uid was set */ + gid : 1, /* gid was set */ + fmask : 1, /* fmask was set */ + dmask : 1, /* dmask was set */ + sparse : 1, /* create sparse files */ + showmeta : 1, /* show meta files */ + nohidden : 1, /* do not show hidden files */ + acl : 1, /* create acl */ + force : 1, /* rw mount dirty volume */ + no_acs_rules : 1 /* exclude acs rules */ + ; +}; + +struct ntfs_run; + +/* TODO: use rb tree instead of array */ +struct runs_tree { + struct ntfs_run *runs_; + size_t count; // Currently used size of the ntfs_run storage. + size_t allocated; // Currently allocated ntfs_run storage size.
+}; + +struct ntfs_buffers { + /* Biggest MFT / smallest cluster = 4096 / 512 = 8 */ + /* Biggest index / smallest cluster = 4096 / 512 = 8 */ + struct buffer_head *bh[PAGE_SIZE >> SECTOR_SHIFT]; + u32 bytes; + u32 nbufs; + u32 off; +}; + +#define NTFS_FLAGS_NODISCARD 0x00000001 +#define NTFS_FLAGS_NEED_REPLAY 0x04000000 + +enum ALLOCATE_OPT { + ALLOCATE_DEF = 0, // Allocate all clusters + ALLOCATE_MFT = 1, // Allocate for MFT +}; + +enum bitmap_mutex_classes { + BITMAP_MUTEX_CLUSTERS = 0, + BITMAP_MUTEX_MFT = 1, +}; + +typedef struct { + struct super_block *sb; + struct rw_semaphore rw_lock; + + struct runs_tree run; + size_t nbits; + + u16 free_holder[8]; // holder for free_bits + + size_t total_zeroes; // total number of free bits + u16 *free_bits; // free bits in each window + size_t nwnd; + u32 bits_last; // bits in last window + + struct rb_root start_tree; // extents, sorted by 'start' + struct rb_root count_tree; // extents, sorted by 'count + start' + size_t count; // extents count + int uptodated; // -1 Tree is activated but not updated (too many fragments) + // 0 - Tree is not activated + // 1 - Tree is activated and updated + size_t extent_min; // Minimal extent used while building + size_t extent_max; // Upper estimate of biggest free block + + bool set_tail; // not necessary in driver + bool inited; + + /* Zone [bit, end) */ + size_t zone_bit; + size_t zone_end; + +} wnd_bitmap; + +typedef int (*NTFS_CMP_FUNC)(const void *key1, size_t len1, const void *key2, + size_t len2, const void *param); + +enum index_mutex_classed { + INDEX_MUTEX_I30 = 0, + INDEX_MUTEX_SII = 1, + INDEX_MUTEX_SDH = 2, + INDEX_MUTEX_SO = 3, + INDEX_MUTEX_SQ = 4, + INDEX_MUTEX_SR = 5, + INDEX_MUTEX_TOTAL +}; + +/* This struct works with indexes */ +typedef struct { + struct runs_tree bitmap_run; + struct runs_tree alloc_run; + + /*TODO: remove 'cmp'*/ + NTFS_CMP_FUNC cmp; + + u8 index_bits; // log2(root->index_block_size) + u8 idx2vbn_bits; // log2(root->index_block_clst) + u8 
vbn2vbo_bits; // index_block_size < cluster? 9 : cluster_bits + u8 changed; // set when tree is changed + u8 type; // index_mutex_classed + +} ntfs_index; + +/* Set when $LogFile is replaying */ +#define NTFS_FLAGS_LOG_REPLAING 0x00000008 + +/* Set when we changed first MFT's which copy must be updated in $MftMirr */ +#define NTFS_FLAGS_MFTMIRR 0x00001000 + +/* Minimum mft zone */ +#define NTFS_MIN_MFT_ZONE 100 + +struct COMPRESS_CTX { + u64 chunk_num; // Number of chunk cmpr_buffer/unc_buffer + u64 first_chunk, last_chunk, total_chunks; + u64 chunk0_off; + void *ctx; + u8 *cmpr_buffer; + u8 *unc_buffer; + void *chunk_off_mem; + size_t chunk_off; + u32 *chunk_off32; // pointer inside ChunkOffsetsMem + u64 *chunk_off64; // pointer inside ChunkOffsetsMem + u32 compress_format; + u32 offset_bits; + u32 chunk_bits; + u32 chunk_size; +}; + +/* ntfs file system in-core superblock data */ +typedef struct ntfs_sb_info { + struct super_block *sb; + + u32 discard_granularity; + u64 discard_granularity_mask_inv; // ~(discard_granularity_mask_inv-1) + + u32 cluster_size; // bytes per cluster + u32 cluster_mask; // == cluster_size - 1 + u64 cluster_mask_inv; // ~(cluster_size - 1) + u32 block_mask; // sb->s_blocksize - 1 + u32 blocks_per_cluster; // cluster_size / sb->s_blocksize + + u32 record_size; + u32 sector_size; + u32 index_size; + + u8 sector_bits; + u8 cluster_bits; + u8 record_bits; + + u64 maxbytes; // Maximum size for normal files + u64 maxbytes_sparse; // Maximum size for sparse file + + u32 flags; // See NTFS_FLAGS_XXX + + CLST bad_clusters; // The count of marked bad clusters + + u16 max_bytes_per_attr; // maximum attribute size in record + u16 attr_size_tr; // attribute size threshold (320 bytes) + + /* Records in $Extend */ + CLST objid_no; + CLST quota_no; + CLST reparse_no; + CLST usn_jrnl_no; + + ATTR_DEF_ENTRY *def_table; // attribute definition table + u32 def_entries; + + MFT_REC *new_rec; + + u16 *upcase; + + struct nls_table *nls; + + struct { + u64 
lbo, lbo2; + ntfs_inode *ni; + wnd_bitmap bitmap; // $MFT::Bitmap + ulong reserved_bitmap; + size_t next_free; // The next record to allocate from + size_t used; + u32 recs_mirr; // Number of records MFTMirr + u8 next_reserved; + u8 reserved_bitmap_inited; + } mft; + + struct { + wnd_bitmap bitmap; // $Bitmap::Data + CLST next_free_lcn; + } used; + + struct { + u64 size; // in bytes + u64 blocks; // in blocks + u64 ser_num; + ntfs_inode *ni; + __le16 flags; // see VOLUME_FLAG_XXX + u8 major_ver; + u8 minor_ver; + char label[65]; + bool real_dirty; /* real fs state*/ + } volume; + + struct { + ntfs_index index_sii; + ntfs_index index_sdh; + ntfs_inode *ni; + u32 next_id; + u64 next_off; + + __le32 def_file_id; + __le32 def_dir_id; + } security; + + struct { + ntfs_index index_r; + ntfs_inode *ni; + u64 max_size; // 16K + } reparse; + + struct { + ntfs_index index_o; + ntfs_inode *ni; + } objid; + + struct { + /*protect 'frame_unc' and 'ctx'*/ + spinlock_t lock; + u8 *frame_unc; + struct lznt *ctx; + } compress; + + struct mount_options options; + struct ratelimit_state ratelimit; + +} ntfs_sb_info; + +typedef struct { + struct rb_node node; + ntfs_sb_info *sbi; + + CLST rno; + MFT_REC *mrec; + struct ntfs_buffers nb; + + bool dirty; +} mft_inode; + +#define NI_FLAG_DIR 0x00000001 +#define NI_FLAG_RESIDENT 0x00000002 +#define NI_FLAG_UPDATE_PARENT 0x00000004 + +/* Data attribute is compressed special way */ +#define NI_FLAG_COMPRESSED_MASK 0x00000f00 // +/* Data attribute is deduplicated */ +#define NI_FLAG_DEDUPLICATED 0x00001000 +#define NI_FLAG_EA 0x00002000 + +/* ntfs file system inode data memory */ +typedef struct ntfs_inode { + mft_inode mi; // base record + + loff_t i_valid; /* valid size */ + struct timespec64 i_crtime; + + struct mutex ni_lock; + + /* file attributes from std */ + FILE_ATTRIBUTE std_fa; + __le32 std_security_id; + + // subrecords tree + struct rb_root mi_tree; + + union { + ntfs_index dir; + struct { + struct rw_semaphore run_lock; + struct 
runs_tree run; + } file; + }; + + struct { + struct runs_tree run; + void *le; // 1K aligned memory + size_t size; + bool dirty; + } attr_list; + + size_t ni_flags; // NI_FLAG_XXX + + struct inode vfs_inode; +} ntfs_inode; + +struct indx_node { + struct ntfs_buffers nb; + INDEX_BUFFER *index; +}; + +struct ntfs_fnd { + int level; + struct indx_node *nodes[20]; + NTFS_DE *de[20]; + NTFS_DE *root_de; +}; + +enum REPARSE_SIGN { + REPARSE_NONE = 0, + REPARSE_COMPRESSED = 1, + REPARSE_DEDUPLICATED = 2, + REPARSE_LINK = 3 +}; + +/* functions from attrib.c*/ +int attr_load_runs(ATTRIB *attr, ntfs_inode *ni, struct runs_tree *run); +int attr_allocate_clusters(ntfs_sb_info *sbi, struct runs_tree *run, CLST vcn, + CLST lcn, CLST len, CLST *pre_alloc, + enum ALLOCATE_OPT opt, CLST *alen, const size_t fr, + CLST *new_lcn); +int attr_set_size(ntfs_inode *ni, ATTR_TYPE type, const __le16 *name, + u8 name_len, struct runs_tree *run, u64 new_size, + const u64 *new_valid, bool keep_prealloc, ATTRIB **ret); +int attr_data_get_block(ntfs_inode *ni, CLST vcn, CLST *lcn, CLST *len, + bool *new); +int attr_load_runs_vcn(ntfs_inode *ni, ATTR_TYPE type, const __le16 *name, + u8 name_len, struct runs_tree *run, CLST vcn); +int attr_is_frame_compressed(ntfs_inode *ni, ATTRIB *attr, CLST frame, + CLST *clst_data, bool *is_compr); +int attr_allocate_frame(ntfs_inode *ni, CLST frame, size_t compr_size, + u64 new_valid); + +/* functions from attrlist.c*/ +void al_destroy(ntfs_inode *ni); +bool al_verify(ntfs_inode *ni); +int ntfs_load_attr_list(ntfs_inode *ni, ATTRIB *attr); +ATTR_LIST_ENTRY *al_enumerate(ntfs_inode *ni, ATTR_LIST_ENTRY *le); +ATTR_LIST_ENTRY *al_find_le(ntfs_inode *ni, ATTR_LIST_ENTRY *le, + const ATTRIB *attr); +ATTR_LIST_ENTRY *al_find_ex(ntfs_inode *ni, ATTR_LIST_ENTRY *le, ATTR_TYPE type, + const __le16 *name, u8 name_len, const CLST *vcn); +int al_add_le(ntfs_inode *ni, ATTR_TYPE type, const __le16 *name, u8 name_len, + CLST svcn, __le16 id, const MFT_REF *ref, + 
ATTR_LIST_ENTRY **new_le); +bool al_remove_le(ntfs_inode *ni, ATTR_LIST_ENTRY *le); +bool al_delete_le(ntfs_inode *ni, ATTR_TYPE type, CLST vcn, const __le16 *name, + size_t name_len, const MFT_REF *ref); +int al_update(ntfs_inode *ni); +static inline size_t al_aligned(size_t size) +{ + return (size + 1023) & ~(size_t)1023; +} + +/* globals from bitfunc.c */ +bool are_bits_clear(const ulong *map, size_t bit, size_t nbits); +bool are_bits_set(const ulong *map, size_t bit, size_t nbits); +size_t get_set_bits_ex(const ulong *map, size_t bit, size_t nbits); + +/* globals from dir.c */ +int uni_to_x8(ntfs_sb_info *sbi, const struct le_str *uni, u8 *buf, + int buf_len); +int x8_to_uni(ntfs_sb_info *sbi, const u8 *name, u32 name_len, + struct cpu_str *uni, u32 max_ulen, enum utf16_endian endian); +struct inode *dir_search_u(struct inode *dir, const struct cpu_str *uni, + struct ntfs_fnd *fnd); +struct inode *dir_search(struct inode *dir, const struct qstr *name, + struct ntfs_fnd *fnd); +bool dir_is_empty(struct inode *dir); +extern const struct file_operations ntfs_dir_operations; + +/* globals from file.c*/ +int ntfs_getattr(const struct path *path, struct kstat *stat, u32 request_mask, + u32 flags); +void ntfs_sparse_cluster(struct inode *inode, struct page *page0, loff_t vbo, + u32 bytes); +int ntfs_file_fsync(struct file *filp, loff_t start, loff_t end, int datasync); +void ntfs_truncate_blocks(struct inode *inode, loff_t offset); +int ntfs_setattr(struct dentry *dentry, struct iattr *attr); +int ntfs_file_open(struct inode *inode, struct file *file); +extern const struct inode_operations ntfs_special_inode_operations; +extern const struct inode_operations ntfs_file_inode_operations; +extern const struct file_operations ntfs_file_operations; + +/* globals from frecord.c */ +void ni_remove_mi(ntfs_inode *ni, mft_inode *mi); +ATTR_STD_INFO *ni_std(ntfs_inode *ni); +void ni_clear(ntfs_inode *ni); +int ni_load_mi_ex(ntfs_inode *ni, CLST rno, mft_inode **mi); +int 
ni_load_mi(ntfs_inode *ni, ATTR_LIST_ENTRY *le, mft_inode **mi); +ATTRIB *ni_find_attr(ntfs_inode *ni, ATTRIB *attr, ATTR_LIST_ENTRY **entry_o, + ATTR_TYPE type, const __le16 *name, u8 name_len, + const CLST *vcn, mft_inode **mi); +ATTRIB *ni_enum_attr_ex(ntfs_inode *ni, ATTRIB *attr, ATTR_LIST_ENTRY **le); +ATTRIB *ni_load_attr(ntfs_inode *ni, ATTR_TYPE type, const __le16 *name, + u8 name_len, CLST vcn, mft_inode **pmi); +int ni_load_all_mi(ntfs_inode *ni); +bool ni_add_subrecord(ntfs_inode *ni, CLST rno, mft_inode **mi); +int ni_remove_attr(ntfs_inode *ni, ATTR_TYPE type, const __le16 *name, + size_t name_len, bool base_only, const __le16 *id); +int ni_create_attr_list(ntfs_inode *ni); +int ni_expand_list(ntfs_inode *ni); +int ni_insert_nonresident(ntfs_inode *ni, ATTR_TYPE type, const __le16 *name, + u8 name_len, const struct runs_tree *run, CLST svcn, + CLST len, __le16 flags, ATTRIB **new_attr, + mft_inode **mi); +int ni_insert_resident(ntfs_inode *ni, u32 data_size, ATTR_TYPE type, + const __le16 *name, u8 name_len, ATTRIB **new_attr, + mft_inode **mi); +int ni_remove_attr_le(ntfs_inode *ni, ATTRIB *attr, ATTR_LIST_ENTRY *le); +int ni_delete_all(ntfs_inode *ni); +ATTR_FILE_NAME *ni_fname_name(ntfs_inode *ni, const struct cpu_str *uni, + const MFT_REF *home, ATTR_LIST_ENTRY **entry); +ATTR_FILE_NAME *ni_fname_type(ntfs_inode *ni, u8 name_type, + ATTR_LIST_ENTRY **entry); +u16 ni_fnames_count(ntfs_inode *ni); +int ni_init_compress(ntfs_inode *ni, struct COMPRESS_CTX *ctx); +enum REPARSE_SIGN ni_parse_reparse(ntfs_inode *ni, ATTRIB *attr, void *buffer); +int ni_write_inode(struct inode *inode, int sync, const char *hint); +#define _ni_write_inode(i, w) ni_write_inode(i, w, __func__) + +/* globals from compress.c */ +int ni_readpage_cmpr(ntfs_inode *ni, struct page *page); +int ni_writepage_cmpr(struct page *page, int sync); + +/* globals from fslog.c */ +int log_replay(ntfs_inode *ni); + +/* globals from fsntfs.c */ +bool ntfs_fix_pre_write(NTFS_RECORD_HEADER 
*rhdr, size_t bytes); +int ntfs_fix_post_read(NTFS_RECORD_HEADER *rhdr, size_t bytes, bool simple); +int ntfs_extend_init(ntfs_sb_info *sbi); +int ntfs_loadlog_and_replay(ntfs_inode *ni, ntfs_sb_info *sbi); +const ATTR_DEF_ENTRY *ntfs_query_def(ntfs_sb_info *sbi, ATTR_TYPE Type); +int ntfs_look_for_free_space(ntfs_sb_info *sbi, CLST lcn, CLST len, + CLST *new_lcn, CLST *new_len, + enum ALLOCATE_OPT opt); +int ntfs_look_free_mft(ntfs_sb_info *sbi, CLST *rno, bool mft, ntfs_inode *ni, + mft_inode **mi); +void ntfs_mark_rec_free(ntfs_sb_info *sbi, CLST nRecord); +int ntfs_clear_mft_tail(ntfs_sb_info *sbi, size_t from, size_t to); +int ntfs_refresh_zone(ntfs_sb_info *sbi); +int ntfs_update_mftmirr(ntfs_sb_info *sbi, int wait); +enum NTFS_DIRTY_FLAGS { + NTFS_DIRTY_DIRTY = 0, + NTFS_DIRTY_CLEAR = 1, + NTFS_DIRTY_ERROR = 2, +}; +int ntfs_set_state(ntfs_sb_info *sbi, enum NTFS_DIRTY_FLAGS dirty); +int ntfs_sb_read(struct super_block *sb, u64 lbo, size_t bytes, void *buffer); +int ntfs_sb_write(struct super_block *sb, u64 lbo, size_t bytes, + const void *buffer, int wait); +int ntfs_sb_write_run(ntfs_sb_info *sbi, struct runs_tree *run, u64 vbo, + const void *buf, size_t bytes); +struct buffer_head *ntfs_bread_run(ntfs_sb_info *sbi, struct runs_tree *run, + u64 vbo); +int ntfs_read_run_nb(ntfs_sb_info *sbi, struct runs_tree *run, u64 vbo, + void *buf, u32 bytes, struct ntfs_buffers *nb); +int ntfs_read_bh_ex(ntfs_sb_info *sbi, struct runs_tree *run, u64 vbo, + NTFS_RECORD_HEADER *rhdr, u32 bytes, + struct ntfs_buffers *nb); +int ntfs_get_bh(ntfs_sb_info *sbi, struct runs_tree *run, u64 vbo, u32 bytes, + struct ntfs_buffers *nb); +int ntfs_write_bh_ex(ntfs_sb_info *sbi, NTFS_RECORD_HEADER *rhdr, + struct ntfs_buffers *nb, int sync); +int ntfs_vbo_to_pbo(ntfs_sb_info *sbi, struct runs_tree *run, u64 vbo, u64 *pbo, + u64 *bytes); +ntfs_inode *ntfs_new_inode(ntfs_sb_info *sbi, CLST nRec, bool dir); +extern const u8 s_dir_security[0x50]; +extern const u8 s_file_security[0x58]; 
+int ntfs_security_init(ntfs_sb_info *sbi); +int ntfs_get_security_by_id(ntfs_sb_info *sbi, u32 security_id, void **sd, + size_t *size); +int ntfs_insert_security(ntfs_sb_info *sbi, const void *sd, u32 size, + __le32 *security_id, bool *inserted); +int ntfs_reparse_init(ntfs_sb_info *sbi); +int ntfs_objid_init(ntfs_sb_info *sbi); +int ntfs_objid_remove(ntfs_sb_info *sbi, GUID *guid); +int ntfs_insert_reparse(ntfs_sb_info *sbi, __le32 rtag, const MFT_REF *ref); +int ntfs_remove_reparse(ntfs_sb_info *sbi, __le32 rtag, const MFT_REF *ref); +void mark_as_free_ex(ntfs_sb_info *sbi, CLST lcn, CLST len, bool trim); +int run_deallocate(ntfs_sb_info *sbi, struct runs_tree *run, bool trim); + +/* globals from index.c */ +int indx_used_bit(ntfs_index *indx, ntfs_inode *ni, size_t *bit); +void fnd_clear(struct ntfs_fnd *fnd); +struct ntfs_fnd *fnd_get(ntfs_index *indx); +void fnd_put(struct ntfs_fnd *fnd); +void indx_clear(ntfs_index *idx); +int indx_init(ntfs_index *indx, ntfs_sb_info *sbi, const ATTRIB *attr, + enum index_mutex_classed type); +INDEX_ROOT *indx_get_root(ntfs_index *indx, ntfs_inode *ni, ATTRIB **attr, + mft_inode **mi); +int indx_read(ntfs_index *idx, ntfs_inode *ni, CLST vbn, + struct indx_node **node); +int indx_find(ntfs_index *indx, ntfs_inode *dir, const INDEX_ROOT *root, + const void *Key, size_t KeyLen, const void *param, int *diff, + NTFS_DE **entry, struct ntfs_fnd *fnd); +int indx_find_sort(ntfs_index *indx, ntfs_inode *ni, const INDEX_ROOT *root, + NTFS_DE **entry, struct ntfs_fnd *fnd); +int indx_find_raw(ntfs_index *indx, ntfs_inode *ni, const INDEX_ROOT *root, + NTFS_DE **entry, size_t *off, struct ntfs_fnd *fnd); +int indx_insert_entry(ntfs_index *indx, ntfs_inode *ni, const NTFS_DE *new_de, + const void *param, struct ntfs_fnd *fnd); +int indx_delete_entry(ntfs_index *indx, ntfs_inode *ni, const void *key, + u32 key_len, const void *param); +int indx_update_dup(ntfs_inode *ni, ntfs_sb_info *sbi, + const ATTR_FILE_NAME *fname, const 
NTFS_DUP_INFO *dup, + int sync); + +/* globals from inode.c */ +struct inode *ntfs_iget5(struct super_block *sb, const MFT_REF *ref, + const struct cpu_str *name); +int ntfs_set_size(struct inode *inode, u64 new_size); +int reset_log_file(struct inode *inode); +int ntfs_get_block(struct inode *inode, sector_t vbn, + struct buffer_head *bh_result, int create); +int ntfs_write_inode(struct inode *inode, struct writeback_control *wbc); +int ntfs_sync_inode(struct inode *inode); +int ntfs_flush_inodes(struct super_block *sb, struct inode *i1, + struct inode *i2); +int inode_write_data(struct inode *inode, const void *data, size_t bytes); +int ntfs_create_inode(struct inode *dir, struct dentry *dentry, + struct file *file, umode_t mode, dev_t dev, + const char *symname, unsigned int size, int excl, + struct ntfs_fnd *fnd, struct inode **new_inode); +int ntfs_link_inode(struct inode *inode, struct dentry *dentry); +int ntfs_unlink_inode(struct inode *dir, const struct dentry *dentry); +void ntfs_evict_inode(struct inode *inode); +int ntfs_readpage(struct file *file, struct page *page); +extern const struct inode_operations ntfs_link_inode_operations; +extern const struct address_space_operations ntfs_aops; +extern const struct address_space_operations ntfs_aops_cmpr; + +/* globals from name_i.c*/ +int fill_name_de(ntfs_sb_info *sbi, void *buf, const struct qstr *name); +struct dentry *ntfs_get_parent(struct dentry *child); + +extern const struct inode_operations ntfs_dir_inode_operations; + +/* globals from record.c */ +int mi_get(ntfs_sb_info *sbi, CLST rno, mft_inode **mi); +void mi_put(mft_inode *mi); +int mi_init(mft_inode *mi, ntfs_sb_info *sbi, CLST rno); +int mi_read(mft_inode *mi, bool is_mft); +ATTRIB *mi_enum_attr(mft_inode *mi, ATTRIB *attr); +// TODO: id? 
+ATTRIB *mi_find_attr(mft_inode *mi, ATTRIB *attr, ATTR_TYPE type, + const __le16 *name, size_t name_len, const __le16 *id); +static inline ATTRIB *rec_find_attr_le(mft_inode *rec, ATTR_LIST_ENTRY *le) +{ + return mi_find_attr(rec, NULL, le->type, le_name(le), le->name_len, + &le->id); +} +int mi_write(mft_inode *mi, int wait); +int mi_format_new(mft_inode *mi, ntfs_sb_info *sbi, CLST rno, __le16 flags, + bool is_mft); +void mi_mark_free(mft_inode *mi); +ATTRIB *mi_insert_attr(mft_inode *mi, ATTR_TYPE type, const __le16 *name, + u8 name_len, u32 asize, u16 name_off); + +bool mi_remove_attr(mft_inode *mi, ATTRIB *attr); +bool mi_resize_attr(mft_inode *mi, ATTRIB *attr, int bytes); +int mi_pack_runs(mft_inode *mi, ATTRIB *attr, struct runs_tree *run, CLST len); +static inline bool mi_is_ref(const mft_inode *mi, const MFT_REF *ref) +{ + if (le32_to_cpu(ref->low) != mi->rno) + return false; + if (ref->seq != mi->mrec->seq) + return false; + +#ifdef NTFS3_64BIT_CLUSTER + return le16_to_cpu(ref->high) == (mi->rno >> 32); +#else + return !ref->high; +#endif +} + +/* globals from run.c */ +bool run_lookup_entry(const struct runs_tree *run, CLST vcn, CLST *lcn, + CLST *len, size_t *index); +void run_truncate(struct runs_tree *run, CLST vcn); +void run_truncate_head(struct runs_tree *run, CLST vcn); +bool run_lookup(const struct runs_tree *run, CLST Vcn, size_t *Index); +bool run_add_entry(struct runs_tree *run, CLST vcn, CLST lcn, CLST len); +bool run_get_entry(const struct runs_tree *run, size_t index, CLST *vcn, + CLST *lcn, CLST *len); +bool run_is_mapped_full(const struct runs_tree *run, CLST svcn, CLST evcn); + +int run_pack(const struct runs_tree *run, CLST svcn, CLST len, u8 *run_buf, + u32 run_buf_size, CLST *packed_vcns); +int run_unpack(struct runs_tree *run, ntfs_sb_info *sbi, CLST ino, CLST svcn, + CLST evcn, const u8 *run_buf, u32 run_buf_size); + +#ifdef NTFS3_CHECK_FREE_CLST +int run_unpack_ex(struct runs_tree *run, ntfs_sb_info *sbi, CLST ino, CLST svcn, + 
CLST evcn, const u8 *run_buf, u32 run_buf_size); +#else +#define run_unpack_ex run_unpack +#endif +int run_get_highest_vcn(CLST vcn, const u8 *run_buf, u64 *highest_vcn); + +/* globals from super.c */ +void *ntfs_set_shared(void *ptr, u32 bytes); +void *ntfs_put_shared(void *ptr); +void ntfs_unmap_meta(struct super_block *sb, CLST lcn, CLST len); +int ntfs_discard(ntfs_sb_info *sbi, CLST Lcn, CLST Len); + +/* globals from ubitmap.c*/ +void wnd_close(wnd_bitmap *wnd); +static inline size_t wnd_zeroes(const wnd_bitmap *wnd) +{ + return wnd->total_zeroes; +} +void wnd_trace(wnd_bitmap *wnd); +void wnd_trace_tree(wnd_bitmap *wnd, u32 nExtents, const char *Hint); +int wnd_init(wnd_bitmap *wnd, struct super_block *sb, size_t nBits); +int wnd_set_free(wnd_bitmap *wnd, size_t FirstBit, size_t Bits); +int wnd_set_used(wnd_bitmap *wnd, size_t FirstBit, size_t Bits); +bool wnd_is_free(wnd_bitmap *wnd, size_t FirstBit, size_t Bits); +bool wnd_is_used(wnd_bitmap *wnd, size_t FirstBit, size_t Bits); + +/* Possible values for 'flags' 'wnd_find' */ +#define BITMAP_FIND_MARK_AS_USED 0x01 +#define BITMAP_FIND_FULL 0x02 +size_t wnd_find(wnd_bitmap *wnd, size_t to_alloc, size_t hint, size_t flags, + size_t *allocated); +int wnd_extend(wnd_bitmap *wnd, size_t new_bits); +void wnd_zone_set(wnd_bitmap *wnd, size_t Lcn, size_t Len); +int ntfs_trim_fs(ntfs_sb_info *sbi, struct fstrim_range *range); + +/* globals from upcase.c */ +int ntfs_cmp_names(const __le16 *s1, size_t l1, const __le16 *s2, size_t l2, + const u16 *upcase); +int ntfs_cmp_names_cpu(const struct cpu_str *uni1, const struct le_str *uni2, + const u16 *upcase); + +/* globals from xattr.c */ +struct posix_acl *ntfs_get_acl(struct inode *inode, int type); +int ntfs_set_acl(struct inode *inode, struct posix_acl *acl, int type); +int ntfs_acl_chmod(struct inode *inode); +int ntfs_permission(struct inode *inode, int mask); +ssize_t ntfs_listxattr(struct dentry *dentry, char *buffer, size_t size); +extern const struct 
xattr_handler *ntfs_xattr_handlers[]; + +/* globals from lznt.c */ +struct lznt *get_compression_ctx(bool std); +size_t compress_lznt(const void *uncompressed, size_t uncompressed_size, + void *compressed, size_t compressed_size, + struct lznt *ctx); +ssize_t decompress_lznt(const void *compressed, size_t compressed_size, + void *uncompressed, size_t uncompressed_size); + +char *attr_str(const ATTRIB *attr, char *buf, size_t buf_len); + +static inline bool is_nt5(ntfs_sb_info *sbi) +{ + return sbi->volume.major_ver >= 3; +} + +/*(sb->s_flags & SB_ACTIVE)*/ +static inline bool is_mounted(ntfs_sb_info *sbi) +{ + return !!sbi->sb->s_root; +} + +static inline bool ntfs_is_meta_file(ntfs_sb_info *sbi, CLST rno) +{ + return rno < MFT_REC_FREE || rno == sbi->objid_no || + rno == sbi->quota_no || rno == sbi->reparse_no || + rno == sbi->usn_jrnl_no; +} + +static inline void ntfs_unmap_page(struct page *page) +{ + kunmap(page); + put_page(page); +} + +static inline struct page *ntfs_map_page(struct address_space *mapping, + unsigned long index) +{ + struct page *page = read_mapping_page(mapping, index, NULL); + + if (!IS_ERR(page)) { + kmap(page); + if (!PageError(page)) + return page; + ntfs_unmap_page(page); + return ERR_PTR(-EIO); + } + return page; +} + +static inline size_t wnd_zone_bit(const wnd_bitmap *wnd) +{ + return wnd->zone_bit; +} + +static inline size_t wnd_zone_len(const wnd_bitmap *wnd) +{ + return wnd->zone_end - wnd->zone_bit; +} + +static inline void run_init(struct runs_tree *run) +{ + run->runs_ = NULL; + run->count = 0; + run->allocated = 0; +} + +static inline struct runs_tree *run_alloc(void) +{ + return ntfs_alloc(sizeof(struct runs_tree), 1); +} + +static inline void run_close(struct runs_tree *run) +{ + ntfs_free(run->runs_); + memset(run, 0, sizeof(*run)); +} + +static inline void run_free(struct runs_tree *run) +{ + if (run) { + ntfs_free(run->runs_); + ntfs_free(run); + } +} + +static inline bool run_is_empty(struct runs_tree *run) +{ + return 
!run->count; +} + +/* NTFS uses quad aligned bitmaps */ +static inline size_t bitmap_size(size_t bits) +{ + return QuadAlign((bits + 7) >> 3); +} + +#define _100ns2seconds 10000000 +#define SecondsToStartOf1970 0x00000002B6109100 + +#define NTFS_TIME_GRAN 100 + +/* + * kernel2nt + * + * converts in-memory kernel timestamp into nt time + */ +static inline __le64 kernel2nt(const struct timespec64 *ts) +{ + // 10^7 units of 100 nanoseconds one second + return cpu_to_le64(_100ns2seconds * + (ts->tv_sec + SecondsToStartOf1970) + + ts->tv_nsec / NTFS_TIME_GRAN); +} + +/* + * nt2kernel + * + * converts on-disk nt time into kernel timestamp + */ +static inline void nt2kernel(const __le64 tm, struct timespec64 *ts) +{ + u64 t = le64_to_cpu(tm) - _100ns2seconds * SecondsToStartOf1970; + + // WARNING: do_div changes its first argument(!) + ts->tv_nsec = do_div(t, _100ns2seconds) * 100; + ts->tv_sec = t; +} + +static inline ntfs_sb_info *ntfs_sb(struct super_block *sb) +{ + return sb->s_fs_info; +} + +/* Align up on cluster boundary */ +static inline u64 ntfs_up_cluster(const ntfs_sb_info *sbi, u64 size) +{ + return (size + sbi->cluster_mask) & ~((u64)sbi->cluster_mask); +} + +/* Align up on cluster boundary */ +static inline u64 ntfs_up_block(const struct super_block *sb, u64 size) +{ + return (size + sb->s_blocksize - 1) & ~(u64)(sb->s_blocksize - 1); +} + +static inline CLST bytes_to_cluster(const ntfs_sb_info *sbi, u64 size) +{ + return (size + sbi->cluster_mask) >> sbi->cluster_bits; +} + +static inline u64 bytes_to_block(const struct super_block *sb, u64 size) +{ + return (size + sb->s_blocksize - 1) >> sb->s_blocksize_bits; +} + +/* calculates ((bytes + frame_size - 1)/frame_size)*frame_size; */ +static inline u64 ntfs_up_frame(const ntfs_sb_info *sbi, u64 bytes, u8 c_unit) +{ + u32 bytes_per_frame = 1u << (c_unit + sbi->cluster_bits); + + return (bytes + bytes_per_frame - 1) & ~(u64)(bytes_per_frame - 1); +} + +static inline struct buffer_head *ntfs_bread(struct 
super_block *sb, + sector_t block) +{ + struct buffer_head *bh; + + bh = sb_bread(sb, block); + if (bh) + return bh; + + __ntfs_trace(sb, KERN_ERR, "failed to read volume at offset 0x%llx", + (u64)block << sb->s_blocksize_bits); + return NULL; +} + +static inline bool is_power_of2(size_t v) +{ + return v && !(v & (v - 1)); +} + +static inline ntfs_inode *ntfs_i(struct inode *inode) +{ + return container_of(inode, ntfs_inode, vfs_inode); +} + +static inline bool is_compressed(const ntfs_inode *ni) +{ + return (ni->std_fa & FILE_ATTRIBUTE_COMPRESSED) || + (ni->ni_flags & NI_FLAG_COMPRESSED_MASK); +} + +static inline bool is_dedup(const ntfs_inode *ni) +{ + return ni->ni_flags & NI_FLAG_DEDUPLICATED; +} + +static inline bool is_encrypted(const ntfs_inode *ni) +{ + return ni->std_fa & FILE_ATTRIBUTE_ENCRYPTED; +} + +static inline bool is_sparsed(const ntfs_inode *ni) +{ + return ni->std_fa & FILE_ATTRIBUTE_SPARSE_FILE; +} + +static inline void le16_sub_cpu(__le16 *var, u16 val) +{ + *var = cpu_to_le16(le16_to_cpu(*var) - val); +} + +static inline void le32_sub_cpu(__le32 *var, u32 val) +{ + *var = cpu_to_le32(le32_to_cpu(*var) - val); +} + +static inline void nb_put(struct ntfs_buffers *nb) +{ + u32 i, nbufs = nb->nbufs; + + if (!nbufs) + return; + + for (i = 0; i < nbufs; i++) + put_bh(nb->bh[i]); + nb->nbufs = 0; +} + +static inline void put_indx_node(struct indx_node *in) +{ + if (!in) + return; + + ntfs_free(in->index); + nb_put(&in->nb); + ntfs_free(in); +} + +static inline void mi_clear(mft_inode *mi) +{ + nb_put(&mi->nb); + ntfs_free(mi->mrec); + mi->mrec = NULL; +} + +static inline void ni_lock(ntfs_inode *ni) +{ + mutex_lock(&ni->ni_lock); +} + +static inline void ni_unlock(ntfs_inode *ni) +{ + mutex_unlock(&ni->ni_lock); +} + +static inline int ni_trylock(ntfs_inode *ni) +{ + return mutex_trylock(&ni->ni_lock); +} + +static inline int ni_has_resident_data(ntfs_inode *ni) +{ + return ni->ni_flags & NI_FLAG_RESIDENT; +} + +static inline int 
attr_load_runs_attr(ntfs_inode *ni, ATTRIB *attr, + struct runs_tree *run, CLST vcn) +{ + return attr_load_runs_vcn(ni, attr->type, attr_name(attr), + attr->name_len, run, vcn); +} + +static inline void le64_sub_cpu(__le64 *var, u64 val) +{ + *var = cpu_to_le64(le64_to_cpu(*var) - val); +} diff --git a/fs/ntfs3/upcase.c b/fs/ntfs3/upcase.c new file mode 100644 index 000000000000..0bb8d75b8abb --- /dev/null +++ b/fs/ntfs3/upcase.c @@ -0,0 +1,78 @@ +// SPDX-License-Identifier: GPL-2.0 +/* + * linux/fs/ntfs3/upcase.c + * + * Copyright (C) 2019-2020 Paragon Software GmbH, All rights reserved. + * + */ +#include +#include +#include +#include + +#include "debug.h" +#include "ntfs.h" +#include "ntfs_fs.h" + +static inline u16 upcase_unicode_char(const u16 *upcase, u16 chr) +{ + if (chr < 'a') + return chr; + + if (chr <= 'z') + return (u16)(chr - ('a' - 'A')); + + return upcase[chr]; +} + +int ntfs_cmp_names(const __le16 *s1, size_t l1, const __le16 *s2, size_t l2, + const u16 *upcase) +{ + int diff; + size_t len = l1 < l2 ? l1 : l2; + + if (upcase) { + while (len--) { + diff = upcase_unicode_char(upcase, le16_to_cpu(*s1++)) - + upcase_unicode_char(upcase, le16_to_cpu(*s2++)); + if (diff) + return diff; + } + } else { + while (len--) { + diff = le16_to_cpu(*s1++) - le16_to_cpu(*s2++); + if (diff) + return diff; + } + } + + return (int)(l1 - l2); +} + +int ntfs_cmp_names_cpu(const struct cpu_str *uni1, const struct le_str *uni2, + const u16 *upcase) +{ + const u16 *s1 = uni1->name; + const __le16 *s2 = uni2->name; + size_t l1 = uni1->len; + size_t l2 = uni2->len; + size_t len = l1 < l2 ? 
l1 : l2; + int diff; + + if (upcase) { + while (len--) { + diff = upcase_unicode_char(upcase, *s1++) - + upcase_unicode_char(upcase, le16_to_cpu(*s2++)); + if (diff) + return diff; + } + } else { + while (len--) { + diff = *s1++ - le16_to_cpu(*s2++); + if (diff) + return diff; + } + } + + return l1 - l2; +} From patchwork Fri Aug 21 16:25:10 2020 X-Patchwork-Submitter: Konstantin Komarov X-Patchwork-Id: 11729963 From: Konstantin Komarov To: "viro@zeniv.linux.org.uk" , "linux-kernel@vger.kernel.org" , "linux-fsdevel@vger.kernel.org" CC: Pali Rohár Subject: [PATCH v2 03/10] fs/ntfs3: Add bitmap Date: Fri, 21 Aug 2020 16:25:10 +0000 Bitmap implementation Signed-off-by: Konstantin Komarov --- fs/ntfs3/bitfunc.c | 144 +++++ fs/ntfs3/bitmap.c | 1545 ++++++++++++++++++++++++++++++++++++++++++++ 2 files changed, 1689 insertions(+) create mode 100644 fs/ntfs3/bitfunc.c create mode 100644 fs/ntfs3/bitmap.c diff --git a/fs/ntfs3/bitfunc.c b/fs/ntfs3/bitfunc.c new file mode 100644 index 000000000000..b10a13d0c3b2 --- /dev/null +++ b/fs/ntfs3/bitfunc.c @@ -0,0 +1,144 @@ +// SPDX-License-Identifier: GPL-2.0 +/* + * linux/fs/ntfs3/bitfunc.c + * + * Copyright (C) 2019-2020 Paragon Software GmbH, All rights reserved.
+ * + */ +#include +#include +#include +#include +#include + +#include "debug.h" +#include "ntfs.h" +#include "ntfs_fs.h" + +#define BITS_IN_SIZE_T (sizeof(size_t) * 8) + +/* + * fill_mask[i] - first i bits are '1' , i = 0,1,2,3,4,5,6,7,8 + * fill_mask[i] = 0xFF >> (8-i) + */ +static const u8 fill_mask[] = { 0x00, 0x01, 0x03, 0x07, 0x0F, + 0x1F, 0x3F, 0x7F, 0xFF }; + +/* + * zero_mask[i] - first i bits are '0' , i = 0,1,2,3,4,5,6,7,8 + * zero_mask[i] = 0xFF << i + */ +static const u8 zero_mask[] = { 0xFF, 0xFE, 0xFC, 0xF8, 0xF0, + 0xE0, 0xC0, 0x80, 0x00 }; + +/* + * are_bits_clear + * + * Returns true if all bits [bit, bit+nbits) are zeros "0" + */ +bool are_bits_clear(const ulong *lmap, size_t bit, size_t nbits) +{ + size_t pos = bit & 7; + const u8 *map = (u8 *)lmap + (bit >> 3); + + if (!pos) + goto check_size_t; + if (8 - pos >= nbits) + return !nbits || + !(*map & fill_mask[pos + nbits] & zero_mask[pos]); + + if (*map++ & zero_mask[pos]) + return false; + nbits -= 8 - pos; + +check_size_t: + pos = ((size_t)map) & (sizeof(size_t) - 1); + if (!pos) + goto step_size_t; + + pos = sizeof(size_t) - pos; + if (nbits < pos * 8) + goto step_size_t; + for (nbits -= pos * 8; pos; pos--, map++) { + if (*map) + return false; + } + +step_size_t: + for (pos = nbits / BITS_IN_SIZE_T; pos; pos--, map += sizeof(size_t)) { + if (*((size_t *)map)) + return false; + } + + for (pos = (nbits % BITS_IN_SIZE_T) >> 3; pos; pos--, map++) { + if (*map) + return false; + } + + pos = nbits & 7; + if (pos && (*map & fill_mask[pos])) + return false; + + // All bits are zero + return true; +} + +/* + * are_bits_set + * + * Returns true if all bits [bit, bit+nbits) are ones "1" + */ +bool are_bits_set(const ulong *lmap, size_t bit, size_t nbits) +{ + u8 mask; + size_t pos = bit & 7; + const u8 *map = (u8 *)lmap + (bit >> 3); + + if (!pos) + goto check_size_t; + + if (8 - pos >= nbits) { + mask = fill_mask[pos + nbits] & zero_mask[pos]; + return !nbits || (*map & mask) == mask; + } + + mask = 
zero_mask[pos]; + if ((*map++ & mask) != mask) + return false; + nbits -= 8 - pos; + +check_size_t: + pos = ((size_t)map) & (sizeof(size_t) - 1); // 0,1,2,3 + if (!pos) + goto step_size_t; + pos = sizeof(size_t) - pos; + if (nbits < pos * 8) + goto step_size_t; + + for (nbits -= pos * 8; pos; pos--, map++) { + if (*map != 0xFF) + return false; + } + +step_size_t: + for (pos = nbits / BITS_IN_SIZE_T; pos; pos--, map += sizeof(size_t)) { + if (*((size_t *)map) != MINUS_ONE_T) + return false; + } + + for (pos = (nbits % BITS_IN_SIZE_T) >> 3; pos; pos--, map++) { + if (*map != 0xFF) + return false; + } + + pos = nbits & 7; + if (pos) { + u8 mask = fill_mask[pos]; + + if ((*map & mask) != mask) + return false; + } + + // All bits are ones + return true; +} diff --git a/fs/ntfs3/bitmap.c b/fs/ntfs3/bitmap.c new file mode 100644 index 000000000000..f37a6976525b --- /dev/null +++ b/fs/ntfs3/bitmap.c @@ -0,0 +1,1545 @@ +// SPDX-License-Identifier: GPL-2.0 +/* + * linux/fs/ntfs3/bitmap.c + * + * Copyright (C) 2019-2020 Paragon Software GmbH, All rights reserved. + * + */ + +#include +#include +#include +#include +#include + +#include "debug.h" +#include "ntfs.h" +#include "ntfs_fs.h" + +struct rb_node_key { + struct rb_node node; + size_t key; +}; + +/* + * Tree is sorted by start (key) + */ +struct e_node { + struct rb_node_key start; /* Tree sorted by start */ + struct rb_node_key count; /* Tree sorted by len*/ +}; + +static int wnd_rescan(wnd_bitmap *wnd); +static struct buffer_head *wnd_map(wnd_bitmap *wnd, size_t iw); +static bool wnd_is_free_hlp(wnd_bitmap *wnd, size_t bit, size_t bits); + +static inline u32 wnd_bits(const wnd_bitmap *wnd, size_t i) +{ + return i + 1 == wnd->nwnd ? 
wnd->bits_last : wnd->sb->s_blocksize * 8; +} + +/* + * b_pos + b_len - biggest fragment + * Scan range [wpos, wbits) of window 'buf' + * Returns -1 if not found + */ +static size_t wnd_scan(const ulong *buf, size_t wbit, u32 wpos, u32 wend, + size_t to_alloc, size_t *prev_tail, size_t *b_pos, + size_t *b_len) +{ + while (wpos < wend) { + size_t free_len; + u32 free_bits, end; + u32 used = find_next_zero_bit(buf, wend, wpos); + + if (used >= wend) { + if (*b_len < *prev_tail) { + *b_pos = wbit - *prev_tail; + *b_len = *prev_tail; + } + + *prev_tail = 0; + return -1; + } + + if (used > wpos) { + wpos = used; + if (*b_len < *prev_tail) { + *b_pos = wbit - *prev_tail; + *b_len = *prev_tail; + } + + *prev_tail = 0; + } + + /* + * Now we have a fragment [wpos, wend) starting with 0 + */ + end = wpos + to_alloc - *prev_tail; + free_bits = find_next_bit(buf, min(end, wend), wpos); + + free_len = *prev_tail + free_bits - wpos; + + if (*b_len < free_len) { + *b_pos = wbit + wpos - *prev_tail; + *b_len = free_len; + } + + if (free_len >= to_alloc) + return wbit + wpos - *prev_tail; + + if (free_bits >= wend) { + *prev_tail += free_bits - wpos; + return -1; + } + + wpos = free_bits + 1; + + *prev_tail = 0; + } + + return -1; +} + +/* + * wnd_close + * + * Frees bitmap resources: the cached extent tree and the free-bits array + */ +void wnd_close(wnd_bitmap *wnd) +{ + struct rb_node *node, *next; + + if (wnd->free_bits != wnd->free_holder) + ntfs_free(wnd->free_bits); + run_close(&wnd->run); + + node = rb_first(&wnd->start_tree); + + while (node) { + next = rb_next(node); + rb_erase(node, &wnd->start_tree); + ntfs_free(rb_entry(node, struct e_node, start.node)); + node = next; + } +} + +static struct rb_node *rb_lookup(struct rb_root *root, size_t v) +{ + struct rb_node **p = &root->rb_node; + struct rb_node *r = NULL; + + while (*p) { + struct rb_node_key *k; + + k = rb_entry(*p, struct rb_node_key, node); + if (v < k->key) + p = &(*p)->rb_left; + else if (v > k->key) { + r = &k->node; + p = &(*p)->rb_right; + } else + return &k->node; + } + + return
r; +} + +/* + * rb_insert_count + * + * Helper function to insert a node into the 'count' tree + */ +static inline bool rb_insert_count(struct rb_root *root, struct e_node *e) +{ + struct rb_node **p = &root->rb_node; + struct rb_node *parent = NULL; + size_t e_ckey = e->count.key; + size_t e_skey = e->start.key; + + while (*p) { + struct e_node *k = + rb_entry(parent = *p, struct e_node, count.node); + + if (e_ckey > k->count.key) + p = &(*p)->rb_left; + else if (e_ckey < k->count.key) + p = &(*p)->rb_right; + else if (e_skey < k->start.key) + p = &(*p)->rb_left; + else if (e_skey > k->start.key) + p = &(*p)->rb_right; + else { + WARN_ON(1); + return false; + } + } + + rb_link_node(&e->count.node, parent, p); + rb_insert_color(&e->count.node, root); + return true; +} + +/* + * rb_insert_start + * + * Helper function to insert a node into the 'start' tree + */ +static inline bool rb_insert_start(struct rb_root *root, struct e_node *e) +{ + struct rb_node **p = &root->rb_node; + struct rb_node *parent = NULL; + size_t e_skey = e->start.key; + + while (*p) { + struct e_node *k; + + parent = *p; + + k = rb_entry(parent, struct e_node, start.node); + if (e_skey < k->start.key) + p = &(*p)->rb_left; + else if (e_skey > k->start.key) + p = &(*p)->rb_right; + else { + WARN_ON(1); + return false; + } + } + + rb_link_node(&e->start.node, parent, p); + rb_insert_color(&e->start.node, root); + return true; +} + +#define NTFS_MAX_WND_EXTENTS (32u * 1024u) + +/* + * wnd_add_free_ext + * + * adds a new extent of free space + * build = 1 when building tree + */ +static void wnd_add_free_ext(wnd_bitmap *wnd, size_t bit, size_t len, + bool build) +{ + struct e_node *e, *e0 = NULL; + size_t ib, end_in = bit + len; + struct rb_node *n; + + if (!build) + goto lookup; + + if (wnd->count >= NTFS_MAX_WND_EXTENTS && len <= wnd->extent_min) { + wnd->uptodated = -1; + return; + } + + goto insert_new; + +lookup: + /* Try to find extent before 'bit' */ + n =
rb_lookup(&wnd->start_tree, bit); + + if (!n) + n = rb_first(&wnd->start_tree); + else { + e = rb_entry(n, struct e_node, start.node); + + n = rb_next(n); + if (e->start.key + e->count.key == bit) { + /* Remove left */ + bit = e->start.key; + len += e->count.key; + + rb_erase(&e->start.node, &wnd->start_tree); + rb_erase(&e->count.node, &wnd->count_tree); + wnd->count -= 1; + e0 = e; + } + } + + while (n) { + size_t next_end; + + e = rb_entry(n, struct e_node, start.node); + + next_end = e->start.key + e->count.key; + if (e->start.key > end_in) + break; + + /* Remove right */ + n = rb_next(n); + len += next_end - end_in; + end_in = next_end; + rb_erase(&e->start.node, &wnd->start_tree); + rb_erase(&e->count.node, &wnd->count_tree); + wnd->count -= 1; + + if (!e0) + e0 = e; + else + ntfs_free(e); + } + + if (wnd->uptodated == 1) + goto insert_new; + + /* Check bits before 'bit' */ + ib = wnd->zone_bit == wnd->zone_end || bit < wnd->zone_end ? + 0 : + wnd->zone_end; + + while (bit > ib && wnd_is_free_hlp(wnd, bit - 1, 1)) { + bit -= 1; + len += 1; + } + + /* Check bits after 'end_in' */ + ib = wnd->zone_bit == wnd->zone_end || end_in > wnd->zone_bit ? 
+ wnd->nbits : + wnd->zone_bit; + + while (end_in < ib && wnd_is_free_hlp(wnd, end_in, 1)) { + end_in += 1; + len += 1; + } + +insert_new: + /* Insert new fragment */ + if (wnd->count < NTFS_MAX_WND_EXTENTS) + goto allocate_new; + + if (e0) + ntfs_free(e0); + + wnd->uptodated = -1; + + /* Compare with smallest fragment */ + n = rb_last(&wnd->count_tree); + e = rb_entry(n, struct e_node, count.node); + if (len <= e->count.key) + goto out; /* Do not insert small fragments */ + + if (build) { + struct e_node *e2; + + n = rb_prev(n); + e2 = rb_entry(n, struct e_node, count.node); + /* smallest fragment will be 'e2->count.key' */ + wnd->extent_min = e2->count.key; + } + + /* Replace smallest fragment by new one */ + rb_erase(&e->start.node, &wnd->start_tree); + rb_erase(&e->count.node, &wnd->count_tree); + wnd->count -= 1; + goto insert; + +allocate_new: + e = e0 ? e0 : ntfs_alloc(sizeof(struct e_node), 0); + if (!e) { + wnd->uptodated = -1; + goto out; + } + + if (build && len <= wnd->extent_min) + wnd->extent_min = len; +insert: + e->start.key = bit; + e->count.key = len; + if (len > wnd->extent_max) + wnd->extent_max = len; + + rb_insert_start(&wnd->start_tree, e); + rb_insert_count(&wnd->count_tree, e); + wnd->count += 1; + +out:; +} + +/* + * wnd_remove_free_ext + * + * removes a run from the cached free space + */ +static void wnd_remove_free_ext(wnd_bitmap *wnd, size_t bit, size_t len) +{ + struct rb_node *n, *n3; + struct e_node *e, *e3; + size_t end_in = bit + len; + size_t end3, end, new_key, new_len, max_new_len; + bool bmax; + + /* Try to find extent before 'bit' */ + n = rb_lookup(&wnd->start_tree, bit); + + if (!n) + return; + + e = rb_entry(n, struct e_node, start.node); + end = e->start.key + e->count.key; + + new_key = new_len = 0; + len = e->count.key; + + /* Range [bit,end_in) must be inside 'e' or outside 'e' and 'n' */ + if (e->start.key > bit) + goto check_biggest; + + if (end_in <= end) { + /* Range [bit,end_in) inside 'e' */ + new_key = end_in; + 
new_len = end - end_in; + len = bit - e->start.key; + goto check_biggest; + } + + if (bit <= end) + goto check_biggest; + + bmax = false; + + n3 = rb_next(n); + + while (n3) { + e3 = rb_entry(n3, struct e_node, start.node); + if (e3->start.key >= end_in) + break; + + if (e3->count.key == wnd->extent_max) + bmax = true; + + end3 = e3->start.key + e3->count.key; + if (end3 > end_in) { + e3->start.key = end_in; + rb_erase(&e3->count.node, &wnd->count_tree); + e3->count.key = end3 - end_in; + rb_insert_count(&wnd->count_tree, e3); + break; + } + + n3 = rb_next(n3); + rb_erase(&e3->start.node, &wnd->start_tree); + rb_erase(&e3->count.node, &wnd->count_tree); + wnd->count -= 1; + ntfs_free(e3); + } + if (!bmax) + return; + n3 = rb_first(&wnd->count_tree); + wnd->extent_max = + n3 ? rb_entry(n3, struct e_node, count.node)->count.key : 0; + return; + +check_biggest: + if (e->count.key != wnd->extent_max) + goto check_len; + + /* We have to change the biggest extent */ + n3 = rb_prev(&e->count.node); + if (n3) + goto check_len; + + n3 = rb_next(&e->count.node); + max_new_len = len > new_len ? 
len : new_len; + if (!n3) { + wnd->extent_max = max_new_len; + goto check_len; + } + e3 = rb_entry(n3, struct e_node, count.node); + wnd->extent_max = max(e3->count.key, max_new_len); + +check_len: + if (!len) { + if (new_len) { + e->start.key = new_key; + rb_erase(&e->count.node, &wnd->count_tree); + e->count.key = new_len; + rb_insert_count(&wnd->count_tree, e); + } else { + rb_erase(&e->start.node, &wnd->start_tree); + rb_erase(&e->count.node, &wnd->count_tree); + wnd->count -= 1; + ntfs_free(e); + } + goto out; + } + rb_erase(&e->count.node, &wnd->count_tree); + e->count.key = len; + rb_insert_count(&wnd->count_tree, e); + + if (!new_len) + goto out; + + if (wnd->count >= NTFS_MAX_WND_EXTENTS) { + wnd->uptodated = -1; + + /* Get minimal extent */ + e = rb_entry(rb_last(&wnd->count_tree), struct e_node, + count.node); + if (e->count.key > new_len) + goto out; + + /* Replace minimum */ + rb_erase(&e->start.node, &wnd->start_tree); + rb_erase(&e->count.node, &wnd->count_tree); + wnd->count -= 1; + } else { + e = ntfs_alloc(sizeof(struct e_node), 0); + if (!e) + wnd->uptodated = -1; + } + + if (e) { + e->start.key = new_key; + e->count.key = new_len; + rb_insert_start(&wnd->start_tree, e); + rb_insert_count(&wnd->count_tree, e); + wnd->count += 1; + } + +out: + if (!wnd->count && 1 != wnd->uptodated) + wnd_rescan(wnd); +} + +/* + * wnd_rescan + * + * Scans the whole bitmap. Used during initialization.
+ */ +static int wnd_rescan(wnd_bitmap *wnd) +{ + int err = 0; + size_t prev_tail = 0; + struct super_block *sb = wnd->sb; + ntfs_sb_info *sbi = sb->s_fs_info; + u64 lbo, len = 0; + u32 blocksize = sb->s_blocksize; + u8 cluster_bits = sbi->cluster_bits; + const u32 ra_bytes = 512 * 1024; + const u32 ra_pages = ra_bytes >> PAGE_SHIFT; + u32 wbits = 8 * sb->s_blocksize; + u32 ra_mask = (ra_bytes >> sb->s_blocksize_bits) - 1; + struct address_space *mapping = sb->s_bdev->bd_inode->i_mapping; + u32 used, frb; + const ulong *buf; + size_t wpos, wbit, iw, vbo; + struct buffer_head *bh = NULL; + CLST lcn, clen; + + wnd->uptodated = 0; + wnd->extent_max = 0; + wnd->extent_min = MINUS_ONE_T; + wnd->total_zeroes = 0; + + vbo = 0; + iw = 0; + +start_wnd: + + if (iw + 1 == wnd->nwnd) + wbits = wnd->bits_last; + + if (wnd->inited) { + if (!wnd->free_bits[iw]) { + /* all ones */ + if (!prev_tail) + goto next_wnd; + + wnd_add_free_ext(wnd, vbo * 8 - prev_tail, prev_tail, + true); + prev_tail = 0; + goto next_wnd; + } + if (wbits == wnd->free_bits[iw]) { + /* all zeroes */ + prev_tail += wbits; + wnd->total_zeroes += wbits; + goto next_wnd; + } + } + + if (len) + goto read_wnd; + + if (!run_lookup_entry(&wnd->run, vbo >> cluster_bits, &lcn, &clen, + NULL)) { + err = -ENOENT; + goto out; + } + + lbo = (u64)lcn << cluster_bits; + len = (u64)clen << cluster_bits; + +read_wnd: + if (!(iw & ra_mask)) + page_cache_readahead_unbounded(mapping, NULL, lbo >> PAGE_SHIFT, + ra_pages, 0); + + bh = ntfs_bread(sb, lbo >> sb->s_blocksize_bits); + if (!bh) { + err = -EIO; + goto out; + } + + buf = (ulong *)bh->b_data; + + used = __bitmap_weight(buf, wbits); + if (used < wbits) { + frb = wbits - used; + wnd->free_bits[iw] = frb; + wnd->total_zeroes += frb; + } + + wpos = 0; + wbit = vbo * 8; + + if (wbit + wbits > wnd->nbits) + wbits = wnd->nbits - wbit; + +next_range: + used = find_next_zero_bit(buf, wbits, wpos); + + if (used > wpos && prev_tail) { + wnd_add_free_ext(wnd, wbit + wpos - 
prev_tail, prev_tail, true); + prev_tail = 0; + } + + wpos = used; + + if (wpos >= wbits) { + /* No free blocks */ + prev_tail = 0; + goto next_wnd; + } + + frb = find_next_bit(buf, wbits, wpos); + if (frb >= wbits) { + /* keep last free block */ + prev_tail += frb - wpos; + goto next_wnd; + } + + wnd_add_free_ext(wnd, wbit + wpos - prev_tail, frb + prev_tail - wpos, + true); + + /* Skip free block and first '1' */ + wpos = frb + 1; + /* Reset previous tail */ + prev_tail = 0; + if (wpos < wbits) + goto next_range; +next_wnd: + + if (bh) + put_bh(bh); + bh = NULL; + + vbo += blocksize; + if (len) { + len -= blocksize; + lbo += blocksize; + } + + if (++iw < wnd->nwnd) + goto start_wnd; + + /* Add last block */ + if (prev_tail) + wnd_add_free_ext(wnd, wnd->nbits - prev_tail, prev_tail, true); + + /* + * Before init cycle wnd->uptodated was 0 + * If any errors or limits occurs while initialization then + * wnd->uptodated will be -1 + * If 'uptodated' is still 0 then Tree is really updated + */ + if (!wnd->uptodated) + wnd->uptodated = 1; + + if (wnd->zone_bit != wnd->zone_end) { + size_t zlen = wnd->zone_end - wnd->zone_bit; + + wnd->zone_end = wnd->zone_bit; + wnd_zone_set(wnd, wnd->zone_bit, zlen); + } + +out: + return err; +} + +/* + * wnd_init + */ +int wnd_init(wnd_bitmap *wnd, struct super_block *sb, size_t nbits) +{ + int err; + u32 blocksize = sb->s_blocksize; + u32 wbits = blocksize * 8; + + init_rwsem(&wnd->rw_lock); + + wnd->sb = sb; + wnd->nbits = nbits; + wnd->total_zeroes = nbits; + wnd->extent_max = MINUS_ONE_T; + wnd->zone_bit = wnd->zone_end = 0; + wnd->nwnd = bytes_to_block(sb, bitmap_size(nbits)); + wnd->bits_last = nbits & (wbits - 1); + if (!wnd->bits_last) + wnd->bits_last = wbits; + + if (wnd->nwnd <= ARRAY_SIZE(wnd->free_holder)) { + wnd->free_bits = wnd->free_holder; + } else { + wnd->free_bits = ntfs_alloc(wnd->nwnd * sizeof(u16), 1); + if (!wnd->free_bits) + return -ENOMEM; + } + + err = wnd_rescan(wnd); + if (err) + return err; + + 
wnd->inited = true; + + return 0; +} + +/* + * wnd_map + * + * call sb_bread for requested window + */ +static struct buffer_head *wnd_map(wnd_bitmap *wnd, size_t iw) +{ + size_t vbo; + CLST lcn, clen; + struct super_block *sb = wnd->sb; + ntfs_sb_info *sbi; + struct buffer_head *bh; + u64 lbo; + + sbi = sb->s_fs_info; + vbo = (u64)iw << sb->s_blocksize_bits; + + if (!run_lookup_entry(&wnd->run, vbo >> sbi->cluster_bits, &lcn, &clen, + NULL)) { + return ERR_PTR(-ENOENT); + } + + lbo = ((u64)lcn << sbi->cluster_bits) + (vbo & sbi->cluster_mask); + + bh = ntfs_bread(wnd->sb, lbo >> sb->s_blocksize_bits); + + if (!bh) + return ERR_PTR(-EIO); + + return bh; +} + +/* + * wnd_set_free + * + * Marks the bits range from bit to bit + bits as free + */ +int wnd_set_free(wnd_bitmap *wnd, size_t bit, size_t bits) +{ + int err = 0; + struct super_block *sb = wnd->sb; + size_t bits0 = bits; + u32 wbits = 8 * sb->s_blocksize; + size_t iw = bit >> (sb->s_blocksize_bits + 3); + u32 wbit = bit & (wbits - 1); + struct buffer_head *bh; + + while (iw < wnd->nwnd && bits) { + u32 tail, op; + ulong *buf; + + if (iw + 1 == wnd->nwnd) + wbits = wnd->bits_last; + + tail = wbits - wbit; + op = tail < bits ? 
tail : bits; + + bh = wnd_map(wnd, iw); + if (IS_ERR(bh)) { + err = PTR_ERR(bh); + break; + } + + buf = (ulong *)bh->b_data; + + lock_buffer(bh); + + __bitmap_clear(buf, wbit, op); + + wnd->free_bits[iw] += op; + + set_buffer_uptodate(bh); + mark_buffer_dirty(bh); + unlock_buffer(bh); + put_bh(bh); + + wnd->total_zeroes += op; + bits -= op; + wbit = 0; + iw += 1; + } + + wnd_add_free_ext(wnd, bit, bits0, false); + + return err; +} + +/* + * wnd_set_used + * + * Marks the bits range from bit to bit + bits as used + */ +int wnd_set_used(wnd_bitmap *wnd, size_t bit, size_t bits) +{ + int err = 0; + struct super_block *sb = wnd->sb; + size_t bits0 = bits; + size_t iw = bit >> (sb->s_blocksize_bits + 3); + u32 wbits = 8 * sb->s_blocksize; + u32 wbit = bit & (wbits - 1); + struct buffer_head *bh; + + while (iw < wnd->nwnd && bits) { + u32 tail, op; + ulong *buf; + + if (unlikely(iw + 1 == wnd->nwnd)) + wbits = wnd->bits_last; + + tail = wbits - wbit; + op = tail < bits ? tail : bits; + + bh = wnd_map(wnd, iw); + if (IS_ERR(bh)) { + err = PTR_ERR(bh); + break; + } + buf = (ulong *)bh->b_data; + + lock_buffer(bh); + + __bitmap_set(buf, wbit, op); + wnd->free_bits[iw] -= op; + + set_buffer_uptodate(bh); + mark_buffer_dirty(bh); + unlock_buffer(bh); + put_bh(bh); + + wnd->total_zeroes -= op; + bits -= op; + wbit = 0; + iw += 1; + } + + if (!RB_EMPTY_ROOT(&wnd->start_tree)) + wnd_remove_free_ext(wnd, bit, bits0); + + return err; +} + +/* + * wnd_is_free_hlp + * + * Returns true if all clusters [bit, bit+bits) are free (bitmap only) + */ +static bool wnd_is_free_hlp(wnd_bitmap *wnd, size_t bit, size_t bits) +{ + struct super_block *sb = wnd->sb; + size_t iw = bit >> (sb->s_blocksize_bits + 3); + u32 wbits = 8 * sb->s_blocksize; + u32 wbit = bit & (wbits - 1); + + while (iw < wnd->nwnd && bits) { + u32 tail, op; + + if (unlikely(iw + 1 == wnd->nwnd)) + wbits = wnd->bits_last; + + tail = wbits - wbit; + op = tail < bits ? 
tail : bits; + + if (wbits != wnd->free_bits[iw]) { + bool ret; + struct buffer_head *bh = wnd_map(wnd, iw); + + if (IS_ERR(bh)) + return false; + + ret = are_bits_clear((ulong *)bh->b_data, wbit, op); + + put_bh(bh); + if (!ret) + return false; + } + + bits -= op; + wbit = 0; + iw += 1; + } + + return true; +} + +/* + * wnd_is_free + * + * Returns true if all clusters [bit, bit+bits) are free + */ +bool wnd_is_free(wnd_bitmap *wnd, size_t bit, size_t bits) +{ + bool ret; + struct rb_node *n; + size_t end; + struct e_node *e; + + if (RB_EMPTY_ROOT(&wnd->start_tree)) + goto use_wnd; + + n = rb_lookup(&wnd->start_tree, bit); + if (!n) + goto use_wnd; + + e = rb_entry(n, struct e_node, start.node); + + end = e->start.key + e->count.key; + + if (bit < end && bit + bits <= end) + return true; + +use_wnd: + ret = wnd_is_free_hlp(wnd, bit, bits); + + return ret; +} + +/* + * wnd_is_used + * + * Returns true if all clusters [bit, bit+bits) are used + */ +bool wnd_is_used(wnd_bitmap *wnd, size_t bit, size_t bits) +{ + bool ret = false; + struct super_block *sb = wnd->sb; + size_t iw = bit >> (sb->s_blocksize_bits + 3); + u32 wbits = 8 * sb->s_blocksize; + u32 wbit = bit & (wbits - 1); + size_t end; + struct rb_node *n; + struct e_node *e; + + if (RB_EMPTY_ROOT(&wnd->start_tree)) + goto use_wnd; + + end = bit + bits; + n = rb_lookup(&wnd->start_tree, end - 1); + if (!n) + goto use_wnd; + + e = rb_entry(n, struct e_node, start.node); + if (e->start.key + e->count.key > bit) + return false; + +use_wnd: + while (iw < wnd->nwnd && bits) { + u32 tail, op; + + if (unlikely(iw + 1 == wnd->nwnd)) + wbits = wnd->bits_last; + + tail = wbits - wbit; + op = tail < bits ? 
tail : bits; + + if (wnd->free_bits[iw]) { + bool ret; + struct buffer_head *bh = wnd_map(wnd, iw); + + if (IS_ERR(bh)) + goto out; + + ret = are_bits_set((ulong *)bh->b_data, wbit, op); + put_bh(bh); + if (!ret) + goto out; + } + + bits -= op; + wbit = 0; + iw += 1; + } + ret = true; + +out: + return ret; +} + +/* + * wnd_find + * - flags - BITMAP_FIND_XXX flags + * + * looks for free space + * Returns 0 if not found + */ +size_t wnd_find(wnd_bitmap *wnd, size_t to_alloc, size_t hint, size_t flags, + size_t *allocated) +{ + struct super_block *sb; + u32 wbits, wpos, wzbit, wzend; + size_t fnd, max_alloc, b_len, b_pos; + size_t iw, prev_tail, nwnd, wbit, ebit, zbit, zend; + size_t to_alloc0 = to_alloc; + const ulong *buf; + const struct e_node *e; + const struct rb_node *pr, *cr; + u8 log2_bits; + bool fbits_valid; + struct buffer_head *bh; + + /* fast checking for available free space */ + if (flags & BITMAP_FIND_FULL) { + size_t zeroes = wnd_zeroes(wnd); + + zeroes -= wnd->zone_end - wnd->zone_bit; + if (zeroes < to_alloc0) + goto no_space; + + if (to_alloc0 > wnd->extent_max) + goto no_space; + } else { + if (to_alloc > wnd->extent_max) + to_alloc = wnd->extent_max; + } + + if (wnd->zone_bit <= hint && hint < wnd->zone_end) + hint = wnd->zone_end; + + max_alloc = wnd->nbits; + b_len = b_pos = 0; + + if (hint >= max_alloc) + hint = 0; + + if (RB_EMPTY_ROOT(&wnd->start_tree)) { + if (wnd->uptodated == 1) { + /* extents tree is updated -> no free space */ + goto no_space; + } + goto scan_bitmap; + } + + e = NULL; + if (!hint) + goto allocate_biggest; + + /* Use hint: enumerate extents by start >= hint */ + pr = NULL; + cr = wnd->start_tree.rb_node; + + for (;;) { + e = rb_entry(cr, struct e_node, start.node); + + if (e->start.key == hint) + break; + + if (e->start.key < hint) { + pr = cr; + cr = cr->rb_right; + if (!cr) + break; + continue; + } + + cr = cr->rb_left; + if (!cr) { + e = pr ? 
rb_entry(pr, struct e_node, start.node) : NULL; + break; + } + } + + if (!e) + goto allocate_biggest; + + if (e->start.key + e->count.key > hint) { + /* We have found an extent with 'hint' inside */ + size_t len = e->start.key + e->count.key - hint; + + if (len >= to_alloc && hint + to_alloc <= max_alloc) { + fnd = hint; + goto found; + } + + if (!(flags & BITMAP_FIND_FULL)) { + if (len > to_alloc) + len = to_alloc; + + if (hint + len <= max_alloc) { + fnd = hint; + to_alloc = len; + goto found; + } + } + } + +allocate_biggest: + + /* Allocate from biggest free extent */ + e = rb_entry(rb_first(&wnd->count_tree), struct e_node, count.node); + if (e->count.key != wnd->extent_max) + wnd->extent_max = e->count.key; + + if (e->count.key < max_alloc) { + if (e->count.key >= to_alloc) + ; + else if (flags & BITMAP_FIND_FULL) { + if (e->count.key < to_alloc0) { + /* Biggest free block is less than requested */ + goto no_space; + } + to_alloc = e->count.key; + } else if (-1 != wnd->uptodated) + to_alloc = e->count.key; + else { + /* Check if we can use more bits */ + size_t op, max_check; + struct rb_root start_tree; + + memcpy(&start_tree, &wnd->start_tree, + sizeof(struct rb_root)); + memset(&wnd->start_tree, 0, sizeof(struct rb_root)); + + max_check = e->start.key + to_alloc; + if (max_check > max_alloc) + max_check = max_alloc; + for (op = e->start.key + e->count.key; op < max_check; + op++) { + if (!wnd_is_free(wnd, op, 1)) + break; + } + memcpy(&wnd->start_tree, &start_tree, + sizeof(struct rb_root)); + to_alloc = op - e->start.key; + } + + /* Prepare to return */ + fnd = e->start.key; + if (e->start.key + to_alloc > max_alloc) + to_alloc = max_alloc - e->start.key; + goto found; + } + + if (wnd->uptodated == 1) { + /* extents tree is updated -> no free space */ + goto no_space; + } + + b_len = e->count.key; + b_pos = e->start.key; + +scan_bitmap: + sb = wnd->sb; + log2_bits = sb->s_blocksize_bits + 3; + + /* At most two ranges [hint, max_alloc) + [0, hint) */
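As background for the bitmap scan that follows: wnd_find tries the range [hint, max_alloc) first and, on failure, wraps around to [0, hint + to_alloc). That two-pass first-fit shape can be sketched in standalone userspace C. This is an illustration only, not code from the patch; `first_fit` and the byte-per-bit map are hypothetical stand-ins for the on-disk bitmap windows:

```c
#include <assert.h>
#include <stddef.h>

/* Illustrative stand-in for the on-disk bitmap: one byte per bit, 0 = free.
 * Returns the start of a free run of 'to_alloc' bits, or (size_t)-1. */
static size_t first_fit(const unsigned char *map, size_t nbits,
			size_t to_alloc, size_t hint)
{
	size_t max_alloc = nbits;
	size_t pos, run;

again:
	run = 0;
	for (pos = hint; pos < max_alloc; pos++) {
		if (map[pos]) {
			run = 0; /* used bit breaks the current free run */
			continue;
		}
		if (++run == to_alloc)
			return pos + 1 - to_alloc;
	}

	if (hint) {
		/* Second pass: wrap to [0, hint + to_alloc), mirroring
		 * the 'goto Again' after the 'estimate:' label. */
		size_t nextmax = hint + to_alloc;

		if (nextmax < max_alloc)
			max_alloc = nextmax;
		hint = 0;
		goto again;
	}
	return (size_t)-1; /* analogous to MINUS_ONE_T: no space */
}
```

A free run that straddles the wrap point is still found, because the second pass is allowed to scan up to hint + to_alloc rather than stopping exactly at hint.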
+Again: + + /* TODO: optimize request for case nbits > wbits */ + iw = hint >> log2_bits; + wbits = sb->s_blocksize * 8; + wpos = hint & (wbits - 1); + prev_tail = 0; + fbits_valid = true; + + if (max_alloc == wnd->nbits) { + nwnd = wnd->nwnd; + } else { + size_t t = max_alloc + wbits - 1; + + nwnd = likely(t > max_alloc) ? (t >> log2_bits) : wnd->nwnd; + } + + /* Enumerate all windows */ + iw -= 1; +next_wnd: + iw += 1; + if (iw >= nwnd) + goto estimate; + + wbit = iw << log2_bits; + + if (!wnd->free_bits[iw]) { + if (prev_tail > b_len) { + b_pos = wbit - prev_tail; + b_len = prev_tail; + } + + /* Skip fully used window */ + prev_tail = 0; + wpos = 0; + goto next_wnd; + } + + if (unlikely(iw + 1 == nwnd)) { + if (max_alloc == wnd->nbits) + wbits = wnd->bits_last; + else { + size_t t = max_alloc & (wbits - 1); + + if (t) { + wbits = t; + fbits_valid = false; + } + } + } + + if (wnd->zone_end <= wnd->zone_bit) + goto skip_zone; + + ebit = wbit + wbits; + zbit = max(wnd->zone_bit, wbit); + zend = min(wnd->zone_end, ebit); + + /* Here we have a window [wbit, ebit) and zone [zbit, zend) */ + if (zend <= zbit) { + /* Zone does not overlap window */ + goto skip_zone; + } + + wzbit = zbit - wbit; + wzend = zend - wbit; + + /* Zone overlaps window */ + if (wnd->free_bits[iw] == wzend - wzbit) { + prev_tail = 0; + wpos = 0; + goto next_wnd; + } + + /* Scan the two ranges of the window: [wbit, zbit) and [zend, ebit) */ + bh = wnd_map(wnd, iw); + + if (IS_ERR(bh)) { + /* TODO: error */ + prev_tail = 0; + wpos = 0; + goto next_wnd; + } + + buf = (ulong *)bh->b_data; + + /* Scan range [wbit, zbit) */ + if (wpos < wzbit) { + /* Scan range [wpos, zbit) */ + fnd = wnd_scan(buf, wbit, wpos, wzbit, to_alloc, &prev_tail, + &b_pos, &b_len); + if (fnd != MINUS_ONE_T) {
put_bh(bh); + goto found; + } + } + + wpos = 0; + put_bh(bh); + goto next_wnd; + +skip_zone: + /* Current window does not overlap zone */ + if (!wpos && fbits_valid && wnd->free_bits[iw] == wbits) { + /* window is empty */ + if (prev_tail + wbits >= to_alloc) { + fnd = wbit + wpos - prev_tail; + goto found; + } + + /* Increase 'prev_tail' and process next window */ + prev_tail += wbits; + wpos = 0; + goto next_wnd; + } + + /* read window */ + bh = wnd_map(wnd, iw); + if (IS_ERR(bh)) { + /* TODO: error */ + prev_tail = 0; + wpos = 0; + goto next_wnd; + } + + buf = (ulong *)bh->b_data; + + /* Scan range [wpos, wbits) */ + fnd = wnd_scan(buf, wbit, wpos, wbits, to_alloc, &prev_tail, &b_pos, + &b_len); + put_bh(bh); + if (fnd != MINUS_ONE_T) + goto found; + goto next_wnd; + +estimate: + if (b_len < prev_tail) { + /* The last fragment */ + b_len = prev_tail; + b_pos = max_alloc - prev_tail; + } + + if (hint) { + /* + * We have scanned range [hint, max_alloc) + * Prepare to scan range [0, hint + to_alloc) + */ + size_t nextmax = hint + to_alloc; + + if (likely(nextmax >= hint) && nextmax < max_alloc) + max_alloc = nextmax; + hint = 0; + goto Again; + } + + if (!b_len) + goto no_space; + + wnd->extent_max = b_len; + + if (flags & BITMAP_FIND_FULL) + goto no_space; + + fnd = b_pos; + to_alloc = b_len; + +found: + if (flags & BITMAP_FIND_MARK_AS_USED) { + /* TODO: optimize remove extent (pass 'e'?)
*/ + if (wnd_set_used(wnd, fnd, to_alloc)) + goto no_space; + } else if (wnd->extent_max != MINUS_ONE_T && + to_alloc > wnd->extent_max) { + wnd->extent_max = to_alloc; + } + + *allocated = fnd; + return to_alloc; + +no_space: + return 0; +} + +/* + * wnd_extend + * + * Extend bitmap ($MFT bitmap) + */ +int wnd_extend(wnd_bitmap *wnd, size_t new_bits) +{ + int err; + struct super_block *sb = wnd->sb; + ntfs_sb_info *sbi = sb->s_fs_info; + u32 blocksize = sb->s_blocksize; + u32 wbits = blocksize * 8; + u32 b0, new_last; + size_t bits, iw, new_wnd; + size_t old_bits = wnd->nbits; + u16 *new_free; + + if (new_bits <= old_bits) + return -EINVAL; + + /* align to 8 byte boundary */ + new_wnd = bytes_to_block(sb, bitmap_size(new_bits)); + new_last = new_bits & (wbits - 1); + if (!new_last) + new_last = wbits; + + if (new_wnd == wnd->nwnd) + goto skip_reallocate; + + if (new_wnd <= ARRAY_SIZE(wnd->free_holder)) + new_free = wnd->free_holder; + else { + new_free = ntfs_alloc(new_wnd * sizeof(u16), 0); + if (!new_free) + return -ENOMEM; + } + + if (new_free != wnd->free_bits) + memcpy(new_free, wnd->free_bits, wnd->nwnd * sizeof(short)); + memset(new_free + wnd->nwnd, 0, (new_wnd - wnd->nwnd) * sizeof(short)); + if (wnd->free_bits != wnd->free_holder) + ntfs_free(wnd->free_bits); + + wnd->free_bits = new_free; + +skip_reallocate: + /* Zero bits [old_bits,new_bits) */ + bits = new_bits - old_bits; + b0 = old_bits & (wbits - 1); + + for (iw = old_bits >> (sb->s_blocksize_bits + 3); bits; iw += 1) { + u32 op; + size_t frb; + u64 vbo, lbo, bytes; + struct buffer_head *bh; + ulong *buf; + + if (iw + 1 == new_wnd) + wbits = new_last; + + op = b0 + bits > wbits ? 
wbits - b0 : bits; + vbo = (u64)iw * blocksize; + + err = ntfs_vbo_to_pbo(sbi, &wnd->run, vbo, &lbo, &bytes); + if (err) + break; + + bh = ntfs_bread(sb, lbo >> sb->s_blocksize_bits); + if (!bh) + return -EIO; + + lock_buffer(bh); + buf = (ulong *)bh->b_data; + + __bitmap_clear(buf, b0, blocksize * 8 - b0); + frb = wbits - __bitmap_weight(buf, wbits); + wnd->total_zeroes += frb - wnd->free_bits[iw]; + wnd->free_bits[iw] = frb; + + set_buffer_uptodate(bh); + mark_buffer_dirty(bh); + unlock_buffer(bh); + /*err = sync_dirty_buffer(bh);*/ + + b0 = 0; + bits -= op; + } + + wnd->nbits = new_bits; + wnd->nwnd = new_wnd; + wnd->bits_last = new_last; + + wnd_add_free_ext(wnd, old_bits, new_bits - old_bits, false); + + return 0; +} + +/* + * wnd_zone_set + */ +void wnd_zone_set(wnd_bitmap *wnd, size_t lcn, size_t len) +{ + size_t zlen; + + zlen = wnd->zone_end - wnd->zone_bit; + if (zlen) + wnd_add_free_ext(wnd, wnd->zone_bit, zlen, false); + + if (!RB_EMPTY_ROOT(&wnd->start_tree) && len) + wnd_remove_free_ext(wnd, lcn, len); + + wnd->zone_bit = lcn; + wnd->zone_end = lcn + len; +} + +int ntfs_trim_fs(ntfs_sb_info *sbi, struct fstrim_range *range) +{ + int err = 0; + struct super_block *sb = sbi->sb; + wnd_bitmap *wnd = &sbi->used.bitmap; + u32 wbits = 8 * sb->s_blocksize; + CLST len = 0, lcn = 0, done = 0; + CLST minlen = bytes_to_cluster(sbi, range->minlen); + CLST lcn_from = bytes_to_cluster(sbi, range->start); + size_t iw = lcn_from >> (sb->s_blocksize_bits + 3); + u32 wbit = lcn_from & (wbits - 1); + const ulong *buf; + CLST lcn_to; + + if (!minlen) + minlen = 1; + + if (range->len == (u64)-1) + lcn_to = wnd->nbits; + else + lcn_to = bytes_to_cluster(sbi, range->start + range->len); + + down_read_nested(&wnd->rw_lock, BITMAP_MUTEX_CLUSTERS); + + for (; iw < wnd->nbits; iw++, wbit = 0) { + CLST lcn_wnd = iw * wbits; + struct buffer_head *bh; + + if (lcn_wnd > lcn_to) + break; + + if (!wnd->free_bits[iw]) + continue; + + if (iw + 1 == wnd->nwnd) + wbits = wnd->bits_last; 
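The same index arithmetic recurs throughout this file: a global bit number is split into a window (one disk block of bitmap data, i.e. 8 * blocksize bits) and a bit offset inside that window, via `iw = bit >> (sb->s_blocksize_bits + 3)` and `wbit = bit & (wbits - 1)`. A minimal userspace sketch of that mapping (the `wnd_locate` helper and `struct wnd_pos` are hypothetical names, not part of the patch):

```c
#include <assert.h>
#include <stddef.h>

/* Position of a bit inside the windowed bitmap: which block ("window")
 * holds it, and where inside that block it sits. */
struct wnd_pos {
	size_t iw;     /* window (block) index */
	unsigned wbit; /* bit offset inside that window */
};

static struct wnd_pos wnd_locate(size_t bit, unsigned blocksize_bits)
{
	/* wbits == 8 * blocksize == 1 << (blocksize_bits + 3) */
	unsigned wbits = 8u << blocksize_bits;
	struct wnd_pos p;

	p.iw = bit >> (blocksize_bits + 3);
	p.wbit = (unsigned)(bit & (wbits - 1));
	return p;
}
```

With a 4K block (blocksize_bits == 12) one window covers 32768 bits, so bit 100000 lives in window 3 at offset 1696; only the last window may cover fewer bits, which is what `wnd->bits_last` accounts for.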
+ + if (lcn_wnd + wbits > lcn_to) + wbits = lcn_to - lcn_wnd; + + bh = wnd_map(wnd, iw); + if (IS_ERR(bh)) { + err = PTR_ERR(bh); + break; + } + + buf = (ulong *)bh->b_data; + + for (; wbit < wbits; wbit++) { + if (!test_bit(wbit, buf)) { + if (!len) + lcn = lcn_wnd + wbit; + len += 1; + continue; + } + if (len >= minlen) { + err = ntfs_discard(sbi, lcn, len); + if (err) + goto out; + done += len; + } + len = 0; + } + put_bh(bh); + } + + /* Process the last fragment */ + if (len >= minlen) { + err = ntfs_discard(sbi, lcn, len); + if (err) + goto out; + done += len; + } + +out: + range->len = done << sbi->cluster_bits; + + up_read(&wnd->rw_lock); + + return err; +}

From patchwork Fri Aug 21 16:25:21 2020
X-Patchwork-Submitter: Konstantin Komarov
X-Patchwork-Id: 11729959
From: Konstantin Komarov
To: "viro@zeniv.linux.org.uk" , "linux-kernel@vger.kernel.org" , "linux-fsdevel@vger.kernel.org"
CC: Pali Rohár
Subject: [PATCH v2 05/10] fs/ntfs3: Add attrib operations
Date: Fri, 21 Aug 2020 16:25:21 +0000
X-Mailing-List: linux-fsdevel@vger.kernel.org

This adds attrib operations

Signed-off-by: Konstantin Komarov
---
fs/ntfs3/attrib.c | 1285 +++++++++++++++++++++++++++++++++++++++++++ fs/ntfs3/attrlist.c | 455 +++++++++++++++ fs/ntfs3/xattr.c | 968 ++++++++++++++++++++++++++++++++ 3 files changed, 2708 insertions(+) create mode 100644 fs/ntfs3/attrib.c create mode 100644 fs/ntfs3/attrlist.c create mode 100644 fs/ntfs3/xattr.c diff --git a/fs/ntfs3/attrib.c b/fs/ntfs3/attrib.c new file mode 100644 index 000000000000..fdf72211efea --- /dev/null +++ b/fs/ntfs3/attrib.c @@ -0,0 +1,1285 @@ +// SPDX-License-Identifier:
GPL-2.0 +/* + * linux/fs/ntfs3/attrib.c + * + * Copyright (C) 2019-2020 Paragon Software GmbH, All rights reserved. + * + * TODO: merge attr_set_size/attr_data_get_block/attr_allocate_frame? + */ + +#include +#include +#include +#include +#include +#include +#include +#include + +#include "debug.h" +#include "ntfs.h" +#include "ntfs_fs.h" + +#ifdef NTFS3_PREALLOCATE +/* + * You can set external NTFS_MIN_LOG2_OF_CLUMP/NTFS_MAX_LOG2_OF_CLUMP to manage + * preallocate algorithm + */ +#ifndef NTFS_MIN_LOG2_OF_CLUMP +#define NTFS_MIN_LOG2_OF_CLUMP 16 +#endif + +#ifndef NTFS_MAX_LOG2_OF_CLUMP +#define NTFS_MAX_LOG2_OF_CLUMP 26 +#endif + +// 16M +#define NTFS_CLUMP_MIN (1 << (NTFS_MIN_LOG2_OF_CLUMP + 8)) +// 16G +#define NTFS_CLUMP_MAX (1ull << (NTFS_MAX_LOG2_OF_CLUMP + 8)) + +/* + * get_pre_allocated + * + */ +static inline u64 get_pre_allocated(u64 size) +{ + u32 clump; + u8 align_shift; + u64 ret; + + if (size <= NTFS_CLUMP_MIN) { + clump = 1 << NTFS_MIN_LOG2_OF_CLUMP; + align_shift = NTFS_MIN_LOG2_OF_CLUMP; + } else if (size >= NTFS_CLUMP_MAX) { + clump = 1 << NTFS_MAX_LOG2_OF_CLUMP; + align_shift = NTFS_MAX_LOG2_OF_CLUMP; + } else { + align_shift = NTFS_MIN_LOG2_OF_CLUMP - 1 + + __ffs(size >> (8 + NTFS_MIN_LOG2_OF_CLUMP)); + clump = 1u << align_shift; + } + + ret = (((size + clump - 1) >> align_shift)) << align_shift; + + return ret; +} +#endif + +/* + * attr_must_be_resident + * + * returns true if attribute must be resident + */ +static inline bool attr_must_be_resident(ntfs_sb_info *sbi, ATTR_TYPE type) +{ + const ATTR_DEF_ENTRY *de; + + switch (type) { + case ATTR_STD: + case ATTR_NAME: + case ATTR_ID: + case ATTR_LABEL: + case ATTR_VOL_INFO: + case ATTR_ROOT: + case ATTR_EA_INFO: + return true; + default: + de = ntfs_query_def(sbi, type); + if (de && (de->flags & NTFS_ATTR_MUST_BE_RESIDENT)) + return true; + return false; + } +} + +/* + * attr_load_runs + * + * load all runs stored in 'attr' + */ +int attr_load_runs(ATTRIB *attr, ntfs_inode *ni, struct runs_tree 
*run) +{ + int err; + CLST svcn = le64_to_cpu(attr->nres.svcn); + CLST evcn = le64_to_cpu(attr->nres.evcn); + u32 asize; + u16 run_off; + + if (svcn >= evcn + 1 || run_is_mapped_full(run, svcn, evcn)) + return 0; + + asize = le32_to_cpu(attr->size); + run_off = le16_to_cpu(attr->nres.run_off); + err = run_unpack_ex(run, ni->mi.sbi, ni->mi.rno, svcn, evcn, + Add2Ptr(attr, run_off), asize - run_off); + if (err < 0) + return err; + + return 0; +} + +/* + * int run_deallocate_ex + * + * Deallocate clusters + */ +static int run_deallocate_ex(ntfs_sb_info *sbi, struct runs_tree *run, CLST vcn, + CLST len, CLST *done, bool trim) +{ + int err = 0; + CLST vcn0 = vcn, lcn, clen, dn = 0; + size_t idx; + + if (!len) + goto out; + + if (!run_lookup_entry(run, vcn, &lcn, &clen, &idx)) { +failed: + run_truncate(run, vcn0); + err = -EINVAL; + goto out; + } + + for (;;) { + if (clen > len) + clen = len; + + if (!clen) { + err = -EINVAL; + goto out; + } + + if (lcn != SPARSE_LCN) { + mark_as_free_ex(sbi, lcn, clen, trim); + dn += clen; + } + + len -= clen; + if (!len) + break; + + if (!run_get_entry(run, ++idx, &vcn, &lcn, &clen)) { + // save memory - don't load entire run + goto failed; + } + } + +out: + if (done) + *done = dn; + + return err; +} + +/* + * attr_allocate_clusters + * + * find free space, mark it as used and store in 'run' + */ +int attr_allocate_clusters(ntfs_sb_info *sbi, struct runs_tree *run, CLST vcn, + CLST lcn, CLST len, CLST *pre_alloc, + enum ALLOCATE_OPT opt, CLST *alen, const size_t fr, + CLST *new_lcn) +{ + int err; + CLST flen, vcn0 = vcn, pre = pre_alloc ? 
*pre_alloc : 0; + wnd_bitmap *wnd = &sbi->used.bitmap; + size_t cnt = run->count; + + for (;;) { + err = ntfs_look_for_free_space(sbi, lcn, len + pre, &lcn, &flen, + opt); + +#ifdef NTFS3_PREALLOCATE + if (err == -ENOSPC && pre) { + pre = 0; + if (*pre_alloc) + *pre_alloc = 0; + continue; + } +#endif + + if (err) + goto out; + + if (new_lcn && vcn == vcn0) + *new_lcn = lcn; + + /* Add new fragment into run storage */ + if (!run_add_entry(run, vcn, lcn, flen)) { + down_write_nested(&wnd->rw_lock, BITMAP_MUTEX_CLUSTERS); + wnd_set_free(wnd, lcn, flen); + up_write(&wnd->rw_lock); + err = -ENOMEM; + goto out; + } + + vcn += flen; + + if (flen >= len || opt == ALLOCATE_MFT || + (fr && run->count - cnt >= fr)) { + *alen = vcn - vcn0; + return 0; + } + + len -= flen; + } + +out: + /* undo */ + run_deallocate_ex(sbi, run, vcn0, vcn - vcn0, NULL, false); + run_truncate(run, vcn0); + + return err; +} + +/* + * attr_set_size_res + * + * helper for attr_set_size + */ +static int attr_set_size_res(ntfs_inode *ni, ATTRIB *attr, u64 new_size, + struct runs_tree *run, ATTRIB **ins_attr) +{ + int err = 0; + mft_inode *mi = &ni->mi; + ntfs_sb_info *sbi = mi->sbi; + MFT_REC *rec = mi->mrec; + u32 used = le32_to_cpu(rec->used); + u32 asize = le32_to_cpu(attr->size); + u32 aoff = PtrOffset(rec, attr); + u32 rsize = le32_to_cpu(attr->res.data_size); + u32 tail = used - aoff - asize; + char *next = Add2Ptr(attr, asize); + int dsize; + CLST len, alen; + ATTRIB *attr_s = NULL; + bool is_ext; + + if (new_size >= sbi->max_bytes_per_attr) + goto resident2nonresident; + + dsize = QuadAlign(new_size) - QuadAlign(rsize); + + if (dsize < 0) { + char *end = Add2Ptr(rec, sbi->record_size); + + /*TODO: is it necessary? 
check range in case of bad record*/ + if (next <= (char *)rec || next > end || + next + dsize <= (char *)rec || next + dsize + tail > end) { + return -EINVAL; + } + + memmove(next + dsize, next, tail); + } else if (dsize > 0) { + if (used + dsize > sbi->max_bytes_per_attr) + goto resident2nonresident; + memmove(next + dsize, next, tail); + memset(next, 0, dsize); + } + + rec->used = cpu_to_le32(used + dsize); + attr->size = cpu_to_le32(asize + dsize); + attr->res.data_size = cpu_to_le32(new_size); + mi->dirty = true; + *ins_attr = attr; + + return 0; + +resident2nonresident: + len = bytes_to_cluster(sbi, rsize); + + run_init(run); + + is_ext = is_attr_ext(attr); + + if (!len) { + alen = 0; + } else if (is_ext) { + if (!run_add_entry(run, 0, SPARSE_LCN, len)) { + err = -ENOMEM; + goto out; + } + alen = len; + } else { + err = attr_allocate_clusters(sbi, run, 0, 0, len, NULL, + ALLOCATE_DEF, &alen, 0, NULL); + if (err) + goto out; + + err = ntfs_sb_write_run(sbi, run, 0, resident_data(attr), + rsize); + if (err) + goto out; + } + + attr_s = ntfs_memdup(attr, asize); + if (!attr_s) { + err = -ENOMEM; + goto out; + } + + /*verify(mi_remove_attr(mi, attr));*/ + used -= asize; + memmove(attr, Add2Ptr(attr, asize), used - aoff); + rec->used = cpu_to_le32(used); + mi->dirty = true; + + err = ni_insert_nonresident(ni, attr_s->type, attr_name(attr_s), + attr_s->name_len, run, 0, alen, + attr_s->flags, &attr, NULL); + if (err) + goto out; + + ntfs_free(attr_s); + attr->nres.data_size = cpu_to_le64(rsize); + attr->nres.valid_size = attr->nres.data_size; + + *ins_attr = attr; + + if (attr_s->type == ATTR_DATA && !attr_s->name_len && + run == &ni->file.run) { + ni->ni_flags &= ~NI_FLAG_RESIDENT; + } + + /* Resident attribute becomes non resident */ + return 0; + +out: + /* undo: do not trim new allocated clusters */ + run_deallocate(sbi, run, false); + run_close(run); + + if (attr_s) { + memmove(next, Add2Ptr(rec, aoff), used - aoff); + memcpy(Add2Ptr(rec, aoff), attr_s, asize); 
+ rec->used = cpu_to_le32(used + asize); + mi->dirty = true; + ntfs_free(attr_s); + } + + return err; +} + +/* + * attr_set_size + * + * change the size of attribute + * Extend: + * - sparse/compressed: no allocated clusters + * - normal: append allocated and preallocated new clusters + * Shrink: + * - no deallocate if keep_prealloc is set + */ +int attr_set_size(ntfs_inode *ni, ATTR_TYPE type, const __le16 *name, + u8 name_len, struct runs_tree *run, u64 new_size, + const u64 *new_valid, bool keep_prealloc, ATTRIB **ret) +{ + int err = 0; + ntfs_sb_info *sbi = ni->mi.sbi; + u8 cluster_bits = sbi->cluster_bits; + bool is_mft = + ni->mi.rno == MFT_REC_MFT && type == ATTR_DATA && !name_len; + u64 old_valid, old_size, old_alloc, new_alloc, new_alloc_tmp; + ATTRIB *attr, *attr_b; + ATTR_LIST_ENTRY *le, *le_b; + mft_inode *mi, *mi_b; + CLST alen, vcn, lcn, new_alen, old_alen, svcn, evcn; + CLST next_svcn, pre_alloc = -1, done = 0; + bool is_ext; + u32 align; + MFT_REC *rec; + +again: + le_b = NULL; + attr_b = ni_find_attr(ni, NULL, &le_b, type, name, name_len, NULL, + &mi_b); + if (!attr_b) { + err = -ENOENT; + goto out; + } + + if (!attr_b->non_res) { + err = attr_set_size_res(ni, attr_b, new_size, run, &attr_b); + if (err) + goto out; + + if (!attr_b->non_res) + goto out; + + /* Resident attribute becomes non resident */ + goto again; + } + + is_ext = is_attr_ext(attr_b); + +again_1: + + if (is_ext) { + align = 1u << (attr_b->nres.c_unit + cluster_bits); + if (is_attr_sparsed(attr_b)) + keep_prealloc = false; + } else { + align = sbi->cluster_size; + } + + old_valid = le64_to_cpu(attr_b->nres.valid_size); + old_size = le64_to_cpu(attr_b->nres.data_size); + old_alloc = le64_to_cpu(attr_b->nres.alloc_size); + old_alen = old_alloc >> cluster_bits; + + new_alloc = (new_size + align - 1) & ~(u64)(align - 1); + new_alen = new_alloc >> cluster_bits; + + if (keep_prealloc && is_ext) + keep_prealloc = false; + + if (keep_prealloc && new_size < old_size) { + 
attr_b->nres.data_size = cpu_to_le64(new_size); + mi_b->dirty = true; + goto ok; + } + + vcn = old_alen - 1; + + svcn = le64_to_cpu(attr_b->nres.svcn); + evcn = le64_to_cpu(attr_b->nres.evcn); + + if (svcn <= vcn && vcn <= evcn) { + attr = attr_b; + le = le_b; + mi = mi_b; + } else if (!le_b) { + err = -EINVAL; + goto out; + } else { + le = le_b; + attr = ni_find_attr(ni, attr_b, &le, type, name, name_len, &vcn, + &mi); + if (!attr) { + err = -EINVAL; + goto out; + } + +next_le_1: + svcn = le64_to_cpu(attr->nres.svcn); + evcn = le64_to_cpu(attr->nres.evcn); + } + +next_le: + rec = mi->mrec; + + err = attr_load_runs(attr, ni, run); + if (err) + goto out; + + if (new_size > old_size) { + CLST to_allocate; + size_t cnt, free; + + if (new_alloc <= old_alloc) { + attr_b->nres.data_size = cpu_to_le64(new_size); + mi_b->dirty = true; + goto ok; + } + + to_allocate = new_alen - old_alen; +add_alloc_in_same_attr_seg: + lcn = 0; + if (is_mft) { + /* mft allocates clusters from mftzone */ + pre_alloc = 0; + } else if (is_ext) { + /* no preallocate for sparse/compress */ + pre_alloc = 0; + } else if (pre_alloc == -1) { + pre_alloc = 0; +#ifdef NTFS3_PREALLOCATE + if (type == ATTR_DATA && !name_len) { + CLST new_alen2 = bytes_to_cluster( + sbi, get_pre_allocated(new_size)); + pre_alloc = new_alen2 - new_alen; + } +#endif + /* Get the last lcn to allocate from */ + if (old_alen && + !run_lookup_entry(run, vcn, &lcn, NULL, NULL)) { + lcn = SPARSE_LCN; + } + + if (lcn == SPARSE_LCN) + lcn = 0; + else if (lcn) + lcn += 1; + + free = wnd_zeroes(&sbi->used.bitmap); + if (to_allocate > free) { + err = -ENOSPC; + goto out; + } + + if (pre_alloc && to_allocate + pre_alloc > free) + pre_alloc = 0; + } + + vcn = old_alen; + cnt = run->count; + + if (is_ext) { + if (!run_add_entry(run, vcn, SPARSE_LCN, to_allocate)) { + err = -ENOMEM; + goto out; + } + alen = to_allocate; + } else { + /* ~3 bytes per fragment */ + err = attr_allocate_clusters( + sbi, run, vcn, lcn, to_allocate, &pre_alloc, 
+ is_mft ? ALLOCATE_MFT : 0, &alen, + is_mft ? 0 : + (sbi->record_size - + le32_to_cpu(rec->used) + 8) / + 3 + + 1, + NULL); + if (err) + goto out; + } + + done += alen; + vcn += alen; + if (to_allocate > alen) + to_allocate -= alen; + else + to_allocate = 0; + +pack_runs: + err = mi_pack_runs(mi, attr, run, vcn - svcn); + if (err) + goto out; + + next_svcn = le64_to_cpu(attr->nres.evcn) + 1; + new_alloc_tmp = (u64)next_svcn << cluster_bits; + attr_b->nres.alloc_size = cpu_to_le64(new_alloc_tmp); + mi_b->dirty = true; + + if (next_svcn >= vcn && !to_allocate) { + /* Normal way. update attribute and exit */ + attr_b->nres.data_size = cpu_to_le64(new_size); + goto ok; + } + + /* at least two mft to avoid recursive loop*/ + if (is_mft && next_svcn == vcn && + (done << sbi->cluster_bits) >= 2 * sbi->record_size) { + new_size = new_alloc_tmp; + attr_b->nres.data_size = attr_b->nres.alloc_size; + goto ok; + } + + if (le32_to_cpu(rec->used) < sbi->record_size) { + old_alen = next_svcn; + evcn = old_alen - 1; + goto add_alloc_in_same_attr_seg; + } + + if (type == ATTR_LIST) { + err = ni_expand_list(ni); + if (err) + goto out; + if (next_svcn < vcn) + goto pack_runs; + + /* layout of records is changed */ + goto again; + } + + if (!ni->attr_list.size) { + err = ni_create_attr_list(ni); + if (err) + goto out; + /* layout of records is changed */ + } + + if (next_svcn >= vcn) + goto again; + + /* insert new attribute segment */ + err = ni_insert_nonresident(ni, type, name, name_len, run, + next_svcn, vcn - next_svcn, + attr_b->flags, &attr, &mi); + if (err) + goto out; + + if (!is_mft) + run_truncate_head(run, evcn + 1); + + svcn = le64_to_cpu(attr->nres.svcn); + evcn = le64_to_cpu(attr->nres.evcn); + + le_b = NULL; + /* layout of records maybe changed */ + /* find base attribute to update*/ + attr_b = ni_find_attr(ni, NULL, &le_b, type, name, name_len, + NULL, &mi_b); + if (!attr_b) { + err = -ENOENT; + goto out; + } + + attr_b->nres.alloc_size = cpu_to_le64(vcn << 
cluster_bits); + attr_b->nres.data_size = attr_b->nres.alloc_size; + attr_b->nres.valid_size = attr_b->nres.alloc_size; + mi_b->dirty = true; + goto again_1; + } + + if (new_size != old_size || + (new_alloc != old_alloc && !keep_prealloc)) { + vcn = max(svcn, new_alen); + new_alloc_tmp = (u64)vcn << cluster_bits; + + err = run_deallocate_ex(sbi, run, vcn, evcn - vcn + 1, &alen, + true); + if (err) + goto out; + + run_truncate(run, vcn); + + if (vcn > svcn) { + err = mi_pack_runs(mi, attr, run, vcn - svcn); + if (err < 0) + goto out; + } else if (le && le->vcn) { + u16 le_sz = le16_to_cpu(le->size); + + /* + * NOTE: list entries for one attribute are always + * the same size. We deal with last entry (vcn==0) + * and it is not first in entries array + * (list entry for std attribute always first) + * So it is safe to step back + */ + mi_remove_attr(mi, attr); + + if (!al_remove_le(ni, le)) { + err = -EINVAL; + goto out; + } + + le = (ATTR_LIST_ENTRY *)((u8 *)le - le_sz); + } else { + attr->nres.evcn = cpu_to_le64((u64)vcn - 1); + mi->dirty = true; + } + + attr_b->nres.alloc_size = cpu_to_le64(new_alloc_tmp); + + if (vcn == new_alen) { + attr_b->nres.data_size = cpu_to_le64(new_size); + if (new_size < old_valid) + attr_b->nres.valid_size = + attr_b->nres.data_size; + } else { + if (new_alloc_tmp <= + le64_to_cpu(attr_b->nres.data_size)) + attr_b->nres.data_size = + attr_b->nres.alloc_size; + if (new_alloc_tmp < + le64_to_cpu(attr_b->nres.valid_size)) + attr_b->nres.valid_size = + attr_b->nres.alloc_size; + } + + if (is_ext) + le64_sub_cpu(&attr_b->nres.total_size, + ((u64)alen << cluster_bits)); + + mi_b->dirty = true; + + if (new_alloc_tmp <= new_alloc) + goto ok; + + old_size = new_alloc_tmp; + vcn = svcn - 1; + + if (le == le_b) { + attr = attr_b; + mi = mi_b; + evcn = svcn - 1; + svcn = 0; + goto next_le; + } + + if (le->type != type || le->name_len != name_len || + memcmp(le_name(le), name, name_len * sizeof(short))) { + err = -EINVAL; + goto out; + } + + err = 
ni_load_mi(ni, le, &mi); + if (err) + goto out; + + attr = mi_find_attr(mi, NULL, type, name, name_len, &le->id); + if (!attr) { + err = -EINVAL; + goto out; + } + goto next_le_1; + } + +ok: + if (new_valid) { + __le64 valid = cpu_to_le64(min(*new_valid, new_size)); + + if (attr_b->nres.valid_size != valid) { + attr_b->nres.valid_size = valid; + mi_b->dirty = true; + } + } + +out: + if (!err && attr_b && ret) + *ret = attr_b; + + /* update inode_set_bytes*/ + if (!err && attr_b && attr_b->non_res && + ((type == ATTR_DATA && !name_len) || + (type == ATTR_ALLOC && name == I30_NAME))) { + ni->vfs_inode.i_size = new_size; + inode_set_bytes(&ni->vfs_inode, + le64_to_cpu(attr_b->nres.alloc_size)); + } + + return err; +} + +int attr_data_get_block(ntfs_inode *ni, CLST vcn, CLST *lcn, CLST *len, + bool *new) +{ + int err = 0; + struct runs_tree *run = &ni->file.run; + ntfs_sb_info *sbi; + u8 cluster_bits; + ATTRIB *attr, *attr_b; + ATTR_LIST_ENTRY *le, *le_b; + mft_inode *mi, *mi_b; + CLST hint, svcn, evcn1, new_evcn1, next_svcn; + u64 new_size, total_size, new_alloc; + u32 clst_per_frame, frame_size; + bool ok; + + if (new) + *new = false; + + down_read(&ni->file.run_lock); + ok = run_lookup_entry(run, vcn, lcn, len, NULL); + up_read(&ni->file.run_lock); + + if (ok && (*lcn != SPARSE_LCN || !new)) { + /* normal way */ + return 0; + } + + sbi = ni->mi.sbi; + cluster_bits = sbi->cluster_bits; + new_size = ((u64)vcn + 1) << cluster_bits; + + ni_lock(ni); + down_write(&ni->file.run_lock); + +again: + le_b = NULL; + attr_b = ni_find_attr(ni, NULL, &le_b, ATTR_DATA, NULL, 0, NULL, &mi_b); + if (!attr_b) { + err = -ENOENT; + goto out; + } + + if (!attr_b->non_res) { + if (!new) { + *lcn = RESIDENT_LCN; + goto out; + } + + err = attr_set_size_res(ni, attr_b, new_size, run, &attr_b); + if (err) + goto out; + + if (!attr_b->non_res) { + /* Resident attribute still resident */ + *lcn = RESIDENT_LCN; + goto out; + } + + /* Resident attribute becomes non resident */ + goto again; + } 
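Sparse and compressed attributes are allocated in whole compression frames of `1 << c_unit` clusters, so byte sizes are rounded up to a frame boundary with the usual power-of-two mask, `(x + frame_size - 1) & ~(frame_size - 1)`. A minimal userspace sketch of that rounding (the `frame_align` helper and the sample values below are illustrative, not from the patch):

```c
#include <assert.h>
#include <stdint.h>

/* Round a byte size up to a whole compression frame.
 * c_unit is the log2 number of clusters per frame,
 * cluster_bits the log2 cluster size in bytes. */
static uint64_t frame_align(uint64_t new_size, unsigned c_unit,
			    unsigned cluster_bits)
{
	uint64_t frame_size = (uint64_t)(1u << c_unit) << cluster_bits;

	/* frame_size is a power of two, so masking rounds up exactly */
	return (new_size + frame_size - 1) & ~(frame_size - 1);
}
```

For example, with 4K clusters (cluster_bits == 12) and 16 clusters per frame (c_unit == 4), a frame is 64K: writing one byte past a frame boundary forces allocation of the next full 64K frame.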
+ + clst_per_frame = 1u << attr_b->nres.c_unit; + frame_size = clst_per_frame << cluster_bits; + new_alloc = (new_size + frame_size - 1) & ~(u64)(frame_size - 1); + + svcn = le64_to_cpu(attr_b->nres.svcn); + evcn1 = le64_to_cpu(attr_b->nres.evcn) + 1; + + attr = attr_b; + le = le_b; + mi = mi_b; + + if (le_b && (vcn < svcn || evcn1 <= vcn)) { + attr = ni_find_attr(ni, attr_b, &le, ATTR_DATA, NULL, 0, &vcn, + &mi); + if (!attr) { + err = -EINVAL; + goto out; + } + svcn = le64_to_cpu(attr->nres.svcn); + evcn1 = le64_to_cpu(attr->nres.evcn) + 1; + } + + err = attr_load_runs(attr, ni, run); + if (err) + goto out; + + if (!ok) { + ok = run_lookup_entry(run, vcn, lcn, len, NULL); + if (ok && (*lcn != SPARSE_LCN || !new)) { + /* normal way */ + err = 0; + goto out; + } + } + + if (!is_attr_ext(attr_b)) { + err = -EINVAL; + goto out; + } + + /* Get the last lcn to allocate from */ + hint = 0; + + if (vcn > evcn1) { + if (!run_add_entry(run, evcn1, SPARSE_LCN, vcn - evcn1)) { + err = -ENOMEM; + goto out; + } + } else if (vcn && !run_lookup_entry(run, vcn - 1, &hint, NULL, NULL)) { + hint = -1; + } + + err = attr_allocate_clusters(sbi, run, vcn, hint + 1, clst_per_frame, + NULL, 0, len, 0, lcn); + if (err) + goto out; + + *new = true; + + new_evcn1 = vcn + clst_per_frame; + if (new_evcn1 < evcn1) + new_evcn1 = evcn1; + + total_size = le64_to_cpu(attr_b->nres.total_size) + frame_size; + +repack: + + err = mi_pack_runs(mi, attr, run, new_evcn1 - svcn); + if (err < 0) + goto out; + + attr_b->nres.total_size = cpu_to_le64(total_size); + inode_set_bytes(&ni->vfs_inode, total_size); + + mi_b->dirty = true; + mark_inode_dirty(&ni->vfs_inode); + + next_svcn = le64_to_cpu(attr->nres.evcn) + 1; + + if (next_svcn >= evcn1) { + /* Normal way. 
update attribute and exit */ + goto out; + } + + if (!ni->attr_list.le) { + err = ni_create_attr_list(ni); + if (err) + goto out; + /* layout of records is changed */ + le_b = NULL; + attr_b = ni_find_attr(ni, NULL, &le_b, ATTR_DATA, NULL, 0, NULL, + &mi_b); + if (!attr_b) { + err = -ENOENT; + goto out; + } + + attr = attr_b; + le = le_b; + mi = mi_b; + goto repack; + } + + /* Estimate next attribute */ + attr = ni_find_attr(ni, attr, &le, ATTR_DATA, NULL, 0, &evcn1, &mi); + + if (attr && le32_to_cpu(mi->mrec->used) + 8 <= sbi->record_size) { + svcn = next_svcn; + evcn1 = le64_to_cpu(attr->nres.evcn) + 1; + + err = attr_load_runs(attr, ni, run); + if (err) + goto out; + + attr->nres.svcn = cpu_to_le64(svcn); + err = mi_pack_runs(mi, attr, run, evcn1 - svcn); + if (err < 0) + goto out; + + le->vcn = cpu_to_le64(svcn); + + mi->dirty = true; + + next_svcn = le64_to_cpu(attr->nres.evcn) + 1; + + if (next_svcn >= evcn1) { + /* Normal way. update attribute and exit */ + goto out; + } + } + + err = ni_insert_nonresident(ni, ATTR_DATA, NULL, 0, run, next_svcn, + evcn1 - next_svcn, attr_b->flags, &attr, + &mi); + if (err) + goto out; + + run_truncate_head(run, vcn); + +out: + up_write(&ni->file.run_lock); + ni_unlock(ni); + + return err; +} + +/* + * attr_load_runs_vcn + * + * load runs with vcn + */ +int attr_load_runs_vcn(ntfs_inode *ni, ATTR_TYPE type, const __le16 *name, + u8 name_len, struct runs_tree *run, CLST vcn) +{ + ATTRIB *attr; + int err; + CLST svcn, evcn; + u16 ro; + + attr = ni_find_attr(ni, NULL, NULL, type, name, name_len, &vcn, NULL); + if (!attr) + return -ENOENT; + + svcn = le64_to_cpu(attr->nres.svcn); + evcn = le64_to_cpu(attr->nres.evcn); + + if (evcn < vcn || vcn < svcn) + return -EINVAL; + + ro = le16_to_cpu(attr->nres.run_off); + err = run_unpack_ex(run, ni->mi.sbi, ni->mi.rno, svcn, evcn, + Add2Ptr(attr, ro), le32_to_cpu(attr->size) - ro); + if (err < 0) + return err; + return 0; +} + +/* + * attr_is_frame_compressed + * + * This function is used 
to detect a compressed frame
+ */
+int attr_is_frame_compressed(ntfs_inode *ni, ATTRIB *attr, CLST frame,
+			     CLST *clst_data, bool *is_compr)
+{
+	int err;
+	u32 clst_frame;
+	CLST len, lcn, vcn, alen, slen, vcn1;
+	size_t idx;
+	struct runs_tree *run;
+
+	*clst_data = 0;
+	*is_compr = false;
+
+	if (!is_attr_compressed(attr))
+		return 0;
+
+	if (!attr->non_res)
+		return 0;
+
+	clst_frame = 1u << attr->nres.c_unit;
+	vcn = frame * clst_frame;
+	run = &ni->file.run;
+
+	if (!run_lookup_entry(run, vcn, &lcn, &len, &idx)) {
+		err = attr_load_runs_vcn(ni, attr->type, attr_name(attr),
+					 attr->name_len, run, vcn);
+		if (err)
+			return err;
+
+		if (!run_lookup_entry(run, vcn, &lcn, &len, &idx))
+			return -ENOENT;
+	}
+
+	if (lcn == SPARSE_LCN) {
+		/* The frame is sparse if "clst_frame" clusters are sparse */
+		*is_compr = true;
+		return 0;
+	}
+
+	if (len >= clst_frame) {
+		/*
+		 * The frame is not compressed because
+		 * it does not contain any sparse clusters
+		 */
+		*clst_data = clst_frame;
+		return 0;
+	}
+
+	alen = bytes_to_cluster(ni->mi.sbi, le64_to_cpu(attr->nres.alloc_size));
+	slen = 0;
+	*clst_data = len;
+
+	/*
+	 * The frame is compressed if *clst_data + slen >= clst_frame
+	 * Check the next fragments
+	 */
+	while ((vcn += len) < alen) {
+		vcn1 = vcn;
+
+		if (!run_get_entry(run, ++idx, &vcn, &lcn, &len) ||
+		    vcn1 != vcn) {
+			err = attr_load_runs_vcn(ni, attr->type,
+						 attr_name(attr),
+						 attr->name_len, run, vcn1);
+			if (err)
+				return err;
+			vcn = vcn1;
+
+			if (!run_lookup_entry(run, vcn, &lcn, &len, &idx))
+				return -ENOENT;
+		}
+
+		if (lcn == SPARSE_LCN)
+			slen += len;
+		else {
+			if (slen) {
+				/*
+				 * data_clusters + sparse_clusters is
+				 * not enough for a frame
+				 */
+				return -EINVAL;
+			}
+			*clst_data += len;
+		}
+
+		if (*clst_data + slen >= clst_frame) {
+			if (!slen) {
+				/*
+				 * There are no sparse clusters in this frame,
+				 * so it is not compressed
+				 */
+				*clst_data = clst_frame;
+			} else
+				*is_compr = *clst_data < clst_frame;
+			break;
+		}
+	}
+
+	return 0;
+}
+
+/*
+ * 
attr_allocate_frame + * + * allocate/free clusters for 'frame' + */ +int attr_allocate_frame(ntfs_inode *ni, CLST frame, size_t compr_size, + u64 new_valid) +{ + int err = 0; + struct runs_tree *run = &ni->file.run; + ntfs_sb_info *sbi = ni->mi.sbi; + ATTRIB *attr, *attr_b; + ATTR_LIST_ENTRY *le, *le_b; + mft_inode *mi, *mi_b; + CLST svcn, evcn1, next_svcn, lcn, len; + CLST vcn, clst_data; + u64 total_size, valid_size, data_size; + bool is_compr; + + le_b = NULL; + attr_b = ni_find_attr(ni, NULL, &le_b, ATTR_DATA, NULL, 0, NULL, &mi_b); + if (!attr_b) + return -ENOENT; + + if (!is_attr_ext(attr_b)) + return -EINVAL; + + vcn = frame << NTFS_LZNT_CUNIT; + total_size = le64_to_cpu(attr_b->nres.total_size); + + svcn = le64_to_cpu(attr_b->nres.svcn); + evcn1 = le64_to_cpu(attr_b->nres.evcn) + 1; + data_size = le64_to_cpu(attr_b->nres.data_size); + + if (svcn <= vcn && vcn < evcn1) { + attr = attr_b; + le = le_b; + mi = mi_b; + } else if (!le_b) { + err = -EINVAL; + goto out; + } else { + le = le_b; + attr = ni_find_attr(ni, attr_b, &le, ATTR_DATA, NULL, 0, &vcn, + &mi); + if (!attr) { + err = -EINVAL; + goto out; + } + + svcn = le64_to_cpu(attr->nres.svcn); + evcn1 = le64_to_cpu(attr->nres.evcn) + 1; + } + + err = attr_load_runs(attr, ni, run); + if (err) + goto out; + + err = attr_is_frame_compressed(ni, attr_b, frame, &clst_data, + &is_compr); + if (err) + goto out; + + total_size -= clst_data << sbi->cluster_bits; + + len = bytes_to_cluster(sbi, compr_size); + + if (len == clst_data) + goto out; + + if (len < clst_data) { + err = run_deallocate_ex(sbi, run, vcn + len, clst_data - len, + NULL, true); + if (err) + goto out; + + if (!run_add_entry(run, vcn + len, SPARSE_LCN, + clst_data - len)) { + err = -ENOMEM; + goto out; + } + } else { + CLST alen, hint; + /* Get the last lcn to allocate from */ + if (vcn + clst_data && + !run_lookup_entry(run, vcn + clst_data - 1, &hint, NULL, + NULL)) { + hint = -1; + } + + err = attr_allocate_clusters(sbi, run, vcn + clst_data, + 
hint + 1, len - clst_data, NULL, 0, + &alen, 0, &lcn); + if (err) + goto out; + } + + total_size += len << sbi->cluster_bits; + +repack: + err = mi_pack_runs(mi, attr, run, evcn1 - svcn); + if (err < 0) + goto out; + + attr_b->nres.total_size = cpu_to_le64(total_size); + inode_set_bytes(&ni->vfs_inode, total_size); + + mi_b->dirty = true; + mark_inode_dirty(&ni->vfs_inode); + + next_svcn = le64_to_cpu(attr->nres.evcn) + 1; + + if (next_svcn >= evcn1) { + /* Normal way. update attribute and exit */ + goto out; + } + + if (!ni->attr_list.size) { + err = ni_create_attr_list(ni); + if (err) + goto out; + /* layout of records is changed */ + le_b = NULL; + attr_b = ni_find_attr(ni, NULL, &le_b, ATTR_DATA, NULL, 0, NULL, + &mi_b); + if (!attr_b) { + err = -ENOENT; + goto out; + } + + attr = attr_b; + le = le_b; + mi = mi_b; + goto repack; + } + + /* Estimate next attribute */ + attr = ni_find_attr(ni, attr, &le, ATTR_DATA, NULL, 0, &evcn1, &mi); + + if (attr && le32_to_cpu(mi->mrec->used) + 8 <= sbi->record_size) { + svcn = next_svcn; + evcn1 = le64_to_cpu(attr->nres.evcn) + 1; + + err = attr_load_runs(attr, ni, run); + if (err) + goto out; + + attr->nres.svcn = cpu_to_le64(svcn); + err = mi_pack_runs(mi, attr, run, evcn1 - svcn); + if (err < 0) + goto out; + + le->vcn = cpu_to_le64(svcn); + + mi->dirty = true; + + next_svcn = le64_to_cpu(attr->nres.evcn) + 1; + + if (next_svcn >= evcn1) { + /* Normal way. 
Update attribute and exit */
+			goto out;
+		}
+	}
+
+	err = ni_insert_nonresident(ni, ATTR_DATA, NULL, 0, run, next_svcn,
+				    evcn1 - next_svcn, attr_b->flags, &attr,
+				    &mi);
+	if (err)
+		goto out;
+
+	run_truncate_head(run, vcn);
+
+out:
+	if (new_valid > data_size)
+		new_valid = data_size;
+
+	valid_size = le64_to_cpu(attr_b->nres.valid_size);
+	if (new_valid != valid_size) {
+		attr_b->nres.valid_size = cpu_to_le64(new_valid);
+		mi_b->dirty = true;
+	}
+
+	return err;
+}
diff --git a/fs/ntfs3/attrlist.c b/fs/ntfs3/attrlist.c
new file mode 100644
index 000000000000..3b86572a2585
--- /dev/null
+++ b/fs/ntfs3/attrlist.c
@@ -0,0 +1,455 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * linux/fs/ntfs3/attrlist.c
+ *
+ * Copyright (C) 2019-2020 Paragon Software GmbH, All rights reserved.
+ *
+ */
+
+#include
+#include
+#include
+#include
+#include
+
+#include "debug.h"
+#include "ntfs.h"
+#include "ntfs_fs.h"
+
+/* Returns true if le is valid */
+static inline bool al_is_valid_le(const ntfs_inode *ni, ATTR_LIST_ENTRY *le)
+{
+	if (!le || !ni->attr_list.le || !ni->attr_list.size)
+		return false;
+
+	return PtrOffset(ni->attr_list.le, le) + le16_to_cpu(le->size) <=
+	       ni->attr_list.size;
+}
+
+void al_destroy(ntfs_inode *ni)
+{
+	run_close(&ni->attr_list.run);
+	ntfs_free(ni->attr_list.le);
+	ni->attr_list.le = NULL;
+	ni->attr_list.size = 0;
+	ni->attr_list.dirty = false;
+}
+
+/*
+ * ntfs_load_attr_list
+ *
+ * This method makes sure that the ATTRIB list, if present,
+ * has been properly set up.
+ */
+int ntfs_load_attr_list(ntfs_inode *ni, ATTRIB *attr)
+{
+	int err;
+	size_t lsize;
+	void *le = NULL;
+
+	if (ni->attr_list.size)
+		return 0;
+
+	if (!attr->non_res) {
+		lsize = le32_to_cpu(attr->res.data_size);
+		le = ntfs_alloc(al_aligned(lsize), 0);
+		if (!le) {
+			err = -ENOMEM;
+			goto out;
+		}
+		memcpy(le, resident_data(attr), lsize);
+	} else if (attr->nres.svcn) {
+		err = -EINVAL;
+		goto out;
+	} else {
+		u16 run_off = le16_to_cpu(attr->nres.run_off);
+
+		lsize = le64_to_cpu(attr->nres.data_size);
+
+		run_init(&ni->attr_list.run);
+
+		err = run_unpack_ex(&ni->attr_list.run, ni->mi.sbi, ni->mi.rno,
+				    0, le64_to_cpu(attr->nres.evcn),
+				    Add2Ptr(attr, run_off),
+				    le32_to_cpu(attr->size) - run_off);
+		if (err < 0)
+			goto out;
+
+		le = ntfs_alloc(al_aligned(lsize), 0);
+		if (!le) {
+			err = -ENOMEM;
+			goto out;
+		}
+
+		err = ntfs_read_run_nb(ni->mi.sbi, &ni->attr_list.run, 0, le,
+				       lsize, NULL);
+		if (err)
+			goto out;
+	}
+
+	ni->attr_list.size = lsize;
+	ni->attr_list.le = le;
+
+	return 0;
+
+out:
+	ni->attr_list.le = le;
+	al_destroy(ni);
+
+	return err;
+}
+
+/*
+ * al_enumerate
+ *
+ * Returns the next le in the list;
+ * if le is NULL, returns the first le.
+ */
+ATTR_LIST_ENTRY *al_enumerate(ntfs_inode *ni, ATTR_LIST_ENTRY *le)
+{
+	size_t off;
+	u16 sz;
+
+	if (!le) {
+		le = ni->attr_list.le;
+	} else {
+		sz = le16_to_cpu(le->size);
+		if (sz < sizeof(ATTR_LIST_ENTRY)) {
+			/* Impossible, because we should not return such le */
+			return NULL;
+		}
+		le = Add2Ptr(le, sz);
+	}
+
+	/* Check boundary */
+	off = PtrOffset(ni->attr_list.le, le);
+	if (off + sizeof(ATTR_LIST_ENTRY) > ni->attr_list.size) {
+		/* The regular end of the list */
+		return NULL;
+	}
+
+	sz = le16_to_cpu(le->size);
+
+	/* Check le for errors */
+	if (sz < sizeof(ATTR_LIST_ENTRY) || off + sz > ni->attr_list.size ||
+	    sz < le->name_off + le->name_len * sizeof(short)) {
+		return NULL;
+	}
+
+	return le;
+}
+
+/*
+ * al_find_le
+ *
+ * Finds the first le in the list which matches type, name and vcn.
+ * 
Returns NULL if not found + */ +ATTR_LIST_ENTRY *al_find_le(ntfs_inode *ni, ATTR_LIST_ENTRY *le, + const ATTRIB *attr) +{ + CLST svcn = attr_svcn(attr); + + return al_find_ex(ni, le, attr->type, attr_name(attr), attr->name_len, + &svcn); +} + +/* + * al_find_ex + * + * finds the first le in the list which matches type, name and vcn + * Returns NULL if not found + */ +ATTR_LIST_ENTRY *al_find_ex(ntfs_inode *ni, ATTR_LIST_ENTRY *le, ATTR_TYPE type, + const __le16 *name, u8 name_len, const CLST *vcn) +{ + ATTR_LIST_ENTRY *ret = NULL; + u32 type_in = le32_to_cpu(type); + + while ((le = al_enumerate(ni, le))) { + u64 le_vcn; + int diff; + + /* List entries are sorted by type, name and vcn */ + diff = le32_to_cpu(le->type) - type_in; + if (diff < 0) + continue; + + if (diff > 0) + return ret; + + if (le->name_len != name_len) + continue; + + if (name_len && + memcmp(le_name(le), name, name_len * sizeof(short))) + continue; + + if (!vcn) + return le; + + le_vcn = le64_to_cpu(le->vcn); + if (*vcn == le_vcn) + return le; + + if (*vcn < le_vcn) + return ret; + + ret = le; + } + + return ret; +} + +/* + * al_find_le_to_insert + * + * finds the first list entry which matches type, name and vcn + * Returns NULL if not found + */ +static ATTR_LIST_ENTRY *al_find_le_to_insert(ntfs_inode *ni, ATTR_TYPE type, + const __le16 *name, u8 name_len, + const CLST *vcn) +{ + ATTR_LIST_ENTRY *le = NULL, *prev; + u32 type_in = le32_to_cpu(type); + int diff; + + /* List entries are sorted by type, name, vcn */ +next: + le = al_enumerate(ni, prev = le); + if (!le) + goto out; + diff = le32_to_cpu(le->type) - type_in; + if (diff < 0) + goto next; + if (diff > 0) + goto out; + + if (ntfs_cmp_names(name, name_len, le_name(le), le->name_len, NULL) > 0) + goto next; + + if (!vcn || *vcn > le64_to_cpu(le->vcn)) + goto next; + +out: + if (!le) + le = prev ? 
Add2Ptr(prev, le16_to_cpu(prev->size)) : + ni->attr_list.le; + + return le; +} + +/* + * al_add_le + * + * adds an "attribute list entry" to the list. + */ +int al_add_le(ntfs_inode *ni, ATTR_TYPE type, const __le16 *name, u8 name_len, + CLST svcn, __le16 id, const MFT_REF *ref, + ATTR_LIST_ENTRY **new_le) +{ + int err; + ATTRIB *attr; + ATTR_LIST_ENTRY *le; + size_t off; + u16 sz; + size_t asize, new_asize; + u64 new_size; + typeof(ni->attr_list) *al = &ni->attr_list; + + /* + * Compute the size of the new le and the new length of the + * list with al le added. + */ + sz = le_size(name_len); + new_size = al->size + sz; + asize = al_aligned(al->size); + new_asize = al_aligned(new_size); + + /* Scan forward to the point at which the new le should be inserted. */ + le = al_find_le_to_insert(ni, type, name, name_len, &svcn); + off = PtrOffset(al->le, le); + + if (new_size > asize) { + void *ptr = ntfs_alloc(new_asize, 0); + + if (!ptr) + return -ENOMEM; + + memcpy(ptr, al->le, off); + memcpy(Add2Ptr(ptr, off + sz), le, al->size - off); + le = Add2Ptr(ptr, off); + ntfs_free(al->le); + al->le = ptr; + } else { + memmove(Add2Ptr(le, sz), le, al->size - off); + } + + al->size = new_size; + + le->type = type; + le->size = cpu_to_le16(sz); + le->name_len = name_len; + le->name_off = offsetof(ATTR_LIST_ENTRY, name); + le->vcn = cpu_to_le64(svcn); + le->ref = *ref; + le->id = id; + memcpy(le->name, name, sizeof(short) * name_len); + + al->dirty = true; + + err = attr_set_size(ni, ATTR_LIST, NULL, 0, &al->run, new_size, + &new_size, true, &attr); + if (err) + return err; + + if (attr && attr->non_res) { + err = ntfs_sb_write_run(ni->mi.sbi, &al->run, 0, al->le, + al->size); + if (err) + return err; + } + + al->dirty = false; + *new_le = le; + + return 0; +} + +/* + * al_remove_le + * + * removes 'le' from attribute list + */ +bool al_remove_le(ntfs_inode *ni, ATTR_LIST_ENTRY *le) +{ + u16 size; + size_t off; + typeof(ni->attr_list) *al = &ni->attr_list; + + if 
(!al_is_valid_le(ni, le))
+		return false;
+
+	/* Save the size of le on the stack */
+	size = le16_to_cpu(le->size);
+	off = PtrOffset(al->le, le);
+
+	memmove(le, Add2Ptr(le, size), al->size - (off + size));
+
+	al->size -= size;
+	al->dirty = true;
+
+	return true;
+}
+
+/*
+ * al_delete_le
+ *
+ * Deletes the first le from the list which matches its parameters.
+ */
+bool al_delete_le(ntfs_inode *ni, ATTR_TYPE type, CLST vcn, const __le16 *name,
+		  size_t name_len, const MFT_REF *ref)
+{
+	u16 size;
+	ATTR_LIST_ENTRY *le;
+	size_t off;
+	typeof(ni->attr_list) *al = &ni->attr_list;
+
+	/* Scan forward to the first le that matches the input */
+	le = al_find_ex(ni, NULL, type, name, name_len, &vcn);
+	if (!le)
+		return false;
+
+	off = PtrOffset(al->le, le);
+
+	if (!ref)
+		goto del;
+
+	/*
+	 * The caller specified a segment reference, so we have to
+	 * scan through the matching entries until we find that segment
+	 * reference or we run out of matching entries.
+	 */
+next:
+	if (off + sizeof(ATTR_LIST_ENTRY) > al->size)
+		goto del;
+	if (le->type != type)
+		goto del;
+	if (le->name_len != name_len)
+		goto del;
+	if (name_len &&
+	    memcmp(name, Add2Ptr(le, le->name_off), name_len * sizeof(short)))
+		goto del;
+	if (le64_to_cpu(le->vcn) != vcn)
+		goto del;
+	if (!memcmp(ref, &le->ref, sizeof(*ref)))
+		goto del;
+
+	off += le16_to_cpu(le->size);
+	le = Add2Ptr(al->le, off);
+	goto next;
+
+del:
+	/*
+	 * If we've gone off the end of the list, or if the type, name,
+	 * and vcn don't match, then we don't have any matching records.
+	 */
+	if (off >= al->size)
+		return false;
+	if (le->type != type)
+		return false;
+	if (le->name_len != name_len)
+		return false;
+	if (name_len &&
+	    memcmp(name, Add2Ptr(le, le->name_off), name_len * sizeof(short)))
+		return false;
+	if (le64_to_cpu(le->vcn) != vcn)
+		return false;
+
+	/* Save the size of le on the stack */
+	size = le16_to_cpu(le->size);
+	/* Delete the le.
*/ + memmove(le, Add2Ptr(le, size), al->size - (off + size)); + + al->size -= size; + al->dirty = true; + return true; +} + +/* + * al_update + * + * + */ +int al_update(ntfs_inode *ni) +{ + int err; + ntfs_sb_info *sbi = ni->mi.sbi; + ATTRIB *attr; + typeof(ni->attr_list) *al = &ni->attr_list; + + if (!al->dirty) + return 0; + + err = attr_set_size(ni, ATTR_LIST, NULL, 0, &al->run, al->size, NULL, + false, &attr); + if (err) + goto out; + + if (!attr->non_res) + memcpy(resident_data(attr), al->le, al->size); + else { + err = ntfs_sb_write_run(sbi, &al->run, 0, al->le, al->size); + if (err) + goto out; + + attr->nres.valid_size = attr->nres.data_size; + } + + ni->mi.dirty = true; + al->dirty = false; + +out: + return err; +} diff --git a/fs/ntfs3/xattr.c b/fs/ntfs3/xattr.c new file mode 100644 index 000000000000..c26e6b141b64 --- /dev/null +++ b/fs/ntfs3/xattr.c @@ -0,0 +1,968 @@ +// SPDX-License-Identifier: GPL-2.0 +/* + * linux/fs/ntfs3/xattr.c + * + * Copyright (C) 2019-2020 Paragon Software GmbH, All rights reserved. + * + */ + +#include +#include +#include +#include +#include +#include +#include + +#include "debug.h" +#include "ntfs.h" +#include "ntfs_fs.h" + +#define SYSTEM_DOS_ATTRIB "system.dos_attrib" +#define SYSTEM_NTFS_ATTRIB "system.ntfs_attrib" +#define SYSTEM_NTFS_ATTRIB_BE "system.ntfs_attrib_be" +#define SAMBA_PROCESS_NAME "smbd" +#define USER_DOSATTRIB "user.DOSATTRIB" + +static inline size_t unpacked_ea_size(const EA_FULL *ea) +{ + return !ea->size ? 
DwordAlign(offsetof(EA_FULL, name) + 1 + + ea->name_len + le16_to_cpu(ea->elength)) : + le32_to_cpu(ea->size); +} + +static inline size_t packed_ea_size(const EA_FULL *ea) +{ + return offsetof(EA_FULL, name) + 1 - offsetof(EA_FULL, flags) + + ea->name_len + le16_to_cpu(ea->elength); +} + +/* + * find_ea + * + * assume there is at least one xattr in the list + */ +static inline bool find_ea(const EA_FULL *ea_all, u32 bytes, const char *name, + u8 name_len, u32 *off) +{ + *off = 0; + + if (!ea_all || !bytes) + return false; + + for (;;) { + const EA_FULL *ea = Add2Ptr(ea_all, *off); + u32 next_off = *off + unpacked_ea_size(ea); + + if (next_off > bytes) + return false; + + if (ea->name_len == name_len && + !memcmp(ea->name, name, name_len)) + return true; + + *off = next_off; + if (next_off >= bytes) + return false; + } +} + +/* + * ntfs_read_ea + * + * reads all xattrs + * ea - new allocated memory + * info - pointer into resident data + */ +static int ntfs_read_ea(ntfs_inode *ni, EA_FULL **ea, size_t add_bytes, + const EA_INFO **info) +{ + int err; + ATTR_LIST_ENTRY *le = NULL; + ATTRIB *attr_info, *attr_ea; + void *ea_p; + u32 size; + + static_assert(le32_to_cpu(ATTR_EA_INFO) < le32_to_cpu(ATTR_EA)); + + *ea = NULL; + *info = NULL; + + attr_info = + ni_find_attr(ni, NULL, &le, ATTR_EA_INFO, NULL, 0, NULL, NULL); + attr_ea = + ni_find_attr(ni, attr_info, &le, ATTR_EA, NULL, 0, NULL, NULL); + + if (!attr_ea || !attr_info) + return 0; + + *info = resident_data_ex(attr_info, sizeof(EA_INFO)); + if (!*info) + return -EINVAL; + + /* Check Ea limit */ + size = le32_to_cpu((*info)->size); + if (size > MAX_EA_DATA_SIZE || size + add_bytes > MAX_EA_DATA_SIZE) + return -EINVAL; + + /* Allocate memory for packed Ea */ + ea_p = ntfs_alloc(size + add_bytes, 0); + if (!ea_p) + return -ENOMEM; + + if (attr_ea->non_res) { + struct runs_tree run; + + run_init(&run); + + err = attr_load_runs(attr_ea, ni, &run); + if (!err) + err = ntfs_read_run_nb(ni->mi.sbi, &run, 0, ea_p, size, + 
NULL); + run_close(&run); + + if (err) + goto out; + } else { + void *p = resident_data_ex(attr_ea, size); + + if (!p) { + err = -EINVAL; + goto out; + } + memcpy(ea_p, p, size); + } + + memset(Add2Ptr(ea_p, size), 0, add_bytes); + *ea = ea_p; + return 0; + +out: + ntfs_free(ea_p); + *ea = NULL; + return err; +} + +/* + * ntfs_listxattr_hlp + * + * copy a list of xattrs names into the buffer + * provided, or compute the buffer size required + */ +static int ntfs_listxattr_hlp(ntfs_inode *ni, char *buffer, + size_t bytes_per_buffer, size_t *bytes) +{ + const EA_INFO *info; + EA_FULL *ea_all = NULL; + const EA_FULL *ea; + u32 off, size; + int err; + + *bytes = 0; + + err = ntfs_read_ea(ni, &ea_all, 0, &info); + if (err) + return err; + + if (!info) + return 0; + + size = le32_to_cpu(info->size); + + if (!ea_all) + return 0; + + /* Enumerate all xattrs */ + off = 0; +next_ea: + if (off >= size) + goto out; + + ea = Add2Ptr(ea_all, off); + + if (!buffer) + goto skip_ea; + + if (*bytes + ea->name_len + 1 > bytes_per_buffer) { + err = -ERANGE; + goto out; + } + + memcpy(buffer + *bytes, ea->name, ea->name_len); + buffer[*bytes + ea->name_len] = 0; + +skip_ea: + *bytes += ea->name_len + 1; + off += unpacked_ea_size(ea); + goto next_ea; + +out: + ntfs_free(ea_all); + return err; +} + +/* + * ntfs_get_ea + * + * reads xattr + */ +static int ntfs_get_ea(ntfs_inode *ni, const char *name, size_t name_len, + void *buffer, size_t bytes_per_buffer, u32 *len) +{ + const EA_INFO *info; + EA_FULL *ea_all = NULL; + const EA_FULL *ea; + u32 off; + int err; + + *len = 0; + + if (name_len > 255) { + err = -ENAMETOOLONG; + goto out; + } + + err = ntfs_read_ea(ni, &ea_all, 0, &info); + if (err) + goto out; + + if (!info) + goto out; + + /* Enumerate all xattrs */ + if (!find_ea(ea_all, le32_to_cpu(info->size), name, name_len, &off)) { + err = -ENODATA; + goto out; + } + ea = Add2Ptr(ea_all, off); + + *len = le16_to_cpu(ea->elength); + if (!buffer) { + err = 0; + goto out; + } + + if (*len 
> bytes_per_buffer) { + err = -ERANGE; + goto out; + } + memcpy(buffer, ea->name + ea->name_len + 1, *len); + err = 0; + +out: + ntfs_free(ea_all); + + return err; +} + +static noinline int ntfs_getxattr_hlp(struct inode *inode, const char *name, + void *value, size_t size, + size_t *required) +{ + ntfs_inode *ni = ntfs_i(inode); + int err; + u32 len; + + if (!(ni->ni_flags & NI_FLAG_EA)) + return -ENODATA; + + if (!required) + ni_lock(ni); + + err = ntfs_get_ea(ni, name, strlen(name), value, size, &len); + if (!err) + err = len; + else if (-ERANGE == err && required) + *required = len; + + if (!required) + ni_unlock(ni); + + return err; +} + +static noinline int ntfs_set_ea(struct inode *inode, const char *name, + const void *value, size_t val_size, int flags, + int locked) +{ + ntfs_inode *ni = ntfs_i(inode); + ntfs_sb_info *sbi = ni->mi.sbi; + int err; + EA_INFO ea_info; + const EA_INFO *info; + EA_FULL *new_ea; + EA_FULL *ea_all = NULL; + size_t name_len, add; + u32 off, size; + ATTRIB *attr; + ATTR_LIST_ENTRY *le; + mft_inode *mi; + struct runs_tree ea_run; + u64 new_sz; + void *p; + + if (!locked) + ni_lock(ni); + + run_init(&ea_run); + name_len = strlen(name); + + if (name_len > 255) { + err = -ENAMETOOLONG; + goto out; + } + + add = DwordAlign(offsetof(EA_FULL, name) + 1 + name_len + val_size); + + err = ntfs_read_ea(ni, &ea_all, add, &info); + if (err) + goto out; + + if (!info) { + memset(&ea_info, 0, sizeof(ea_info)); + size = 0; + } else { + memcpy(&ea_info, info, sizeof(ea_info)); + size = le32_to_cpu(ea_info.size); + } + + if (info && find_ea(ea_all, size, name, name_len, &off)) { + EA_FULL *ea; + size_t ea_sz; + + if (flags & XATTR_CREATE) { + err = -EEXIST; + goto out; + } + + /* Remove current xattr */ + ea = Add2Ptr(ea_all, off); + if (ea->flags & FILE_NEED_EA) + le16_add_cpu(&ea_info.count, -1); + + ea_sz = unpacked_ea_size(ea); + + le16_add_cpu(&ea_info.size_pack, 0 - packed_ea_size(ea)); + + memmove(ea, Add2Ptr(ea, ea_sz), size - off - ea_sz); 
+ + size -= ea_sz; + memset(Add2Ptr(ea_all, size), 0, ea_sz); + + ea_info.size = cpu_to_le32(size); + + if ((flags & XATTR_REPLACE) && !val_size) + goto update_ea; + } else { + if (flags & XATTR_REPLACE) { + err = -ENODATA; + goto out; + } + + if (!ea_all) { + ea_all = ntfs_alloc(add, 1); + if (!ea_all) { + err = -ENOMEM; + goto out; + } + } + } + + /* append new xattr */ + new_ea = Add2Ptr(ea_all, size); + new_ea->size = cpu_to_le32(add); + new_ea->flags = 0; + new_ea->name_len = name_len; + new_ea->elength = cpu_to_le16(val_size); + memcpy(new_ea->name, name, name_len); + new_ea->name[name_len] = 0; + memcpy(new_ea->name + name_len + 1, value, val_size); + + le16_add_cpu(&ea_info.size_pack, packed_ea_size(new_ea)); + size += add; + ea_info.size = cpu_to_le32(size); + +update_ea: + + if (!info) { + /* Create xattr */ + if (!size) { + err = 0; + goto out; + } + + err = ni_insert_resident(ni, sizeof(EA_INFO), ATTR_EA_INFO, + NULL, 0, NULL, NULL); + if (err) + goto out; + + err = ni_insert_resident(ni, 0, ATTR_EA, NULL, 0, NULL, NULL); + if (err) + goto out; + } + + new_sz = size; + err = attr_set_size(ni, ATTR_EA, NULL, 0, &ea_run, new_sz, &new_sz, + false, NULL); + if (err) + goto out; + + le = NULL; + attr = ni_find_attr(ni, NULL, &le, ATTR_EA_INFO, NULL, 0, NULL, &mi); + if (!attr) { + err = -EINVAL; + goto out; + } + + if (!size) { + /* delete xattr, ATTR_EA_INFO */ + err = ni_remove_attr_le(ni, attr, le); + if (err) + goto out; + } else { + p = resident_data_ex(attr, sizeof(EA_INFO)); + if (!p) { + err = -EINVAL; + goto out; + } + memcpy(p, &ea_info, sizeof(EA_INFO)); + mi->dirty = true; + } + + le = NULL; + attr = ni_find_attr(ni, NULL, &le, ATTR_EA, NULL, 0, NULL, &mi); + if (!attr) { + err = -EINVAL; + goto out; + } + + if (!size) { + /* delete xattr, ATTR_EA */ + err = ni_remove_attr_le(ni, attr, le); + if (err) + goto out; + } else if (attr->non_res) { + err = ntfs_sb_write_run(sbi, &ea_run, 0, ea_all, size); + if (err) + goto out; + } else { + p = 
resident_data_ex(attr, size); + if (!p) { + err = -EINVAL; + goto out; + } + memcpy(p, ea_all, size); + mi->dirty = true; + } + + ni->ni_flags |= NI_FLAG_UPDATE_PARENT; + mark_inode_dirty(&ni->vfs_inode); + + /* Check if we delete the last xattr */ + if (val_size || flags != XATTR_REPLACE || + ntfs_listxattr_hlp(ni, NULL, 0, &val_size) || val_size) { + ni->ni_flags |= NI_FLAG_EA; + } else { + ni->ni_flags &= ~NI_FLAG_EA; + } + +out: + if (!locked) + ni_unlock(ni); + + run_close(&ea_run); + ntfs_free(ea_all); + + return err; +} + +static inline void ntfs_posix_acl_release(struct posix_acl *acl) +{ + if (acl && refcount_dec_and_test(&acl->a_refcount)) + kfree(acl); +} + +static struct posix_acl *ntfs_get_acl_ex(struct inode *inode, int type, + int locked) +{ + ntfs_inode *ni = ntfs_i(inode); + const char *name; + struct posix_acl *acl; + size_t req; + int err; + void *buf; + + buf = __getname(); + if (!buf) + return ERR_PTR(-ENOMEM); + + /* Possible values of 'type' was already checked above */ + name = type == ACL_TYPE_ACCESS ? XATTR_NAME_POSIX_ACL_ACCESS : + XATTR_NAME_POSIX_ACL_DEFAULT; + + if (!locked) + ni_lock(ni); + + err = ntfs_getxattr_hlp(inode, name, buf, PATH_MAX, &req); + + if (!locked) + ni_unlock(ni); + + /* Translate extended attribute to acl */ + if (err > 0) { + acl = posix_acl_from_xattr(&init_user_ns, buf, err); + if (!IS_ERR(acl)) + set_cached_acl(inode, type, acl); + } else { + acl = err == -ENODATA ? 
NULL : ERR_PTR(err); + } + + __putname(buf); + + return acl; +} + +/* + * ntfs_get_acl + * + * inode_operations::get_acl + */ +struct posix_acl *ntfs_get_acl(struct inode *inode, int type) +{ + struct posix_acl *acl; + ntfs_inode *ni = ntfs_i(inode); + + ni_lock(ni); + + acl = ntfs_get_acl_ex(inode, type, 0); + + ni_unlock(ni); + + return acl; +} + +static int ntfs_set_acl_ex(struct inode *inode, struct posix_acl *acl, int type, + int locked) +{ + const char *name; + size_t size; + void *value = NULL; + int err = 0; + + if (S_ISLNK(inode->i_mode)) + return -EOPNOTSUPP; + + switch (type) { + case ACL_TYPE_ACCESS: + if (acl) { + umode_t mode = inode->i_mode; + + err = posix_acl_equiv_mode(acl, &mode); + if (err < 0) + return err; + + if (inode->i_mode != mode) { + inode->i_mode = mode; + mark_inode_dirty(inode); + } + + if (!err) { + /* + * acl can be exactly represented in the + * traditional file mode permission bits + */ + acl = NULL; + goto out; + } + } + name = XATTR_NAME_POSIX_ACL_ACCESS; + break; + + case ACL_TYPE_DEFAULT: + if (!S_ISDIR(inode->i_mode)) + return acl ? 
-EACCES : 0; + name = XATTR_NAME_POSIX_ACL_DEFAULT; + break; + + default: + return -EINVAL; + } + + if (!acl) + goto out; + + size = posix_acl_xattr_size(acl->a_count); + value = ntfs_alloc(size, 0); + if (!value) + return -ENOMEM; + + err = posix_acl_to_xattr(&init_user_ns, acl, value, size); + if (err) + goto out; + + err = ntfs_set_ea(inode, name, value, size, 0, locked); + if (err) + goto out; + +out: + if (!err) + set_cached_acl(inode, type, acl); + + kfree(value); + + return err; +} + +/* + * ntfs_set_acl + * + * inode_operations::set_acl + */ +int ntfs_set_acl(struct inode *inode, struct posix_acl *acl, int type) +{ + int err; + ntfs_inode *ni = ntfs_i(inode); + + ni_lock(ni); + + err = ntfs_set_acl_ex(inode, acl, type, 0); + + ni_unlock(ni); + + return err; +} + +static int ntfs_xattr_get_acl(struct inode *inode, int type, void *buffer, + size_t size) +{ + struct super_block *sb = inode->i_sb; + ntfs_sb_info *sbi = sb->s_fs_info; + struct posix_acl *acl; + int err; + + if (!sbi->options.acl) + return -EOPNOTSUPP; + + acl = ntfs_get_acl(inode, type); + if (IS_ERR(acl)) + return PTR_ERR(acl); + + if (!acl) + return -ENODATA; + + err = posix_acl_to_xattr(&init_user_ns, acl, buffer, size); + ntfs_posix_acl_release(acl); + + return err; +} + +static int ntfs_xattr_set_acl(struct inode *inode, int type, const void *value, + size_t size) +{ + struct super_block *sb = inode->i_sb; + ntfs_sb_info *sbi = sb->s_fs_info; + struct posix_acl *acl; + int err; + + if (!sbi->options.acl) + return -EOPNOTSUPP; + + if (!inode_owner_or_capable(inode)) + return -EPERM; + + if (!value) + return 0; + + acl = posix_acl_from_xattr(&init_user_ns, value, size); + if (IS_ERR(acl)) + return PTR_ERR(acl); + + if (acl) { + err = posix_acl_valid(sb->s_user_ns, acl); + if (err) + goto release_and_out; + } + + err = ntfs_set_acl(inode, acl, type); + +release_and_out: + ntfs_posix_acl_release(acl); + return err; +} + +/* + * ntfs_acl_chmod + * + * helper for 'ntfs_setattr' + */ +int 
ntfs_acl_chmod(struct inode *inode) +{ + struct super_block *sb = inode->i_sb; + ntfs_sb_info *sbi = sb->s_fs_info; + int err; + + if (!sbi->options.acl) + return 0; + + if (S_ISLNK(inode->i_mode)) + return -EOPNOTSUPP; + + err = posix_acl_chmod(inode, inode->i_mode); + + return err; +} + +/* + * ntfs_permission + * + * inode_operations::permission + */ +int ntfs_permission(struct inode *inode, int mask) +{ + struct super_block *sb = inode->i_sb; + ntfs_sb_info *sbi = sb->s_fs_info; + int err; + + if (sbi->options.no_acs_rules) { + /* "no access rules" mode - allow all changes */ + return 0; + } + + err = generic_permission(inode, mask); + + return err; +} + +/* + * ntfs_listxattr + * + * inode_operations::listxattr + */ +ssize_t ntfs_listxattr(struct dentry *dentry, char *buffer, size_t size) +{ + struct inode *inode = d_inode(dentry); + ntfs_inode *ni = ntfs_i(inode); + ssize_t ret = -1; + int err; + + if (!(ni->ni_flags & NI_FLAG_EA)) { + ret = 0; + goto out; + } + + ni_lock(ni); + + err = ntfs_listxattr_hlp(ni, buffer, size, (size_t *)&ret); + + ni_unlock(ni); + + if (err) + ret = err; +out: + + return ret; +} + +static int ntfs_getxattr(const struct xattr_handler *handler, struct dentry *de, + struct inode *inode, const char *name, void *buffer, + size_t size) +{ + int err; + ntfs_inode *ni = ntfs_i(inode); + struct super_block *sb = inode->i_sb; + ntfs_sb_info *sbi = sb->s_fs_info; + size_t name_len = strlen(name); + + /* Dispatch request */ + if (name_len == sizeof(SYSTEM_DOS_ATTRIB) - 1 && + !memcmp(name, SYSTEM_DOS_ATTRIB, sizeof(SYSTEM_DOS_ATTRIB))) { + /* system.dos_attrib */ + if (!buffer) + err = sizeof(u8); + else if (size < sizeof(u8)) + err = -ENODATA; + else { + err = sizeof(u8); + *(u8 *)buffer = le32_to_cpu(ni->std_fa); + } + goto out; + } + + if (name_len == sizeof(SYSTEM_NTFS_ATTRIB) - 1 && + !memcmp(name, SYSTEM_NTFS_ATTRIB, sizeof(SYSTEM_NTFS_ATTRIB))) { + /* system.ntfs_attrib */ + if (!buffer) + err = sizeof(u32); + else if (size < 
sizeof(u32)) + err = -ENODATA; + else { + err = sizeof(u32); + *(u32 *)buffer = le32_to_cpu(ni->std_fa); + } + goto out; + } + + if (name_len == sizeof(SYSTEM_NTFS_ATTRIB_BE) - 1 && + !memcmp(name, SYSTEM_NTFS_ATTRIB_BE, + sizeof(SYSTEM_NTFS_ATTRIB_BE))) { + /* system.ntfs_attrib_be */ + if (!buffer) + err = sizeof(u32); + else if (size < sizeof(u32)) + err = -ENODATA; + else { + err = sizeof(u32); + *(__be32 *)buffer = + cpu_to_be32(le32_to_cpu(ni->std_fa)); + } + goto out; + } + + if (name_len == sizeof(USER_DOSATTRIB) - 1 && + !memcmp(current->comm, SAMBA_PROCESS_NAME, + sizeof(SAMBA_PROCESS_NAME)) && + !memcmp(name, USER_DOSATTRIB, sizeof(USER_DOSATTRIB))) { + /* user.DOSATTRIB */ + if (!buffer) + err = 5; + else if (size < 5) + err = -ENODATA; + else { + err = sprintf((char *)buffer, "0x%x", + le32_to_cpu(ni->std_fa) & 0xff) + + 1; + } + goto out; + } + + if ((name_len == sizeof(XATTR_NAME_POSIX_ACL_ACCESS) - 1 && + !memcmp(name, XATTR_NAME_POSIX_ACL_ACCESS, + sizeof(XATTR_NAME_POSIX_ACL_ACCESS))) || + (name_len == sizeof(XATTR_NAME_POSIX_ACL_DEFAULT) - 1 && + !memcmp(name, XATTR_NAME_POSIX_ACL_DEFAULT, + sizeof(XATTR_NAME_POSIX_ACL_DEFAULT)))) { + err = sbi->options.acl ? + ntfs_xattr_get_acl( + inode, + name_len == sizeof(XATTR_NAME_POSIX_ACL_ACCESS) - + 1 ? 
+ ACL_TYPE_ACCESS : + ACL_TYPE_DEFAULT, + buffer, size) : + -EOPNOTSUPP; + goto out; + } + + err = ntfs_getxattr_hlp(inode, name, buffer, size, NULL); + +out: + return err; +} + +/* + * ntfs_setxattr + * + * inode_operations::setxattr + */ +static noinline int ntfs_setxattr(const struct xattr_handler *handler, + struct dentry *de, struct inode *inode, + const char *name, const void *value, + size_t size, int flags) +{ + int err = -EINVAL; + ntfs_inode *ni = ntfs_i(inode); + size_t name_len = strlen(name); + u32 attrib = 0; /* not necessary, just to suppress warnings */ + struct super_block *sb = inode->i_sb; + ntfs_sb_info *sbi = sb->s_fs_info; + + /* Dispatch request */ + if (name_len == sizeof(SYSTEM_DOS_ATTRIB) - 1 && + !memcmp(name, SYSTEM_DOS_ATTRIB, sizeof(SYSTEM_DOS_ATTRIB))) { + if (sizeof(u8) != size) + goto out; + attrib = *(u8 *)value; + goto set_dos_attr; + } + + if (name_len == sizeof(SYSTEM_NTFS_ATTRIB) - 1 && + !memcmp(name, SYSTEM_NTFS_ATTRIB, sizeof(SYSTEM_NTFS_ATTRIB))) { + if (sizeof(u32) != size) + goto out; + attrib = *(u32 *)value; + goto set_dos_attr; + } + + if (name_len == sizeof(SYSTEM_NTFS_ATTRIB_BE) - 1 && + !memcmp(name, SYSTEM_NTFS_ATTRIB_BE, + sizeof(SYSTEM_NTFS_ATTRIB_BE))) { + if (sizeof(u32) != size) + goto out; + attrib = be32_to_cpu(*(__be32 *)value); + goto set_dos_attr; + } + + if (name_len == sizeof(USER_DOSATTRIB) - 1 && + !memcmp(current->comm, SAMBA_PROCESS_NAME, + sizeof(SAMBA_PROCESS_NAME)) && + !memcmp(name, USER_DOSATTRIB, sizeof(USER_DOSATTRIB))) { + if (size < 4 || ((char *)value)[size - 1]) + goto out; + + /* + * The input value must be a string in the form 0x%x with a trailing zero. + * This means that the 'size' must be 4, 5, ...
+ * E.g: 0x1 - 4 bytes, 0x20 - 5 bytes + */ + if (sscanf((char *)value, "0x%x", &attrib) != 1) + goto out; + +set_dos_attr: + if (!value) + goto out; + + ni->std_fa = cpu_to_le32(attrib); + mark_inode_dirty(inode); + err = 0; + + goto out; + } + + if ((name_len == sizeof(XATTR_NAME_POSIX_ACL_ACCESS) - 1 && + !memcmp(name, XATTR_NAME_POSIX_ACL_ACCESS, + sizeof(XATTR_NAME_POSIX_ACL_ACCESS))) || + (name_len == sizeof(XATTR_NAME_POSIX_ACL_DEFAULT) - 1 && + !memcmp(name, XATTR_NAME_POSIX_ACL_DEFAULT, + sizeof(XATTR_NAME_POSIX_ACL_DEFAULT)))) { + err = sbi->options.acl ? + ntfs_xattr_set_acl( + inode, + name_len == sizeof(XATTR_NAME_POSIX_ACL_ACCESS) - + 1 ? + ACL_TYPE_ACCESS : + ACL_TYPE_DEFAULT, + value, size) : + -EOPNOTSUPP; + goto out; + } + + err = ntfs_set_ea(inode, name, value, size, flags, 0); + +out: + return err; +} + +static bool ntfs_xattr_user_list(struct dentry *dentry) +{ + return 1; +} + +static const struct xattr_handler ntfs_xattr_handler = { + .prefix = "", + .get = ntfs_getxattr, + .set = ntfs_setxattr, + .list = ntfs_xattr_user_list, +}; + +const struct xattr_handler *ntfs_xattr_handlers[] = { &ntfs_xattr_handler, + NULL }; From patchwork Fri Aug 21 16:25:27 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Konstantin Komarov X-Patchwork-Id: 11729933 Return-Path: Received: from mail.kernel.org (pdx-korg-mail-1.web.codeaurora.org [172.30.200.123]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id 30787739 for ; Fri, 21 Aug 2020 16:26:26 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id 0DBA020656 for ; Fri, 21 Aug 2020 16:26:26 +0000 (UTC) Authentication-Results: mail.kernel.org; dkim=pass (1024-bit key) header.d=paragon-software.com header.i=@paragon-software.com header.b="DeNmvMBo" Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1728125AbgHUQ0B (ORCPT ); Fri, 21 Aug 
From: Konstantin Komarov To: viro@zeniv.linux.org.uk, linux-kernel@vger.kernel.org, linux-fsdevel@vger.kernel.org CC: Pali Rohár Subject: [PATCH v2 06/10] fs/ntfs3: Add compression Date: Fri, 21 Aug 2020 16:25:27 +0000 This adds compression Signed-off-by: Konstantin Komarov --- fs/ntfs3/lznt.c | 449
++++++++++++++++++++++++++++++++++++++++++++++++ 1 file changed, 449 insertions(+) create mode 100644 fs/ntfs3/lznt.c diff --git a/fs/ntfs3/lznt.c b/fs/ntfs3/lznt.c new file mode 100644 index 000000000000..db3256c08387 --- /dev/null +++ b/fs/ntfs3/lznt.c @@ -0,0 +1,449 @@ +// SPDX-License-Identifier: GPL-2.0 +/* + * linux/fs/ntfs3/lznt.c + * + * Copyright (C) 2019-2020 Paragon Software GmbH, All rights reserved. + * + */ +#include +#include +#include +#include +#include + +#include "debug.h" +#include "ntfs.h" +#include "ntfs_fs.h" + +/* Dst buffer is too small */ +#define LZNT_ERROR_TOOSMALL -2 +/* src buffer is zero */ +#define LZNT_ERROR_ALL_ZEROS 1 +#define LZNT_CHUNK_SIZE 0x1000 + +struct lznt_hash { + const u8 *p1; + const u8 *p2; +}; + +struct lznt { + const u8 *unc; + const u8 *unc_end; + const u8 *best_match; + size_t max_len; + bool std; + + struct lznt_hash hash[LZNT_CHUNK_SIZE]; +}; + +static inline size_t get_match_len(const u8 *ptr, const u8 *end, const u8 *prev, + size_t max_len) +{ + size_t len = 0; + + while (ptr + len < end && ptr[len] == prev[len] && ++len < max_len) + ; + return len; +} + +static size_t longest_match_std(const u8 *src, struct lznt *ctx) +{ + size_t hash_index; + size_t len1 = 0, len2 = 0; + const u8 **hash; + + hash_index = + ((40543U * ((((src[0] << 4) ^ src[1]) << 4) ^ src[2])) >> 4) & + (LZNT_CHUNK_SIZE - 1); + + hash = &(ctx->hash[hash_index].p1); + + if (hash[0] >= ctx->unc && hash[0] < src && hash[0][0] == src[0] && + hash[0][1] == src[1] && hash[0][2] == src[2]) { + len1 = 3; + if (ctx->max_len > 3) + len1 += get_match_len(src + 3, ctx->unc_end, + hash[0] + 3, ctx->max_len - 3); + } + + if (hash[1] >= ctx->unc && hash[1] < src && hash[1][0] == src[0] && + hash[1][1] == src[1] && hash[1][2] == src[2]) { + len2 = 3; + if (ctx->max_len > 3) + len2 += get_match_len(src + 3, ctx->unc_end, + hash[1] + 3, ctx->max_len - 3); + } + + /* Compare two matches and select the best one */ + if (len1 < len2) { + ctx->best_match = 
hash[1]; + len1 = len2; + } else + ctx->best_match = hash[0]; + + hash[1] = hash[0]; + hash[0] = src; + return len1; +} + +static size_t longest_match_best(const u8 *src, struct lznt *ctx) +{ + size_t max_len; + const u8 *ptr; + + if (ctx->unc >= src || !ctx->max_len) + return 0; + + max_len = 0; + for (ptr = ctx->unc; ptr < src; ++ptr) { + size_t len = + get_match_len(src, ctx->unc_end, ptr, ctx->max_len); + if (len >= max_len) { + max_len = len; + ctx->best_match = ptr; + } + } + + return max_len >= 3 ? max_len : 0; +} + +static const size_t s_max_len[] = { + 0x1002, 0x802, 0x402, 0x202, 0x102, 0x82, 0x42, 0x22, 0x12, +}; + +static const size_t s_max_off[] = { + 0x10, 0x20, 0x40, 0x80, 0x100, 0x200, 0x400, 0x800, 0x1000, +}; + +static inline u16 make_pair(size_t offset, size_t len, size_t index) +{ + return ((offset - 1) << (12 - index)) | + ((len - 3) & (((1 << (12 - index)) - 1))); +} + +static inline size_t parse_pair(u16 pair, size_t *offset, size_t index) +{ + *offset = 1 + (pair >> (12 - index)); + return 3 + (pair & ((1 << (12 - index)) - 1)); +} + +// 0x3FFF +#define HeaderOfNonCompressedChunk ((LZNT_CHUNK_SIZE + 2 - 3) | 0x3000) + +/* + * compess_chunk + * + * returns one of the three values: + * 0 - ok, 'cmpr' contains 'cmpr_chunk_size' bytes of compressed data + * 1 == LZNT_ERROR_ALL_ZEROS - input buffer is all zeros + * -2 == LZNT_ERROR_TOOSMALL + */ +static inline int compess_chunk(size_t (*match)(const u8 *, struct lznt *), + const u8 *unc, const u8 *unc_end, u8 *cmpr, + u8 *cmpr_end, size_t *cmpr_chunk_size, + struct lznt *ctx) +{ + size_t cnt = 0; + size_t idx = 0; + const u8 *up = unc; + u8 *cp = cmpr + 3; + u8 *cp2 = cmpr + 2; + u8 not_zero = 0; + /* Control byte of 8-bit values: ( 0 - means byte as is, 1 - short pair ) */ + u8 ohdr = 0; + u8 *last; + u16 t16; + + if (unc + LZNT_CHUNK_SIZE < unc_end) + unc_end = unc + LZNT_CHUNK_SIZE; + + last = min(cmpr + LZNT_CHUNK_SIZE + sizeof(short), cmpr_end); + + ctx->unc = unc; + ctx->unc_end = unc_end; +
ctx->max_len = s_max_len[0]; + + while (up < unc_end) { + size_t max_len; + + while (unc + s_max_off[idx] < up) + ctx->max_len = s_max_len[++idx]; + + // Find match + max_len = up + 3 <= unc_end ? (*match)(up, ctx) : 0; + + if (!max_len) { + if (cp >= last) + goto NotCompressed; + not_zero |= *cp++ = *up++; + } else if (cp + 1 >= last) { + goto NotCompressed; + } else { + t16 = make_pair(up - ctx->best_match, max_len, idx); + *cp++ = t16; + *cp++ = t16 >> 8; + + ohdr |= 1 << cnt; + up += max_len; + } + + cnt = (cnt + 1) & 7; + if (!cnt) { + *cp2 = ohdr; + ohdr = 0; + cp2 = cp; + cp += 1; + } + } + + if (cp2 < last) + *cp2 = ohdr; + else + cp -= 1; + + *cmpr_chunk_size = cp - cmpr; + + t16 = (*cmpr_chunk_size - 3) | 0xB000; + cmpr[0] = t16; + cmpr[1] = t16 >> 8; + + return not_zero ? 0 : LZNT_ERROR_ALL_ZEROS; + +NotCompressed: + + if ((cmpr + LZNT_CHUNK_SIZE + sizeof(short)) > last) + return LZNT_ERROR_TOOSMALL; + + /* Copy non cmpr data */ + cmpr[0] = 0xff; + cmpr[1] = 0x3f; + + memcpy(cmpr + sizeof(short), unc, LZNT_CHUNK_SIZE); + *cmpr_chunk_size = LZNT_CHUNK_SIZE + sizeof(short); + + return 0; +} + +static inline ssize_t decompress_chunk(u8 *unc, u8 *unc_end, const u8 *cmpr, + const u8 *cmpr_end) +{ + u8 *up = unc; + u8 ch = *cmpr++; + size_t bit = 0; + size_t index = 0; + u16 pair; + size_t offset, length; + + /* Do decompression until pointers are inside range */ + while (up < unc_end && cmpr < cmpr_end) { + /* Correct index */ + while (unc + s_max_off[index] < up) + index += 1; + + /* Check the current flag for zero */ + if (!(ch & (1 << bit))) { + /* Just copy byte */ + *up++ = *cmpr++; + goto next; + } + + /* Check for boundary */ + if (cmpr + 1 >= cmpr_end) + return -EINVAL; + + /* Read a short from little endian stream */ + pair = cmpr[1]; + pair <<= 8; + pair |= cmpr[0]; + + cmpr += 2; + + /* Translate packed information into offset and length */ + length = parse_pair(pair, &offset, index); + + /* Check offset for boundary */ + if (unc + offset > up) + 
return -EINVAL; + + /* Truncate the length if necessary */ + if (up + length >= unc_end) + length = unc_end - up; + + /* Now we copy bytes. This is the heart of LZ algorithm. */ + for (; length > 0; length--, up++) + *up = *(up - offset); + +next: + /* Advance flag bit value */ + bit = (bit + 1) & 7; + + if (!bit) { + if (cmpr >= cmpr_end) + break; + + ch = *cmpr++; + } + } + + /* return the size of uncompressed data */ + return up - unc; +} + +struct lznt *get_compression_ctx(bool std) +{ + struct lznt *r = ntfs_alloc( + std ? sizeof(struct lznt) : offsetof(struct lznt, hash), 1); + + if (r) + r->std = std; + return r; +} + +/* + * compress_lznt + * + * Compresses "unc" into "cmpr" + * +x - ok, 'cmpr' contains 'final_compressed_size' bytes of compressed data + * 0 - input buffer is full zero + */ +size_t compress_lznt(const void *unc, size_t unc_size, void *cmpr, + size_t cmpr_size, struct lznt *ctx) +{ + int err; + size_t (*match)(const u8 *src, struct lznt *ctx); + u8 *p = cmpr; + u8 *end = p + cmpr_size; + const u8 *unc_chunk = unc; + const u8 *unc_end = unc_chunk + unc_size; + bool is_zero = true; + + if (ctx->std) { + match = &longest_match_std; + memset(ctx->hash, 0, sizeof(ctx->hash)); + } else { + match = &longest_match_best; + } + + /* compression cycle */ + for (; unc_chunk < unc_end; unc_chunk += LZNT_CHUNK_SIZE) { + cmpr_size = 0; + err = compess_chunk(match, unc_chunk, unc_end, p, end, + &cmpr_size, ctx); + if (err < 0) + return unc_size; + + if (is_zero && err != LZNT_ERROR_ALL_ZEROS) + is_zero = false; + + p += cmpr_size; + } + + if (p <= end - 2) + p[0] = p[1] = 0; + + return is_zero ? 
0 : PtrOffset(cmpr, p); +} + +/* + * decompress_lznt + * + * decompresses "cmpr" into "unc" + */ +ssize_t decompress_lznt(const void *cmpr, size_t cmpr_size, void *unc, + size_t unc_size) +{ + const u8 *cmpr_chunk = cmpr; + const u8 *cmpr_end = cmpr_chunk + cmpr_size; + u8 *unc_chunk = unc; + u8 *unc_end = unc_chunk + unc_size; + u16 chunk_hdr; + + if (cmpr_size < sizeof(short)) + return -EINVAL; + + /* read chunk header */ + chunk_hdr = cmpr_chunk[1]; + chunk_hdr <<= 8; + chunk_hdr |= cmpr_chunk[0]; + + /* loop through decompressing chunks */ + for (;;) { + size_t chunk_size_saved; + size_t unc_use; + size_t cmpr_use = 3 + (chunk_hdr & (LZNT_CHUNK_SIZE - 1)); + + /* Check that the chunk actually fits the supplied buffer */ + if (cmpr_chunk + cmpr_use > cmpr_end) + return -EINVAL; + + /* First make sure the chunk contains compressed data */ + if (chunk_hdr & 0x8000) { + /* Decompress a chunk and return if we get an error */ + ssize_t err = + decompress_chunk(unc_chunk, unc_end, + cmpr_chunk + sizeof(chunk_hdr), + cmpr_chunk + cmpr_use); + if (err < 0) + return err; + unc_use = err; + } else { + /* This chunk does not contain compressed data */ + unc_use = unc_chunk + LZNT_CHUNK_SIZE > unc_end ? 
+ unc_end - unc_chunk : + LZNT_CHUNK_SIZE; + + if (cmpr_chunk + sizeof(chunk_hdr) + unc_use > + cmpr_end) { + return -EINVAL; + } + + memcpy(unc_chunk, cmpr_chunk + sizeof(chunk_hdr), + unc_use); + } + + /* Advance pointers */ + cmpr_chunk += cmpr_use; + unc_chunk += unc_use; + + /* Check for the end of unc buffer */ + if (unc_chunk >= unc_end) + break; + + /* Proceed the next chunk */ + if (cmpr_chunk > cmpr_end - 2) + break; + + chunk_size_saved = LZNT_CHUNK_SIZE; + + /* read chunk header */ + chunk_hdr = cmpr_chunk[1]; + chunk_hdr <<= 8; + chunk_hdr |= cmpr_chunk[0]; + + if (!chunk_hdr) + break; + + /* Check the size of unc buffer */ + if (unc_use < chunk_size_saved) { + size_t t1 = chunk_size_saved - unc_use; + u8 *t2 = unc_chunk + t1; + + /* 'Zero' memory */ + if (t2 >= unc_end) + break; + + memset(unc_chunk, 0, t1); + unc_chunk = t2; + } + } + + /* Check compression boundary */ + if (cmpr_chunk > cmpr_end) + return -EINVAL; + + /* + * The unc size is just a difference between current + * pointer and original one + */ + return PtrOffset(unc, unc_chunk); +} From patchwork Fri Aug 21 16:25:37 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Konstantin Komarov X-Patchwork-Id: 11729941 Return-Path: Received: from mail.kernel.org (pdx-korg-mail-1.web.codeaurora.org [172.30.200.123]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id 2D1C9739 for ; Fri, 21 Aug 2020 16:29:51 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id 1583220724 for ; Fri, 21 Aug 2020 16:29:51 +0000 (UTC) Authentication-Results: mail.kernel.org; dkim=pass (1024-bit key) header.d=paragon-software.com header.i=@paragon-software.com header.b="ZeXwYQll" Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1728271AbgHUQ3q (ORCPT ); Fri, 21 Aug 2020 12:29:46 -0400 Received: from relaydlg-01.paragon-software.com 
From: Konstantin Komarov To: viro@zeniv.linux.org.uk, linux-kernel@vger.kernel.org, linux-fsdevel@vger.kernel.org CC: Pali Rohár Subject: [PATCH v2 08/10] fs/ntfs3: Add Kconfig, Makefile and doc Date: Fri, 21 Aug 2020 16:25:37 +0000 Message-ID: <74de75d537ac486e9fcfe7931181a9b9@paragon-software.com> This adds fs/ntfs3 Kconfig, Makefile and Documentation file Signed-off-by: Konstantin Komarov
--- Documentation/filesystems/ntfs3.rst | 93 +++++++++++++++++++++++++++++ fs/ntfs3/Kconfig | 23 +++++++ fs/ntfs3/Makefile | 11 ++++ 3 files changed, 127 insertions(+) create mode 100644 Documentation/filesystems/ntfs3.rst create mode 100644 fs/ntfs3/Kconfig create mode 100644 fs/ntfs3/Makefile diff --git a/Documentation/filesystems/ntfs3.rst b/Documentation/filesystems/ntfs3.rst new file mode 100644 index 000000000000..4a510a6cdaee --- /dev/null +++ b/Documentation/filesystems/ntfs3.rst @@ -0,0 +1,93 @@ +.. SPDX-License-Identifier: GPL-2.0 + +===== +NTFS3 +===== + + +Summary and Features +==================== + +NTFS3 is a fully functional NTFS read-write driver. The driver works with +NTFS versions up to 3.1, normal/compressed/sparse files +and journal replaying. The file system type to use on mount is 'ntfs3'. + +- This driver implements NTFS read/write support for normal, sparse and + compressed files. + NOTE: Operations with compressed files require increased memory consumption; +- Supports native journal replaying; +- Supports extended attributes; +- Supports NFS export of mounted NTFS volumes. + +Mount Options +============= + +The list below describes mount options supported by the NTFS3 driver in addition +to the generic ones. + +=============================================================================== + +nls=name This option informs the driver how to interpret path + strings and translate them to Unicode and back. If this + option is not set, or if the specified codepage + doesn't exist on the system, the default codepage will be + used (CONFIG_NLS_DEFAULT). + Examples: + 'nls=utf8' + +uid= +gid= +umask= Controls the default permissions for files/directories created + after the NTFS volume is mounted. + +fmask= +dmask= Instead of specifying umask, which applies to both + files and directories, fmask applies only to files and + dmask only to directories.
+ +nohidden Files with the Windows-specific HIDDEN (FILE_ATTRIBUTE_HIDDEN) + attribute will not be shown under Linux. + +sys_immutable Files with the Windows-specific SYSTEM + (FILE_ATTRIBUTE_SYSTEM) attribute will be marked as system + immutable files. + +discard Enable support of the TRIM command for improved performance + on delete operations, which is recommended for use with + solid-state drives (SSDs). + +force Forces the driver to mount partitions even if the 'dirty' flag + (volume dirty) is set. Not recommended for use. + +sparse Create new files as "sparse". + +showmeta Use this parameter to show all meta-files (System Files) on + a mounted NTFS partition. + By default, all meta-files are hidden. + +no_acs_rules The "no access rules" mount option sets access rights for + files/folders to 777 and owner/group to root. This mount + option overrides all other permission settings: + - permission changes for files/folders will be reported + as successful, but they will remain 777; + - owner/group changes will be reported as successful, but + owner/group will remain root. + +=============================================================================== + + +ToDo list +========= + +- Full journaling support (currently journal replaying is supported) over JBD. + + +References +========== +https://www.paragon-software.com/home/ntfs-linux-professional/ + - Commercial version of the NTFS driver for Linux. + +almaz.alexandrovich@paragon-software.com + - Direct e-mail address for feedback and requests on the NTFS3 implementation. + + diff --git a/fs/ntfs3/Kconfig b/fs/ntfs3/Kconfig new file mode 100644 index 000000000000..92a9c68008c8 --- /dev/null +++ b/fs/ntfs3/Kconfig @@ -0,0 +1,23 @@ +# SPDX-License-Identifier: GPL-2.0-only +config NTFS3_FS + tristate "NTFS Read-Write file system support" + select NLS + help + Windows OS native file system (NTFS) support up to NTFS version 3.1.
+ + Y or M enables the NTFS3 driver with full features enabled (read, + write, journal replaying, sparse/compressed files support). + The file system type to use on mount is "ntfs3". The module name + (M option) is also "ntfs3". + + Documentation: + +config NTFS3_64BIT_CLUSTER + bool "64 bits per NTFS cluster" + depends on NTFS3_FS && 64BIT + help + The Windows implementation of ntfs.sys uses 32 bits per cluster. + With 64 bits per cluster activated, you will be able to use 4K + clusters on 16T+ volumes. Windows will not be able to mount such + volumes. + + It is recommended to say N here. diff --git a/fs/ntfs3/Makefile b/fs/ntfs3/Makefile new file mode 100644 index 000000000000..4d4fe198b8b8 --- /dev/null +++ b/fs/ntfs3/Makefile @@ -0,0 +1,11 @@ +# SPDX-License-Identifier: GPL-2.0 +# +# Makefile for the ntfs3 filesystem support. +# + +obj-$(CONFIG_NTFS3_FS) += ntfs3.o + +ntfs3-objs := bitfunc.o bitmap.o inode.o fsntfs.o frecord.o \ + index.o attrlist.o record.o attrib.o run.o xattr.o\ + upcase.o super.o file.o dir.o namei.o lznt.o\ + fslog.o From patchwork Fri Aug 21 16:25:42 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Konstantin Komarov X-Patchwork-Id: 11729937 Return-Path: Received: from mail.kernel.org (pdx-korg-mail-1.web.codeaurora.org [172.30.200.123]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id 1FC94739 for ; Fri, 21 Aug 2020 16:27:07 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id 0792C207DE for ; Fri, 21 Aug 2020 16:27:07 +0000 (UTC) Authentication-Results: mail.kernel.org; dkim=pass (1024-bit key) header.d=paragon-software.com header.i=@paragon-software.com header.b="rczlk549" Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1727797AbgHUQ0u (ORCPT ); Fri, 21 Aug 2020 12:26:50 -0400 Received: from relayfre-01.paragon-software.com ([176.12.100.13]:56494 "EHLO
From: Konstantin Komarov To: viro@zeniv.linux.org.uk, linux-kernel@vger.kernel.org, linux-fsdevel@vger.kernel.org CC: Pali Rohár Subject: [PATCH v2 09/10] fs/ntfs3: Add NTFS3 in fs/Kconfig and fs/Makefile Date: Fri, 21 Aug 2020 16:25:42 +0000 Message-ID: <35de976db78443cc9ed18bcd410e6f4a@paragon-software.com> This adds NTFS3 in fs/Kconfig and fs/Makefile Signed-off-by: Konstantin Komarov Reported-by: kernel
test robot Reported-by: kernel test robot --- fs/Kconfig | 1 + fs/Makefile | 1 + 2 files changed, 2 insertions(+) diff --git a/fs/Kconfig b/fs/Kconfig index aa4c12282301..eae96d55ab67 100644 --- a/fs/Kconfig +++ b/fs/Kconfig @@ -145,6 +145,7 @@ menu "DOS/FAT/EXFAT/NT Filesystems" source "fs/fat/Kconfig" source "fs/exfat/Kconfig" source "fs/ntfs/Kconfig" +source "fs/ntfs3/Kconfig" endmenu endif # BLOCK diff --git a/fs/Makefile b/fs/Makefile index 1c7b0e3f6daa..b0b4ad8affa0 100644 --- a/fs/Makefile +++ b/fs/Makefile @@ -100,6 +100,7 @@ obj-$(CONFIG_SYSV_FS) += sysv/ obj-$(CONFIG_CIFS) += cifs/ obj-$(CONFIG_HPFS_FS) += hpfs/ obj-$(CONFIG_NTFS_FS) += ntfs/ +obj-$(CONFIG_NTFS3_FS) += ntfs3/ obj-$(CONFIG_UFS_FS) += ufs/ obj-$(CONFIG_EFS_FS) += efs/ obj-$(CONFIG_JFFS2_FS) += jffs2/ From patchwork Fri Aug 21 16:25:49 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Konstantin Komarov X-Patchwork-Id: 11729935 Return-Path: Received: from mail.kernel.org (pdx-korg-mail-1.web.codeaurora.org [172.30.200.123]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id 7533D739 for ; Fri, 21 Aug 2020 16:26:50 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id 5C992206DA for ; Fri, 21 Aug 2020 16:26:50 +0000 (UTC) Authentication-Results: mail.kernel.org; dkim=pass (1024-bit key) header.d=paragon-software.com header.i=@paragon-software.com header.b="RrS1F7UT" Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1727859AbgHUQ0h (ORCPT ); Fri, 21 Aug 2020 12:26:37 -0400 Received: from relaydlg-01.paragon-software.com ([81.5.88.159]:59085 "EHLO relaydlg-01.paragon-software.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1728482AbgHUQZ7 (ORCPT ); Fri, 21 Aug 2020 12:25:59 -0400 Received: from dlg2.mail.paragon-software.com (vdlg-exch-02.paragon-software.com [172.30.1.105]) by 
From: Konstantin Komarov To: viro@zeniv.linux.org.uk, linux-kernel@vger.kernel.org, linux-fsdevel@vger.kernel.org CC: Pali Rohár Subject: [PATCH v2 10/10] fs/ntfs3: Add MAINTAINERS Date: Fri, 21 Aug 2020 16:25:49 +0000 Message-ID: <480fae2412784df8b92249ef12328f46@paragon-software.com> MAINTAINERS update Signed-off-by: Konstantin Komarov --- MAINTAINERS | 7 +++++++ 1 file changed, 7 insertions(+) diff --git a/MAINTAINERS b/MAINTAINERS index deaafb617361..5629fb0421f6 100644 --- a/MAINTAINERS +++ b/MAINTAINERS @@ -12353,6 +12353,13 @@ T: git git://git.kernel.org/pub/scm/linux/kernel/git/aia21/ntfs.git F: Documentation/filesystems/ntfs.rst F: fs/ntfs/ +NTFS3 FILESYSTEM +M:
Konstantin Komarov +S: Supported +W: http://www.paragon-software.com/ +F: Documentation/filesystems/ntfs3.rst +F: fs/ntfs3/ + NUBUS SUBSYSTEM M: Finn Thain L: linux-m68k@lists.linux-m68k.org