
[Bug,Report] OOB-read BUG in HFS+ filesystem

Message ID tencent_B730B2241BE4152C9D6AA80789EEE1DEE30A@qq.com (mailing list archive)
State New
Series [Bug,Report] OOB-read BUG in HFS+ filesystem

Commit Message

now4yreal April 14, 2025, 1:45 p.m. UTC
Dear Linux Security Maintainers,
I would like to report an OOB-read vulnerability in the HFS+ file system, which I discovered using our in-house kernel fuzzer, Symsyz.

1. Vulnerability Detail and Root Cause:
The vulnerability occurs in the function `hfsplus_bnode_dump` at LOC1 (see the code below), which calls `hfs_bnode_read_u16` to read `key_off` from the file system at offset `off`. The value of `key_off` comes directly from the on-disk image, so it is attacker-controlled (in the PoC we set it to 29234). At LOC2, `key_off` is used as an offset for a further read, following the call chain `hfs_bnode_read_u16 -> hfs_bnode_read`. The problem lies in `hfs_bnode_read` at LOC3: the offset read from the file system is never validated, so if `key_off >> PAGE_SHIFT` exceeds the number of pages backing `node->page`, the pointer computed at LOC4 indexes past the page array and the subsequent copy reads out of bounds.

```c
// fs/hfsplus/bnode.c +291
void hfs_bnode_dump(struct hfs_bnode *node)
{
	struct hfs_bnode_desc desc;
	__be32 cnid;
	int i, off, key_off;

	hfs_dbg(BNODE_MOD, "bnode: %d\n", node->this);
	hfs_bnode_read(node, &desc, 0, sizeof(desc));
	hfs_dbg(BNODE_MOD, "%d, %d, %d, %d, %d\n",
		be32_to_cpu(desc.next), be32_to_cpu(desc.prev),
		desc.type, desc.height, be16_to_cpu(desc.num_recs));

	off = node->tree->node_size - 2;
	for (i = be16_to_cpu(desc.num_recs); i >= 0; off -= 2, i--) {
		key_off = hfs_bnode_read_u16(node, off); <------- LOC1: read offset from filesystem
		hfs_dbg(BNODE_MOD, " %d", key_off);
		if (i && node->type == HFS_NODE_INDEX) {
			int tmp;

			if (node->tree->attributes & HFS_TREE_VARIDXKEYS ||
					node->tree->cnid == HFSPLUS_ATTR_CNID)
				tmp = hfs_bnode_read_u16(node, key_off) + 2;
			else
				tmp = node->tree->max_key_len + 2;
			hfs_dbg_cont(BNODE_MOD, " (%d", tmp);
			hfs_bnode_read(node, &cnid, key_off + tmp, 4);
			hfs_dbg_cont(BNODE_MOD, ",%d)", be32_to_cpu(cnid));
		} else if (i && node->type == HFS_NODE_LEAF) {
			int tmp;

			tmp = hfs_bnode_read_u16(node, key_off); <------------ LOC2: read content at attacker-controlled key_off
			hfs_dbg_cont(BNODE_MOD, " (%d)", tmp);
		}
	}
	hfs_dbg_cont(BNODE_MOD, "\n");
}

// fs/hfsplus/bnode.c +22
void hfs_bnode_read(struct hfs_bnode *node, void *buf, int off, int len)
{
	struct page **pagep;
	int l;

	off += node->page_offset; <------------ LOC3: missing check
	pagep = node->page + (off >> PAGE_SHIFT); <------------ LOC4: trigger the bug
	off &= ~PAGE_MASK;

	l = min_t(int, len, PAGE_SIZE - off);
	memcpy_from_page(buf, *pagep, off, l);

	while ((len -= l) != 0) {
		buf += l;
		l = min_t(int, len, PAGE_SIZE);
		memcpy_from_page(buf, *++pagep, 0, l);
	}
}

```

2. Impact Analysis
Through this vulnerability, it is possible to construct arbitrary kernel memory reads, which can be used to leak the kernel base address. When combined with other kernel arbitrary write vulnerabilities, this can lead to kernel control flow hijacking and other severe security issues.

3. Suggested Fix
Add validation of the offset in the function `hfs_bnode_read` (fs/hfsplus/bnode.c +22); a possible patch is included in the Patch section at the end of this message.



4. Crash Log Overview:
```
BUG: KASAN: slab-out-of-bounds in hfsplus_bnode_read+0x228/0x240 fs/hfsplus/bnode.c:32
Read of size 8 at addr ffff88802315cfc0 by task syz.0.7/9865

CPU: 0 UID: 0 PID: 9865 Comm: syz.0.7 Not tainted 6.15.0-rc1-00308-gecd5d67ad602 #3 PREEMPT(full)
Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 1.13.0-1ubuntu1.1 04/01/2014
Call Trace:
 <task>
 __dump_stack lib/dump_stack.c:94 [inline]
 dump_stack_lvl+0x10e/0x1f0 lib/dump_stack.c:120
 print_address_description mm/kasan/report.c:408 [inline]
 print_report+0xc6/0x680 mm/kasan/report.c:521
 kasan_report+0xe4/0x120 mm/kasan/report.c:634
 hfsplus_bnode_read+0x228/0x240 fs/hfsplus/bnode.c:32
 hfsplus_bnode_read_u16 fs/hfsplus/bnode.c:45 [inline]
 hfsplus_bnode_dump+0x31f/0x3c0 fs/hfsplus/bnode.c:321
 hfsplus_brec_remove+0x3d2/0x4e0 fs/hfsplus/brec.c:229
 __hfsplus_delete_attr+0x2a0/0x3b0 fs/hfsplus/attributes.c:299
 hfsplus_delete_all_attrs+0x26f/0x330 fs/hfsplus/attributes.c:378
 hfsplus_delete_cat+0x851/0xde0 fs/hfsplus/catalog.c:425
 hfsplus_unlink+0x20f/0x7f0 fs/hfsplus/dir.c:385
 hfsplus_rename+0xbc/0x200 fs/hfsplus/dir.c:547
 vfs_rename+0xf47/0x2120 fs/namei.c:5086
 do_renameat2+0x82c/0xc90 fs/namei.c:5235
 __do_sys_renameat2 fs/namei.c:5269 [inline]
 __se_sys_renameat2 fs/namei.c:5266 [inline]
 __x64_sys_renameat2+0xe7/0x130 fs/namei.c:5266
 do_syscall_x64 arch/x86/entry/syscall_64.c:63 [inline]
 do_syscall_64+0xc7/0x250 arch/x86/entry/syscall_64.c:94
 entry_SYSCALL_64_after_hwframe+0x77/0x7f
RIP: 0033:0x7f2a209b2d5d
Code: 02 b8 ff ff ff ff c3 66 0f 1f 44 00 00 f3 0f 1e fa 48 89 f8 48 89 f7 48 89 d6 48 89 ca 4d 89 c2 4d 89 c8 4c 8b 4c 24 08 0f 05 <48> 3d 01 f0 ff ff 73 01 c3 48 c7 c1 b0 ff ff ff f7 d8 64 89 01 48
RSP: 002b:00007f2a2181cba8 EFLAGS: 00000246 ORIG_RAX: 000000000000013c
RAX: ffffffffffffffda RBX: 00007f2a20bd5fa0 RCX: 00007f2a209b2d5d
RDX: 0000000000000004 RSI: 00004000000000c0 RDI: 0000000000000005
RBP: 00007f2a20a36327 R08: 0000000000000000 R09: 0000000000000000
R10: 0000400000000180 R11: 0000000000000246 R12: 0000000000000000
R13: 00007f2a20bd5fac R14: 00007f2a20bd6038 R15: 00007f2a2181cd40
 </task>

Allocated by task 9865:
 kasan_save_stack+0x33/0x60 mm/kasan/common.c:47
 kasan_save_track+0x14/0x30 mm/kasan/common.c:68
 poison_kmalloc_redzone mm/kasan/common.c:377 [inline]
 __kasan_kmalloc+0xaa/0xb0 mm/kasan/common.c:394
 kasan_kmalloc include/linux/kasan.h:260 [inline]
 __do_kmalloc_node mm/slub.c:4331 [inline]
 __kmalloc_noprof+0x20e/0x560 mm/slub.c:4343
 kmalloc_noprof include/linux/slab.h:909 [inline]
 kzalloc_noprof include/linux/slab.h:1039 [inline]
 __hfs_bnode_create+0x107/0x8b0 fs/hfsplus/bnode.c:409
 hfsplus_bnode_find+0x2db/0xd20 fs/hfsplus/bnode.c:486
 hfsplus_brec_find+0x2b8/0x520 fs/hfsplus/bfind.c:172
 hfsplus_find_attr fs/hfsplus/attributes.c:160 [inline]
 hfsplus_delete_all_attrs+0x248/0x330 fs/hfsplus/attributes.c:371
 hfsplus_delete_cat+0x851/0xde0 fs/hfsplus/catalog.c:425
 hfsplus_unlink+0x20f/0x7f0 fs/hfsplus/dir.c:385
 hfsplus_rename+0xbc/0x200 fs/hfsplus/dir.c:547
 vfs_rename+0xf47/0x2120 fs/namei.c:5086
 do_renameat2+0x82c/0xc90 fs/namei.c:5235
 __do_sys_renameat2 fs/namei.c:5269 [inline]
 __se_sys_renameat2 fs/namei.c:5266 [inline]
 __x64_sys_renameat2+0xe7/0x130 fs/namei.c:5266
 do_syscall_x64 arch/x86/entry/syscall_64.c:63 [inline]
 do_syscall_64+0xc7/0x250 arch/x86/entry/syscall_64.c:94
 entry_SYSCALL_64_after_hwframe+0x77/0x7f

The buggy address belongs to the object at ffff88802315cf00
 which belongs to the cache kmalloc-192 of size 192
The buggy address is located 40 bytes to the right of
 allocated 152-byte region [ffff88802315cf00, ffff88802315cf98)

The buggy address belongs to the physical page:
page: refcount:0 mapcount:0 mapping:0000000000000000 index:0xffff88802315ce00 pfn:0x2315c
flags: 0xfff00000000200(workingset|node=0|zone=1|lastcpupid=0x7ff)
page_type: f5(slab)
raw: 00fff00000000200 ffff88801b4413c0 ffffea0000005310 ffff88801b440288
raw: ffff88802315ce00 0000000000100002 00000000f5000000 0000000000000000
page dumped because: kasan: bad access detected
page_owner tracks the page as allocated
page last allocated via order 0, migratetype Unmovable, gfp_mask 0x252800(GFP_NOWAIT|__GFP_NORETRY|__GFP_COMP|__GFP_THISNODE), pid 9, tgid 9 (kworker/0:0), ts 36080866601, free_ts 24921668201
 set_page_owner include/linux/page_owner.h:32 [inline]
 post_alloc_hook+0x181/0x1b0 mm/page_alloc.c:1717
 prep_new_page mm/page_alloc.c:1725 [inline]
 get_page_from_freelist+0x1074/0x3780 mm/page_alloc.c:3652
 __alloc_pages_slowpath mm/page_alloc.c:4473 [inline]
 __alloc_frozen_pages_noprof+0x5a5/0x2420 mm/page_alloc.c:4947
 alloc_slab_page mm/slub.c:2461 [inline]
 allocate_slab mm/slub.c:2623 [inline]
 new_slab+0x94/0x340 mm/slub.c:2676
 ___slab_alloc+0xd97/0x1970 mm/slub.c:3862
 __slab_alloc.isra.0+0x56/0xb0 mm/slub.c:3952
 __slab_alloc_node mm/slub.c:4027 [inline]
 slab_alloc_node mm/slub.c:4188 [inline]
 __kmalloc_cache_node_noprof+0x276/0x420 mm/slub.c:4370
 kmalloc_node_noprof include/linux/slab.h:928 [inline]
 alloc_worker kernel/workqueue.c:2647 [inline]
 create_worker+0x10f/0x7e0 kernel/workqueue.c:2790
 maybe_create_worker kernel/workqueue.c:3063 [inline]
 manage_workers kernel/workqueue.c:3115 [inline]
 worker_thread+0x926/0xe60 kernel/workqueue.c:3375
 kthread+0x3a5/0x770 kernel/kthread.c:464
 ret_from_fork+0x45/0x80 arch/x86/kernel/process.c:153
 ret_from_fork_asm+0x1a/0x30 arch/x86/entry/entry_64.S:245
page last free pid 5273 tgid 5273 stack trace:
 reset_page_owner include/linux/page_owner.h:25 [inline]
 free_pages_prepare mm/page_alloc.c:1262 [inline]
 __free_frozen_pages+0x709/0x1030 mm/page_alloc.c:2680
 rcu_do_batch kernel/rcu/tree.c:2568 [inline]
 rcu_core+0x7ad/0x14a0 kernel/rcu/tree.c:2824
 handle_softirqs+0x1e7/0x8a0 kernel/softirq.c:579
 __do_softirq kernel/softirq.c:613 [inline]
 invoke_softirq kernel/softirq.c:453 [inline]
 __irq_exit_rcu+0xfe/0x160 kernel/softirq.c:680
 irq_exit_rcu+0x9/0x30 kernel/softirq.c:696
 instr_sysvec_apic_timer_interrupt arch/x86/kernel/apic/apic.c:1049 [inline]
 sysvec_apic_timer_interrupt+0xa3/0xc0 arch/x86/kernel/apic/apic.c:1049
 asm_sysvec_apic_timer_interrupt+0x1a/0x20 arch/x86/include/asm/idtentry.h:702

Memory state around the buggy address:
 ffff88802315ce80: fb fb fb fb fb fb fb fb fc fc fc fc fc fc fc fc
 ffff88802315cf00: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
>ffff88802315cf80: 00 00 00 fc fc fc fc fc fc fc fc fc fc fc fc fc
                                           ^
 ffff88802315d000: 00 00 00 00 00 00 00 00 fc fc fc fc fc fc fc fc
 ffff88802315d080: 00 00 00 00 00 00 00 00 fc fc fc fc fc fc fc fc
```

Since I am not a core hfs developer and only have a general understanding of the file system’s internal logic, there might be inaccuracies in this analysis. I would appreciate it if you could forward this report to the appropriate maintainers for confirmation and further investigation. Please feel free to reach out if you need any clarification or would like additional information.

I’ve attached the POC (written in C) for your convenience — it can be compiled directly with `gcc`.

Thanks for your attention to this matter.

Best regards,
luka

Comments

Christian Brauner April 14, 2025, 2:18 p.m. UTC | #1
On Mon, Apr 14, 2025 at 09:45:25PM +0800, now4yreal wrote:
> Dear Linux Security Maintainers,
> I would like to report a OOB-read vulnerability in the HFS+ file
> system, which I discovered using our in-house developed kernel fuzzer,
> Symsyz.

Bug reports from non-official syzbot instances are generally not
accepted.

hfs and hfsplus are orphaned filesystems since at least 2014. Bug
reports for such filesystems won't receive much attention from the core
maintainers.

I'm very very close to putting them on the chopping block as they're
slowly turning into pointless burdens.
Matthew Wilcox April 14, 2025, 2:21 p.m. UTC | #2
On Mon, Apr 14, 2025 at 04:18:27PM +0200, Christian Brauner wrote:
> On Mon, Apr 14, 2025 at 09:45:25PM +0800, now4yreal wrote:
> > Dear Linux Security Maintainers,
> > I would like to report a OOB-read vulnerability in the HFS+ file
> > system, which I discovered using our in-house developed kernel fuzzer,
> > Symsyz.
> 
> Bug reports from non-official syzbot instances are generally not
> accepted.
> 
> hfs and hfsplus are orphaned filesystems since at least 2014. Bug
> reports for such filesystems won't receive much attention from the core
> maintainers.
> 
> I'm very very close to putting them on the chopping block as they're
> slowly turning into pointless burdens.

I've tried asking some people who are long term Apple & Linux people,
but haven't been able to find anyone interested in becoming maintainer.
Let's drop both hfs & hfsplus.  Ten years of being unmaintained is
long enough.
David Sterba April 14, 2025, 4:23 p.m. UTC | #3
On Mon, Apr 14, 2025 at 03:21:56PM +0100, Matthew Wilcox wrote:
> On Mon, Apr 14, 2025 at 04:18:27PM +0200, Christian Brauner wrote:
> > On Mon, Apr 14, 2025 at 09:45:25PM +0800, now4yreal wrote:
> > > Dear Linux Security Maintainers,
> > > I would like to report a OOB-read vulnerability in the HFS+ file
> > > system, which I discovered using our in-house developed kernel fuzzer,
> > > Symsyz.
> > 
> > Bug reports from non-official syzbot instances are generally not
> > accepted.
> > 
> > hfs and hfsplus are orphaned filesystems since at least 2014. Bug
> > reports for such filesystems won't receive much attention from the core
> > maintainers.
> > 
> > I'm very very close to putting them on the chopping block as they're
> > slowly turning into pointless burdens.
> 
> I've tried asking some people who are long term Apple & Linux people,
> but haven't been able to find anyone interested in becoming maintainer.
> Let's drop both hfs & hfsplus.  Ten years of being unmaintained is
> long enough.

Agreed. If needed there are FUSE implementations to access .dmg files
with HFS/HFS+ or other standalone tools.

https://github.com/0x09/hfsfuse
https://github.com/darlinghq/darling-dmg
Christian Brauner April 15, 2025, 7:52 a.m. UTC | #4
On Mon, Apr 14, 2025 at 06:23:28PM +0200, David Sterba wrote:
> On Mon, Apr 14, 2025 at 03:21:56PM +0100, Matthew Wilcox wrote:
> > On Mon, Apr 14, 2025 at 04:18:27PM +0200, Christian Brauner wrote:
> > > On Mon, Apr 14, 2025 at 09:45:25PM +0800, now4yreal wrote:
> > > > Dear Linux Security Maintainers,
> > > > I would like to report a OOB-read vulnerability in the HFS+ file
> > > > system, which I discovered using our in-house developed kernel fuzzer,
> > > > Symsyz.
> > > 
> > > Bug reports from non-official syzbot instances are generally not
> > > accepted.
> > > 
> > > hfs and hfsplus are orphaned filesystems since at least 2014. Bug
> > > reports for such filesystems won't receive much attention from the core
> > > maintainers.
> > > 
> > > I'm very very close to putting them on the chopping block as they're
> > > slowly turning into pointless burdens.
> > 
> > I've tried asking some people who are long term Apple & Linux people,
> > but haven't been able to find anyone interested in becoming maintainer.
> > Let's drop both hfs & hfsplus.  Ten years of being unmaintained is
> > long enough.
> 
> Agreed. If needed there are FUSE implementations to access .dmg files
> with HFS/HFS+ or other standalone tools.
> 
> https://github.com/0x09/hfsfuse
> https://github.com/darlinghq/darling-dmg

Ok, I'm open to trying. I'm adding a deprecation message, logged to dmesg,
when initiating a new hfs{plus} context, and then we can try to remove
it by the end of the year.
Johannes Thumshirn April 15, 2025, 9:16 a.m. UTC | #5
On 15.04.25 09:52, Christian Brauner wrote:
> On Mon, Apr 14, 2025 at 06:23:28PM +0200, David Sterba wrote:
>> On Mon, Apr 14, 2025 at 03:21:56PM +0100, Matthew Wilcox wrote:
>>> On Mon, Apr 14, 2025 at 04:18:27PM +0200, Christian Brauner wrote:
>>>> On Mon, Apr 14, 2025 at 09:45:25PM +0800, now4yreal wrote:
>>>>> Dear Linux Security Maintainers,
>>>>> I would like to report a OOB-read vulnerability in the HFS+ file
>>>>> system, which I discovered using our in-house developed kernel fuzzer,
>>>>> Symsyz.
>>>>
>>>> Bug reports from non-official syzbot instances are generally not
>>>> accepted.
>>>>
>>>> hfs and hfsplus are orphaned filesystems since at least 2014. Bug
>>>> reports for such filesystems won't receive much attention from the core
>>>> maintainers.
>>>>
>>>> I'm very very close to putting them on the chopping block as they're
>>>> slowly turning into pointless burdens.
>>>
>>> I've tried asking some people who are long term Apple & Linux people,
>>> but haven't been able to find anyone interested in becoming maintainer.
>>> Let's drop both hfs & hfsplus.  Ten years of being unmaintained is
>>> long enough.
>>
>> Agreed. If needed there are FUSE implementations to access .dmg files
>> with HFS/HFS+ or other standalone tools.
>>
>> https://github.com/0x09/hfsfuse
>> https://github.com/darlinghq/darling-dmg
> 
> Ok, I'm open to trying. I'm adding a deprecation message when initating
> a new hfs{plus} context logged to dmesg and then we can try and remove
> it by the end of the year.
> 
> 

Just a word of caution though, (at least Intel) Macs have their EFI ESP 
partition on HFS+ instead of FAT. I don't own an Apple Silicon Mac so I 
can't check if it's there as well.
Christian Brauner April 15, 2025, 9:31 a.m. UTC | #6
On Tue, Apr 15, 2025 at 09:16:58AM +0000, Johannes Thumshirn wrote:
> On 15.04.25 09:52, Christian Brauner wrote:
> > On Mon, Apr 14, 2025 at 06:23:28PM +0200, David Sterba wrote:
> >> On Mon, Apr 14, 2025 at 03:21:56PM +0100, Matthew Wilcox wrote:
> >>> On Mon, Apr 14, 2025 at 04:18:27PM +0200, Christian Brauner wrote:
> >>>> On Mon, Apr 14, 2025 at 09:45:25PM +0800, now4yreal wrote:
> >>>>> Dear Linux Security Maintainers,
> >>>>> I would like to report a OOB-read vulnerability in the HFS+ file
> >>>>> system, which I discovered using our in-house developed kernel fuzzer,
> >>>>> Symsyz.
> >>>>
> >>>> Bug reports from non-official syzbot instances are generally not
> >>>> accepted.
> >>>>
> >>>> hfs and hfsplus are orphaned filesystems since at least 2014. Bug
> >>>> reports for such filesystems won't receive much attention from the core
> >>>> maintainers.
> >>>>
> >>>> I'm very very close to putting them on the chopping block as they're
> >>>> slowly turning into pointless burdens.
> >>>
> >>> I've tried asking some people who are long term Apple & Linux people,
> >>> but haven't been able to find anyone interested in becoming maintainer.
> >>> Let's drop both hfs & hfsplus.  Ten years of being unmaintained is
> >>> long enough.
> >>
> >> Agreed. If needed there are FUSE implementations to access .dmg files
> >> with HFS/HFS+ or other standalone tools.
> >>
> >> https://github.com/0x09/hfsfuse
> >> https://github.com/darlinghq/darling-dmg
> > 
> > Ok, I'm open to trying. I'm adding a deprecation message when initating
> > a new hfs{plus} context logged to dmesg and then we can try and remove
> > it by the end of the year.
> > 
> > 
> 
> Just a word of caution though, (at least Intel) Macs have their EFI ESP 
> partition on HFS+ instead of FAT. I don't own an Apple Silicon Mac so I 
> can't check if it's there as well.

Yeah, someone mentioned that. Well, then hopefully we'll have someone
stepping up for maintainership.
Johannes Thumshirn April 15, 2025, 10:23 a.m. UTC | #7
On 15.04.25 11:31, Christian Brauner wrote:
> On Tue, Apr 15, 2025 at 09:16:58AM +0000, Johannes Thumshirn wrote:
>> On 15.04.25 09:52, Christian Brauner wrote:
>>> Ok, I'm open to trying. I'm adding a deprecation message when initating
>>> a new hfs{plus} context logged to dmesg and then we can try and remove
>>> it by the end of the year.
>>>
>>>
>>
>> Just a word of caution though, (at least Intel) Macs have their EFI ESP
>> partition on HFS+ instead of FAT. I don't own an Apple Silicon Mac so I
>> can't check if it's there as well.
> 
> Yeah, someone mentioned that. Well, then we hopefully have someone
> stepping up to for maintainership.
> 

I hope you aren't considering me here :D. I'm lacking the time to 
volunteer as a Maintainer but I can offer to look into some fixes.
Christian Brauner April 15, 2025, 11:25 a.m. UTC | #8
On Tue, Apr 15, 2025 at 10:23:27AM +0000, Johannes Thumshirn wrote:
> On 15.04.25 11:31, Christian Brauner wrote:
> > On Tue, Apr 15, 2025 at 09:16:58AM +0000, Johannes Thumshirn wrote:
> >> On 15.04.25 09:52, Christian Brauner wrote:
> >>> Ok, I'm open to trying. I'm adding a deprecation message when initating
> >>> a new hfs{plus} context logged to dmesg and then we can try and remove
> >>> it by the end of the year.
> >>>
> >>>
> >>
> >> Just a word of caution though, (at least Intel) Macs have their EFI ESP
> >> partition on HFS+ instead of FAT. I don't own an Apple Silicon Mac so I
> >> can't check if it's there as well.
> > 
> > Yeah, someone mentioned that. Well, then we hopefully have someone
> > stepping up to for maintainership.
> > 
> 
> I hope you aren't considering me here :D. I'm lacking the time to 
> volunteer as a Maintainer but I can offer to look into some fixes.

No no, I'm aware. I'm just saying that if this Mac use-case really is
crucial, then we'd better find someone to take care of it properly.
Patch

diff --git a/fs/hfsplus/bnode.c b/fs/hfsplus/bnode.c
index 87974d5e6791..5bd31ebe648b 100644
--- a/fs/hfsplus/bnode.c
+++ b/fs/hfsplus/bnode.c
@@ -22,10 +22,14 @@ 
 void hfs_bnode_read(struct hfs_bnode *node, void *buf, int off, int len)
 {
        struct page **pagep;
-       int l;
+       int l, pagenum;
 
        off += node->page_offset;
-       pagep = node->page + (off >> PAGE_SHIFT);
+       pagenum = off >> PAGE_SHIFT;
+       if (pagenum >= node->tree->pages_per_bnode)
+               return;
+
+       pagep = node->page + pagenum;
        off &= ~PAGE_MASK;
 
        l = min_t(int, len, PAGE_SIZE - off);