
[v2] mm, page_alloc: Fix has_unmovable_pages for HugePages

Message ID 20181217225113.17864-1-osalvador@suse.de (mailing list archive)
State New, archived
Series [v2] mm, page_alloc: Fix has_unmovable_pages for HugePages

Commit Message

Oscar Salvador Dec. 17, 2018, 10:51 p.m. UTC
v1 -> v2:
	- Fix the logic for skipping pages by Michal

---
From e346b151037d3c37feb10a981a4d2a25018acf81 Mon Sep 17 00:00:00 2001
From: Oscar Salvador <osalvador@suse.de>
Date: Mon, 17 Dec 2018 14:53:35 +0100
Subject: [PATCH] mm, page_alloc: Fix has_unmovable_pages for HugePages

While playing with gigantic hugepages and memory_hotplug, I triggered
the following #PF when "cat memoryX/removable":

<---
kernel: BUG: unable to handle kernel NULL pointer dereference at 0000000000000008
kernel: #PF error: [normal kernel read fault]
kernel: PGD 0 P4D 0
kernel: Oops: 0000 [#1] SMP PTI
kernel: CPU: 1 PID: 1481 Comm: cat Tainted: G            E     4.20.0-rc6-mm1-1-default+ #18
kernel: Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 1.0.0-prebuilt.qemu-project.org 04/01/2014
kernel: RIP: 0010:has_unmovable_pages+0x154/0x210
kernel: Code: 1b ff ff ff eb 32 48 8b 45 00 bf 00 10 00 00 a9 00 00 01 00 74 07 0f b6 4d 51 48 d3 e7 e8 c4 81 05 00 48 85 c0 49 89 c1 75 7e <41> 8b 41 08 83 f8 09 74 41 83 f8 1b 74 3c 4d 2b 64 24 58 49 81 ec
kernel: RSP: 0018:ffffc90000a1fd30 EFLAGS: 00010246
kernel: RAX: 0000000000000000 RBX: 0000000000000000 RCX: 0000000000000009
kernel: RDX: ffffffff82aed4f0 RSI: 0000000000001000 RDI: 0000000000001000
kernel: RBP: ffffea0001800000 R08: 0000000000200000 R09: 0000000000000000
kernel: R10: 0000000000001000 R11: 0000000000000003 R12: ffff88813ffd45c0
kernel: R13: 0000000000060000 R14: 0000000000000001 R15: ffffea0000000000
kernel: FS:  00007fd71d9b3500(0000) GS:ffff88813bb00000(0000) knlGS:0000000000000000
kernel: CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
kernel: CR2: 0000000000000008 CR3: 00000001371c2002 CR4: 00000000003606e0
kernel: DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
kernel: DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400
kernel: Call Trace:
kernel:  is_mem_section_removable+0x7d/0x100
kernel:  removable_show+0x90/0xb0
kernel:  dev_attr_show+0x1c/0x50
kernel:  sysfs_kf_seq_show+0xca/0x1b0
kernel:  seq_read+0x133/0x380
kernel:  __vfs_read+0x26/0x180
kernel:  vfs_read+0x89/0x140
kernel:  ksys_read+0x42/0x90
kernel:  do_syscall_64+0x5b/0x180
kernel:  entry_SYSCALL_64_after_hwframe+0x44/0xa9
kernel: RIP: 0033:0x7fd71d4c8b41
kernel: Code: fe ff ff 48 8d 3d 27 9e 09 00 48 83 ec 08 e8 96 02 02 00 66 0f 1f 44 00 00 8b 05 ea fc 2c 00 48 63 ff 85 c0 75 13 31 c0 0f 05 <48> 3d 00 f0 ff ff 77 57 f3 c3 0f 1f 44 00 00 55 53 48 89 d5 48 89
kernel: RSP: 002b:00007ffeab5f6448 EFLAGS: 00000246 ORIG_RAX: 0000000000000000
kernel: RAX: ffffffffffffffda RBX: 0000000000020000 RCX: 00007fd71d4c8b41
kernel: RDX: 0000000000020000 RSI: 00007fd71d809000 RDI: 0000000000000003
kernel: RBP: 0000000000020000 R08: ffffffffffffffff R09: 0000000000000000
kernel: R10: 000000000000038b R11: 0000000000000246 R12: 00007fd71d809000
kernel: R13: 0000000000000003 R14: 00007fd71d80900f R15: 0000000000020000
kernel: Modules linked in: af_packet(E) xt_tcpudp(E) ipt_REJECT(E) xt_conntrack(E) nf_conntrack(E) nf_defrag_ipv4(E) ip_set(E) nfnetlink(E) ebtable_nat(E) ebtable_broute(E) bridge(E) stp(E) llc(E) iptable_mangle(E) iptable_raw(E) iptable_security(E) ebtable_filter(E) ebtables(E) iptable_filter(E) ip_tables(E) x_tables(E) kvm_intel(E) kvm(E) irqbypass(E) crct10dif_pclmul(E) crc32_pclmul(E) ghash_clmulni_intel(E) bochs_drm(E) ttm(E) drm_kms_helper(E) drm(E) aesni_intel(E) virtio_net(E) syscopyarea(E) net_failover(E) sysfillrect(E) failover(E) aes_x86_64(E) crypto_simd(E) sysimgblt(E) cryptd(E) pcspkr(E) glue_helper(E) parport_pc(E) fb_sys_fops(E) i2c_piix4(E) parport(E) button(E) btrfs(E) libcrc32c(E) xor(E) zstd_decompress(E) zstd_compress(E) raid6_pq(E) sd_mod(E) ata_generic(E) ata_piix(E) ahci(E) libahci(E) serio_raw(E) crc32c_intel(E) virtio_pci(E) virtio_ring(E) virtio(E) libata(E) sg(E) scsi_mod(E) autofs4(E)
kernel: CR2: 0000000000000008
kernel: ---[ end trace 49cade81474e40e7 ]---
kernel: RIP: 0010:has_unmovable_pages+0x154/0x210
kernel: Code: 1b ff ff ff eb 32 48 8b 45 00 bf 00 10 00 00 a9 00 00 01 00 74 07 0f b6 4d 51 48 d3 e7 e8 c4 81 05 00 48 85 c0 49 89 c1 75 7e <41> 8b 41 08 83 f8 09 74 41 83 f8 1b 74 3c 4d 2b 64 24 58 49 81 ec
kernel: RSP: 0018:ffffc90000a1fd30 EFLAGS: 00010246
kernel: RAX: 0000000000000000 RBX: 0000000000000000 RCX: 0000000000000009
kernel: RDX: ffffffff82aed4f0 RSI: 0000000000001000 RDI: 0000000000001000
kernel: RBP: ffffea0001800000 R08: 0000000000200000 R09: 0000000000000000
kernel: R10: 0000000000001000 R11: 0000000000000003 R12: ffff88813ffd45c0
kernel: R13: 0000000000060000 R14: 0000000000000001 R15: ffffea0000000000
kernel: FS:  00007fd71d9b3500(0000) GS:ffff88813bb00000(0000) knlGS:0000000000000000
kernel: CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
kernel: CR2: 0000000000000008 CR3: 00000001371c2002 CR4: 00000000003606e0
kernel: DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
kernel: DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400
--->

The reason is that we do not pass the head page to page_hstate(); since
the call to compound_order() in page_hstate() is then made on a tail
page, it returns 0, so we end up checking every hstate's size against
PAGE_SIZE.

Obviously, no hstate matches that size, so page_hstate() returns NULL.
We then dereference that NULL pointer in hugepage_migration_supported()
and get the #PF shown above.

Fix that by getting the head page before calling page_hstate().

Also, since gigantic pages span several pageblocks, re-adjust the logic
for skipping pages.

Signed-off-by: Oscar Salvador <osalvador@suse.de>
---
 mm/page_alloc.c | 7 +++++--
 1 file changed, 5 insertions(+), 2 deletions(-)
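As an aside, the skip arithmetic the patch introduces is easy to model outside the kernel. The following is a plain userspace C sketch (not kernel code; the function name is made up for illustration) of what `skip_pages = (1 << compound_order(head)) - (page - head)` computes: how many pfns remain in a compound page when the scan enters it somewhere at or after the head.

```c
#include <assert.h>

/*
 * Userspace model of the patch's skip computation (illustrative only).
 *
 * 'order' is the compound order of the hugepage (9 for a 2MB page,
 * 18 for a 1GB gigantic page on x86-64) and 'tail_offset' is how far
 * into the compound page the scan currently is (page - head, in pfns).
 * The result is how many pfns are left until the first page after the
 * compound page, i.e. what the patch assigns to skip_pages.
 */
static unsigned long skip_pages(unsigned int order, unsigned long tail_offset)
{
    return (1UL << order) - tail_offset;
}
```

Entering a 2MB page (order 9) at its head yields 512; entering a 1GB page (order 18) exactly halfway through yields 131072.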

Comments

Andrew Morton Dec. 17, 2018, 11:07 p.m. UTC | #1
On Mon, 17 Dec 2018 23:51:13 +0100 Oscar Salvador <osalvador@suse.de> wrote:

> v1 -> v2:
> 	- Fix the logic for skipping pages by Michal
> 
> ---

Please be careful with the "^---$".  It signifies end-of-changelog, so
I ended up without a changelog!

> >From e346b151037d3c37feb10a981a4d2a25018acf81 Mon Sep 17 00:00:00 2001
> From: Oscar Salvador <osalvador@suse.de>
> Date: Mon, 17 Dec 2018 14:53:35 +0100
> Subject: [PATCH] mm, page_alloc: Fix has_unmovable_pages for HugePages
> 
> While playing with gigantic hugepages and memory_hotplug, I triggered
> the following #PF when "cat memoryX/removable":
> 
> ...
>
> Also, since gigantic pages span several pageblocks, re-adjust the logic
> for skipping pages.
> 
> Signed-off-by: Oscar Salvador <osalvador@suse.de>

cc:stable?
Michal Hocko Dec. 18, 2018, 7:36 a.m. UTC | #2
On Mon 17-12-18 15:07:26, Andrew Morton wrote:
> On Mon, 17 Dec 2018 23:51:13 +0100 Oscar Salvador <osalvador@suse.de> wrote:
> 
> > v1 -> v2:
> > 	- Fix the logic for skipping pages by Michal
> > 
> > ---
> 
> Please be careful with the "^---$".  It signifies end-of-changelog, so
> I ended up without a changelog!
> 
> > >From e346b151037d3c37feb10a981a4d2a25018acf81 Mon Sep 17 00:00:00 2001
> > From: Oscar Salvador <osalvador@suse.de>
> > Date: Mon, 17 Dec 2018 14:53:35 +0100
> > Subject: [PATCH] mm, page_alloc: Fix has_unmovable_pages for HugePages
> > 
> > While playing with gigantic hugepages and memory_hotplug, I triggered
> > the following #PF when "cat memoryX/removable":
> > 
> > ...
> >
> > Also, since gigantic pages span several pageblocks, re-adjust the logic
> > for skipping pages.
> > 
> > Signed-off-by: Oscar Salvador <osalvador@suse.de>

Acked-by: Michal Hocko <mhocko@suse.com>

> cc:stable?

See http://lkml.kernel.org/r/20181217152936.GR30879@dhcp22.suse.cz. I
believe nobody is simply using gigantic pages and hotplug at the same
time and those pages do not seem to cross cma regions as well. At least
not since hugepage_migration_supported stops reporting giga pages as
migrateable.

That being said, I do not think we really need it in stable but it
should be relatively easy to backport so no objection from me to put it
there.
Andrew Morton Dec. 18, 2018, 9:46 p.m. UTC | #3
On Tue, 18 Dec 2018 08:36:55 +0100 Michal Hocko <mhocko@kernel.org> wrote:

> > > Signed-off-by: Oscar Salvador <osalvador@suse.de>
> 
> Acked-by: Michal Hocko <mhocko@suse.com>

Thanks.

> > cc:stable?
> 
> See http://lkml.kernel.org/r/20181217152936.GR30879@dhcp22.suse.cz. I
> believe nobody is simply using gigantic pages and hotplug at the same
> time and those pages do not seem to cross cma regions as well. At least
> not since hugepage_migration_supported stops reporting giga pages as
> migrateable.
> 
> That being said, I do not think we really need it in stable but it
> should be relatively easy to backport so no objection from me to put it
> there.

OK, done.  Sasha would have grabbed it anyway :(
Oscar Salvador Dec. 18, 2018, 9:51 p.m. UTC | #4
On Mon, Dec 17, 2018 at 03:07:26PM -0800, Andrew Morton wrote:
> On Mon, 17 Dec 2018 23:51:13 +0100 Oscar Salvador <osalvador@suse.de> wrote:
> 
> > v1 -> v2:
> > 	- Fix the logic for skipping pages by Michal
> > 
> > ---
> 
> Please be careful with the "^---$".  It signifies end-of-changelog, so
> I ended up without a changelog!

Sorry Andrew, somehow I screwed it up!
I will be more careful next time.

> 
> > >From e346b151037d3c37feb10a981a4d2a25018acf81 Mon Sep 17 00:00:00 2001
> > From: Oscar Salvador <osalvador@suse.de>
> > Date: Mon, 17 Dec 2018 14:53:35 +0100
> > Subject: [PATCH] mm, page_alloc: Fix has_unmovable_pages for HugePages
> > 
> > While playing with gigantic hugepages and memory_hotplug, I triggered
> > the following #PF when "cat memoryX/removable":
> > 
> > ...
> >
> > Also, since gigantic pages span several pageblocks, re-adjust the logic
> > for skipping pages.
> > 
> > Signed-off-by: Oscar Salvador <osalvador@suse.de>
> 
> cc:stable?
>
Wei Yang Dec. 19, 2018, 2:25 p.m. UTC | #5
On Mon, Dec 17, 2018 at 11:51:13PM +0100, Oscar Salvador wrote:
>v1 -> v2:
>	- Fix the logic for skipping pages by Michal
>
>---
>>From e346b151037d3c37feb10a981a4d2a25018acf81 Mon Sep 17 00:00:00 2001
>From: Oscar Salvador <osalvador@suse.de>
>Date: Mon, 17 Dec 2018 14:53:35 +0100
>Subject: [PATCH] mm, page_alloc: Fix has_unmovable_pages for HugePages
>
>[...]
>
>diff --git a/mm/page_alloc.c b/mm/page_alloc.c
>index a6e7bfd18cde..90ad281f750c 100644
>--- a/mm/page_alloc.c
>+++ b/mm/page_alloc.c
>@@ -8038,11 +8038,14 @@ bool has_unmovable_pages(struct zone *zone, struct page *page, int count,
> 		 * handle each tail page individually in migration.
> 		 */
> 		if (PageHuge(page)) {
>+			struct page *head = compound_head(page);
>+			unsigned int skip_pages;
> 
>-			if (!hugepage_migration_supported(page_hstate(page)))
>+			if (!hugepage_migration_supported(page_hstate(head)))
> 				goto unmovable;
> 
>-			iter = round_up(iter + 1, 1<<compound_order(page)) - 1;
>+			skip_pages = (1 << compound_order(head)) - (page - head);
>+			iter = round_up(iter + 1, skip_pages) - 1;

The comment on round_up() says it rounds up to the next specified power
of 2, i.e. the second parameter must be a power of 2.

skip_pages does not satisfy this.

> 			continue;
> 		}
> 
>-- 
>2.13.7
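Wei Yang's concern can be reproduced in userspace. The kernel's round_up() is a bitmask trick, so it only yields the arithmetic "next multiple" when the second argument is a power of 2. Below is a hedged userspace sketch of that trick (the function name is invented; this mirrors the macro's behaviour as of the kernels under discussion):

```c
#include <assert.h>

/*
 * Userspace copy of the kernel's round_up() trick: it uses (y - 1) as
 * a bitmask instead of dividing, which is only a correct "round up to
 * the next multiple of y" when y is a power of 2.
 */
static unsigned long round_up_p2(unsigned long x, unsigned long y)
{
    return ((x - 1) | (y - 1)) + 1;
}
```

For a power-of-2 second argument the result is the expected multiple, but for a non-power-of-2 skip count such as 384 the mask trick silently returns something other than the next multiple (512 instead of 768 for x = 385).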
Wei Yang Dec. 19, 2018, 2:28 p.m. UTC | #6
On Wed, Dec 19, 2018 at 02:25:28PM +0000, Wei Yang wrote:
>On Mon, Dec 17, 2018 at 11:51:13PM +0100, Oscar Salvador wrote:
>>v1 -> v2:
>>	- Fix the logic for skipping pages by Michal
>>
>>---
>>>From e346b151037d3c37feb10a981a4d2a25018acf81 Mon Sep 17 00:00:00 2001
>>From: Oscar Salvador <osalvador@suse.de>
>>Date: Mon, 17 Dec 2018 14:53:35 +0100
>>Subject: [PATCH] mm, page_alloc: Fix has_unmovable_pages for HugePages
>>
>>[...]
>>
>>diff --git a/mm/page_alloc.c b/mm/page_alloc.c
>>index a6e7bfd18cde..90ad281f750c 100644
>>--- a/mm/page_alloc.c
>>+++ b/mm/page_alloc.c
>>@@ -8038,11 +8038,14 @@ bool has_unmovable_pages(struct zone *zone, struct page *page, int count,
>> 		 * handle each tail page individually in migration.
>> 		 */
>> 		if (PageHuge(page)) {
>>+			struct page *head = compound_head(page);
>>+			unsigned int skip_pages;
>> 
>>-			if (!hugepage_migration_supported(page_hstate(page)))
>>+			if (!hugepage_migration_supported(page_hstate(head)))
>> 				goto unmovable;
>> 
>>-			iter = round_up(iter + 1, 1<<compound_order(page)) - 1;
>>+			skip_pages = (1 << compound_order(head)) - (page - head);
>>+			iter = round_up(iter + 1, skip_pages) - 1;
>
>The comment on round_up() says it rounds up to the next specified power
>of 2, i.e. the second parameter must be a power of 2.
>
>skip_pages does not satisfy this.
>

Maybe the first version from Oscar is correct, because round_up()
effectively performs the (page - head) calculation itself.

>> 			continue;
>> 		}
>> 
>>-- 
>>2.13.7
>
>-- 
>Wei Yang
>Help you, Help me
Oscar Salvador Dec. 19, 2018, 11:39 p.m. UTC | #7
On Wed, Dec 19, 2018 at 02:25:28PM +0000, Wei Yang wrote:
> >-			iter = round_up(iter + 1, 1<<compound_order(page)) - 1;
> >+			skip_pages = (1 << compound_order(head)) - (page - head);
> >+			iter = round_up(iter + 1, skip_pages) - 1;
> 
> The comment on round_up() says it rounds up to the next specified power
> of 2, i.e. the second parameter must be a power of 2.
> 
> skip_pages does not satisfy this.

I thought that gigantic pages were always allocated 1GB-aligned.
At least alloc_gigantic_page() looks for 1GB range, aligned to that.
But I see that in alloc_contig_range(), the boundaries can differ.

Anyway, unless I am missing something, I think that we could just
get rid of the round_up() and do something like:

<--
skip_pages = (1 << compound_order(head)) - (page - head);
iter = skip_pages - 1;
-->

which looks more simple IMHO.

It should just work for 2MB and 1GB Hugepages.
Michal Hocko Dec. 20, 2018, 9:12 a.m. UTC | #8
On Thu 20-12-18 00:39:18, Oscar Salvador wrote:
> On Wed, Dec 19, 2018 at 02:25:28PM +0000, Wei Yang wrote:
> > >-			iter = round_up(iter + 1, 1<<compound_order(page)) - 1;
> > >+			skip_pages = (1 << compound_order(head)) - (page - head);
> > >+			iter = round_up(iter + 1, skip_pages) - 1;
> > 
> > The comment on round_up() says it rounds up to the next specified power
> > of 2, i.e. the second parameter must be a power of 2.
> > 
> > skip_pages does not satisfy this.

Yes, this is true, but the resulting numbers should be correct even for
skips that are not a power of 2, AFAIC. Or do you have any counterexample?

> 
> At least alloc_gigantic_page() looks for 1GB range, aligned to that.
> But I see that in alloc_contig_range(), the boundaries can differ.
> 
> Anyway, unless I am missing something, I think that we could just
> get rid of the round_up() and do something like:
> 
> <--
> skip_pages = (1 << compound_order(head)) - (page - head);
> iter = skip_pages - 1;
> --
> 
> which looks more simple IMHO.

Agreed!
Oscar Salvador Dec. 20, 2018, 12:49 p.m. UTC | #9
On Thu, Dec 20, 2018 at 10:12:28AM +0100, Michal Hocko wrote:
> > <--
> > skip_pages = (1 << compound_order(head)) - (page - head);
> > iter = skip_pages - 1;
> > --
> > 
> > which looks more simple IMHO.
> 
> Agreed!

Andrew, can you please apply the next diff chunk on top of the patch:

diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 4812287e56a0..978576d93783 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -8094,7 +8094,7 @@ bool has_unmovable_pages(struct zone *zone, struct page *page, int count,
 				goto unmovable;
 
 			skip_pages = (1 << compound_order(head)) - (page - head);
-			iter = round_up(iter + 1, skip_pages) - 1;
+			iter = skip_pages - 1;
 			continue;
 		}

Thanks!
Michal Hocko Dec. 20, 2018, 1:06 p.m. UTC | #10
On Thu 20-12-18 13:49:28, Oscar Salvador wrote:
> On Thu, Dec 20, 2018 at 10:12:28AM +0100, Michal Hocko wrote:
> > > <--
> > > skip_pages = (1 << compound_order(head)) - (page - head);
> > > iter = skip_pages - 1;
> > > --
> > > 
> > > which looks more simple IMHO.
> > 
> > Agreed!
> 
> Andrew, can you please apply the next diff chunk on top of the patch:
> 
> diff --git a/mm/page_alloc.c b/mm/page_alloc.c
> index 4812287e56a0..978576d93783 100644
> --- a/mm/page_alloc.c
> +++ b/mm/page_alloc.c
> @@ -8094,7 +8094,7 @@ bool has_unmovable_pages(struct zone *zone, struct page *page, int count,
>  				goto unmovable;
>  
>  			skip_pages = (1 << compound_order(head)) - (page - head);
> -			iter = round_up(iter + 1, skip_pages) - 1;
> +			iter = skip_pages - 1;

You did want iter += skip_pages - 1 here right?
Wei Yang Dec. 20, 2018, 1:08 p.m. UTC | #11
On Thu, Dec 20, 2018 at 01:49:28PM +0100, Oscar Salvador wrote:
>On Thu, Dec 20, 2018 at 10:12:28AM +0100, Michal Hocko wrote:
>> > <--
>> > skip_pages = (1 << compound_order(head)) - (page - head);
>> > iter = skip_pages - 1;
>> > --
>> > 
>> > which looks more simple IMHO.
>> 
>> Agreed!
>
>Andrew, can you please apply the next diff chunk on top of the patch:
>
>diff --git a/mm/page_alloc.c b/mm/page_alloc.c
>index 4812287e56a0..978576d93783 100644
>--- a/mm/page_alloc.c
>+++ b/mm/page_alloc.c
>@@ -8094,7 +8094,7 @@ bool has_unmovable_pages(struct zone *zone, struct page *page, int count,
> 				goto unmovable;
> 
> 			skip_pages = (1 << compound_order(head)) - (page - head);
>-			iter = round_up(iter + 1, skip_pages) - 1;
>+			iter = skip_pages - 1;

This complicates the calculation.

The original code is correct.

iter = round_up(iter + 1, 1<<compound_order(head)) - 1;

> 			continue;
> 		}
>
>Thanks!
>-- 
>Oscar Salvador
>SUSE L3
Oscar Salvador Dec. 20, 2018, 1:41 p.m. UTC | #12
On Thu, Dec 20, 2018 at 02:06:06PM +0100, Michal Hocko wrote:
> On Thu 20-12-18 13:49:28, Oscar Salvador wrote:
> > On Thu, Dec 20, 2018 at 10:12:28AM +0100, Michal Hocko wrote:
> > > > <--
> > > > skip_pages = (1 << compound_order(head)) - (page - head);
> > > > iter = skip_pages - 1;
> > > > --
> > > > 
> > > > which looks more simple IMHO.
> > > 
> > > Agreed!
> > 
> > Andrew, can you please apply the next diff chunk on top of the patch:
> > 
> > diff --git a/mm/page_alloc.c b/mm/page_alloc.c
> > index 4812287e56a0..978576d93783 100644
> > --- a/mm/page_alloc.c
> > +++ b/mm/page_alloc.c
> > @@ -8094,7 +8094,7 @@ bool has_unmovable_pages(struct zone *zone, struct page *page, int count,
> >  				goto unmovable;
> >  
> >  			skip_pages = (1 << compound_order(head)) - (page - head);
> > -			iter = round_up(iter + 1, skip_pages) - 1;
> > +			iter = skip_pages - 1;
> 
> You did want iter += skip_pages - 1 here right?

Bleh, yeah.
I am taking vacation today so my brain has left me hours ago, sorry.
Should be:

diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 4812287e56a0..0634fbdef078 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -8094,7 +8094,7 @@ bool has_unmovable_pages(struct zone *zone, struct page *page, int count,
                                goto unmovable;
 
                        skip_pages = (1 << compound_order(head)) - (page - head);
-                       iter = round_up(iter + 1, skip_pages) - 1;
+                       iter += skip_pages - 1;
                        continue;
                }
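The effect of the corrected `iter += skip_pages - 1` can be sanity-checked with a toy userspace model of the scan loop (plain C; the function name and the x86-64 pageblock size are illustrative assumptions, not kernel code). It counts how many pfns within one pageblock the loop actually inspects when a compound page starts at a given offset, possibly before the block:

```c
#include <assert.h>

#define PAGEBLOCK_NR_PAGES 512L   /* x86-64 value, assumed for illustration */

/*
 * Toy model of the fixed scan: pfns [huge_start, huge_start + 2^order)
 * form one compound hugepage; huge_start may be negative, meaning the
 * head lies before this pageblock.  Returns how many pfns the loop
 * inspects.  The for-loop's iter++ together with "iter += skip_pages - 1"
 * lands exactly on the first pfn after the compound page, as in the
 * corrected kernel loop.
 */
static long pages_inspected(long huge_start, unsigned int order)
{
    long inspected = 0;

    for (long iter = 0; iter < PAGEBLOCK_NR_PAGES; iter++) {
        inspected++;
        if (iter >= huge_start && iter < huge_start + (1L << order)) {
            /* skip_pages = (1 << compound_order(head)) - (page - head) */
            long skip_pages = (1L << order) - (iter - huge_start);
            iter += skip_pages - 1;
        }
    }
    return inspected;
}
```

A 2MB page heading the block means a single inspection; a block whose first pfn sits in the middle of a compound page (as happens with gigantic pages) skips the remaining tail and then walks the rest of the block page by page.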
Oscar Salvador Dec. 20, 2018, 1:49 p.m. UTC | #13
On Thu, Dec 20, 2018 at 01:08:57PM +0000, Wei Yang wrote:
> This complicated the calculation. 
> 
> The original code is correct.
> 
> iter = round_up(iter + 1, 1<<compound_order(head)) - 1;

I think it would be correct if we knew for sure that everything is
pageblock-aligned, because 2MB hugepages fit in one pageblock, and
1GB hugepages span exactly 512 pageblocks.

But I think that it is better if we leave that assumption behind.
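For reference, the figures above follow from the orders involved (x86-64 numbers, assumed for illustration): a pageblock is order 9 (2MB), so an aligned 2MB hugepage occupies exactly one pageblock, and a 1GB gigantic page (order 18) spans 2^(18-9) = 512 of them. A small hedged sketch (illustrative helper, not a kernel function):

```c
#include <assert.h>

/*
 * How many pageblocks a compound page of 'compound_order' covers,
 * assuming it is aligned to its own size.  On x86-64,
 * pageblock_order == 9.
 */
static unsigned long pageblocks_spanned(unsigned int compound_order,
                                        unsigned int pageblock_order)
{
    if (compound_order <= pageblock_order)
        return 1UL;               /* fits within one pageblock */
    return 1UL << (compound_order - pageblock_order);
}
```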
Oscar Salvador Dec. 20, 2018, 2:21 p.m. UTC | #14
On Thu, Dec 20, 2018 at 02:41:32PM +0100, Oscar Salvador wrote:
> On Thu, Dec 20, 2018 at 02:06:06PM +0100, Michal Hocko wrote:
> > You did want iter += skip_pages - 1 here right?
> 
> Bleh, yeah.
> I am taking vacation today so my brain has left me hours ago, sorry.
> Should be:
> 
> diff --git a/mm/page_alloc.c b/mm/page_alloc.c
> index 4812287e56a0..0634fbdef078 100644
> --- a/mm/page_alloc.c
> +++ b/mm/page_alloc.c
> @@ -8094,7 +8094,7 @@ bool has_unmovable_pages(struct zone *zone, struct page *page, int count,
>                                 goto unmovable;
>  
>                         skip_pages = (1 << compound_order(head)) - (page - head);
> -                       iter = round_up(iter + 1, skip_pages) - 1;
> +                       iter += skip_pages - 1;
>                         continue;
>                 }

On a second thought, I think it should not really matter.

AFAICS, we can have these scenarios:

1) the head page is the first page in the pageblock
2) first page in the pageblock is not a head but part of a hugepage
3) the head is somewhere within the pageblock

For cases 1) and 3), iter will just get the right value and we will
break the loop afterwards.

In case 2), iter will be set to a value to skip over the remaining pages.

I am assuming that hugepages are allocated and packed together.

Note that I am not against the change, but I just wanted to see if there is
something I am missing.
Michal Hocko Dec. 20, 2018, 2:39 p.m. UTC | #15
On Thu 20-12-18 15:21:27, Oscar Salvador wrote:
> On Thu, Dec 20, 2018 at 02:41:32PM +0100, Oscar Salvador wrote:
> > On Thu, Dec 20, 2018 at 02:06:06PM +0100, Michal Hocko wrote:
> > > You did want iter += skip_pages - 1 here right?
> > 
> > Bleh, yeah.
> > I am taking vacation today so my brain has left me hours ago, sorry.
> > Should be:
> > 
> > diff --git a/mm/page_alloc.c b/mm/page_alloc.c
> > index 4812287e56a0..0634fbdef078 100644
> > --- a/mm/page_alloc.c
> > +++ b/mm/page_alloc.c
> > @@ -8094,7 +8094,7 @@ bool has_unmovable_pages(struct zone *zone, struct page *page, int count,
> >                                 goto unmovable;
> >  
> >                         skip_pages = (1 << compound_order(head)) - (page - head);
> > -                       iter = round_up(iter + 1, skip_pages) - 1;
> > +                       iter += skip_pages - 1;
> >                         continue;
> >                 }
> 
> On second thought, I think it should not really matter.
> 
> AFAICS, we can have these scenarios:
> 
> 1) the head page is the first page in the pageblock
> 2) first page in the pageblock is not a head but part of a hugepage
> 3) the head is somewhere within the pageblock
> 
> For cases 1) and 3), iter will just get the right value and we will
> break the loop afterwards.
> 
> In case 2), iter will be set to a value to skip over the remaining pages.
> 
> I am assuming that hugepages are allocated and packed together.
> 
> Note that I am not against the change, but I just wanted to see if there is
> something I am missing.

Yes, you are missing that this code should be as sane as possible ;) You
are right that we are only processing one pageorder worth of pfns and
that the page order is bound to HUGETLB_PAGE_ORDER _right_now_. But
there is absolutely zero reason to hardcode that assumption into a
simple loop, right?
Wei Yang Dec. 20, 2018, 3:32 p.m. UTC | #16
On Thu, Dec 20, 2018 at 03:21:27PM +0100, Oscar Salvador wrote:
>On Thu, Dec 20, 2018 at 02:41:32PM +0100, Oscar Salvador wrote:
>> On Thu, Dec 20, 2018 at 02:06:06PM +0100, Michal Hocko wrote:
>> > You did want iter += skip_pages - 1 here right?
>> 
>> Bleh, yeah.
>> I am taking vacation today so my brain has left me hours ago, sorry.
>> Should be:
>> 
>> diff --git a/mm/page_alloc.c b/mm/page_alloc.c
>> index 4812287e56a0..0634fbdef078 100644
>> --- a/mm/page_alloc.c
>> +++ b/mm/page_alloc.c
>> @@ -8094,7 +8094,7 @@ bool has_unmovable_pages(struct zone *zone, struct page *page, int count,
>>                                 goto unmovable;
>>  
>>                         skip_pages = (1 << compound_order(head)) - (page - head);
>> -                       iter = round_up(iter + 1, skip_pages) - 1;
>> +                       iter += skip_pages - 1;
>>                         continue;
>>                 }
>
>On second thought, I think it should not really matter.
>
>AFAICS, we can have these scenarios:
>
>1) the head page is the first page in the pageblock
>2) first page in the pageblock is not a head but part of a hugepage
>3) the head is somewhere within the pageblock
>
>For cases 1) and 3), iter will just get the right value and we will
>break the loop afterwards.
>
>In case 2), iter will be set to a value to skip over the remaining pages.
>
>I am assuming that hugepages are allocated and packed together.
>
>Note that I am not against the change, but I just wanted to see if there is
>something I am missing.

I have another way of classification.

First, consider the three cases for the expected new_iter.

             1          2                        3
             v          v                        v
 HugePage    +-----------------------------------+
                                                  ^
                                                  |
                                               new_iter

From this chart, we may have three cases:

  1) iter is the head page
  2) iter is a middle page
  3) iter is the tail page

No matter which case iter starts from, new_iter should point to tail + 1.

Second is the relationship between new_iter and the pageblock; there
are only two cases:

  1) new_iter is still in current pageblock
  2) new_iter is out of current pageblock

For both cases, current loop handles it well.

Now let's go back to see how to calculate new_iter. From the chart
above, we can see this formula stands for all three cases:

    new_iter = round_up(iter + 1, page_size(HugePage))

So it looks the first version is correct.

Oscar Salvador Dec. 20, 2018, 3:37 p.m. UTC | #17
On Thu, Dec 20, 2018 at 03:39:39PM +0100, Michal Hocko wrote:
> Yes, you are missing that this code should be as sane as possible ;) You
> are right that we are only processing one pageorder worth of pfns and
> that the page order is bound to HUGETLB_PAGE_ORDER _right_now_. But
> there is absolutely zero reason to hardcode that assumption into a
> simple loop, right?

Of course, it makes sense to keep the code as sane as possible.
This is why I said I was not against the change, but I wanted to
see if I was missing something else besides the assumption.

Thanks
Oscar Salvador Dec. 20, 2018, 3:52 p.m. UTC | #18
On Thu, Dec 20, 2018 at 03:32:37PM +0000, Wei Yang wrote:
> Now let's go back to see how to calculate new_iter. From the chart
> above, we can see this formula stands for all three cases:
> 
>     new_iter = round_up(iter + 1, page_size(HugePage))
> 
> So it looks the first version is correct.

Let us assume:

* iter = 0 (the first page of the pageblock)
* page is a tail
* the hugepage is 2MB

So we have the following:

iter = round_up(iter + 1, 1<<compound_order(head)) - 1;

which translates to:

iter = round_up(1, 512) - 1 = 511;

Then iter will be incremented to 512, and we break the loop.

The outcome of this is that out of 512 pages, we only scanned 1,
and we skipped all the other 511 pages by mistake.
Patch

diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index a6e7bfd18cde..90ad281f750c 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -8038,11 +8038,14 @@  bool has_unmovable_pages(struct zone *zone, struct page *page, int count,
 		 * handle each tail page individually in migration.
 		 */
 		if (PageHuge(page)) {
+			struct page *head = compound_head(page);
+			unsigned int skip_pages;
 
-			if (!hugepage_migration_supported(page_hstate(page)))
+			if (!hugepage_migration_supported(page_hstate(head)))
 				goto unmovable;
 
-			iter = round_up(iter + 1, 1<<compound_order(page)) - 1;
+			skip_pages = (1 << compound_order(head)) - (page - head);
+			iter = round_up(iter + 1, skip_pages) - 1;
 			continue;
 		}