Message ID | 20241113083235.166798-1-tujinjiang@huawei.com (mailing list archive)
---|---
State | New
Series | mm: fix NULL pointer dereference in alloc_pages_bulk_noprof
On 11/13/24 09:32, Jinjiang Tu wrote:
> We triggered a NULL pointer dereference for ac.preferred_zoneref->zone
> in alloc_pages_bulk_noprof() when the task is migrated between cpusets.
>
> When cpuset is enabled, in prepare_alloc_pages(), ac->nodemask may be
> &current->mems_allowed. When first_zones_zonelist() is called to find
> the preferred_zoneref, ac->nodemask may be modified concurrently if the
> task is migrated between different cpusets. Assuming we have 2 NUMA
> nodes, when traversing Node1 in ac->zonelist the nodemask is 2, and
> when traversing Node2 in ac->zonelist the nodemask is 1. As a result,
> ac->preferred_zoneref points to a NULL zone.
>
> In alloc_pages_bulk_noprof(), for_each_zone_zonelist_nodemask() finds an
> allowable zone and calls zonelist_node_idx(ac.preferred_zoneref), leading
> to a NULL pointer dereference.
>
> __alloc_pages_noprof() avoids this issue by checking for a NULL pointer,
> see commit ea57485af8f4 ("mm, page_alloc: fix check for NULL
> preferred_zone") and commit df76cee6bbeb ("mm, page_alloc: remove
> redundant checks from alloc fastpath").
>
> To fix it, check ac.preferred_zoneref->zone for NULL.
>
> Fixes: 387ba26fb1cb ("mm/page_alloc: add a bulk page allocator")
> Signed-off-by: Jinjiang Tu <tujinjiang@huawei.com>

Reviewed-by: Vlastimil Babka <vbabka@suse.cz>

Thanks.

> ---
>  mm/page_alloc.c | 3 ++-
>  1 file changed, 2 insertions(+), 1 deletion(-)
>
> diff --git a/mm/page_alloc.c b/mm/page_alloc.c
> index c6c7bb3ea71b..4afe8bc06358 100644
> --- a/mm/page_alloc.c
> +++ b/mm/page_alloc.c
> @@ -4592,7 +4592,8 @@ unsigned long alloc_pages_bulk_noprof(gfp_t gfp, int preferred_nid,
>  	gfp = alloc_gfp;
>
>  	/* Find an allowed local zone that meets the low watermark. */
> -	for_each_zone_zonelist_nodemask(zone, z, ac.zonelist, ac.highest_zoneidx, ac.nodemask) {
> +	z = ac.preferred_zoneref;
> +	for_next_zone_zonelist_nodemask(zone, z, ac.highest_zoneidx, ac.nodemask) {
>  		unsigned long mark;
>
>  		if (cpusets_enabled() && (alloc_flags & ALLOC_CPUSET) &&