Message ID | 20250207100453.9989-1-richard.weiyang@gmail.com
---|---
State | New
Series | mm/mm_init.c: use round_up() to align movable range
On 2/7/2025 3:34 PM, Wei Yang wrote:
> Since MAX_ORDER_NR_PAGES is power of 2, let's use a faster version.

Makes sense to me.

Reviewed-by: Shivank Garg <shivankg@amd.com>

I noticed two similar instances in the same file
where round_up() might also be applicable:

  mm_init.c (usemap_size):
	usemapsize = roundup(zonesize, pageblock_nr_pages);
	usemapsize = roundup(usemapsize, BITS_PER_LONG);

Since both pageblock_nr_pages (1UL << pageblock_order) and BITS_PER_LONG
(32 or 64) are powers of 2, these could potentially use round_up() as
well. Perhaps worth considering in a follow-up patch?

Thanks,
Shivank

>
> Signed-off-by: Wei Yang <richard.weiyang@gmail.com>
> ---
>  mm/mm_init.c | 4 ++--
>  1 file changed, 2 insertions(+), 2 deletions(-)
>
> diff --git a/mm/mm_init.c b/mm/mm_init.c
> index dec4084fe15a..99ef70a8b63c 100644
> --- a/mm/mm_init.c
> +++ b/mm/mm_init.c
> @@ -438,7 +438,7 @@ static void __init find_zone_movable_pfns_for_nodes(void)
>  	 * was requested by the user
>  	 */
>  	required_movablecore =
> -		roundup(required_movablecore, MAX_ORDER_NR_PAGES);
> +		round_up(required_movablecore, MAX_ORDER_NR_PAGES);
>  	required_movablecore = min(totalpages, required_movablecore);
>  	corepages = totalpages - required_movablecore;
>
> @@ -549,7 +549,7 @@ static void __init find_zone_movable_pfns_for_nodes(void)
>  		unsigned long start_pfn, end_pfn;
>
>  		zone_movable_pfn[nid] =
> -			roundup(zone_movable_pfn[nid], MAX_ORDER_NR_PAGES);
> +			round_up(zone_movable_pfn[nid], MAX_ORDER_NR_PAGES);
>
>  		get_pfn_range_for_nid(nid, &start_pfn, &end_pfn);
>  		if (zone_movable_pfn[nid] >= end_pfn)
On Tue, Feb 11, 2025 at 11:43:52PM +0530, Shivank Garg wrote:
>
>On 2/7/2025 3:34 PM, Wei Yang wrote:
>> Since MAX_ORDER_NR_PAGES is power of 2, let's use a faster version.
>
>Makes sense to me.
>
>Reviewed-by: Shivank Garg <shivankg@amd.com>
>

Thanks for taking a look.

>
>I noticed two similar instances in the same file
>where round_up() might also be applicable:
>
>  mm_init.c (usemap_size):
>	usemapsize = roundup(zonesize, pageblock_nr_pages);
>	usemapsize = roundup(usemapsize, BITS_PER_LONG);
>
>Since both pageblock_nr_pages (1UL << pageblock_order) and BITS_PER_LONG (32 or 64)
>are powers of 2, these could potentially use round_up() as well. Perhaps
>worth considering in a follow-up patch?

It looks reasonable to me. I would prepare one.

Thanks.

>
>Thanks,
>Shivank
>
>> Signed-off-by: Wei Yang <richard.weiyang@gmail.com>
>> ---
>>  mm/mm_init.c | 4 ++--
>>  1 file changed, 2 insertions(+), 2 deletions(-)
>>
>> diff --git a/mm/mm_init.c b/mm/mm_init.c
>> index dec4084fe15a..99ef70a8b63c 100644
>> --- a/mm/mm_init.c
>> +++ b/mm/mm_init.c
>> @@ -438,7 +438,7 @@ static void __init find_zone_movable_pfns_for_nodes(void)
>>  	 * was requested by the user
>>  	 */
>>  	required_movablecore =
>> -		roundup(required_movablecore, MAX_ORDER_NR_PAGES);
>> +		round_up(required_movablecore, MAX_ORDER_NR_PAGES);
>>  	required_movablecore = min(totalpages, required_movablecore);
>>  	corepages = totalpages - required_movablecore;
>>
>> @@ -549,7 +549,7 @@ static void __init find_zone_movable_pfns_for_nodes(void)
>>  		unsigned long start_pfn, end_pfn;
>>
>>  		zone_movable_pfn[nid] =
>> -			roundup(zone_movable_pfn[nid], MAX_ORDER_NR_PAGES);
>> +			round_up(zone_movable_pfn[nid], MAX_ORDER_NR_PAGES);
>>
>>  		get_pfn_range_for_nid(nid, &start_pfn, &end_pfn);
>>  		if (zone_movable_pfn[nid] >= end_pfn)
On Fri, Feb 07, 2025 at 10:04:53AM +0000, Wei Yang wrote:
> Since MAX_ORDER_NR_PAGES is power of 2, let's use a faster version.
>
> Signed-off-by: Wei Yang <richard.weiyang@gmail.com>

Reviewed-by: Mike Rapoport (Microsoft) <rppt@kernel.org>

> ---
>  mm/mm_init.c | 4 ++--
>  1 file changed, 2 insertions(+), 2 deletions(-)
>
> diff --git a/mm/mm_init.c b/mm/mm_init.c
> index dec4084fe15a..99ef70a8b63c 100644
> --- a/mm/mm_init.c
> +++ b/mm/mm_init.c
> @@ -438,7 +438,7 @@ static void __init find_zone_movable_pfns_for_nodes(void)
>  	 * was requested by the user
>  	 */
>  	required_movablecore =
> -		roundup(required_movablecore, MAX_ORDER_NR_PAGES);
> +		round_up(required_movablecore, MAX_ORDER_NR_PAGES);
>  	required_movablecore = min(totalpages, required_movablecore);
>  	corepages = totalpages - required_movablecore;
>
> @@ -549,7 +549,7 @@ static void __init find_zone_movable_pfns_for_nodes(void)
>  		unsigned long start_pfn, end_pfn;
>
>  		zone_movable_pfn[nid] =
> -			roundup(zone_movable_pfn[nid], MAX_ORDER_NR_PAGES);
> +			round_up(zone_movable_pfn[nid], MAX_ORDER_NR_PAGES);
>
>  		get_pfn_range_for_nid(nid, &start_pfn, &end_pfn);
>  		if (zone_movable_pfn[nid] >= end_pfn)
> --
> 2.34.1
>
On 2/7/25 15:34, Wei Yang wrote:
> Since MAX_ORDER_NR_PAGES is power of 2, let's use a faster version.
>
> Signed-off-by: Wei Yang <richard.weiyang@gmail.com>
> ---
>  mm/mm_init.c | 4 ++--
>  1 file changed, 2 insertions(+), 2 deletions(-)
>
> diff --git a/mm/mm_init.c b/mm/mm_init.c
> index dec4084fe15a..99ef70a8b63c 100644
> --- a/mm/mm_init.c
> +++ b/mm/mm_init.c
> @@ -438,7 +438,7 @@ static void __init find_zone_movable_pfns_for_nodes(void)
>  	 * was requested by the user
>  	 */
>  	required_movablecore =
> -		roundup(required_movablecore, MAX_ORDER_NR_PAGES);
> +		round_up(required_movablecore, MAX_ORDER_NR_PAGES);
>  	required_movablecore = min(totalpages, required_movablecore);
>  	corepages = totalpages - required_movablecore;
>
> @@ -549,7 +549,7 @@ static void __init find_zone_movable_pfns_for_nodes(void)
>  		unsigned long start_pfn, end_pfn;
>
>  		zone_movable_pfn[nid] =
> -			roundup(zone_movable_pfn[nid], MAX_ORDER_NR_PAGES);
> +			round_up(zone_movable_pfn[nid], MAX_ORDER_NR_PAGES);
>
>  		get_pfn_range_for_nid(nid, &start_pfn, &end_pfn);
>  		if (zone_movable_pfn[nid] >= end_pfn)

Reviewed-by: Anshuman Khandual <anshuman.khandual@arm.com>
diff --git a/mm/mm_init.c b/mm/mm_init.c
index dec4084fe15a..99ef70a8b63c 100644
--- a/mm/mm_init.c
+++ b/mm/mm_init.c
@@ -438,7 +438,7 @@ static void __init find_zone_movable_pfns_for_nodes(void)
 	 * was requested by the user
 	 */
 	required_movablecore =
-		roundup(required_movablecore, MAX_ORDER_NR_PAGES);
+		round_up(required_movablecore, MAX_ORDER_NR_PAGES);
 	required_movablecore = min(totalpages, required_movablecore);
 	corepages = totalpages - required_movablecore;

@@ -549,7 +549,7 @@ static void __init find_zone_movable_pfns_for_nodes(void)
 		unsigned long start_pfn, end_pfn;

 		zone_movable_pfn[nid] =
-			roundup(zone_movable_pfn[nid], MAX_ORDER_NR_PAGES);
+			round_up(zone_movable_pfn[nid], MAX_ORDER_NR_PAGES);

 		get_pfn_range_for_nid(nid, &start_pfn, &end_pfn);
 		if (zone_movable_pfn[nid] >= end_pfn)
Since MAX_ORDER_NR_PAGES is power of 2, let's use a faster version.

Signed-off-by: Wei Yang <richard.weiyang@gmail.com>
---
 mm/mm_init.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)