[v2,0/8] memblock: cleanup

Message ID 20240425071929.18004-1-richard.weiyang@gmail.com (mailing list archive)
Series memblock: cleanup

Message

Wei Yang April 25, 2024, 7:19 a.m. UTC
Changes in v2:

* remove the two-round reduction patch
* add 129th memory block to memblock.reserved
* add memblock_reserve_many_may_conflict_check()
* two new patches at the end

Wei Yang (8):
  memblock tests: reserve the 129th memory block at all possible
    position
  memblock tests: add memblock_reserve_many_may_conflict_check()
  mm/memblock: fix comment for memblock_isolate_range()
  mm/memblock: remove consecutive regions at once
  memblock tests: add memblock_overlaps_region_checks
  mm/memblock: return true directly on finding overlap region
  mm/memblock: use PAGE_ALIGN_DOWN to get pgend in free_memmap
  mm/memblock: default region's nid may be MAX_NUMNODES

 mm/memblock.c                            |  36 +--
 tools/include/linux/mm.h                 |   1 +
 tools/testing/memblock/tests/basic_api.c | 286 ++++++++++++++++++-----
 tools/testing/memblock/tests/common.c    |   4 +-
 tools/testing/memblock/tests/common.h    |   4 +
 5 files changed, 262 insertions(+), 69 deletions(-)

Comments

Mike Rapoport April 28, 2024, 6:40 a.m. UTC | #1
On Thu, Apr 25, 2024 at 07:19:23AM +0000, Wei Yang wrote:
> This may trigger the case fixed by commit 48c3b583bbdd ("mm/memblock:
> fix overlapping allocation when doubling reserved array").
> 
> This is done by adding the 129th reserve region into memblock.memory. If
> memblock_double_array() uses this reserve region as the new array, it fails.
> 
> Signed-off-by: Wei Yang <richard.weiyang@gmail.com>
> ---
>  tools/testing/memblock/tests/basic_api.c | 123 +++++++++++++++++++++++
>  tools/testing/memblock/tests/common.c    |   4 +-
>  tools/testing/memblock/tests/common.h    |   1 +
>  3 files changed, 126 insertions(+), 2 deletions(-)
> 
> diff --git a/tools/testing/memblock/tests/basic_api.c b/tools/testing/memblock/tests/basic_api.c
> index 1ae62272867a..748950e02589 100644
> --- a/tools/testing/memblock/tests/basic_api.c
> +++ b/tools/testing/memblock/tests/basic_api.c
> @@ -991,6 +991,128 @@ static int memblock_reserve_many_check(void)
>  	return 0;
>  }
>  
> +/* Keep the gap so these memory region will not be merged. */

The gap where? What regions should not be merged?
Also please add a comment with the test description

> +#define MEMORY_BASE_OFFSET(idx, offset) ((offset) + (MEM_SIZE * 2) * (idx))
> +static int memblock_reserve_many_may_conflict_check(void)
> +{
> +	int i, skip;
> +	void *orig_region;
> +	struct region r = {
> +		.base = SZ_16K,
> +		.size = SZ_16K,
> +	};
> +	phys_addr_t new_reserved_regions_size;
> +
> +	/*
> +	 *  0        1          129
> +	 *  +---+    +---+      +---+
> +	 *  |32K|    |32K|  ..  |32K|
> +	 *  +---+    +---+      +---+
> +	 *
> +	 * Pre-allocate the range for 129 memory block + one range for double
> +	 * memblock.reserved.regions at idx 0.
> +	 * See commit 48c3b583bbdd ("mm/memblock: fix overlapping allocation
> +	 * when doubling reserved array")

Sorry, but I'm failing to parse it

> +	 */
> +	dummy_physical_memory_init();
> +	phys_addr_t memory_base = dummy_physical_memory_base();
> +	phys_addr_t offset = PAGE_ALIGN(memory_base);
> +
> +	PREFIX_PUSH();
> +
> +	/* Reserve the 129th memory block for all possible positions*/
> +	for (skip = 1; skip <= INIT_MEMBLOCK_REGIONS + 1; skip++) {
> +		reset_memblock_regions();
> +		memblock_allow_resize();
> +
> +		reset_memblock_attributes();
> +		/* Add a valid memory region used by double_array(). */
> +		memblock_add(MEMORY_BASE_OFFSET(0, offset), MEM_SIZE);
> +		/*
> +		 * Add a memory region which will be reserved as 129th memory
> +		 * region. This is not expected to be used by double_array().
> +		 */
> +		memblock_add(MEMORY_BASE_OFFSET(skip, offset), MEM_SIZE);
> +
> +		for (i = 1; i <= INIT_MEMBLOCK_REGIONS + 1; i++) {
> +			if (i == skip)
> +				continue;
> +
> +			/* Reserve some fakes memory region to fulfill the memblock. */
> +			memblock_reserve(MEMORY_BASE_OFFSET(i, offset), MEM_SIZE);
> +
> +			if (i < skip) {
> +				ASSERT_EQ(memblock.reserved.cnt, i);
> +				ASSERT_EQ(memblock.reserved.total_size, i * MEM_SIZE);
> +			} else {
> +				ASSERT_EQ(memblock.reserved.cnt, i - 1);
> +				ASSERT_EQ(memblock.reserved.total_size, (i - 1) * MEM_SIZE);
> +			}
> +		}
> +
> +		orig_region = memblock.reserved.regions;
> +
> +		/* This reserve the 129 memory_region, and makes it double array. */
> +		memblock_reserve(MEMORY_BASE_OFFSET(skip, offset), MEM_SIZE);
> +
> +		/*
> +		 * This is the memory region size used by the doubled reserved.regions,
> +		 * and it has been reserved due to it has been used. The size is used to
> +		 * calculate the total_size that the memblock.reserved have now.
> +		 */
> +		new_reserved_regions_size = PAGE_ALIGN((INIT_MEMBLOCK_REGIONS * 2) *
> +						sizeof(struct memblock_region));
> +		/*
> +		 * The double_array() will find a free memory region as the new
> +		 * reserved.regions, and the used memory region will be reserved, so
> +		 * there will be one more region exist in the reserved memblock. And the
> +		 * one more reserved region's size is new_reserved_regions_size.
> +		 */
> +		ASSERT_EQ(memblock.reserved.cnt, INIT_MEMBLOCK_REGIONS + 2);
> +		ASSERT_EQ(memblock.reserved.total_size, (INIT_MEMBLOCK_REGIONS + 1) * MEM_SIZE +
> +							new_reserved_regions_size);
> +		ASSERT_EQ(memblock.reserved.max, INIT_MEMBLOCK_REGIONS * 2);
> +
> +		/*
> +		 * The first reserved region is allocated for double array
> +		 * with the size of new_reserved_regions_size and the base to be
> +		 * MEMORY_BASE_OFFSET(0, offset) + SZ_32K - new_reserved_regions_size
> +		 */
> +		ASSERT_EQ(memblock.reserved.regions[0].base + memblock.reserved.regions[0].size,
> +			  MEMORY_BASE_OFFSET(0, offset) + SZ_32K);
> +		ASSERT_EQ(memblock.reserved.regions[0].size, new_reserved_regions_size);
> +
> +		/*
> +		 * Now memblock_double_array() works fine. Let's check after the
> +		 * double_array(), the memblock_reserve() still works as normal.
> +		 */
> +		memblock_reserve(r.base, r.size);
> +		ASSERT_EQ(memblock.reserved.regions[0].base, r.base);
> +		ASSERT_EQ(memblock.reserved.regions[0].size, r.size);
> +
> +		ASSERT_EQ(memblock.reserved.cnt, INIT_MEMBLOCK_REGIONS + 3);
> +		ASSERT_EQ(memblock.reserved.total_size, (INIT_MEMBLOCK_REGIONS + 1) * MEM_SIZE +
> +							new_reserved_regions_size +
> +							r.size);
> +		ASSERT_EQ(memblock.reserved.max, INIT_MEMBLOCK_REGIONS * 2);
> +
> +		/*
> +		 * The current reserved.regions is occupying a range of memory that
> +		 * allocated from dummy_physical_memory_init(). After free the memory,
> +		 * we must not use it. So restore the origin memory region to make sure
> +		 * the tests can run as normal and not affected by the double array.
> +		 */
> +		memblock.reserved.regions = orig_region;
> +		memblock.reserved.cnt = INIT_MEMBLOCK_RESERVED_REGIONS;
> +	}
> +
> +	dummy_physical_memory_cleanup();
> +
> +	test_pass_pop();
> +
> +	return 0;
> +}
> +
>  static int memblock_reserve_checks(void)
>  {
>  	prefix_reset();
> @@ -1006,6 +1128,7 @@ static int memblock_reserve_checks(void)
>  	memblock_reserve_between_check();
>  	memblock_reserve_near_max_check();
>  	memblock_reserve_many_check();
> +	memblock_reserve_many_may_conflict_check();
>  
>  	prefix_pop();
>  
> diff --git a/tools/testing/memblock/tests/common.c b/tools/testing/memblock/tests/common.c
> index c2c569f12178..5633ffc5aaa7 100644
> --- a/tools/testing/memblock/tests/common.c
> +++ b/tools/testing/memblock/tests/common.c
> @@ -61,7 +61,7 @@ void reset_memblock_attributes(void)
>  
>  static inline void fill_memblock(void)
>  {
> -	memset(memory_block.base, 1, MEM_SIZE);
> +	memset(memory_block.base, 1, MEM_ALLOC_SIZE);

I believe PHYS_MEM_SIZE is a better name.

>  }
>  
>  void setup_memblock(void)
> @@ -103,7 +103,7 @@ void setup_numa_memblock(const unsigned int node_fracs[])
>  
>  void dummy_physical_memory_init(void)
>  {
> -	memory_block.base = malloc(MEM_SIZE);
> +	memory_block.base = malloc(MEM_ALLOC_SIZE);
>  	assert(memory_block.base);
>  	fill_memblock();
>  }
> diff --git a/tools/testing/memblock/tests/common.h b/tools/testing/memblock/tests/common.h
> index b5ec59aa62d7..741d57315ba6 100644
> --- a/tools/testing/memblock/tests/common.h
> +++ b/tools/testing/memblock/tests/common.h
> @@ -12,6 +12,7 @@
>  #include <../selftests/kselftest.h>
>  
>  #define MEM_SIZE		SZ_32K
> +#define MEM_ALLOC_SIZE		SZ_16M

Do we really need 16M? Wouldn't one or two suffice?

>  #define NUMA_NODES		8
>  
>  #define INIT_MEMBLOCK_REGIONS			128
> -- 
> 2.34.1
>
Mike Rapoport April 28, 2024, 6:43 a.m. UTC | #2
On Thu, Apr 25, 2024 at 07:19:24AM +0000, Wei Yang wrote:
> The isolated range is [*@start_rgn, *@end_rgn - 1], while the comment says
> "the end region inside the range" is *@end_rgn.
> 
> Let's correct it.
> 
> Signed-off-by: Wei Yang <richard.weiyang@gmail.com>
> ---
>  mm/memblock.c | 4 ++--
>  1 file changed, 2 insertions(+), 2 deletions(-)
> 
> diff --git a/mm/memblock.c b/mm/memblock.c
> index 98d25689cf10..5a363ef283d0 100644
> --- a/mm/memblock.c
> +++ b/mm/memblock.c
> @@ -772,12 +772,12 @@ bool __init_memblock memblock_validate_numa_coverage(unsigned long threshold_byt
>   * @base: base of range to isolate
>   * @size: size of range to isolate
>   * @start_rgn: out parameter for the start of isolated region
> - * @end_rgn: out parameter for the end of isolated region
> + * @end_rgn: out parameter for the (end + 1) of isolated region

end can be inclusive or exclusive, please let's keep this line ...

>   *
>   * Walk @type and ensure that regions don't cross the boundaries defined by
>   * [@base, @base + @size).  Crossing regions are split at the boundaries,
>   * which may create at most two more regions.  The index of the first
> - * region inside the range is returned in *@start_rgn and end in *@end_rgn.
> + * region inside the range is returned in *@start_rgn and (end + 1) in *@end_rgn.

... and emphasise here that end is exclusive, e.g

The index of the first region inside the range is returned in *@start_rgn
and the index of the first region after the range is returned in *@end_rgn.
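
For reference, this is roughly how existing callers consume the exclusive end
index (sketch modeled on memblock_remove_range() in mm/memblock.c, trimmed):

	int start_rgn, end_rgn;
	int i, ret;

	ret = memblock_isolate_range(type, base, size, &start_rgn, &end_rgn);
	if (ret)
		return ret;

	/* walk [start_rgn, end_rgn); index end_rgn itself is past the range */
	for (i = end_rgn - 1; i >= start_rgn; i--)
		memblock_remove_region(type, i);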

>   *
>   * Return:
>   * 0 on success, -errno on failure.
> -- 
> 2.34.1
>
Wei Yang April 28, 2024, 12:36 p.m. UTC | #3
On Sun, Apr 28, 2024 at 09:40:33AM +0300, Mike Rapoport wrote:
>On Thu, Apr 25, 2024 at 07:19:23AM +0000, Wei Yang wrote:
>> This may trigger the case fixed by commit 48c3b583bbdd ("mm/memblock:
>> fix overlapping allocation when doubling reserved array").
>> 
>> This is done by adding the 129th reserve region into memblock.memory. If
>> memblock_double_array() use this reserve region as new array, it fails.
>> 
>> Signed-off-by: Wei Yang <richard.weiyang@gmail.com>
>> ---
>>  tools/testing/memblock/tests/basic_api.c | 123 +++++++++++++++++++++++
>>  tools/testing/memblock/tests/common.c    |   4 +-
>>  tools/testing/memblock/tests/common.h    |   1 +
>>  3 files changed, 126 insertions(+), 2 deletions(-)
>> 
>> diff --git a/tools/testing/memblock/tests/basic_api.c b/tools/testing/memblock/tests/basic_api.c
>> index 1ae62272867a..748950e02589 100644
>> --- a/tools/testing/memblock/tests/basic_api.c
>> +++ b/tools/testing/memblock/tests/basic_api.c
>> @@ -991,6 +991,128 @@ static int memblock_reserve_many_check(void)
>>  	return 0;
>>  }
>>  
>> +/* Keep the gap so these memory region will not be merged. */
>
>The gap where? What regions should not be merged?

We reserve ranges [1-129] as shown below. The 32K gap between adjacent
ranges prevents them from being merged during memblock_reserve().

	 0        1          129
	 +---+    +---+      +---+
	 |32K|    |32K|  ..  |32K|
	 +---+    +---+      +---+
             |gap |
             |<-->|
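
In code, the stride is 2 * MEM_SIZE, so with MEM_SIZE == SZ_32K each reserved
block is followed by a 32K hole (illustration only, not part of the patch):

	/* region i occupies [offset + i * 64K, offset + i * 64K + 32K) */
	#define MEMORY_BASE_OFFSET(idx, offset) ((offset) + (MEM_SIZE * 2) * (idx))

	/* adjacent reservations never touch, so memblock cannot merge them */
	memblock_reserve(MEMORY_BASE_OFFSET(i, offset), MEM_SIZE);
	memblock_reserve(MEMORY_BASE_OFFSET(i + 1, offset), MEM_SIZE);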

>Also please add a comment with the test description

Sure.

>
>> +#define MEMORY_BASE_OFFSET(idx, offset) ((offset) + (MEM_SIZE * 2) * (idx))
>> +static int memblock_reserve_many_may_conflict_check(void)
>> +{
>> +	int i, skip;
>> +	void *orig_region;
>> +	struct region r = {
>> +		.base = SZ_16K,
>> +		.size = SZ_16K,
>> +	};
>> +	phys_addr_t new_reserved_regions_size;
>> +
>> +	/*
>> +	 *  0        1          129
>> +	 *  +---+    +---+      +---+
>> +	 *  |32K|    |32K|  ..  |32K|
>> +	 *  +---+    +---+      +---+
>> +	 *
>> +	 * Pre-allocate the range for 129 memory block + one range for double
>> +	 * memblock.reserved.regions at idx 0.
>> +	 * See commit 48c3b583bbdd ("mm/memblock: fix overlapping allocation
>> +	 * when doubling reserved array")
>
>Sorry, but I'm failing to parse it
>
>> +	 */
>> +	dummy_physical_memory_init();
>> +	phys_addr_t memory_base = dummy_physical_memory_base();
>> +	phys_addr_t offset = PAGE_ALIGN(memory_base);
>> +
>> +	PREFIX_PUSH();
>> +
>> +	/* Reserve the 129th memory block for all possible positions*/
>> +	for (skip = 1; skip <= INIT_MEMBLOCK_REGIONS + 1; skip++) {
>> +		reset_memblock_regions();
>> +		memblock_allow_resize();
>> +
>> +		reset_memblock_attributes();
>> +		/* Add a valid memory region used by double_array(). */
>> +		memblock_add(MEMORY_BASE_OFFSET(0, offset), MEM_SIZE);
>> +		/*
>> +		 * Add a memory region which will be reserved as 129th memory
>> +		 * region. This is not expected to be used by double_array().
>> +		 */
>> +		memblock_add(MEMORY_BASE_OFFSET(skip, offset), MEM_SIZE);
>> +
>> +		for (i = 1; i <= INIT_MEMBLOCK_REGIONS + 1; i++) {
>> +			if (i == skip)
>> +				continue;
>> +
>> +			/* Reserve some fakes memory region to fulfill the memblock. */
>> +			memblock_reserve(MEMORY_BASE_OFFSET(i, offset), MEM_SIZE);
>> +
>> +			if (i < skip) {
>> +				ASSERT_EQ(memblock.reserved.cnt, i);
>> +				ASSERT_EQ(memblock.reserved.total_size, i * MEM_SIZE);
>> +			} else {
>> +				ASSERT_EQ(memblock.reserved.cnt, i - 1);
>> +				ASSERT_EQ(memblock.reserved.total_size, (i - 1) * MEM_SIZE);
>> +			}
>> +		}
>> +
>> +		orig_region = memblock.reserved.regions;
>> +
>> +		/* This reserve the 129 memory_region, and makes it double array. */
>> +		memblock_reserve(MEMORY_BASE_OFFSET(skip, offset), MEM_SIZE);
>> +
>> +		/*
>> +		 * This is the memory region size used by the doubled reserved.regions,
>> +		 * and it has been reserved due to it has been used. The size is used to
>> +		 * calculate the total_size that the memblock.reserved have now.
>> +		 */
>> +		new_reserved_regions_size = PAGE_ALIGN((INIT_MEMBLOCK_REGIONS * 2) *
>> +						sizeof(struct memblock_region));
>> +		/*
>> +		 * The double_array() will find a free memory region as the new
>> +		 * reserved.regions, and the used memory region will be reserved, so
>> +		 * there will be one more region exist in the reserved memblock. And the
>> +		 * one more reserved region's size is new_reserved_regions_size.
>> +		 */
>> +		ASSERT_EQ(memblock.reserved.cnt, INIT_MEMBLOCK_REGIONS + 2);
>> +		ASSERT_EQ(memblock.reserved.total_size, (INIT_MEMBLOCK_REGIONS + 1) * MEM_SIZE +
>> +							new_reserved_regions_size);
>> +		ASSERT_EQ(memblock.reserved.max, INIT_MEMBLOCK_REGIONS * 2);
>> +
>> +		/*
>> +		 * The first reserved region is allocated for double array
>> +		 * with the size of new_reserved_regions_size and the base to be
>> +		 * MEMORY_BASE_OFFSET(0, offset) + SZ_32K - new_reserved_regions_size
>> +		 */
>> +		ASSERT_EQ(memblock.reserved.regions[0].base + memblock.reserved.regions[0].size,
>> +			  MEMORY_BASE_OFFSET(0, offset) + SZ_32K);
>> +		ASSERT_EQ(memblock.reserved.regions[0].size, new_reserved_regions_size);
>> +
>> +		/*
>> +		 * Now memblock_double_array() works fine. Let's check after the
>> +		 * double_array(), the memblock_reserve() still works as normal.
>> +		 */
>> +		memblock_reserve(r.base, r.size);
>> +		ASSERT_EQ(memblock.reserved.regions[0].base, r.base);
>> +		ASSERT_EQ(memblock.reserved.regions[0].size, r.size);
>> +
>> +		ASSERT_EQ(memblock.reserved.cnt, INIT_MEMBLOCK_REGIONS + 3);
>> +		ASSERT_EQ(memblock.reserved.total_size, (INIT_MEMBLOCK_REGIONS + 1) * MEM_SIZE +
>> +							new_reserved_regions_size +
>> +							r.size);
>> +		ASSERT_EQ(memblock.reserved.max, INIT_MEMBLOCK_REGIONS * 2);
>> +
>> +		/*
>> +		 * The current reserved.regions is occupying a range of memory that
>> +		 * allocated from dummy_physical_memory_init(). After free the memory,
>> +		 * we must not use it. So restore the origin memory region to make sure
>> +		 * the tests can run as normal and not affected by the double array.
>> +		 */
>> +		memblock.reserved.regions = orig_region;
>> +		memblock.reserved.cnt = INIT_MEMBLOCK_RESERVED_REGIONS;
>> +	}
>> +
>> +	dummy_physical_memory_cleanup();
>> +
>> +	test_pass_pop();
>> +
>> +	return 0;
>> +}
>> +
>>  static int memblock_reserve_checks(void)
>>  {
>>  	prefix_reset();
>> @@ -1006,6 +1128,7 @@ static int memblock_reserve_checks(void)
>>  	memblock_reserve_between_check();
>>  	memblock_reserve_near_max_check();
>>  	memblock_reserve_many_check();
>> +	memblock_reserve_many_may_conflict_check();
>>  
>>  	prefix_pop();
>>  
>> diff --git a/tools/testing/memblock/tests/common.c b/tools/testing/memblock/tests/common.c
>> index c2c569f12178..5633ffc5aaa7 100644
>> --- a/tools/testing/memblock/tests/common.c
>> +++ b/tools/testing/memblock/tests/common.c
>> @@ -61,7 +61,7 @@ void reset_memblock_attributes(void)
>>  
>>  static inline void fill_memblock(void)
>>  {
>> -	memset(memory_block.base, 1, MEM_SIZE);
>> +	memset(memory_block.base, 1, MEM_ALLOC_SIZE);
>
>I believe PHYS_MEM_SIZE is a better name.
>

Sounds better.

>>  }
>>  
>>  void setup_memblock(void)
>> @@ -103,7 +103,7 @@ void setup_numa_memblock(const unsigned int node_fracs[])
>>  
>>  void dummy_physical_memory_init(void)
>>  {
>> -	memory_block.base = malloc(MEM_SIZE);
>> +	memory_block.base = malloc(MEM_ALLOC_SIZE);
>>  	assert(memory_block.base);
>>  	fill_memblock();
>>  }
>> diff --git a/tools/testing/memblock/tests/common.h b/tools/testing/memblock/tests/common.h
>> index b5ec59aa62d7..741d57315ba6 100644
>> --- a/tools/testing/memblock/tests/common.h
>> +++ b/tools/testing/memblock/tests/common.h
>> @@ -12,6 +12,7 @@
>>  #include <../selftests/kselftest.h>
>>  
>>  #define MEM_SIZE		SZ_32K
>> +#define MEM_ALLOC_SIZE		SZ_16M
>
>Do we really need 16M? Wouldn't one or two suffice?
>

	 0        1          129
	 +---+    +---+      +---+
	 |32K|    |32K|  ..  |32K|
	 +---+    +---+      +---+

This is the range we are manipulating, which is roughly 

  130 * 64K = 128 * 64K + 128K = 8M + 128K

So I choose the next power of 2, 16M.

Since we insert the 129th range at every possible position, the doubled
array may be allocated anywhere in this range when the bug is triggered. So
we need to back the whole range with real memory instead of just range 0.
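
If it helps, the bound could be spelled out next to the define (sketch only,
using the PHYS_MEM_SIZE name suggested above):

	/*
	 * 130 blocks spaced 2 * MEM_SIZE apart need a bit more than 8M of
	 * backing memory, so round the dummy allocation up to the next
	 * power of two.
	 */
	#define PHYS_MEM_SIZE	SZ_16M	/* >= (INIT_MEMBLOCK_REGIONS + 2) * MEM_SIZE * 2 */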

>>  #define NUMA_NODES		8
>>  
>>  #define INIT_MEMBLOCK_REGIONS			128
>> -- 
>> 2.34.1
>> 
>
>-- 
>Sincerely yours,
>Mike.
Wei Yang April 28, 2024, 1:07 p.m. UTC | #4
On Sun, Apr 28, 2024 at 09:43:54AM +0300, Mike Rapoport wrote:
>On Thu, Apr 25, 2024 at 07:19:24AM +0000, Wei Yang wrote:
>> The isolated range is [*@start_rgn, *@end_rgn - 1], while the comment says
>> "the end region inside the range" is *@end_rgn.
>> 
>> Let's correct it.
>> 
>> Signed-off-by: Wei Yang <richard.weiyang@gmail.com>
>> ---
>>  mm/memblock.c | 4 ++--
>>  1 file changed, 2 insertions(+), 2 deletions(-)
>> 
>> diff --git a/mm/memblock.c b/mm/memblock.c
>> index 98d25689cf10..5a363ef283d0 100644
>> --- a/mm/memblock.c
>> +++ b/mm/memblock.c
>> @@ -772,12 +772,12 @@ bool __init_memblock memblock_validate_numa_coverage(unsigned long threshold_byt
>>   * @base: base of range to isolate
>>   * @size: size of range to isolate
>>   * @start_rgn: out parameter for the start of isolated region
>> - * @end_rgn: out parameter for the end of isolated region
>> + * @end_rgn: out parameter for the (end + 1) of isolated region
>
>end can be inclusive or exclusive, please let's keep this line ...
>
>>   *
>>   * Walk @type and ensure that regions don't cross the boundaries defined by
>>   * [@base, @base + @size).  Crossing regions are split at the boundaries,
>>   * which may create at most two more regions.  The index of the first
>> - * region inside the range is returned in *@start_rgn and end in *@end_rgn.
>> + * region inside the range is returned in *@start_rgn and (end + 1) in *@end_rgn.
>
>... and emphasise here that end is exclusive, e.g
>
>The index of the first region inside the range is returned in *@start_rgn
>and the index of the first region after the range is returned in *@end_rgn.
>

Looks better, thanks.

>>   *
>>   * Return:
>>   * 0 on success, -errno on failure.
>> -- 
>> 2.34.1
>> 
>
>-- 
>Sincerely yours,
>Mike.
Wei Yang April 30, 2024, 1:49 a.m. UTC | #5
On Sun, Apr 28, 2024 at 09:40:33AM +0300, Mike Rapoport wrote:
>On Thu, Apr 25, 2024 at 07:19:23AM +0000, Wei Yang wrote:
>> This may trigger the case fixed by commit 48c3b583bbdd ("mm/memblock:
>> fix overlapping allocation when doubling reserved array").
>> 
>> This is done by adding the 129th reserve region into memblock.memory. If
>> memblock_double_array() use this reserve region as new array, it fails.
>> 
>> Signed-off-by: Wei Yang <richard.weiyang@gmail.com>
>> ---
>>  tools/testing/memblock/tests/basic_api.c | 123 +++++++++++++++++++++++
>>  tools/testing/memblock/tests/common.c    |   4 +-
>>  tools/testing/memblock/tests/common.h    |   1 +
>>  3 files changed, 126 insertions(+), 2 deletions(-)
>> 
>> diff --git a/tools/testing/memblock/tests/basic_api.c b/tools/testing/memblock/tests/basic_api.c
>> index 1ae62272867a..748950e02589 100644
>> --- a/tools/testing/memblock/tests/basic_api.c
>> +++ b/tools/testing/memblock/tests/basic_api.c
>> @@ -991,6 +991,128 @@ static int memblock_reserve_many_check(void)
>>  	return 0;
>>  }
>>  
>> +/* Keep the gap so these memory region will not be merged. */
>
>The gap where? What regions should not be merged?
>Also please add a comment with the test description
>
>> +#define MEMORY_BASE_OFFSET(idx, offset) ((offset) + (MEM_SIZE * 2) * (idx))
>> +static int memblock_reserve_many_may_conflict_check(void)
>> +{
>> +	int i, skip;
>> +	void *orig_region;
>> +	struct region r = {
>> +		.base = SZ_16K,
>> +		.size = SZ_16K,
>> +	};
>> +	phys_addr_t new_reserved_regions_size;
>> +
>> +	/*
>> +	 *  0        1          129
>> +	 *  +---+    +---+      +---+
>> +	 *  |32K|    |32K|  ..  |32K|
>> +	 *  +---+    +---+      +---+
>> +	 *
>> +	 * Pre-allocate the range for 129 memory block + one range for double
>> +	 * memblock.reserved.regions at idx 0.
>> +	 * See commit 48c3b583bbdd ("mm/memblock: fix overlapping allocation
>> +	 * when doubling reserved array")
>
>Sorry, but I'm failing to parse it
>

Sorry, I missed this one. Do you mean how this case is triggered?

Suppose current memblock looks like this:

                 0        1    
memblock.memory  +---+    +---+
                 |32K|    |32K|
                 +---+    +---+

                 0                2          128
memblock.reserved+---+            +---+      +---+
                 |32K|            |32K|  ..  |32K|
                 +---+            +---+      +---+
                          ^
                          |
Now we want to reserve range 1 here. Since there are already 128 regions in
memblock.reserved, this reservation will double the array, and
memblock_double_array() will "find" an available range in memblock.memory.

Since allocation is top-down, it will pick memblock.memory.regions[1], which
conflicts with the very range we are trying to reserve.
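
This is essentially the path that commit 48c3b583bbdd guards in
memblock_double_array(): when doubling memblock.reserved, the search for the
new array has to skip the range that is currently being reserved (paraphrased
sketch, not the exact kernel code):

	/* only exclude the range when doubling memblock.reserved */
	if (type != &memblock.reserved)
		new_area_start = new_area_size = 0;

	/* try above the range being reserved first ... */
	addr = memblock_find_in_range(new_area_start + new_area_size,
				      memblock.current_limit,
				      new_alloc_size, PAGE_SIZE);
	/* ... and only then below it */
	if (!addr && new_area_size)
		addr = memblock_find_in_range(0,
				min(new_area_start, memblock.current_limit),
				new_alloc_size, PAGE_SIZE);

The test puts the 129th block at every possible position so that this path is
exercised no matter where the doubled array ends up.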
Mike Rapoport May 1, 2024, 8:44 a.m. UTC | #6
On Tue, Apr 30, 2024 at 01:49:05AM +0000, Wei Yang wrote:
> On Sun, Apr 28, 2024 at 09:40:33AM +0300, Mike Rapoport wrote:
> >On Thu, Apr 25, 2024 at 07:19:23AM +0000, Wei Yang wrote:
> >> This may trigger the case fixed by commit 48c3b583bbdd ("mm/memblock:
> >> fix overlapping allocation when doubling reserved array").
> >> 
> >> This is done by adding the 129th reserve region into memblock.memory. If
> >> memblock_double_array() use this reserve region as new array, it fails.
> >> 
> >> Signed-off-by: Wei Yang <richard.weiyang@gmail.com>
> >> ---
> >>  tools/testing/memblock/tests/basic_api.c | 123 +++++++++++++++++++++++
> >>  tools/testing/memblock/tests/common.c    |   4 +-
> >>  tools/testing/memblock/tests/common.h    |   1 +
> >>  3 files changed, 126 insertions(+), 2 deletions(-)
> >> 
> >> diff --git a/tools/testing/memblock/tests/basic_api.c b/tools/testing/memblock/tests/basic_api.c
> >> index 1ae62272867a..748950e02589 100644
> >> --- a/tools/testing/memblock/tests/basic_api.c
> >> +++ b/tools/testing/memblock/tests/basic_api.c
> >> @@ -991,6 +991,128 @@ static int memblock_reserve_many_check(void)
> >>  	return 0;
> >>  }
> >>  
> >> +/* Keep the gap so these memory region will not be merged. */
> >
> >The gap where? What regions should not be merged?
> >Also please add a comment with the test description
> >
> >> +#define MEMORY_BASE_OFFSET(idx, offset) ((offset) + (MEM_SIZE * 2) * (idx))
> >> +static int memblock_reserve_many_may_conflict_check(void)
> >> +{
> >> +	int i, skip;
> >> +	void *orig_region;
> >> +	struct region r = {
> >> +		.base = SZ_16K,
> >> +		.size = SZ_16K,
> >> +	};
> >> +	phys_addr_t new_reserved_regions_size;
> >> +
> >> +	/*
> >> +	 *  0        1          129
> >> +	 *  +---+    +---+      +---+
> >> +	 *  |32K|    |32K|  ..  |32K|
> >> +	 *  +---+    +---+      +---+
> >> +	 *
> >> +	 * Pre-allocate the range for 129 memory block + one range for double
> >> +	 * memblock.reserved.regions at idx 0.
> >> +	 * See commit 48c3b583bbdd ("mm/memblock: fix overlapping allocation
> >> +	 * when doubling reserved array")
> >
> >Sorry, but I'm failing to parse it
> >
> 
> Sorry, I missed this one. Do you mean how this case is triggered?
> 
> Suppose current memblock looks like this:
> 
>                  0        1    
> memblock.memory  +---+    +---+
>                  |32K|    |32K|
>                  +---+    +---+
> 
>                  0                2          128
> memblock.reserved+---+            +---+      +---+
>                  |32K|            |32K|  ..  |32K|
>                  +---+            +---+      +---+
>                           ^
>                           |
> Now we want to reserve range 1 here. Since there are already 128 regions in
> memblock.reserved, this reservation will double the array, and
> memblock_double_array() will "find" an available range in memblock.memory.
> 
> Since allocation is top-down, it will pick memblock.memory.regions[1], which
> conflicts with the very range we are trying to reserve.

Please include something like this explanation in the test description
 
> -- 
> Wei Yang
> Help you, Help me