From patchwork Mon Feb 28 14:46:43 2022
From: Karolina Drobnik <karolinadrobnik@gmail.com>
To: linux-mm@kvack.org
Cc: rppt@kernel.org, akpm@linux-foundation.org, linux-kernel@vger.kernel.org
Subject: [PATCH 1/9] memblock tests: Split up reset_memblock function
Date: Mon, 28 Feb 2022 15:46:43 +0100
Message-Id: <5cc1ba9a0ade922dbf4ba450165b81a9ed17d4a9.1646055639.git.karolinadrobnik@gmail.com>
List-ID: <linux-mm.kvack.org>

All memblock data structure fields are reset in one function. In some
test cases, it is preferable to reset the memory region arrays without
modifying other values, such as the allocation direction flag. Extract
two functions from reset_memblock, so it is possible to reset different
parts of memblock:

 - reset_memblock_regions    - reset region arrays and their counters
 - reset_memblock_attributes - set other fields to their default values

Update the checks in basic_api.c to use the new definitions. Remove the
reset_memblock call from memblock_initialization_check, so that the true
initial values are tested.
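The split makes the reset behaviour composable: a test can wipe the region arrays while leaving the allocation direction flag alone. As a rough illustration (a simplified stand-in, not the kernel's struct memblock — all `mock_*` names below are hypothetical), the two helpers divide the state like this:

```c
#include <assert.h>
#include <stdbool.h>
#include <string.h>

/* Hypothetical, simplified stand-in for struct memblock */
struct mock_region { unsigned long base, size; };
struct mock_memblock {
	struct mock_region memory[4];
	int memory_cnt;
	struct mock_region reserved[4];
	int reserved_cnt;
	bool bottom_up;             /* allocation direction flag */
	unsigned long current_limit;
};

static struct mock_memblock mb;

/* Mirrors reset_memblock_regions(): clear only region arrays/counters */
static void mock_reset_regions(void)
{
	memset(mb.memory, 0, sizeof(mb.memory));
	memset(mb.reserved, 0, sizeof(mb.reserved));
	mb.memory_cnt = 1;
	mb.reserved_cnt = 1;
}

/* Mirrors reset_memblock_attributes(): restore the remaining defaults */
static void mock_reset_attributes(void)
{
	mb.bottom_up = false;
	mb.current_limit = ~0UL;
}
```

A test that flips `bottom_up` can now call only the regions reset between cases and keep its chosen direction, calling the attributes reset once at the end.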
Signed-off-by: Karolina Drobnik <karolinadrobnik@gmail.com>
---
 tools/testing/memblock/tests/basic_api.c | 48 ++++++++++++------------
 tools/testing/memblock/tests/common.c    | 14 ++++---
 tools/testing/memblock/tests/common.h    |  3 +-
 3 files changed, 33 insertions(+), 32 deletions(-)

diff --git a/tools/testing/memblock/tests/basic_api.c b/tools/testing/memblock/tests/basic_api.c
index fbb989f6ddbf..d5035a3dcce8 100644
--- a/tools/testing/memblock/tests/basic_api.c
+++ b/tools/testing/memblock/tests/basic_api.c
@@ -8,8 +8,6 @@
 static int memblock_initialization_check(void)
 {
-	reset_memblock();
-
 	assert(memblock.memory.regions);
 	assert(memblock.memory.cnt == 1);
 	assert(memblock.memory.max == EXPECTED_MEMBLOCK_REGIONS);
@@ -43,7 +41,7 @@ static int memblock_add_simple_check(void)
 		.size = SZ_4M
 	};
 
-	reset_memblock();
+	reset_memblock_regions();
 	memblock_add(r.base, r.size);
 
 	assert(rgn->base == r.base);
@@ -72,7 +70,7 @@ static int memblock_add_node_simple_check(void)
 		.size = SZ_16M
 	};
 
-	reset_memblock();
+	reset_memblock_regions();
 	memblock_add_node(r.base, r.size, 1, MEMBLOCK_HOTPLUG);
 
 	assert(rgn->base == r.base);
@@ -110,7 +108,7 @@ static int memblock_add_disjoint_check(void)
 		.size = SZ_8K
 	};
 
-	reset_memblock();
+	reset_memblock_regions();
 	memblock_add(r1.base, r1.size);
 	memblock_add(r2.base, r2.size);
@@ -151,7 +149,7 @@ static int memblock_add_overlap_top_check(void)
 
 	total_size = (r1.base - r2.base) + r1.size;
 
-	reset_memblock();
+	reset_memblock_regions();
 	memblock_add(r1.base, r1.size);
 	memblock_add(r2.base, r2.size);
@@ -190,7 +188,7 @@ static int memblock_add_overlap_bottom_check(void)
 
 	total_size = (r2.base - r1.base) + r2.size;
 
-	reset_memblock();
+	reset_memblock_regions();
 	memblock_add(r1.base, r1.size);
 	memblock_add(r2.base, r2.size);
@@ -225,7 +223,7 @@ static int memblock_add_within_check(void)
 		.size = SZ_1M
 	};
 
-	reset_memblock();
+	reset_memblock_regions();
 	memblock_add(r1.base, r1.size);
 	memblock_add(r2.base, r2.size);
@@ -249,7 +247,7 @@ static int memblock_add_twice_check(void)
 		.size = SZ_2M
 	};
 
-	reset_memblock();
+	reset_memblock_regions();
 	memblock_add(r.base, r.size);
 	memblock_add(r.base, r.size);
@@ -290,7 +288,7 @@ static int memblock_reserve_simple_check(void)
 		.size = SZ_128M
 	};
 
-	reset_memblock();
+	reset_memblock_regions();
 	memblock_reserve(r.base, r.size);
 
 	assert(rgn->base == r.base);
@@ -321,7 +319,7 @@ static int memblock_reserve_disjoint_check(void)
 		.size = SZ_512M
 	};
 
-	reset_memblock();
+	reset_memblock_regions();
 	memblock_reserve(r1.base, r1.size);
 	memblock_reserve(r2.base, r2.size);
@@ -364,7 +362,7 @@ static int memblock_reserve_overlap_top_check(void)
 
 	total_size = (r1.base - r2.base) + r1.size;
 
-	reset_memblock();
+	reset_memblock_regions();
 	memblock_reserve(r1.base, r1.size);
 	memblock_reserve(r2.base, r2.size);
@@ -404,7 +402,7 @@ static int memblock_reserve_overlap_bottom_check(void)
 
 	total_size = (r2.base - r1.base) + r2.size;
 
-	reset_memblock();
+	reset_memblock_regions();
 	memblock_reserve(r1.base, r1.size);
 	memblock_reserve(r2.base, r2.size);
@@ -440,7 +438,7 @@ static int memblock_reserve_within_check(void)
 		.size = SZ_64K
 	};
 
-	reset_memblock();
+	reset_memblock_regions();
 	memblock_reserve(r1.base, r1.size);
 	memblock_reserve(r2.base, r2.size);
@@ -465,7 +463,7 @@ static int memblock_reserve_twice_check(void)
 		.size = SZ_2M
 	};
 
-	reset_memblock();
+	reset_memblock_regions();
 	memblock_reserve(r.base, r.size);
 	memblock_reserve(r.base, r.size);
@@ -511,7 +509,7 @@ static int memblock_remove_simple_check(void)
 		.size = SZ_4M
 	};
 
-	reset_memblock();
+	reset_memblock_regions();
 	memblock_add(r1.base, r1.size);
 	memblock_add(r2.base, r2.size);
 	memblock_remove(r1.base, r1.size);
@@ -545,7 +543,7 @@ static int memblock_remove_absent_check(void)
 		.size = SZ_1G
 	};
 
-	reset_memblock();
+	reset_memblock_regions();
 	memblock_add(r1.base, r1.size);
 	memblock_remove(r2.base, r2.size);
@@ -585,7 +583,7 @@ static int memblock_remove_overlap_top_check(void)
 	r2_end = r2.base + r2.size;
 	total_size = r1_end - r2_end;
 
-	reset_memblock();
+	reset_memblock_regions();
 	memblock_add(r1.base, r1.size);
 	memblock_remove(r2.base, r2.size);
@@ -623,7 +621,7 @@ static int memblock_remove_overlap_bottom_check(void)
 
 	total_size = r2.base - r1.base;
 
-	reset_memblock();
+	reset_memblock_regions();
 	memblock_add(r1.base, r1.size);
 	memblock_remove(r2.base, r2.size);
@@ -665,7 +663,7 @@ static int memblock_remove_within_check(void)
 	r2_size = (r1.base + r1.size) - (r2.base + r2.size);
 	total_size = r1_size + r2_size;
 
-	reset_memblock();
+	reset_memblock_regions();
 	memblock_add(r1.base, r1.size);
 	memblock_remove(r2.base, r2.size);
@@ -715,7 +713,7 @@ static int memblock_free_simple_check(void)
 		.size = SZ_1M
 	};
 
-	reset_memblock();
+	reset_memblock_regions();
 	memblock_reserve(r1.base, r1.size);
 	memblock_reserve(r2.base, r2.size);
 	memblock_free((void *)r1.base, r1.size);
@@ -749,7 +747,7 @@ static int memblock_free_absent_check(void)
 		.size = SZ_128M
 	};
 
-	reset_memblock();
+	reset_memblock_regions();
 	memblock_reserve(r1.base, r1.size);
 	memblock_free((void *)r2.base, r2.size);
@@ -787,7 +785,7 @@ static int memblock_free_overlap_top_check(void)
 
 	total_size = (r1.size + r1.base) - (r2.base + r2.size);
 
-	reset_memblock();
+	reset_memblock_regions();
 	memblock_reserve(r1.base, r1.size);
 	memblock_free((void *)r2.base, r2.size);
@@ -824,7 +822,7 @@ static int memblock_free_overlap_bottom_check(void)
 
 	total_size = r2.base - r1.base;
 
-	reset_memblock();
+	reset_memblock_regions();
 	memblock_reserve(r1.base, r1.size);
 	memblock_free((void *)r2.base, r2.size);
@@ -867,7 +865,7 @@ static int memblock_free_within_check(void)
 	r2_size = (r1.base + r1.size) - (r2.base + r2.size);
 	total_size = r1_size + r2_size;
 
-	reset_memblock();
+	reset_memblock_regions();
 	memblock_reserve(r1.base, r1.size);
 	memblock_free((void *)r2.base, r2.size);
diff --git a/tools/testing/memblock/tests/common.c b/tools/testing/memblock/tests/common.c
index 03de6eab0c3c..dd7e87c589fe 100644
--- a/tools/testing/memblock/tests/common.c
+++ b/tools/testing/memblock/tests/common.c
@@ -5,23 +5,25 @@
 #define INIT_MEMBLOCK_REGIONS 128
 #define INIT_MEMBLOCK_RESERVED_REGIONS INIT_MEMBLOCK_REGIONS
 
-void reset_memblock(void)
+void reset_memblock_regions(void)
 {
 	memset(memblock.memory.regions, 0,
 	       memblock.memory.cnt * sizeof(struct memblock_region));
-	memset(memblock.reserved.regions, 0,
-	       memblock.reserved.cnt * sizeof(struct memblock_region));
-
 	memblock.memory.cnt = 1;
 	memblock.memory.max = INIT_MEMBLOCK_REGIONS;
-	memblock.memory.name = "memory";
 	memblock.memory.total_size = 0;
 
+	memset(memblock.reserved.regions, 0,
+	       memblock.reserved.cnt * sizeof(struct memblock_region));
 	memblock.reserved.cnt = 1;
 	memblock.reserved.max = INIT_MEMBLOCK_RESERVED_REGIONS;
-	memblock.reserved.name = "reserved";
 	memblock.reserved.total_size = 0;
+}
 
+void reset_memblock_attributes(void)
+{
+	memblock.memory.name = "memory";
+	memblock.reserved.name = "reserved";
 	memblock.bottom_up = false;
 	memblock.current_limit = MEMBLOCK_ALLOC_ANYWHERE;
 }
diff --git a/tools/testing/memblock/tests/common.h b/tools/testing/memblock/tests/common.h
index 48efc4270ea1..b864c64fb60f 100644
--- a/tools/testing/memblock/tests/common.h
+++ b/tools/testing/memblock/tests/common.h
@@ -10,6 +10,7 @@ struct region {
 	phys_addr_t size;
 };
 
-void reset_memblock(void);
+void reset_memblock_regions(void);
+void reset_memblock_attributes(void);
 
 #endif
-- 
2.30.2

From patchwork Mon Feb 28 14:46:44 2022
From: Karolina Drobnik <karolinadrobnik@gmail.com>
To: linux-mm@kvack.org
Cc: rppt@kernel.org, akpm@linux-foundation.org, linux-kernel@vger.kernel.org
Subject: [PATCH 2/9] memblock tests: Add simulation of physical memory
Date: Mon, 28 Feb 2022 15:46:44 +0100
List-ID: <linux-mm.kvack.org>

Allocation functions that return virtual addresses (with the exception
of the _raw variant) clear the allocated memory after reserving it.
This requires valid memory ranges in memblock.memory. Introduce a
memory_block variable to store memory that can be registered with the
memblock data structure.

Move the assert.h and size.h includes to common.h to share them between
the test files.

Signed-off-by: Karolina Drobnik <karolinadrobnik@gmail.com>
---
 tools/testing/memblock/tests/basic_api.c |  1 -
 tools/testing/memblock/tests/basic_api.h |  1 -
 tools/testing/memblock/tests/common.c    | 19 +++++++++++++++++++
 tools/testing/memblock/tests/common.h    | 18 ++++++++++++++++++
 4 files changed, 37 insertions(+), 2 deletions(-)

diff --git a/tools/testing/memblock/tests/basic_api.c b/tools/testing/memblock/tests/basic_api.c
index d5035a3dcce8..fbc1ce160303 100644
--- a/tools/testing/memblock/tests/basic_api.c
+++ b/tools/testing/memblock/tests/basic_api.c
@@ -1,7 +1,6 @@
 // SPDX-License-Identifier: GPL-2.0-or-later
 #include
 #include
-#include
 #include "basic_api.h"
 
 #define EXPECTED_MEMBLOCK_REGIONS 128
diff --git a/tools/testing/memblock/tests/basic_api.h b/tools/testing/memblock/tests/basic_api.h
index 1ceecfca1f47..1873faa54754 100644
--- a/tools/testing/memblock/tests/basic_api.h
+++ b/tools/testing/memblock/tests/basic_api.h
@@ -2,7 +2,6 @@
 #ifndef _MEMBLOCK_BASIC_H
 #define _MEMBLOCK_BASIC_H
 
-#include
 #include "common.h"
 
 int memblock_basic_checks(void);
diff --git a/tools/testing/memblock/tests/common.c b/tools/testing/memblock/tests/common.c
index dd7e87c589fe..62d3191f7c9a 100644
--- a/tools/testing/memblock/tests/common.c
+++ b/tools/testing/memblock/tests/common.c
@@ -5,6 +5,8 @@
 #define INIT_MEMBLOCK_REGIONS 128
 #define INIT_MEMBLOCK_RESERVED_REGIONS INIT_MEMBLOCK_REGIONS
 
+static struct test_memory memory_block;
+
 void reset_memblock_regions(void)
 {
 	memset(memblock.memory.regions, 0,
@@ -27,3 +29,20 @@ void reset_memblock_attributes(void)
 	memblock.bottom_up = false;
 	memblock.current_limit = MEMBLOCK_ALLOC_ANYWHERE;
 }
+
+void setup_memblock(void)
+{
+	reset_memblock_regions();
+	memblock_add((phys_addr_t)memory_block.base, MEM_SIZE);
+}
+
+void dummy_physical_memory_init(void)
+{
+	memory_block.base = malloc(MEM_SIZE);
+	assert(memory_block.base);
+}
+
+void dummy_physical_memory_cleanup(void)
+{
+	free(memory_block.base);
+}
diff --git a/tools/testing/memblock/tests/common.h b/tools/testing/memblock/tests/common.h
index b864c64fb60f..619054d03219 100644
--- a/tools/testing/memblock/tests/common.h
+++ b/tools/testing/memblock/tests/common.h
@@ -2,8 +2,23 @@
 #ifndef _MEMBLOCK_TEST_H
 #define _MEMBLOCK_TEST_H
 
+#include
+#include
 #include
 #include
+#include
+
+#define MEM_SIZE SZ_16K
+
+/*
+ * Available memory registered with memblock needs to be valid for allocs
+ * test to run. This is a convenience wrapper for memory allocated in
+ * dummy_physical_memory_init() that is later registered with memblock
+ * in setup_memblock().
+ */
+struct test_memory {
+	void *base;
+};
 
 struct region {
 	phys_addr_t base;
@@ -12,5 +27,8 @@ struct region {
 
 void reset_memblock_regions(void);
 void reset_memblock_attributes(void);
+void setup_memblock(void);
+void dummy_physical_memory_init(void);
+void dummy_physical_memory_cleanup(void);
 
 #endif

From patchwork Mon Feb 28 14:46:45 2022
From: Karolina Drobnik <karolinadrobnik@gmail.com>
To: linux-mm@kvack.org
Cc: rppt@kernel.org, akpm@linux-foundation.org, linux-kernel@vger.kernel.org
Subject: [PATCH 3/9] memblock tests: Add memblock_alloc tests for top down
Date: Mon, 28 Feb 2022 15:46:45 +0100
Message-Id: <26ccf409b8ff0394559d38d792b2afb24b55887c.1646055639.git.karolinadrobnik@gmail.com>
List-ID: <linux-mm.kvack.org>

Add checks for memblock_alloc for the top-down allocation direction.
The tested scenarios are: - Region can be allocated on the first fit (with and without region merging) - Region can be allocated on the second fit (with and without region merging) Add checks for both allocation directions: - Region can be allocated between two already existing entries - Limited memory available - All memory is reserved - No available memory registered with memblock Signed-off-by: Karolina Drobnik --- tools/testing/memblock/Makefile | 2 +- tools/testing/memblock/main.c | 3 + tools/testing/memblock/tests/alloc_api.c | 428 +++++++++++++++++++++++ tools/testing/memblock/tests/alloc_api.h | 9 + 4 files changed, 441 insertions(+), 1 deletion(-) create mode 100644 tools/testing/memblock/tests/alloc_api.c create mode 100644 tools/testing/memblock/tests/alloc_api.h diff --git a/tools/testing/memblock/Makefile b/tools/testing/memblock/Makefile index 29715327a2d3..5b01cfd808d0 100644 --- a/tools/testing/memblock/Makefile +++ b/tools/testing/memblock/Makefile @@ -6,7 +6,7 @@ CFLAGS += -I. 
-I../../include -Wall -O2 -fsanitize=address \ -fsanitize=undefined -D CONFIG_PHYS_ADDR_T_64BIT LDFLAGS += -fsanitize=address -fsanitize=undefined TARGETS = main -TEST_OFILES = tests/basic_api.o tests/common.o +TEST_OFILES = tests/alloc_api.o tests/basic_api.o tests/common.o DEP_OFILES = memblock.o lib/slab.o mmzone.o slab.o OFILES = main.o $(DEP_OFILES) $(TEST_OFILES) EXTR_SRC = ../../../mm/memblock.c diff --git a/tools/testing/memblock/main.c b/tools/testing/memblock/main.c index da65b0adee91..e7cc45dc06d4 100644 --- a/tools/testing/memblock/main.c +++ b/tools/testing/memblock/main.c @@ -1,8 +1,11 @@ // SPDX-License-Identifier: GPL-2.0-or-later #include "tests/basic_api.h" +#include "tests/alloc_api.h" int main(int argc, char **argv) { memblock_basic_checks(); + memblock_alloc_checks(); + return 0; } diff --git a/tools/testing/memblock/tests/alloc_api.c b/tools/testing/memblock/tests/alloc_api.c new file mode 100644 index 000000000000..22ba9a2b4eaf --- /dev/null +++ b/tools/testing/memblock/tests/alloc_api.c @@ -0,0 +1,428 @@ +// SPDX-License-Identifier: GPL-2.0-or-later +#include "alloc_api.h" + +/* + * A simple test that tries to allocate a small memory region. + * Expect to allocate an aligned region near the end of the available memory. + */ +static int alloc_top_down_simple_check(void) +{ + struct memblock_region *rgn = &memblock.reserved.regions[0]; + void *allocated_ptr = NULL; + + phys_addr_t size = SZ_2; + phys_addr_t expected_start; + + setup_memblock(); + + expected_start = memblock_end_of_DRAM() - SMP_CACHE_BYTES; + + allocated_ptr = memblock_alloc(size, SMP_CACHE_BYTES); + + assert(allocated_ptr); + assert(rgn->size == size); + assert(rgn->base == expected_start); + + assert(memblock.reserved.cnt == 1); + assert(memblock.reserved.total_size == size); + + return 0; +} + +/* + * A test that tries to allocate memory next to a reserved region that starts at + * the misaligned address. 
Expect to create two separate entries, with the new + * entry aligned to the provided alignment: + * + * + + * | +--------+ +--------| + * | | rgn2 | | rgn1 | + * +------------+--------+---------+--------+ + * ^ + * | + * Aligned address boundary + * + * The allocation direction is top-down and region arrays are sorted from lower + * to higher addresses, so the new region will be the first entry in + * memory.reserved array. The previously reserved region does not get modified. + * Region counter and total size get updated. + */ +static int alloc_top_down_disjoint_check(void) +{ + /* After allocation, this will point to the "old" region */ + struct memblock_region *rgn1 = &memblock.reserved.regions[1]; + struct memblock_region *rgn2 = &memblock.reserved.regions[0]; + struct region r1; + void *allocated_ptr = NULL; + + phys_addr_t r2_size = SZ_16; + /* Use custom alignment */ + phys_addr_t alignment = SMP_CACHE_BYTES * 2; + phys_addr_t total_size; + phys_addr_t expected_start; + + setup_memblock(); + + r1.base = memblock_end_of_DRAM() - SZ_2; + r1.size = SZ_2; + + total_size = r1.size + r2_size; + expected_start = memblock_end_of_DRAM() - alignment; + + memblock_reserve(r1.base, r1.size); + + allocated_ptr = memblock_alloc(r2_size, alignment); + + assert(allocated_ptr); + assert(rgn1->size == r1.size); + assert(rgn1->base == r1.base); + + assert(rgn2->size == r2_size); + assert(rgn2->base == expected_start); + + assert(memblock.reserved.cnt == 2); + assert(memblock.reserved.total_size == total_size); + + return 0; +} + +/* + * A test that tries to allocate memory when there is enough space at the end + * of the previously reserved block (i.e. first fit): + * + * | +--------+--------------| + * | | r1 | r2 | + * +--------------+--------+--------------+ + * + * Expect a merge of both regions. Only the region size gets updated. 
+ */ +static int alloc_top_down_before_check(void) +{ + struct memblock_region *rgn = &memblock.reserved.regions[0]; + void *allocated_ptr = NULL; + + /* The first region ends at the aligned address to test region merging */ + phys_addr_t r1_size = SMP_CACHE_BYTES; + phys_addr_t r2_size = SZ_512; + phys_addr_t total_size = r1_size + r2_size; + + setup_memblock(); + + memblock_reserve(memblock_end_of_DRAM() - total_size, r1_size); + + allocated_ptr = memblock_alloc(r2_size, SMP_CACHE_BYTES); + + assert(allocated_ptr); + assert(rgn->size == total_size); + assert(rgn->base == memblock_end_of_DRAM() - total_size); + + assert(memblock.reserved.cnt == 1); + assert(memblock.reserved.total_size == total_size); + + return 0; +} + +/* + * A test that tries to allocate memory when there is not enough space at the + * end of the previously reserved block (i.e. second fit): + * + * | +-----------+------+ | + * | | r2 | r1 | | + * +------------+-----------+------+-----+ + * + * Expect a merge of both regions. Both the base address and size of the region + * get updated. 
+ */ +static int alloc_top_down_after_check(void) +{ + struct memblock_region *rgn = &memblock.reserved.regions[0]; + struct region r1; + void *allocated_ptr = NULL; + + phys_addr_t r2_size = SZ_512; + phys_addr_t total_size; + + setup_memblock(); + + /* The first region starts at the aligned address to test region merging */ + r1.base = memblock_end_of_DRAM() - SMP_CACHE_BYTES; + r1.size = SZ_8; + + total_size = r1.size + r2_size; + + memblock_reserve(r1.base, r1.size); + + allocated_ptr = memblock_alloc(r2_size, SMP_CACHE_BYTES); + + assert(allocated_ptr); + assert(rgn->size == total_size); + assert(rgn->base == r1.base - r2_size); + + assert(memblock.reserved.cnt == 1); + assert(memblock.reserved.total_size == total_size); + + return 0; +} + +/* + * A test that tries to allocate memory when there are two reserved regions with + * a gap too small to fit the new region: + * + * | +--------+----------+ +------| + * | | r3 | r2 | | r1 | + * +-------+--------+----------+---+------+ + * + * Expect to allocate a region before the one that starts at the lower address, + * and merge them into one. The region counter and total size fields get + * updated. 
+ */ +static int alloc_top_down_second_fit_check(void) +{ + struct memblock_region *rgn = &memblock.reserved.regions[0]; + struct region r1, r2; + void *allocated_ptr = NULL; + + phys_addr_t r3_size = SZ_1K; + phys_addr_t total_size; + + setup_memblock(); + + r1.base = memblock_end_of_DRAM() - SZ_512; + r1.size = SZ_512; + + r2.base = r1.base - SZ_512; + r2.size = SZ_256; + + total_size = r1.size + r2.size + r3_size; + + memblock_reserve(r1.base, r1.size); + memblock_reserve(r2.base, r2.size); + + allocated_ptr = memblock_alloc(r3_size, SMP_CACHE_BYTES); + + assert(allocated_ptr); + assert(rgn->size == r2.size + r3_size); + assert(rgn->base == r2.base - r3_size); + + assert(memblock.reserved.cnt == 2); + assert(memblock.reserved.total_size == total_size); + + return 0; +} + +/* + * A test that tries to allocate memory when there are two reserved regions with + * a gap big enough to accommodate the new region: + * + * | +--------+--------+--------+ | + * | | r2 | r3 | r1 | | + * +-----+--------+--------+--------+-----+ + * + * Expect to merge all of them, creating one big entry in memblock.reserved + * array. The region counter and total size fields get updated. 
+ */ +static int alloc_in_between_generic_check(void) +{ + struct memblock_region *rgn = &memblock.reserved.regions[0]; + struct region r1, r2; + void *allocated_ptr = NULL; + + phys_addr_t gap_size = SMP_CACHE_BYTES; + phys_addr_t r3_size = SZ_64; + /* Calculate regions size so there's just enough space for the new entry */ + phys_addr_t rgn_size = (MEM_SIZE - (2 * gap_size + r3_size)) / 2; + phys_addr_t total_size; + + setup_memblock(); + + r1.size = rgn_size; + r1.base = memblock_end_of_DRAM() - (gap_size + rgn_size); + + r2.size = rgn_size; + r2.base = memblock_start_of_DRAM() + gap_size; + + total_size = r1.size + r2.size + r3_size; + + memblock_reserve(r1.base, r1.size); + memblock_reserve(r2.base, r2.size); + + allocated_ptr = memblock_alloc(r3_size, SMP_CACHE_BYTES); + + assert(allocated_ptr); + assert(rgn->size == total_size); + assert(rgn->base == r1.base - r2.size - r3_size); + + assert(memblock.reserved.cnt == 1); + assert(memblock.reserved.total_size == total_size); + + return 0; +} + +/* + * A test that tries to allocate memory when the memory is filled with reserved + * regions with memory gaps too small to fit the new region: + * + * +-------+ + * | new | + * +--+----+ + * | +-----+ +-----+ +-----+ | + * | | res | | res | | res | | + * +----+-----+----+-----+----+-----+----+ + * + * Expect no allocation to happen. + */ +static int alloc_small_gaps_generic_check(void) +{ + void *allocated_ptr = NULL; + + phys_addr_t region_size = SZ_1K; + phys_addr_t gap_size = SZ_256; + phys_addr_t region_end; + + setup_memblock(); + + region_end = memblock_start_of_DRAM(); + + while (region_end < memblock_end_of_DRAM()) { + memblock_reserve(region_end + gap_size, region_size); + region_end += gap_size + region_size; + } + + allocated_ptr = memblock_alloc(region_size, SMP_CACHE_BYTES); + + assert(!allocated_ptr); + + return 0; +} + +/* + * A test that tries to allocate memory when all memory is reserved. + * Expect no allocation to happen. 
+ */
+static int alloc_all_reserved_generic_check(void)
+{
+	void *allocated_ptr = NULL;
+
+	setup_memblock();
+
+	/* Simulate full memory */
+	memblock_reserve(memblock_start_of_DRAM(), MEM_SIZE);
+
+	allocated_ptr = memblock_alloc(SZ_256, SMP_CACHE_BYTES);
+
+	assert(!allocated_ptr);
+
+	return 0;
+}
+
+/*
+ * A test that tries to allocate memory when the memory is almost full,
+ * with not enough space left for the new region:
+ *
+ *                                +-------+
+ *                                |  new  |
+ *                                +-------+
+ * |-----------------------------+   |
+ * |          reserved           |   |
+ * +-----------------------------+---+
+ *
+ * Expect no allocation to happen.
+ */
+static int alloc_no_space_generic_check(void)
+{
+	void *allocated_ptr = NULL;
+
+	phys_addr_t available_size = SZ_256;
+	phys_addr_t reserved_size = MEM_SIZE - available_size;
+
+	setup_memblock();
+
+	/* Simulate almost-full memory */
+	memblock_reserve(memblock_start_of_DRAM(), reserved_size);
+
+	allocated_ptr = memblock_alloc(SZ_1K, SMP_CACHE_BYTES);
+
+	assert(!allocated_ptr);
+
+	return 0;
+}
+
+/*
+ * A test that tries to allocate memory when the memory is almost full,
+ * but there is just enough space left:
+ *
+ * |---------------------------+---------|
+ * |          reserved         |   new   |
+ * +---------------------------+---------+
+ *
+ * Expect to allocate memory and merge all the regions. The total size field
+ * gets updated.
+ */ +static int alloc_limited_space_generic_check(void) +{ + struct memblock_region *rgn = &memblock.reserved.regions[0]; + void *allocated_ptr = NULL; + + phys_addr_t available_size = SZ_256; + phys_addr_t reserved_size = MEM_SIZE - available_size; + + setup_memblock(); + + /* Simulate almost-full memory */ + memblock_reserve(memblock_start_of_DRAM(), reserved_size); + + allocated_ptr = memblock_alloc(available_size, SMP_CACHE_BYTES); + + assert(allocated_ptr); + assert(rgn->size == MEM_SIZE); + assert(rgn->base == memblock_start_of_DRAM()); + + assert(memblock.reserved.cnt == 1); + assert(memblock.reserved.total_size == MEM_SIZE); + + return 0; +} + +/* + * A test that tries to allocate memory when there is no available memory + * registered (i.e. memblock.memory has only a dummy entry). + * Expect no allocation to happen. + */ +static int alloc_no_memory_generic_check(void) +{ + struct memblock_region *rgn = &memblock.reserved.regions[0]; + void *allocated_ptr = NULL; + + reset_memblock_regions(); + + allocated_ptr = memblock_alloc(SZ_1K, SMP_CACHE_BYTES); + + assert(!allocated_ptr); + assert(rgn->size == 0); + assert(rgn->base == 0); + assert(memblock.reserved.total_size == 0); + + return 0; +} + +int memblock_alloc_checks(void) +{ + reset_memblock_attributes(); + dummy_physical_memory_init(); + + alloc_top_down_simple_check(); + alloc_top_down_disjoint_check(); + alloc_top_down_before_check(); + alloc_top_down_after_check(); + alloc_top_down_second_fit_check(); + alloc_in_between_generic_check(); + alloc_small_gaps_generic_check(); + alloc_all_reserved_generic_check(); + alloc_no_space_generic_check(); + alloc_limited_space_generic_check(); + alloc_no_memory_generic_check(); + + dummy_physical_memory_cleanup(); + + return 0; +} diff --git a/tools/testing/memblock/tests/alloc_api.h b/tools/testing/memblock/tests/alloc_api.h new file mode 100644 index 000000000000..585b085baf21 --- /dev/null +++ b/tools/testing/memblock/tests/alloc_api.h @@ -0,0 +1,9 @@ +/* 
SPDX-License-Identifier: GPL-2.0-or-later */
+#ifndef _MEMBLOCK_ALLOCS_H
+#define _MEMBLOCK_ALLOCS_H
+
+#include "common.h"
+
+int memblock_alloc_checks(void);
+
+#endif

From patchwork Mon Feb 28 14:46:46 2022
From: Karolina Drobnik
To: linux-mm@kvack.org
Cc: rppt@kernel.org, akpm@linux-foundation.org, linux-kernel@vger.kernel.org, Karolina Drobnik
Subject: [PATCH 4/9] memblock tests: Add memblock_alloc tests for bottom up
Date: Mon, 28 Feb 2022 15:46:46 +0100
Message-Id: <426674eee20d99dca49caf1ee0142a83dccbc98d.1646055639.git.karolinadrobnik@gmail.com>

Add checks for memblock_alloc for the bottom-up allocation direction.
The tested scenarios are:
 - Region can be allocated on the first fit (with and without
   region merging)
 - Region can be allocated on the second fit (with and without
   region merging)

Add test case wrappers to test both directions in the same context.
Signed-off-by: Karolina Drobnik --- tools/testing/memblock/tests/alloc_api.c | 322 ++++++++++++++++++++++- 1 file changed, 318 insertions(+), 4 deletions(-) diff --git a/tools/testing/memblock/tests/alloc_api.c b/tools/testing/memblock/tests/alloc_api.c index 22ba9a2b4eaf..5d8acf4255d7 100644 --- a/tools/testing/memblock/tests/alloc_api.c +++ b/tools/testing/memblock/tests/alloc_api.c @@ -405,23 +405,337 @@ static int alloc_no_memory_generic_check(void) return 0; } -int memblock_alloc_checks(void) +/* + * A simple test that tries to allocate a small memory region. + * Expect to allocate an aligned region at the beginning of the available + * memory. + */ +static int alloc_bottom_up_simple_check(void) { - reset_memblock_attributes(); - dummy_physical_memory_init(); + struct memblock_region *rgn = &memblock.reserved.regions[0]; + void *allocated_ptr = NULL; + + setup_memblock(); + + allocated_ptr = memblock_alloc(SZ_2, SMP_CACHE_BYTES); + + assert(allocated_ptr); + assert(rgn->size == SZ_2); + assert(rgn->base == memblock_start_of_DRAM()); + + assert(memblock.reserved.cnt == 1); + assert(memblock.reserved.total_size == SZ_2); + + return 0; +} + +/* + * A test that tries to allocate memory next to a reserved region that starts at + * the misaligned address. Expect to create two separate entries, with the new + * entry aligned to the provided alignment: + * + * + + * | +----------+ +----------+ | + * | | rgn1 | | rgn2 | | + * +----+----------+---+----------+-----+ + * ^ + * | + * Aligned address boundary + * + * The allocation direction is bottom-up, so the new region will be the second + * entry in memory.reserved array. The previously reserved region does not get + * modified. Region counter and total size get updated. 
+ */ +static int alloc_bottom_up_disjoint_check(void) +{ + struct memblock_region *rgn1 = &memblock.reserved.regions[0]; + struct memblock_region *rgn2 = &memblock.reserved.regions[1]; + struct region r1; + void *allocated_ptr = NULL; + + phys_addr_t r2_size = SZ_16; + /* Use custom alignment */ + phys_addr_t alignment = SMP_CACHE_BYTES * 2; + phys_addr_t total_size; + phys_addr_t expected_start; + + setup_memblock(); + + r1.base = memblock_start_of_DRAM() + SZ_2; + r1.size = SZ_2; + + total_size = r1.size + r2_size; + expected_start = memblock_start_of_DRAM() + alignment; + + memblock_reserve(r1.base, r1.size); + + allocated_ptr = memblock_alloc(r2_size, alignment); + + assert(allocated_ptr); + + assert(rgn1->size == r1.size); + assert(rgn1->base == r1.base); + + assert(rgn2->size == r2_size); + assert(rgn2->base == expected_start); + + assert(memblock.reserved.cnt == 2); + assert(memblock.reserved.total_size == total_size); + + return 0; +} + +/* + * A test that tries to allocate memory when there is enough space at + * the beginning of the previously reserved block (i.e. first fit): + * + * |------------------+--------+ | + * | r1 | r2 | | + * +------------------+--------+---------+ + * + * Expect a merge of both regions. Only the region size gets updated. 
+ */ +static int alloc_bottom_up_before_check(void) +{ + struct memblock_region *rgn = &memblock.reserved.regions[0]; + void *allocated_ptr = NULL; + phys_addr_t r1_size = SZ_512; + phys_addr_t r2_size = SZ_128; + phys_addr_t total_size = r1_size + r2_size; + + setup_memblock(); + + memblock_reserve(memblock_start_of_DRAM() + r1_size, r2_size); + + allocated_ptr = memblock_alloc(r1_size, SMP_CACHE_BYTES); + + assert(allocated_ptr); + assert(rgn->size == total_size); + assert(rgn->base == memblock_start_of_DRAM()); + + assert(memblock.reserved.cnt == 1); + assert(memblock.reserved.total_size == total_size); + + return 0; +} + +/* + * A test that tries to allocate memory when there is not enough space at + * the beginning of the previously reserved block (i.e. second fit): + * + * | +--------+--------------+ | + * | | r1 | r2 | | + * +----+--------+--------------+---------+ + * + * Expect a merge of both regions. Only the region size gets updated. + */ +static int alloc_bottom_up_after_check(void) +{ + struct memblock_region *rgn = &memblock.reserved.regions[0]; + struct region r1; + void *allocated_ptr = NULL; + + phys_addr_t r2_size = SZ_512; + phys_addr_t total_size; + + setup_memblock(); + + /* The first region starts at the aligned address to test region merging */ + r1.base = memblock_start_of_DRAM() + SMP_CACHE_BYTES; + r1.size = SZ_64; + + total_size = r1.size + r2_size; + + memblock_reserve(r1.base, r1.size); + + allocated_ptr = memblock_alloc(r2_size, SMP_CACHE_BYTES); + + assert(allocated_ptr); + assert(rgn->size == total_size); + assert(rgn->base == r1.base); + + assert(memblock.reserved.cnt == 1); + assert(memblock.reserved.total_size == total_size); + + return 0; +} + +/* + * A test that tries to allocate memory when there are two reserved regions, the + * first one starting at the beginning of the available memory, with a gap too + * small to fit the new region: + * + * |------------+ +--------+--------+ | + * | r1 | | r2 | r3 | | + * 
+------------+-----+--------+--------+--+ + * + * Expect to allocate after the second region, which starts at the higher + * address, and merge them into one. The region counter and total size fields + * get updated. + */ +static int alloc_bottom_up_second_fit_check(void) +{ + struct memblock_region *rgn = &memblock.reserved.regions[1]; + struct region r1, r2; + void *allocated_ptr = NULL; + + phys_addr_t r3_size = SZ_1K; + phys_addr_t total_size; + + setup_memblock(); + + r1.base = memblock_start_of_DRAM(); + r1.size = SZ_512; + + r2.base = r1.base + r1.size + SZ_512; + r2.size = SZ_256; + + total_size = r1.size + r2.size + r3_size; + + memblock_reserve(r1.base, r1.size); + memblock_reserve(r2.base, r2.size); + + allocated_ptr = memblock_alloc(r3_size, SMP_CACHE_BYTES); + + assert(allocated_ptr); + assert(rgn->size == r2.size + r3_size); + assert(rgn->base == r2.base); + + assert(memblock.reserved.cnt == 2); + assert(memblock.reserved.total_size == total_size); + + return 0; +} + +/* Test case wrappers */ +static int alloc_simple_check(void) +{ + memblock_set_bottom_up(false); alloc_top_down_simple_check(); + memblock_set_bottom_up(true); + alloc_bottom_up_simple_check(); + + return 0; +} + +static int alloc_disjoint_check(void) +{ + memblock_set_bottom_up(false); alloc_top_down_disjoint_check(); + memblock_set_bottom_up(true); + alloc_bottom_up_disjoint_check(); + + return 0; +} + +static int alloc_before_check(void) +{ + memblock_set_bottom_up(false); alloc_top_down_before_check(); + memblock_set_bottom_up(true); + alloc_bottom_up_before_check(); + + return 0; +} + +static int alloc_after_check(void) +{ + memblock_set_bottom_up(false); alloc_top_down_after_check(); - alloc_top_down_second_fit_check(); + memblock_set_bottom_up(true); + alloc_bottom_up_after_check(); + + return 0; +} + +static int alloc_in_between_check(void) +{ + memblock_set_bottom_up(false); + alloc_in_between_generic_check(); + memblock_set_bottom_up(true); alloc_in_between_generic_check(); + 
+	return 0;
+}
+
+static int alloc_second_fit_check(void)
+{
+	memblock_set_bottom_up(false);
+	alloc_top_down_second_fit_check();
+	memblock_set_bottom_up(true);
+	alloc_bottom_up_second_fit_check();
+
+	return 0;
+}
+
+static int alloc_small_gaps_check(void)
+{
+	memblock_set_bottom_up(false);
+	alloc_small_gaps_generic_check();
+	memblock_set_bottom_up(true);
+	alloc_small_gaps_generic_check();
+
+	return 0;
+}
+
+static int alloc_all_reserved_check(void)
+{
+	memblock_set_bottom_up(false);
+	alloc_all_reserved_generic_check();
+	memblock_set_bottom_up(true);
+	alloc_all_reserved_generic_check();
+
+	return 0;
+}
+
+static int alloc_no_space_check(void)
+{
+	memblock_set_bottom_up(false);
+	alloc_no_space_generic_check();
+	memblock_set_bottom_up(true);
+	alloc_no_space_generic_check();
+
+	return 0;
+}
+
+static int alloc_limited_space_check(void)
+{
+	memblock_set_bottom_up(false);
+	alloc_limited_space_generic_check();
+	memblock_set_bottom_up(true);
+	alloc_limited_space_generic_check();
+
+	return 0;
+}
+
+static int alloc_no_memory_check(void)
+{
+	memblock_set_bottom_up(false);
+	alloc_no_memory_generic_check();
+	memblock_set_bottom_up(true);
+	alloc_no_memory_generic_check();
+
+	return 0;
+}
+
+int memblock_alloc_checks(void)
+{
+	reset_memblock_attributes();
+	dummy_physical_memory_init();
+
+	alloc_simple_check();
+	alloc_disjoint_check();
+	alloc_before_check();
+	alloc_after_check();
+	alloc_second_fit_check();
+	alloc_small_gaps_check();
+	alloc_in_between_check();
+	alloc_all_reserved_check();
+	alloc_no_space_check();
+	alloc_limited_space_check();
+	alloc_no_memory_check();
+
 	dummy_physical_memory_cleanup();
 
 	return 0;

From patchwork Mon Feb 28 14:46:47 2022
From: Karolina Drobnik
To: linux-mm@kvack.org
Cc: rppt@kernel.org, akpm@linux-foundation.org, linux-kernel@vger.kernel.org, Karolina Drobnik
Subject: [PATCH 5/9] memblock tests: Add memblock_alloc_from tests for top down
Date: Mon, 28 Feb 2022 15:46:47 +0100
Message-Id: <3dd645f437975fd393010b95b8faa85d2b86490a.1646055639.git.karolinadrobnik@gmail.com>
Add checks for memblock_alloc_from for the default allocation direction.
The tested scenarios are:
 - Not enough space to allocate memory at the minimal address
 - Minimal address parameter is smaller than the start address of the
   available memory
 - Minimal address is too close to the available memory

Add a simple memblock_alloc_from test that can be used to test both
allocation directions (minimal address is aligned or misaligned).

Signed-off-by: Karolina Drobnik
---
 tools/testing/memblock/Makefile               |   3 +-
 tools/testing/memblock/main.c                 |   2 +
 .../memblock/tests/alloc_helpers_api.c        | 226 ++++++++++++++++++
 .../memblock/tests/alloc_helpers_api.h        |   9 +
 4 files changed, 239 insertions(+), 1 deletion(-)
 create mode 100644 tools/testing/memblock/tests/alloc_helpers_api.c
 create mode 100644 tools/testing/memblock/tests/alloc_helpers_api.h

diff --git a/tools/testing/memblock/Makefile b/tools/testing/memblock/Makefile
index 5b01cfd808d0..89e374470009 100644
--- a/tools/testing/memblock/Makefile
+++ b/tools/testing/memblock/Makefile
@@ -6,7 +6,8 @@ CFLAGS += -I.
-I../../include -Wall -O2 -fsanitize=address \ -fsanitize=undefined -D CONFIG_PHYS_ADDR_T_64BIT LDFLAGS += -fsanitize=address -fsanitize=undefined TARGETS = main -TEST_OFILES = tests/alloc_api.o tests/basic_api.o tests/common.o +TEST_OFILES = tests/alloc_helpers_api.o tests/alloc_api.o tests/basic_api.o \ + tests/common.o DEP_OFILES = memblock.o lib/slab.o mmzone.o slab.o OFILES = main.o $(DEP_OFILES) $(TEST_OFILES) EXTR_SRC = ../../../mm/memblock.c diff --git a/tools/testing/memblock/main.c b/tools/testing/memblock/main.c index e7cc45dc06d4..b63150ee554f 100644 --- a/tools/testing/memblock/main.c +++ b/tools/testing/memblock/main.c @@ -1,11 +1,13 @@ // SPDX-License-Identifier: GPL-2.0-or-later #include "tests/basic_api.h" #include "tests/alloc_api.h" +#include "tests/alloc_helpers_api.h" int main(int argc, char **argv) { memblock_basic_checks(); memblock_alloc_checks(); + memblock_alloc_helpers_checks(); return 0; } diff --git a/tools/testing/memblock/tests/alloc_helpers_api.c b/tools/testing/memblock/tests/alloc_helpers_api.c new file mode 100644 index 000000000000..dc5152adcc5b --- /dev/null +++ b/tools/testing/memblock/tests/alloc_helpers_api.c @@ -0,0 +1,226 @@ +// SPDX-License-Identifier: GPL-2.0-or-later +#include "alloc_helpers_api.h" + +/* + * A simple test that tries to allocate a memory region above a specified, + * aligned address: + * + * + + * | +-----------+ | + * | | rgn | | + * +----------+-----------+---------+ + * ^ + * | + * Aligned min_addr + * + * Expect to allocate a cleared region at the minimal memory address. 
+ */ +static int alloc_from_simple_generic_check(void) +{ + struct memblock_region *rgn = &memblock.reserved.regions[0]; + void *allocated_ptr = NULL; + char *b; + + phys_addr_t size = SZ_16; + phys_addr_t min_addr; + + setup_memblock(); + + min_addr = memblock_end_of_DRAM() - SMP_CACHE_BYTES; + + allocated_ptr = memblock_alloc_from(size, SMP_CACHE_BYTES, min_addr); + b = (char *)allocated_ptr; + + assert(allocated_ptr); + assert(*b == 0); + + assert(rgn->size == size); + assert(rgn->base == min_addr); + + assert(memblock.reserved.cnt == 1); + assert(memblock.reserved.total_size == size); + + return 0; +} + +/* + * A test that tries to allocate a memory region above a certain address. + * The minimal address here is not aligned: + * + * + + + * | + +---------+ | + * | | | rgn | | + * +------+------+---------+------------+ + * ^ ^------. + * | | + * min_addr Aligned address + * boundary + * + * Expect to allocate a cleared region at the closest aligned memory address. + */ +static int alloc_from_misaligned_generic_check(void) +{ + struct memblock_region *rgn = &memblock.reserved.regions[0]; + void *allocated_ptr = NULL; + char *b; + + phys_addr_t size = SZ_32; + phys_addr_t min_addr; + + setup_memblock(); + + /* A misaligned address */ + min_addr = memblock_end_of_DRAM() - (SMP_CACHE_BYTES * 2 - 1); + + allocated_ptr = memblock_alloc_from(size, SMP_CACHE_BYTES, min_addr); + b = (char *)allocated_ptr; + + assert(allocated_ptr); + assert(*b == 0); + + assert(rgn->size == size); + assert(rgn->base == memblock_end_of_DRAM() - SMP_CACHE_BYTES); + + assert(memblock.reserved.cnt == 1); + assert(memblock.reserved.total_size == size); + + return 0; +} + +/* + * A test that tries to allocate a memory region above an address that is too + * close to the end of the memory: + * + * + + + * | +--------+---+ | + * | | rgn + | | + * +-----------+--------+---+------+ + * ^ ^ + * | | + * | min_addr + * | + * Aligned address + * boundary + * + * Expect to prioritize granting memory 
over satisfying the minimal address
+ * requirement.
+ */
+static int alloc_from_top_down_high_addr_check(void)
+{
+	struct memblock_region *rgn = &memblock.reserved.regions[0];
+	void *allocated_ptr = NULL;
+
+	phys_addr_t size = SZ_32;
+	phys_addr_t min_addr;
+
+	setup_memblock();
+
+	/* The address is too close to the end of the memory */
+	min_addr = memblock_end_of_DRAM() - SZ_16;
+
+	allocated_ptr = memblock_alloc_from(size, SMP_CACHE_BYTES, min_addr);
+
+	assert(allocated_ptr);
+	assert(rgn->size == size);
+	assert(rgn->base == memblock_end_of_DRAM() - SMP_CACHE_BYTES);
+
+	assert(memblock.reserved.cnt == 1);
+	assert(memblock.reserved.total_size == size);
+
+	return 0;
+}
+
+/*
+ * A test that tries to allocate a memory region when there is no space
+ * available above the minimal address:
+ *
+ *                     +
+ *  |       +---------+-------------|
+ *  |       |   rgn   |             |
+ *  +-------+---------+-------------+
+ *          ^
+ *          |
+ *          min_addr
+ *
+ * Expect to prioritize granting memory over satisfying the minimal address
+ * requirement and to allocate next to the previously reserved region. The
+ * regions get merged into one.
+ */ +static int alloc_from_top_down_no_space_above_check(void) +{ + struct memblock_region *rgn = &memblock.reserved.regions[0]; + void *allocated_ptr = NULL; + + phys_addr_t r1_size = SZ_64; + phys_addr_t r2_size = SZ_2; + phys_addr_t total_size = r1_size + r2_size; + phys_addr_t min_addr; + + setup_memblock(); + + min_addr = memblock_end_of_DRAM() - SMP_CACHE_BYTES * 2; + + /* No space above this address */ + memblock_reserve(min_addr, r2_size); + + allocated_ptr = memblock_alloc_from(r1_size, SMP_CACHE_BYTES, min_addr); + + assert(allocated_ptr); + assert(rgn->base == min_addr - r1_size); + assert(rgn->size == total_size); + + assert(memblock.reserved.cnt == 1); + assert(memblock.reserved.total_size == total_size); + + return 0; +} + +/* + * A test that tries to allocate a memory region with a minimal address below + * the start address of the available memory. As the allocation is top-down, + * first reserve a region that will force allocation near the start. + * Expect successful allocation and merge of both regions. 
+ */
+static int alloc_from_top_down_min_addr_cap_check(void)
+{
+	struct memblock_region *rgn = &memblock.reserved.regions[0];
+	void *allocated_ptr = NULL;
+
+	phys_addr_t r1_size = SZ_64;
+	phys_addr_t min_addr;
+	phys_addr_t start_addr;
+
+	setup_memblock();
+
+	start_addr = (phys_addr_t)memblock_start_of_DRAM();
+	min_addr = start_addr - SMP_CACHE_BYTES * 3;
+
+	memblock_reserve(start_addr + r1_size, MEM_SIZE - r1_size);
+
+	allocated_ptr = memblock_alloc_from(r1_size, SMP_CACHE_BYTES, min_addr);
+
+	assert(allocated_ptr);
+	assert(rgn->base == start_addr);
+	assert(rgn->size == MEM_SIZE);
+
+	assert(memblock.reserved.cnt == 1);
+	assert(memblock.reserved.total_size == MEM_SIZE);
+
+	return 0;
+}
+
+int memblock_alloc_helpers_checks(void)
+{
+	reset_memblock_attributes();
+	dummy_physical_memory_init();
+
+	alloc_from_simple_generic_check();
+	alloc_from_misaligned_generic_check();
+	alloc_from_top_down_high_addr_check();
+	alloc_from_top_down_min_addr_cap_check();
+	alloc_from_top_down_no_space_above_check();
+
+	dummy_physical_memory_cleanup();
+
+	return 0;
+}
diff --git a/tools/testing/memblock/tests/alloc_helpers_api.h b/tools/testing/memblock/tests/alloc_helpers_api.h
new file mode 100644
index 000000000000..c9e4827b1623
--- /dev/null
+++ b/tools/testing/memblock/tests/alloc_helpers_api.h
@@ -0,0 +1,9 @@
+/* SPDX-License-Identifier: GPL-2.0-or-later */
+#ifndef _MEMBLOCK_ALLOC_HELPERS_H
+#define _MEMBLOCK_ALLOC_HELPERS_H
+
+#include "common.h"
+
+int memblock_alloc_helpers_checks(void);
+
+#endif

From patchwork Mon Feb 28 14:46:48 2022
From: Karolina Drobnik
To: linux-mm@kvack.org
Cc: rppt@kernel.org, akpm@linux-foundation.org, linux-kernel@vger.kernel.org
Subject: [PATCH 6/9] memblock tests: Add memblock_alloc_from tests for bottom up
Date: Mon, 28 Feb 2022 15:46:48 +0100
Message-Id: <506cf5293c8a21c012b7ea87b14af07754d3e656.1646055639.git.karolinadrobnik@gmail.com>
Add checks for memblock_alloc_from for bottom up allocation direction.
The tested scenarios are:
 - Not enough space to allocate memory at the minimal address
 - Minimal address parameter is smaller than the start address of
   the available memory
 - Minimal address parameter is too close to the end of the available
   memory

Add test case wrappers to test both directions in the same context.

Signed-off-by: Karolina Drobnik
---
 .../memblock/tests/alloc_helpers_api.c        | 175 +++++++++++++++++-
 1 file changed, 171 insertions(+), 4 deletions(-)
--
2.30.2

diff --git a/tools/testing/memblock/tests/alloc_helpers_api.c b/tools/testing/memblock/tests/alloc_helpers_api.c
index dc5152adcc5b..963a966db461 100644
--- a/tools/testing/memblock/tests/alloc_helpers_api.c
+++ b/tools/testing/memblock/tests/alloc_helpers_api.c
@@ -209,16 +209,183 @@ static int alloc_from_top_down_min_addr_cap_check(void)
 	return 0;
 }
 
-int memblock_alloc_helpers_checks(void)
+/*
+ * A test that tries to allocate a memory region above an address that is too
+ * close to the end of the memory:
+ *
+ *                               +
+ *  |-----------+               +     |
+ *  |    rgn    |               |     |
+ *  +-----------+---------------+-----+
+ *  ^                           ^
+ *  |                           |
+ *  Aligned address             min_addr
+ *  boundary
+ *
+ * Expect to prioritize granting memory over satisfying the minimal address
+ * requirement. Allocation happens at the beginning of the available memory.
+ */
+static int alloc_from_bottom_up_high_addr_check(void)
 {
-	reset_memblock_attributes();
-	dummy_physical_memory_init();
+	struct memblock_region *rgn = &memblock.reserved.regions[0];
+	void *allocated_ptr = NULL;
+
+	phys_addr_t size = SZ_32;
+	phys_addr_t min_addr;
+
+	setup_memblock();
+
+	/* The address is too close to the end of the memory */
+	min_addr = memblock_end_of_DRAM() - SZ_8;
+
+	allocated_ptr = memblock_alloc_from(size, SMP_CACHE_BYTES, min_addr);
+
+	assert(allocated_ptr);
+	assert(rgn->size == size);
+	assert(rgn->base == memblock_start_of_DRAM());
+
+	assert(memblock.reserved.cnt == 1);
+	assert(memblock.reserved.total_size == size);
+
+	return 0;
+}
+
+/*
+ * A test that tries to allocate a memory region when there is no space
+ * available above the minimal address:
+ *
+ *                  +
+ *  |-----------+   +-------------------|
+ *  |    rgn    |   |                   |
+ *  +-----------+---+-------------------+
+ *                  ^
+ *                  |
+ *                  min_addr
+ *
+ * Expect to prioritize granting memory over satisfying the minimal address
+ * requirement and to allocate at the beginning of the available memory.
+ */
+static int alloc_from_bottom_up_no_space_above_check(void)
+{
+	struct memblock_region *rgn = &memblock.reserved.regions[0];
+	void *allocated_ptr = NULL;
+
+	phys_addr_t r1_size = SZ_64;
+	phys_addr_t min_addr;
+	phys_addr_t r2_size;
+
+	setup_memblock();
+
+	min_addr = memblock_start_of_DRAM() + SZ_128;
+	r2_size = memblock_end_of_DRAM() - min_addr;
+
+	/* No space above this address */
+	memblock_reserve(min_addr - SMP_CACHE_BYTES, r2_size);
+
+	allocated_ptr = memblock_alloc_from(r1_size, SMP_CACHE_BYTES, min_addr);
+
+	assert(allocated_ptr);
+	assert(rgn->base == memblock_start_of_DRAM());
+	assert(rgn->size == r1_size);
+
+	assert(memblock.reserved.cnt == 2);
+	assert(memblock.reserved.total_size == r1_size + r2_size);
+
+	return 0;
+}
+
+/*
+ * A test that tries to allocate a memory region with a minimal address below
+ * the start address of the available memory. Expect to allocate a region
+ * at the beginning of the available memory.
+ */
+static int alloc_from_bottom_up_min_addr_cap_check(void)
+{
+	struct memblock_region *rgn = &memblock.reserved.regions[0];
+	void *allocated_ptr = NULL;
+
+	phys_addr_t r1_size = SZ_64;
+	phys_addr_t min_addr;
+	phys_addr_t start_addr;
+
+	setup_memblock();
+
+	start_addr = (phys_addr_t)memblock_start_of_DRAM();
+	min_addr = start_addr - SMP_CACHE_BYTES * 3;
+
+	allocated_ptr = memblock_alloc_from(r1_size, SMP_CACHE_BYTES, min_addr);
+
+	assert(allocated_ptr);
+	assert(rgn->base == start_addr);
+	assert(rgn->size == r1_size);
+
+	assert(memblock.reserved.cnt == 1);
+	assert(memblock.reserved.total_size == r1_size);
+
+	return 0;
+}
+
+/* Test case wrappers */
+static int alloc_from_simple_check(void)
+{
+	memblock_set_bottom_up(false);
+	alloc_from_simple_generic_check();
+	memblock_set_bottom_up(true);
 	alloc_from_simple_generic_check();
+
+	return 0;
+}
+
+static int alloc_from_misaligned_check(void)
+{
+	memblock_set_bottom_up(false);
 	alloc_from_misaligned_generic_check();
+	memblock_set_bottom_up(true);
+	alloc_from_misaligned_generic_check();
+
+	return 0;
+}
+
+static int alloc_from_high_addr_check(void)
+{
+	memblock_set_bottom_up(false);
 	alloc_from_top_down_high_addr_check();
-	alloc_from_top_down_min_addr_cap_check();
+	memblock_set_bottom_up(true);
+	alloc_from_bottom_up_high_addr_check();
+
+	return 0;
+}
+
+static int alloc_from_no_space_above_check(void)
+{
+	memblock_set_bottom_up(false);
 	alloc_from_top_down_no_space_above_check();
+	memblock_set_bottom_up(true);
+	alloc_from_bottom_up_no_space_above_check();
+
+	return 0;
+}
+
+static int alloc_from_min_addr_cap_check(void)
+{
+	memblock_set_bottom_up(false);
+	alloc_from_top_down_min_addr_cap_check();
+	memblock_set_bottom_up(true);
+	alloc_from_bottom_up_min_addr_cap_check();
+
+	return 0;
+}
+
+int memblock_alloc_helpers_checks(void)
+{
+	reset_memblock_attributes();
+	dummy_physical_memory_init();
+
+	alloc_from_simple_check();
+	alloc_from_misaligned_check();
+	alloc_from_high_addr_check();
+	alloc_from_no_space_above_check();
+	alloc_from_min_addr_cap_check();
 
 	dummy_physical_memory_cleanup();

From patchwork Mon Feb 28 14:46:49 2022
X-Patchwork-Submitter: Karolina Drobnik
X-Patchwork-Id: 12763436
From: Karolina Drobnik
To: linux-mm@kvack.org
Cc: rppt@kernel.org, akpm@linux-foundation.org, linux-kernel@vger.kernel.org
Subject: [PATCH 7/9] memblock tests: Add memblock_alloc_try_nid tests for top down
Date: Mon, 28 Feb 2022 15:46:49 +0100

Add tests for memblock_alloc_try_nid for top down allocation direction.
As the definition of this function is pretty close to the core
memblock_alloc_range_nid, the test cases implemented here cover most of
the code paths related to the memory allocations.
The tested scenarios are:
 - Region can be allocated within the requested range (both with aligned
   and misaligned boundaries)
 - Region can be allocated between two already existing entries
 - Not enough space between already reserved regions
 - Memory range is too narrow but memory can be allocated before
   the maximum address
 - Edge cases:
   + Minimum address is below memblock_start_of_DRAM()
   + Maximum address is above memblock_end_of_DRAM()

Add checks for both allocation directions:
 - Region starts at the min_addr and ends at max_addr
 - Maximum address is too close to the beginning of the available memory
 - Memory at the range boundaries is reserved but there is enough space
   to allocate a new region

Signed-off-by: Karolina Drobnik
---
 tools/testing/memblock/Makefile              |   4 +-
 tools/testing/memblock/main.c                |   2 +
 tools/testing/memblock/tests/alloc_nid_api.c | 679 +++++++++++++++++++
 tools/testing/memblock/tests/alloc_nid_api.h |   9 +
 4 files changed, 692 insertions(+), 2 deletions(-)
 create mode 100644 tools/testing/memblock/tests/alloc_nid_api.c
 create mode 100644 tools/testing/memblock/tests/alloc_nid_api.h
--
2.30.2

diff --git a/tools/testing/memblock/Makefile b/tools/testing/memblock/Makefile
index 89e374470009..a698e24b35e7 100644
--- a/tools/testing/memblock/Makefile
+++ b/tools/testing/memblock/Makefile
@@ -6,8 +6,8 @@ CFLAGS += -I. -I../../include -Wall -O2 -fsanitize=address \
 	-fsanitize=undefined -D CONFIG_PHYS_ADDR_T_64BIT
 LDFLAGS += -fsanitize=address -fsanitize=undefined
 TARGETS = main
-TEST_OFILES = tests/alloc_helpers_api.o tests/alloc_api.o tests/basic_api.o \
-	      tests/common.o
+TEST_OFILES = tests/alloc_nid_api.o tests/alloc_helpers_api.o tests/alloc_api.o \
+	      tests/basic_api.o tests/common.o
 DEP_OFILES = memblock.o lib/slab.o mmzone.o slab.o
 OFILES = main.o $(DEP_OFILES) $(TEST_OFILES)
 EXTR_SRC = ../../../mm/memblock.c

diff --git a/tools/testing/memblock/main.c b/tools/testing/memblock/main.c
index b63150ee554f..fb183c9e76d1 100644
--- a/tools/testing/memblock/main.c
+++ b/tools/testing/memblock/main.c
@@ -2,12 +2,14 @@
 #include "tests/basic_api.h"
 #include "tests/alloc_api.h"
 #include "tests/alloc_helpers_api.h"
+#include "tests/alloc_nid_api.h"
 
 int main(int argc, char **argv)
 {
 	memblock_basic_checks();
 	memblock_alloc_checks();
 	memblock_alloc_helpers_checks();
+	memblock_alloc_nid_checks();
 
 	return 0;
 }

diff --git a/tools/testing/memblock/tests/alloc_nid_api.c b/tools/testing/memblock/tests/alloc_nid_api.c
new file mode 100644
index 000000000000..75cfca47c703
--- /dev/null
+++ b/tools/testing/memblock/tests/alloc_nid_api.c
@@ -0,0 +1,679 @@
+// SPDX-License-Identifier: GPL-2.0-or-later
+#include "alloc_nid_api.h"
+
+/*
+ * A simple test that tries to allocate a memory region within min_addr and
+ * max_addr range:
+ *
+ *        +                   +
+ *  |     +       +-----------+      |
+ *  |     |       |    rgn    |      |
+ *  +-----+-------+-----------+------+
+ *        ^                   ^
+ *        |                   |
+ *        min_addr            max_addr
+ *
+ * Expect to allocate a cleared region that ends at max_addr.
+ */
+static int alloc_try_nid_top_down_simple_check(void)
+{
+	struct memblock_region *rgn = &memblock.reserved.regions[0];
+	void *allocated_ptr = NULL;
+	char *b;
+
+	phys_addr_t size = SZ_128;
+	phys_addr_t min_addr;
+	phys_addr_t max_addr;
+	phys_addr_t rgn_end;
+
+	setup_memblock();
+
+	min_addr = memblock_start_of_DRAM() + SMP_CACHE_BYTES * 2;
+	max_addr = min_addr + SZ_512;
+
+	allocated_ptr = memblock_alloc_try_nid(size, SMP_CACHE_BYTES,
+					       min_addr, max_addr, NUMA_NO_NODE);
+	b = (char *)allocated_ptr;
+	rgn_end = rgn->base + rgn->size;
+
+	assert(allocated_ptr);
+	assert(*b == 0);
+
+	assert(rgn->size == size);
+	assert(rgn->base == max_addr - size);
+	assert(rgn_end == max_addr);
+
+	assert(memblock.reserved.cnt == 1);
+	assert(memblock.reserved.total_size == size);
+
+	return 0;
+}
+
+/*
+ * A simple test that tries to allocate a memory region within min_addr and
+ * max_addr range, where the end address is misaligned:
+ *
+ *         +       +              +
+ *  |      +       +---------+    +    |
+ *  |      |       |   rgn   |    |    |
+ *  +------+-------+---------+----+----+
+ *         ^                 ^    ^
+ *         |                 |    |
+ *         min_addr          |    max_addr
+ *                           |
+ *                           Aligned address
+ *                           boundary
+ *
+ * Expect to allocate a cleared, aligned region that ends before max_addr.
+ */
+static int alloc_try_nid_top_down_end_misaligned_check(void)
+{
+	struct memblock_region *rgn = &memblock.reserved.regions[0];
+	void *allocated_ptr = NULL;
+	char *b;
+
+	phys_addr_t size = SZ_128;
+	phys_addr_t misalign = SZ_2;
+	phys_addr_t min_addr;
+	phys_addr_t max_addr;
+	phys_addr_t rgn_end;
+
+	setup_memblock();
+
+	min_addr = memblock_start_of_DRAM() + SMP_CACHE_BYTES * 2;
+	max_addr = min_addr + SZ_512 + misalign;
+
+	allocated_ptr = memblock_alloc_try_nid(size, SMP_CACHE_BYTES,
+					       min_addr, max_addr, NUMA_NO_NODE);
+	b = (char *)allocated_ptr;
+	rgn_end = rgn->base + rgn->size;
+
+	assert(allocated_ptr);
+	assert(*b == 0);
+
+	assert(rgn->size == size);
+	assert(rgn->base == max_addr - size - misalign);
+	assert(rgn_end < max_addr);
+
+	assert(memblock.reserved.cnt == 1);
+	assert(memblock.reserved.total_size == size);
+
+	return 0;
+}
+
+/*
+ * A simple test that tries to allocate a memory region, which spans over the
+ * min_addr and max_addr range:
+ *
+ *        +               +
+ *  |     +---------------+     |
+ *  |     |      rgn      |     |
+ *  +-----+---------------+-----+
+ *        ^               ^
+ *        |               |
+ *        min_addr        max_addr
+ *
+ * Expect to allocate a cleared region that starts at min_addr and ends at
+ * max_addr, given that min_addr is aligned.
+ */
+static int alloc_try_nid_exact_address_generic_check(void)
+{
+	struct memblock_region *rgn = &memblock.reserved.regions[0];
+	void *allocated_ptr = NULL;
+	char *b;
+
+	phys_addr_t size = SZ_1K;
+	phys_addr_t min_addr;
+	phys_addr_t max_addr;
+	phys_addr_t rgn_end;
+
+	setup_memblock();
+
+	min_addr = memblock_start_of_DRAM() + SMP_CACHE_BYTES;
+	max_addr = min_addr + size;
+
+	allocated_ptr = memblock_alloc_try_nid(size, SMP_CACHE_BYTES,
+					       min_addr, max_addr, NUMA_NO_NODE);
+	b = (char *)allocated_ptr;
+	rgn_end = rgn->base + rgn->size;
+
+	assert(allocated_ptr);
+	assert(*b == 0);
+
+	assert(rgn->size == size);
+	assert(rgn->base == min_addr);
+	assert(rgn_end == max_addr);
+
+	assert(memblock.reserved.cnt == 1);
+	assert(memblock.reserved.total_size == size);
+
+	return 0;
+}
+
+/*
+ * A test that tries to allocate a memory region, which can't fit into
+ * min_addr and max_addr range:
+ *
+ *           +          +     +
+ *  |        +----------+-----+    |
+ *  |        |    rgn   +     |    |
+ *  +--------+----------+-----+----+
+ *           ^          ^     ^
+ *           |          |     |
+ *           Aligned    |     max_addr
+ *           address    |
+ *           boundary   min_addr
+ *
+ * Expect to drop the lower limit and allocate a cleared memory region which
+ * ends at max_addr (if the address is aligned).
+ */
+static int alloc_try_nid_top_down_narrow_range_check(void)
+{
+	struct memblock_region *rgn = &memblock.reserved.regions[0];
+	void *allocated_ptr = NULL;
+	char *b;
+
+	phys_addr_t size = SZ_256;
+	phys_addr_t min_addr;
+	phys_addr_t max_addr;
+
+	setup_memblock();
+
+	min_addr = memblock_start_of_DRAM() + SZ_512;
+	max_addr = min_addr + SMP_CACHE_BYTES;
+
+	allocated_ptr = memblock_alloc_try_nid(size, SMP_CACHE_BYTES,
+					       min_addr, max_addr, NUMA_NO_NODE);
+	b = (char *)allocated_ptr;
+
+	assert(allocated_ptr);
+	assert(*b == 0);
+
+	assert(rgn->size == size);
+	assert(rgn->base == max_addr - size);
+
+	assert(memblock.reserved.cnt == 1);
+	assert(memblock.reserved.total_size == size);
+
+	return 0;
+}
+
+/*
+ * A test that tries to allocate a memory region, which can't fit into
+ * min_addr and max_addr range, with the latter being too close to the
+ * beginning of the available memory:
+ *
+ *          +-------------+
+ *          |     new     |
+ *          +-------------+
+ *      +       +
+ *  |   +       |
+ *  |   |       |
+ *  +---+-------+--------------+
+ *      ^       ^
+ *      |       |
+ *      |       max_addr
+ *      |
+ *      min_addr
+ *
+ * Expect no allocation to happen.
+ */
+static int alloc_try_nid_low_max_generic_check(void)
+{
+	void *allocated_ptr = NULL;
+
+	phys_addr_t size = SZ_1K;
+	phys_addr_t min_addr;
+	phys_addr_t max_addr;
+
+	setup_memblock();
+
+	min_addr = memblock_start_of_DRAM();
+	max_addr = min_addr + SMP_CACHE_BYTES;
+
+	allocated_ptr = memblock_alloc_try_nid(size, SMP_CACHE_BYTES,
+					       min_addr, max_addr, NUMA_NO_NODE);
+
+	assert(!allocated_ptr);
+
+	return 0;
+}
+
+/*
+ * A test that tries to allocate a memory region within min_addr and max_addr
+ * range, with min_addr being so close that it's next to an allocated region:
+ *
+ *         +                        +
+ *  |      +--------+---------------|
+ *  |      |   r1   |      rgn      |
+ *  +------+--------+---------------+
+ *         ^                        ^
+ *         |                        |
+ *         min_addr                 max_addr
+ *
+ * Expect a merge of both regions. Only the region size gets updated.
+ */
+static int alloc_try_nid_min_reserved_generic_check(void)
+{
+	struct memblock_region *rgn = &memblock.reserved.regions[0];
+	void *allocated_ptr = NULL;
+	char *b;
+
+	phys_addr_t r1_size = SZ_128;
+	phys_addr_t r2_size = SZ_64;
+	phys_addr_t total_size = r1_size + r2_size;
+	phys_addr_t min_addr;
+	phys_addr_t max_addr;
+	phys_addr_t reserved_base;
+
+	setup_memblock();
+
+	max_addr = memblock_end_of_DRAM();
+	min_addr = max_addr - r2_size;
+	reserved_base = min_addr - r1_size;
+
+	memblock_reserve(reserved_base, r1_size);
+
+	allocated_ptr = memblock_alloc_try_nid(r2_size, SMP_CACHE_BYTES,
+					       min_addr, max_addr, NUMA_NO_NODE);
+	b = (char *)allocated_ptr;
+
+	assert(allocated_ptr);
+	assert(*b == 0);
+
+	assert(rgn->size == total_size);
+	assert(rgn->base == reserved_base);
+
+	assert(memblock.reserved.cnt == 1);
+	assert(memblock.reserved.total_size == total_size);
+
+	return 0;
+}
+
+/*
+ * A test that tries to allocate a memory region within min_addr and max_addr,
+ * with max_addr being so close that it's next to an allocated region:
+ *
+ *            +                      +
+ *  |         +-------------+--------|
+ *  |         |     rgn     |   r1   |
+ *  +---------+-------------+--------+
+ *            ^                      ^
+ *            |                      |
+ *            min_addr               max_addr
+ *
+ * Expect a merge of regions. Only the region size gets updated.
+ */
+static int alloc_try_nid_max_reserved_generic_check(void)
+{
+	struct memblock_region *rgn = &memblock.reserved.regions[0];
+	void *allocated_ptr = NULL;
+	char *b;
+
+	phys_addr_t r1_size = SZ_64;
+	phys_addr_t r2_size = SZ_128;
+	phys_addr_t total_size = r1_size + r2_size;
+	phys_addr_t min_addr;
+	phys_addr_t max_addr;
+
+	setup_memblock();
+
+	max_addr = memblock_end_of_DRAM() - r1_size;
+	min_addr = max_addr - r2_size;
+
+	memblock_reserve(max_addr, r1_size);
+
+	allocated_ptr = memblock_alloc_try_nid(r2_size, SMP_CACHE_BYTES,
+					       min_addr, max_addr, NUMA_NO_NODE);
+	b = (char *)allocated_ptr;
+
+	assert(allocated_ptr);
+	assert(*b == 0);
+
+	assert(rgn->size == total_size);
+	assert(rgn->base == min_addr);
+
+	assert(memblock.reserved.cnt == 1);
+	assert(memblock.reserved.total_size == total_size);
+
+	return 0;
+}
+
+/*
+ * A test that tries to allocate memory within min_addr and max_addr range,
+ * when there are two reserved regions at the borders, with a gap big enough
+ * to fit a new region:
+ *
+ *                +                           +
+ *  |    +--------+       +-------+------+    |
+ *  |    |   r2   |       |  rgn  |  r1  |    |
+ *  +----+--------+---+---+-------+------+----+
+ *                ^                           ^
+ *                |                           |
+ *                min_addr                    max_addr
+ *
+ * Expect to merge the new region with r1. The second region does not get
+ * updated. The total size field gets updated.
+ */
+static int alloc_try_nid_top_down_reserved_with_space_check(void)
+{
+	struct memblock_region *rgn1 = &memblock.reserved.regions[1];
+	struct memblock_region *rgn2 = &memblock.reserved.regions[0];
+	void *allocated_ptr = NULL;
+	char *b;
+	struct region r1, r2;
+
+	phys_addr_t r3_size = SZ_64;
+	phys_addr_t gap_size = SMP_CACHE_BYTES;
+	phys_addr_t total_size;
+	phys_addr_t max_addr;
+	phys_addr_t min_addr;
+
+	setup_memblock();
+
+	r1.base = memblock_end_of_DRAM() - SMP_CACHE_BYTES * 2;
+	r1.size = SMP_CACHE_BYTES;
+
+	r2.size = SZ_128;
+	r2.base = r1.base - (r3_size + gap_size + r2.size);
+
+	total_size = r1.size + r2.size + r3_size;
+	min_addr = r2.base + r2.size;
+	max_addr = r1.base;
+
+	memblock_reserve(r1.base, r1.size);
+	memblock_reserve(r2.base, r2.size);
+
+	allocated_ptr = memblock_alloc_try_nid(r3_size, SMP_CACHE_BYTES,
+					       min_addr, max_addr, NUMA_NO_NODE);
+	b = (char *)allocated_ptr;
+
+	assert(allocated_ptr);
+	assert(*b == 0);
+
+	assert(rgn1->size == r1.size + r3_size);
+	assert(rgn1->base == max_addr - r3_size);
+
+	assert(rgn2->size == r2.size);
+	assert(rgn2->base == r2.base);
+
+	assert(memblock.reserved.cnt == 2);
+	assert(memblock.reserved.total_size == total_size);
+
+	return 0;
+}
+
+/*
+ * A test that tries to allocate memory within min_addr and max_addr range,
+ * when there are two reserved regions at the borders, with a gap of a size
+ * equal to the size of the new region:
+ *
+ *           +                          +
+ *  |        +--------+--------+--------+    |
+ *  |        |   r2   |   r3   |   r1   |    |
+ *  +--------+--------+--------+--------+----+
+ *           ^                          ^
+ *           |                          |
+ *           min_addr                   max_addr
+ *
+ * Expect to merge all of the regions into one. The region counter and total
+ * size fields get updated.
+ */
+static int alloc_try_nid_reserved_full_merge_generic_check(void)
+{
+	struct memblock_region *rgn = &memblock.reserved.regions[0];
+	void *allocated_ptr = NULL;
+	char *b;
+	struct region r1, r2;
+
+	phys_addr_t r3_size = SZ_64;
+	phys_addr_t total_size;
+	phys_addr_t max_addr;
+	phys_addr_t min_addr;
+
+	setup_memblock();
+
+	r1.base = memblock_end_of_DRAM() - SMP_CACHE_BYTES * 2;
+	r1.size = SMP_CACHE_BYTES;
+
+	r2.size = SZ_128;
+	r2.base = r1.base - (r3_size + r2.size);
+
+	total_size = r1.size + r2.size + r3_size;
+	min_addr = r2.base + r2.size;
+	max_addr = r1.base;
+
+	memblock_reserve(r1.base, r1.size);
+	memblock_reserve(r2.base, r2.size);
+
+	allocated_ptr = memblock_alloc_try_nid(r3_size, SMP_CACHE_BYTES,
+					       min_addr, max_addr, NUMA_NO_NODE);
+	b = (char *)allocated_ptr;
+
+	assert(allocated_ptr);
+	assert(*b == 0);
+
+	assert(rgn->size == total_size);
+	assert(rgn->base == r2.base);
+
+	assert(memblock.reserved.cnt == 1);
+	assert(memblock.reserved.total_size == total_size);
+
+	return 0;
+}
+
+/*
+ * A test that tries to allocate memory within min_addr and max_addr range,
+ * when there are two reserved regions at the borders, with a gap that can't
+ * fit a new region:
+ *
+ *                       +           +
+ *  |  +----------+------+    +------+   |
+ *  |  |    r3    |  r2  |    |  r1  |   |
+ *  +--+----------+------+----+------+---+
+ *                       ^    ^
+ *                       |    |
+ *                       |    max_addr
+ *                       |
+ *                       min_addr
+ *
+ * Expect to merge the new region with r2. The second region does not get
+ * updated. The total size counter gets updated.
+ */
+static int alloc_try_nid_top_down_reserved_no_space_check(void)
+{
+	struct memblock_region *rgn1 = &memblock.reserved.regions[1];
+	struct memblock_region *rgn2 = &memblock.reserved.regions[0];
+	void *allocated_ptr = NULL;
+	char *b;
+	struct region r1, r2;
+
+	phys_addr_t r3_size = SZ_256;
+	phys_addr_t gap_size = SMP_CACHE_BYTES;
+	phys_addr_t total_size;
+	phys_addr_t max_addr;
+	phys_addr_t min_addr;
+
+	setup_memblock();
+
+	r1.base = memblock_end_of_DRAM() - SMP_CACHE_BYTES * 2;
+	r1.size = SMP_CACHE_BYTES;
+
+	r2.size = SZ_128;
+	r2.base = r1.base - (r2.size + gap_size);
+
+	total_size = r1.size + r2.size + r3_size;
+	min_addr = r2.base + r2.size;
+	max_addr = r1.base;
+
+	memblock_reserve(r1.base, r1.size);
+	memblock_reserve(r2.base, r2.size);
+
+	allocated_ptr = memblock_alloc_try_nid(r3_size, SMP_CACHE_BYTES,
+					       min_addr, max_addr, NUMA_NO_NODE);
+	b = (char *)allocated_ptr;
+
+	assert(allocated_ptr);
+	assert(*b == 0);
+
+	assert(rgn1->size == r1.size);
+	assert(rgn1->base == r1.base);
+
+	assert(rgn2->size == r2.size + r3_size);
+	assert(rgn2->base == r2.base - r3_size);
+
+	assert(memblock.reserved.cnt == 2);
+	assert(memblock.reserved.total_size == total_size);
+
+	return 0;
+}
+
+/*
+ * A test that tries to allocate memory within min_addr and max_addr range,
+ * but it's too narrow and everything else is reserved:
+ *
+ *                +-----------+
+ *                |    new    |
+ *                +-----------+
+ *                 +     +
+ *  |--------------+     +----------|
+ *  |      r2      |     |    r1    |
+ *  +--------------+-----+----------+
+ *                 ^     ^
+ *                 |     |
+ *                 |     max_addr
+ *                 |
+ *                 min_addr
+ *
+ * Expect no allocation to happen.
+ */
+static int alloc_try_nid_reserved_all_generic_check(void)
+{
+	void *allocated_ptr = NULL;
+	struct region r1, r2;
+
+	phys_addr_t r3_size = SZ_256;
+	phys_addr_t gap_size = SMP_CACHE_BYTES;
+	phys_addr_t max_addr;
+	phys_addr_t min_addr;
+
+	setup_memblock();
+
+	r1.base = memblock_end_of_DRAM() - SMP_CACHE_BYTES;
+	r1.size = SMP_CACHE_BYTES;
+
+	r2.size = MEM_SIZE - (r1.size + gap_size);
+	r2.base = memblock_start_of_DRAM();
+
+	min_addr = r2.base + r2.size;
+	max_addr = r1.base;
+
+	memblock_reserve(r1.base, r1.size);
+	memblock_reserve(r2.base, r2.size);
+
+	allocated_ptr = memblock_alloc_try_nid(r3_size, SMP_CACHE_BYTES,
+					       min_addr, max_addr, NUMA_NO_NODE);
+
+	assert(!allocated_ptr);
+
+	return 0;
+}
+
+/*
+ * A test that tries to allocate a memory region, where max_addr is
+ * bigger than the end address of the available memory. Expect to allocate
+ * a cleared region that ends at the end of the memory.
+ */
+static int alloc_try_nid_top_down_cap_max_check(void)
+{
+	struct memblock_region *rgn = &memblock.reserved.regions[0];
+	void *allocated_ptr = NULL;
+	char *b;
+
+	phys_addr_t size = SZ_256;
+	phys_addr_t min_addr;
+	phys_addr_t max_addr;
+
+	setup_memblock();
+
+	min_addr = memblock_end_of_DRAM() - SZ_1K;
+	max_addr = memblock_end_of_DRAM() + SZ_256;
+
+	allocated_ptr = memblock_alloc_try_nid(size, SMP_CACHE_BYTES,
+					       min_addr, max_addr, NUMA_NO_NODE);
+	b = (char *)allocated_ptr;
+
+	assert(allocated_ptr);
+	assert(*b == 0);
+
+	assert(rgn->size == size);
+	assert(rgn->base == memblock_end_of_DRAM() - size);
+
+	assert(memblock.reserved.cnt == 1);
+	assert(memblock.reserved.total_size == size);
+
+	return 0;
+}
+
+/*
+ * A test that tries to allocate a memory region, where min_addr is
+ * smaller than the start address of the available memory. Expect to allocate
+ * a cleared region that ends at the end of the memory.
+ */ +static int alloc_try_nid_top_down_cap_min_check(void) +{ + struct memblock_region *rgn = &memblock.reserved.regions[0]; + void *allocated_ptr = NULL; + char *b; + + phys_addr_t size = SZ_1K; + phys_addr_t min_addr; + phys_addr_t max_addr; + + setup_memblock(); + + min_addr = memblock_start_of_DRAM() - SZ_256; + max_addr = memblock_end_of_DRAM(); + + allocated_ptr = memblock_alloc_try_nid(size, SMP_CACHE_BYTES, + min_addr, max_addr, NUMA_NO_NODE); + b = (char *)allocated_ptr; + + assert(allocated_ptr); + assert(*b == 0); + + assert(rgn->size == size); + assert(rgn->base == memblock_end_of_DRAM() - size); + + assert(memblock.reserved.cnt == 1); + assert(memblock.reserved.total_size == size); + + return 0; +} + +int memblock_alloc_nid_checks(void) +{ + reset_memblock_attributes(); + dummy_physical_memory_init(); + + alloc_try_nid_top_down_simple_check(); + alloc_try_nid_top_down_end_misaligned_check(); + alloc_try_nid_top_down_narrow_range_check(); + alloc_try_nid_top_down_reserved_with_space_check(); + alloc_try_nid_top_down_reserved_no_space_check(); + alloc_try_nid_top_down_cap_min_check(); + alloc_try_nid_top_down_cap_max_check(); + + alloc_try_nid_min_reserved_generic_check(); + alloc_try_nid_max_reserved_generic_check(); + alloc_try_nid_exact_address_generic_check(); + alloc_try_nid_reserved_full_merge_generic_check(); + alloc_try_nid_reserved_all_generic_check(); + alloc_try_nid_low_max_generic_check(); + + dummy_physical_memory_cleanup(); + + return 0; +} diff --git a/tools/testing/memblock/tests/alloc_nid_api.h b/tools/testing/memblock/tests/alloc_nid_api.h new file mode 100644 index 000000000000..b35cf3c3f489 --- /dev/null +++ b/tools/testing/memblock/tests/alloc_nid_api.h @@ -0,0 +1,9 @@ +/* SPDX-License-Identifier: GPL-2.0-or-later */ +#ifndef _MEMBLOCK_ALLOC_NID_H +#define _MEMBLOCK_ALLOC_NID_H + +#include "common.h" + +int memblock_alloc_nid_checks(void); + +#endif From patchwork Mon Feb 28 14:46:50 2022 Content-Type: text/plain; charset="utf-8" 
From: Karolina Drobnik <karolinadrobnik@gmail.com>
To: linux-mm@kvack.org
Cc: rppt@kernel.org, akpm@linux-foundation.org, linux-kernel@vger.kernel.org
Subject: [PATCH 8/9] memblock tests: Add memblock_alloc_try_nid tests for bottom up
Date: Mon, 28 Feb 2022 15:46:50 +0100
Message-Id: <1c0ba11b8da5dc8f71ad45175c536fa4be720984.1646055639.git.karolinadrobnik@gmail.com>

Add checks for memblock_alloc_try_nid for the bottom-up allocation direction.
As the definition of this function is pretty close to the core
memblock_alloc_range_nid(), the test cases implemented here cover most of the
code paths related to the memory allocations.
The tested scenarios are:
 - Region can be allocated within the requested range (both with aligned and
   misaligned boundaries)
 - Region can be allocated between two already existing entries
 - Not enough space between already reserved regions
 - Memory at the range boundaries is reserved but there is enough space to
   allocate a new region
 - The memory range is too narrow but memory can be allocated before the
   maximum address
 - Edge cases:
   + Minimum address is below memblock_start_of_DRAM()
   + Maximum address is above memblock_end_of_DRAM()

Add test case wrappers to test both directions in the same context.

Signed-off-by: Karolina Drobnik <karolinadrobnik@gmail.com>
---
 tools/testing/memblock/tests/alloc_nid_api.c | 496 ++++++++++++++++++-
 1 file changed, 492 insertions(+), 4 deletions(-)

diff --git a/tools/testing/memblock/tests/alloc_nid_api.c b/tools/testing/memblock/tests/alloc_nid_api.c
index 75cfca47c703..03216efe3488 100644
--- a/tools/testing/memblock/tests/alloc_nid_api.c
+++ b/tools/testing/memblock/tests/alloc_nid_api.c
@@ -653,26 +653,514 @@ static int alloc_try_nid_top_down_cap_min_check(void)
 	return 0;
 }
 
-int memblock_alloc_nid_checks(void)
+/*
+ * A simple test that tries to allocate a memory region within min_addr and
+ * max_addr range:
+ *
+ *      +                       +
+ *  |   +   +-----------+       +     |
+ *  |   |   |    rgn    |       |     |
+ *  +---+---+-----------+-------+-----+
+ *      ^                       ^
+ *      |                       |
+ *      min_addr                max_addr
+ *
+ * Expect to allocate a cleared region that ends before max_addr.
+ */
+static int alloc_try_nid_bottom_up_simple_check(void)
 {
-	reset_memblock_attributes();
-	dummy_physical_memory_init();
+	struct memblock_region *rgn = &memblock.reserved.regions[0];
+	void *allocated_ptr = NULL;
+	char *b;
+
+	phys_addr_t size = SZ_128;
+	phys_addr_t min_addr;
+	phys_addr_t max_addr;
+	phys_addr_t rgn_end;
+
+	setup_memblock();
+
+	min_addr = memblock_start_of_DRAM() + SMP_CACHE_BYTES * 2;
+	max_addr = min_addr + SZ_512;
+
+	allocated_ptr = memblock_alloc_try_nid(size, SMP_CACHE_BYTES,
+					       min_addr, max_addr, NUMA_NO_NODE);
+	b = (char *)allocated_ptr;
+	rgn_end = rgn->base + rgn->size;
+
+	assert(allocated_ptr);
+	assert(*b == 0);
+
+	assert(rgn->size == size);
+	assert(rgn->base == min_addr);
+	assert(rgn_end < max_addr);
+
+	assert(memblock.reserved.cnt == 1);
+	assert(memblock.reserved.total_size == size);
+
+	return 0;
+}
+
+/*
+ * A simple test that tries to allocate a memory region within min_addr and
+ * max_addr range, where the start address is misaligned:
+ *
+ *         +                     +
+ *  |      +   +-----------+     +     |
+ *  |      |   |    rgn    |     |     |
+ *  +------+---+-----------+-----+-----+
+ *         ^   ^----.            ^
+ *         |        |            |
+ *     min_addr     |        max_addr
+ *                  |
+ *           Aligned address
+ *              boundary
+ *
+ * Expect to allocate a cleared, aligned region that ends before max_addr.
+ */
+static int alloc_try_nid_bottom_up_start_misaligned_check(void)
+{
+	struct memblock_region *rgn = &memblock.reserved.regions[0];
+	void *allocated_ptr = NULL;
+	char *b;
+
+	phys_addr_t size = SZ_128;
+	phys_addr_t misalign = SZ_2;
+	phys_addr_t min_addr;
+	phys_addr_t max_addr;
+	phys_addr_t rgn_end;
+
+	setup_memblock();
+
+	min_addr = memblock_start_of_DRAM() + misalign;
+	max_addr = min_addr + SZ_512;
+
+	allocated_ptr = memblock_alloc_try_nid(size, SMP_CACHE_BYTES,
+					       min_addr, max_addr, NUMA_NO_NODE);
+	b = (char *)allocated_ptr;
+	rgn_end = rgn->base + rgn->size;
+
+	assert(allocated_ptr);
+	assert(*b == 0);
+
+	assert(rgn->size == size);
+	assert(rgn->base == min_addr + (SMP_CACHE_BYTES - misalign));
+	assert(rgn_end < max_addr);
+
+	assert(memblock.reserved.cnt == 1);
+	assert(memblock.reserved.total_size == size);
+
+	return 0;
+}
+
+/*
+ * A test that tries to allocate a memory region, which can't fit into
+ * min_addr and max_addr range:
+ *
+ *                      +    +
+ *  |---------+         +    +      |
+ *  |   rgn   |         |    |      |
+ *  +---------+---------+----+------+
+ *                      ^    ^
+ *                      |    |
+ *                      |    max_addr
+ *                      |
+ *                      min_addr
+ *
+ * Expect to drop the lower limit and allocate a cleared memory region which
+ * starts at the beginning of the available memory.
+ */
+static int alloc_try_nid_bottom_up_narrow_range_check(void)
+{
+	struct memblock_region *rgn = &memblock.reserved.regions[0];
+	void *allocated_ptr = NULL;
+	char *b;
+
+	phys_addr_t size = SZ_256;
+	phys_addr_t min_addr;
+	phys_addr_t max_addr;
+
+	setup_memblock();
+
+	min_addr = memblock_start_of_DRAM() + SZ_512;
+	max_addr = min_addr + SMP_CACHE_BYTES;
+
+	allocated_ptr = memblock_alloc_try_nid(size, SMP_CACHE_BYTES,
+					       min_addr, max_addr, NUMA_NO_NODE);
+	b = (char *)allocated_ptr;
+
+	assert(allocated_ptr);
+	assert(*b == 0);
+
+	assert(rgn->size == size);
+	assert(rgn->base == memblock_start_of_DRAM());
+
+	assert(memblock.reserved.cnt == 1);
+	assert(memblock.reserved.total_size == size);
+
+	return 0;
+}
+
+/*
+ * A test that tries to allocate memory within min_addr and max_addr range,
+ * when there are two reserved regions at the borders, with a gap big enough
+ * to fit a new region:
+ *
+ *                +           +
+ *  |    +--------+-------+   +------+  |
+ *  |    |   r2   |  rgn  |   |  r1  |  |
+ *  +----+--------+-------+---+------+--+
+ *       ^                    ^
+ *       |                    |
+ *       min_addr             max_addr
+ *
+ * Expect to merge the new region with r2. The second region does not get
+ * updated. The total size field gets updated.
+ */
+
+static int alloc_try_nid_bottom_up_reserved_with_space_check(void)
+{
+	struct memblock_region *rgn1 = &memblock.reserved.regions[1];
+	struct memblock_region *rgn2 = &memblock.reserved.regions[0];
+	void *allocated_ptr = NULL;
+	char *b;
+	struct region r1, r2;
+
+	phys_addr_t r3_size = SZ_64;
+	phys_addr_t gap_size = SMP_CACHE_BYTES;
+	phys_addr_t total_size;
+	phys_addr_t max_addr;
+	phys_addr_t min_addr;
+
+	setup_memblock();
+
+	r1.base = memblock_end_of_DRAM() - SMP_CACHE_BYTES * 2;
+	r1.size = SMP_CACHE_BYTES;
+
+	r2.size = SZ_128;
+	r2.base = r1.base - (r3_size + gap_size + r2.size);
+
+	total_size = r1.size + r2.size + r3_size;
+	min_addr = r2.base + r2.size;
+	max_addr = r1.base;
+
+	memblock_reserve(r1.base, r1.size);
+	memblock_reserve(r2.base, r2.size);
+
+	allocated_ptr = memblock_alloc_try_nid(r3_size, SMP_CACHE_BYTES,
+					       min_addr, max_addr, NUMA_NO_NODE);
+	b = (char *)allocated_ptr;
+
+	assert(allocated_ptr);
+	assert(*b == 0);
+
+	assert(rgn1->size == r1.size);
+	assert(rgn1->base == max_addr);
+
+	assert(rgn2->size == r2.size + r3_size);
+	assert(rgn2->base == r2.base);
+
+	assert(memblock.reserved.cnt == 2);
+	assert(memblock.reserved.total_size == total_size);
+
+	return 0;
+}
+
+/*
+ * A test that tries to allocate memory within min_addr and max_addr range,
+ * when there are two reserved regions at the borders, with a gap of a size
+ * equal to the size of the new region:
+ *
+ *                         +   +
+ *  |----------+    +------+   +----+  |
+ *  |    r3    |    |  r2  |   | r1 |  |
+ *  +----------+----+------+---+----+--+
+ *                         ^   ^
+ *                         |   |
+ *                         |   max_addr
+ *                         |
+ *                         min_addr
+ *
+ * Expect to drop the lower limit and allocate memory at the beginning of the
+ * available memory. The region counter and total size fields get updated.
+ * Other regions are not modified.
+ */
+
+static int alloc_try_nid_bottom_up_reserved_no_space_check(void)
+{
+	struct memblock_region *rgn1 = &memblock.reserved.regions[2];
+	struct memblock_region *rgn2 = &memblock.reserved.regions[1];
+	struct memblock_region *rgn3 = &memblock.reserved.regions[0];
+	void *allocated_ptr = NULL;
+	char *b;
+	struct region r1, r2;
+
+	phys_addr_t r3_size = SZ_256;
+	phys_addr_t gap_size = SMP_CACHE_BYTES;
+	phys_addr_t total_size;
+	phys_addr_t max_addr;
+	phys_addr_t min_addr;
+
+	setup_memblock();
+
+	r1.base = memblock_end_of_DRAM() - SMP_CACHE_BYTES * 2;
+	r1.size = SMP_CACHE_BYTES;
+
+	r2.size = SZ_128;
+	r2.base = r1.base - (r2.size + gap_size);
+
+	total_size = r1.size + r2.size + r3_size;
+	min_addr = r2.base + r2.size;
+	max_addr = r1.base;
+
+	memblock_reserve(r1.base, r1.size);
+	memblock_reserve(r2.base, r2.size);
+
+	allocated_ptr = memblock_alloc_try_nid(r3_size, SMP_CACHE_BYTES,
+					       min_addr, max_addr, NUMA_NO_NODE);
+	b = (char *)allocated_ptr;
+
+	assert(allocated_ptr);
+	assert(*b == 0);
+
+	assert(rgn3->size == r3_size);
+	assert(rgn3->base == memblock_start_of_DRAM());
+
+	assert(rgn2->size == r2.size);
+	assert(rgn2->base == r2.base);
+
+	assert(rgn1->size == r1.size);
+	assert(rgn1->base == r1.base);
+
+	assert(memblock.reserved.cnt == 3);
+	assert(memblock.reserved.total_size == total_size);
+
+	return 0;
+}
+
+/*
+ * A test that tries to allocate a memory region, where max_addr is
+ * bigger than the end address of the available memory. Expect to allocate
+ * a cleared region that starts at min_addr.
+ */
+static int alloc_try_nid_bottom_up_cap_max_check(void)
+{
+	struct memblock_region *rgn = &memblock.reserved.regions[0];
+	void *allocated_ptr = NULL;
+	char *b;
+
+	phys_addr_t size = SZ_256;
+	phys_addr_t min_addr;
+	phys_addr_t max_addr;
+
+	setup_memblock();
+
+	min_addr = memblock_start_of_DRAM() + SZ_1K;
+	max_addr = memblock_end_of_DRAM() + SZ_256;
+
+	allocated_ptr = memblock_alloc_try_nid(size, SMP_CACHE_BYTES,
+					       min_addr, max_addr, NUMA_NO_NODE);
+	b = (char *)allocated_ptr;
+
+	assert(allocated_ptr);
+	assert(*b == 0);
+
+	assert(rgn->size == size);
+	assert(rgn->base == min_addr);
+
+	assert(memblock.reserved.cnt == 1);
+	assert(memblock.reserved.total_size == size);
+
+	return 0;
+}
+
+/*
+ * A test that tries to allocate a memory region, where min_addr is
+ * smaller than the start address of the available memory. Expect to allocate
+ * a cleared region at the beginning of the available memory.
+ */
+static int alloc_try_nid_bottom_up_cap_min_check(void)
+{
+	struct memblock_region *rgn = &memblock.reserved.regions[0];
+	void *allocated_ptr = NULL;
+	char *b;
+
+	phys_addr_t size = SZ_1K;
+	phys_addr_t min_addr;
+	phys_addr_t max_addr;
+
+	setup_memblock();
+
+	min_addr = memblock_start_of_DRAM();
+	max_addr = memblock_end_of_DRAM() - SZ_256;
+
+	allocated_ptr = memblock_alloc_try_nid(size, SMP_CACHE_BYTES,
+					       min_addr, max_addr, NUMA_NO_NODE);
+	b = (char *)allocated_ptr;
+
+	assert(allocated_ptr);
+	assert(*b == 0);
+
+	assert(rgn->size == size);
+	assert(rgn->base == memblock_start_of_DRAM());
+
+	assert(memblock.reserved.cnt == 1);
+	assert(memblock.reserved.total_size == size);
+
+	return 0;
+}
+
+/* Test case wrappers */
+static int alloc_try_nid_simple_check(void)
+{
+	memblock_set_bottom_up(false);
 	alloc_try_nid_top_down_simple_check();
+	memblock_set_bottom_up(true);
+	alloc_try_nid_bottom_up_simple_check();
+
+	return 0;
+}
+
+static int alloc_try_nid_misaligned_check(void)
+{
+	memblock_set_bottom_up(false);
 	alloc_try_nid_top_down_end_misaligned_check();
+	memblock_set_bottom_up(true);
+	alloc_try_nid_bottom_up_start_misaligned_check();
+
+	return 0;
+}
+
+static int alloc_try_nid_narrow_range_check(void)
+{
+	memblock_set_bottom_up(false);
 	alloc_try_nid_top_down_narrow_range_check();
+	memblock_set_bottom_up(true);
+	alloc_try_nid_bottom_up_narrow_range_check();
+
+	return 0;
+}
+
+static int alloc_try_nid_reserved_with_space_check(void)
+{
+	memblock_set_bottom_up(false);
 	alloc_try_nid_top_down_reserved_with_space_check();
+	memblock_set_bottom_up(true);
+	alloc_try_nid_bottom_up_reserved_with_space_check();
+
+	return 0;
+}
+
+static int alloc_try_nid_reserved_no_space_check(void)
+{
+	memblock_set_bottom_up(false);
 	alloc_try_nid_top_down_reserved_no_space_check();
-	alloc_try_nid_top_down_cap_min_check();
+	memblock_set_bottom_up(true);
+	alloc_try_nid_bottom_up_reserved_no_space_check();
+
+	return 0;
+}
+
+static int alloc_try_nid_cap_max_check(void)
+{
+	memblock_set_bottom_up(false);
 	alloc_try_nid_top_down_cap_max_check();
+	memblock_set_bottom_up(true);
+	alloc_try_nid_bottom_up_cap_max_check();
+
+	return 0;
+}
+
+static int alloc_try_nid_cap_min_check(void)
+{
+	memblock_set_bottom_up(false);
+	alloc_try_nid_top_down_cap_min_check();
+	memblock_set_bottom_up(true);
+	alloc_try_nid_bottom_up_cap_min_check();
+
+	return 0;
+}
+
+static int alloc_try_nid_min_reserved_check(void)
+{
+	memblock_set_bottom_up(false);
 	alloc_try_nid_min_reserved_generic_check();
+	memblock_set_bottom_up(true);
+	alloc_try_nid_min_reserved_generic_check();
+
+	return 0;
+}
+
+static int alloc_try_nid_max_reserved_check(void)
+{
+	memblock_set_bottom_up(false);
 	alloc_try_nid_max_reserved_generic_check();
+	memblock_set_bottom_up(true);
+	alloc_try_nid_max_reserved_generic_check();
+
+	return 0;
+}
+
+static int alloc_try_nid_exact_address_check(void)
+{
+	memblock_set_bottom_up(false);
 	alloc_try_nid_exact_address_generic_check();
+	memblock_set_bottom_up(true);
+	alloc_try_nid_exact_address_generic_check();
+
+	return 0;
+}
+
+static int alloc_try_nid_reserved_full_merge_check(void)
+{
+	memblock_set_bottom_up(false);
+	alloc_try_nid_reserved_full_merge_generic_check();
+	memblock_set_bottom_up(true);
 	alloc_try_nid_reserved_full_merge_generic_check();
+
+	return 0;
+}
+
+static int alloc_try_nid_reserved_all_check(void)
+{
+	memblock_set_bottom_up(false);
+	alloc_try_nid_reserved_all_generic_check();
+	memblock_set_bottom_up(true);
 	alloc_try_nid_reserved_all_generic_check();
+
+	return 0;
+}
+
+static int alloc_try_nid_low_max_check(void)
+{
+	memblock_set_bottom_up(false);
+	alloc_try_nid_low_max_generic_check();
+	memblock_set_bottom_up(true);
 	alloc_try_nid_low_max_generic_check();
+
+	return 0;
+}
+
+int memblock_alloc_nid_checks(void)
+{
+	reset_memblock_attributes();
+	dummy_physical_memory_init();
+
+	alloc_try_nid_simple_check();
+	alloc_try_nid_misaligned_check();
+	alloc_try_nid_narrow_range_check();
+	alloc_try_nid_reserved_with_space_check();
+	alloc_try_nid_reserved_no_space_check();
+	alloc_try_nid_cap_max_check();
+	alloc_try_nid_cap_min_check();
+
+	alloc_try_nid_min_reserved_check();
+	alloc_try_nid_max_reserved_check();
+	alloc_try_nid_exact_address_check();
+	alloc_try_nid_reserved_full_merge_check();
+	alloc_try_nid_reserved_all_check();
+	alloc_try_nid_low_max_check();
+
 	dummy_physical_memory_cleanup();
 
 	return 0;

From patchwork Mon Feb 28 14:46:51 2022
From: Karolina Drobnik <karolinadrobnik@gmail.com>
To: linux-mm@kvack.org
Cc: rppt@kernel.org, akpm@linux-foundation.org, linux-kernel@vger.kernel.org
Subject: [PATCH 9/9] memblock tests: Add TODO and README files
Date: Mon, 28 Feb 2022 15:46:51 +0100

Add a description of the project, its structure and how to run it.
List what is left to implement and what the known issues are.

Signed-off-by: Karolina Drobnik <karolinadrobnik@gmail.com>
---
 tools/testing/memblock/README | 114 ++++++++++++++++++++++++++++++++++
 tools/testing/memblock/TODO   |  28 +++++++++
 2 files changed, 142 insertions(+)
 create mode 100644 tools/testing/memblock/README
 create mode 100644 tools/testing/memblock/TODO

-- 
2.30.2

diff --git a/tools/testing/memblock/README b/tools/testing/memblock/README
new file mode 100644
index 000000000000..40c0ce50e7c2
--- /dev/null
+++ b/tools/testing/memblock/README
@@ -0,0 +1,114 @@
+=========================
+ Memblock simulator
+=========================
+
+
+Introduction
+---------------------
+
+Memblock is a boot time memory allocator[1] that manages memory regions before
+the actual memory management is initialized. Its APIs allow registering
+physical memory regions, marking them as available or reserved, allocating a
+block of memory within the requested range and/or in a specific NUMA node,
+and more.
+
+Because it is used so early in the booting process, testing and debugging it
+is difficult. This test suite, usually referred to as the memblock simulator,
+is an attempt at testing the memblock mechanism. It runs one monolithic test
+that consists of a series of checks that exercise both the basic operations
+and allocation functionalities of memblock. The main data structure of the
+boot time memory allocator is initialized at build time, so the checks here
+reuse its instance throughout the duration of the test. To ensure that tests
+don't affect each other, region arrays are reset in between.
+
+As this project uses the actual memblock code and has to run in user space,
+some of the kernel definitions were stubbed in the introductory patch[2].
+Most of them don't match the kernel implementation, so one should consult
+them first before making any significant changes to the project.
+
+
+Usage
+---------------------
+
+To run the tests, build the main target and run it:
+
+$ make; ./main
+
+A successful run produces no output. It is also possible to override
+different configuration parameters. For example, to simulate enabled NUMA,
+use:
+
+$ make NUMA=1
+
+For the full list of options, see `make help`.
+
+
+Project structure
+---------------------
+
+The project has one target, main, which calls a group of checks for basic
+and allocation functions. Tests for each group are defined in dedicated
+files, as can be seen here:
+
+memblock
+|-- asm ------------------,
+|-- lib                    |-- implement function and struct stubs
+|-- linux ------------------'
+|-- scripts
+|    |-- Makefile.include        -- handles `make` parameters
+|-- tests
+|    |-- alloc_api.(c|h)         -- memblock_alloc tests
+|    |-- alloc_helpers_api.(c|h) -- memblock_alloc_from tests
+|    |-- alloc_nid_api.(c|h)     -- memblock_alloc_try_nid tests
+|    |-- basic_api.(c|h)         -- memblock_add/memblock_reserve/... tests
+|    |-- common.(c|h)            -- helper functions for resetting memblock;
+|-- main.c --------------.          dummy physical memory definition
+|-- Makefile              `- test runner
+|-- README
+|-- TODO
+|-- .gitignore
+
+
+Simulating physical memory
+--------------------------
+
+Some allocation functions clear the memory in the process, so memblock is
+required to track valid memory ranges. To achieve this, the test suite
+registers with memblock the memory stored by the test_memory struct. It is a
+small wrapper that points to a block of memory allocated via malloc. For each
+group of allocation tests, dummy physical memory is allocated, added to
+memblock, and then released at the end of the test run. The structure of a
+test runner checking allocation functions is as follows:
+
+int memblock_alloc_foo_checks(void)
+{
+	reset_memblock_attributes();  /* data structure reset */
+	dummy_physical_memory_init(); /* allocate and register memory */
+
+	(...allocation checks...)
+
+	dummy_physical_memory_cleanup(); /* free the memory */
+}
+
+There's no need to explicitly free the dummy memory from memblock via a
+memblock_free() call. The entry will be erased by reset_memblock_regions(),
+called at the beginning of each test.
+
+
+Known issues
+---------------------
+
+1. Requesting a specific NUMA node via memblock_alloc_node() does not work as
+   intended. Once the fix is in place, tests for this function can be added.
+
+2. Tests for memblock_alloc_low() can't be easily implemented. The function
+   uses the ARCH_LOW_ADDRESS_LIMIT macro, which can't be changed to point at
+   the low memory of the memory block.
+
+
+References
+---------------------
+
+1. Boot time memory management documentation page:
+   https://www.kernel.org/doc/html/latest/core-api/boot-time-mm.html
+
+2. Introduce memblock simulator, lore link:
+   https://lore.kernel.org/linux-mm/cover.1643796665.git.karolinadrobnik@gmail.com/

diff --git a/tools/testing/memblock/TODO b/tools/testing/memblock/TODO
new file mode 100644
index 000000000000..c25b2fdec45e
--- /dev/null
+++ b/tools/testing/memblock/TODO
@@ -0,0 +1,28 @@
+TODO
+=====
+
+1. Add verbose output (e.g., what is being tested and how many test cases
+   are passing)
+
+2. Add flags to Makefile:
+   + verbosity level
+   + enable memblock_dbg() messages (i.e. pass "-D CONFIG_DEBUG_MEMORY_INIT"
+     flag)
+
+3. Add tests trying to memblock_add() or memblock_reserve() the 129th region.
+   This will trigger memblock_double_array(); make sure it succeeds.
+   *Important:* These tests require valid memory ranges; use the dummy
+                physical memory block from common.c to implement them. It is
+                also very likely that the current MEM_SIZE won't be enough
+                for these test cases. Use realloc to adjust the size
+                accordingly.
+
+4. Add test cases using these functions (implement them for both directions):
+   + memblock_alloc_raw()
+   + memblock_alloc_exact_nid_raw()
+   + memblock_alloc_try_nid_raw()
+
+5. Add tests for memblock_alloc_node() to check if the correct NUMA node is
+   set for the new region
+
+6. Update comments in tests/basic_api.c to match the style used in
+   tests/alloc_*.c