From patchwork Mon Mar 31 21:30:21 2025
Date: Mon, 31 Mar 2025 21:30:21 +0000
Message-ID: <20250331213025.3602082-2-jthoughton@google.com>
In-Reply-To: <20250331213025.3602082-1-jthoughton@google.com>
Subject: [PATCH v2 1/5] KVM: selftests: Extract guts of THP accessor to standalone sysfs helpers
From: James Houghton
To: Sean Christopherson, kvm@vger.kernel.org
Cc: Maxim Levitsky, Axel Rasmussen, Tejun Heo, Johannes Weiner, mkoutny@suse.com, Yosry Ahmed, Yu Zhao, James Houghton, cgroups@vger.kernel.org, linux-kernel@vger.kernel.org

From: Sean Christopherson

Extract the guts of thp_configured() and get_trans_hugepagesz() to
standalone helpers so that the core logic can be reused for other sysfs
files, e.g. to query numa_balancing.

Opportunistically assert that the initial fscanf() read at least one byte,
and add a comment explaining the second call to fscanf().

Signed-off-by: Sean Christopherson
Signed-off-by: Maxim Levitsky
---
 tools/testing/selftests/kvm/lib/test_util.c | 35 ++++++++++++++-------
 1 file changed, 24 insertions(+), 11 deletions(-)

diff --git a/tools/testing/selftests/kvm/lib/test_util.c b/tools/testing/selftests/kvm/lib/test_util.c
index 8ed0b74ae8373..3dc8538f5d696 100644
--- a/tools/testing/selftests/kvm/lib/test_util.c
+++ b/tools/testing/selftests/kvm/lib/test_util.c
@@ -132,37 +132,50 @@ void print_skip(const char *fmt, ...)
 	puts(", skipping test");
 }
 
-bool thp_configured(void)
+static bool test_sysfs_path(const char *path)
 {
-	int ret;
 	struct stat statbuf;
+	int ret;
 
-	ret = stat("/sys/kernel/mm/transparent_hugepage", &statbuf);
+	ret = stat(path, &statbuf);
 	TEST_ASSERT(ret == 0 || (ret == -1 && errno == ENOENT),
-		    "Error in stating /sys/kernel/mm/transparent_hugepage");
+		    "Error in stat()ing '%s'", path);
 
 	return ret == 0;
 }
 
-size_t get_trans_hugepagesz(void)
+bool thp_configured(void)
+{
+	return test_sysfs_path("/sys/kernel/mm/transparent_hugepage");
+}
+
+static size_t get_sysfs_val(const char *path)
 {
 	size_t size;
 	FILE *f;
 	int ret;
 
-	TEST_ASSERT(thp_configured(), "THP is not configured in host kernel");
-
-	f = fopen("/sys/kernel/mm/transparent_hugepage/hpage_pmd_size", "r");
-	TEST_ASSERT(f != NULL, "Error in opening transparent_hugepage/hpage_pmd_size");
+	f = fopen(path, "r");
+	TEST_ASSERT(f, "Error opening '%s'", path);
 
 	ret = fscanf(f, "%ld", &size);
+	TEST_ASSERT(ret > 0, "Error reading '%s'", path);
+
+	/* Re-scan the input stream to verify the entire file was read. */
 	ret = fscanf(f, "%ld", &size);
-	TEST_ASSERT(ret < 1, "Error reading transparent_hugepage/hpage_pmd_size");
-	fclose(f);
+	TEST_ASSERT(ret < 1, "Error reading '%s'", path);
 
+	fclose(f);
 	return size;
 }
 
+size_t get_trans_hugepagesz(void)
+{
+	TEST_ASSERT(thp_configured(), "THP is not configured in host kernel");
+
+	return get_sysfs_val("/sys/kernel/mm/transparent_hugepage/hpage_pmd_size");
+}
+
 size_t get_def_hugetlb_pagesz(void)
 {
 	char buf[64];
From patchwork Mon Mar 31 21:30:22 2025
Date: Mon, 31 Mar 2025 21:30:22 +0000
Message-ID: <20250331213025.3602082-3-jthoughton@google.com>
In-Reply-To: <20250331213025.3602082-1-jthoughton@google.com>
Subject: [PATCH v2 2/5] KVM: selftests: access_tracking_perf_test: Add option to skip the sanity check
From: James Houghton
To: Sean Christopherson, kvm@vger.kernel.org
Cc: Maxim Levitsky, Axel Rasmussen, Tejun Heo, Johannes Weiner, mkoutny@suse.com, Yosry Ahmed, Yu Zhao, James Houghton, cgroups@vger.kernel.org, linux-kernel@vger.kernel.org

From: Maxim Levitsky

Add an option to skip the sanity check on the number of still-idle pages,
and skip it by default when a hypervisor or NUMA balancing is detected.

Signed-off-by: Maxim Levitsky
Co-developed-by: James Houghton
Signed-off-by: James Houghton
---
 .../selftests/kvm/access_tracking_perf_test.c | 62 ++++++++++++++++---
 .../testing/selftests/kvm/include/test_util.h |  1 +
 tools/testing/selftests/kvm/lib/test_util.c   |  7 +++
 3 files changed, 61 insertions(+), 9 deletions(-)

diff --git a/tools/testing/selftests/kvm/access_tracking_perf_test.c b/tools/testing/selftests/kvm/access_tracking_perf_test.c
index 447e619cf856e..a2ac6fa2ba141 100644
--- a/tools/testing/selftests/kvm/access_tracking_perf_test.c
+++ b/tools/testing/selftests/kvm/access_tracking_perf_test.c
@@ -65,6 +65,16 @@ static int vcpu_last_completed_iteration[KVM_MAX_VCPUS];
 /* Whether to overlap the regions of memory vCPUs access. */
 static bool overlap_memory_access;
 
+/*
+ * If the test should only warn if there are too many idle pages (i.e., it is
+ * expected).
+ * -1: Not yet set.
+ *  0: We do not expect too many idle pages, so FAIL if too many idle pages.
+ *  1: Having too many idle pages is expected, so merely print a warning if
+ *     too many idle pages are found.
+ */
+static int idle_pages_warn_only = -1;
+
 struct test_params {
 	/* The backing source for the region of memory. */
 	enum vm_mem_backing_src_type backing_src;
@@ -177,18 +187,12 @@ static void mark_vcpu_memory_idle(struct kvm_vm *vm,
 	 * arbitrary; high enough that we ensure most memory access went through
 	 * access tracking but low enough as to not make the test too brittle
 	 * over time and across architectures.
-	 *
-	 * When running the guest as a nested VM, "warn" instead of asserting
-	 * as the TLB size is effectively unlimited and the KVM doesn't
-	 * explicitly flush the TLB when aging SPTEs. As a result, more pages
-	 * are cached and the guest won't see the "idle" bit cleared.
 	 */
 	if (still_idle >= pages / 10) {
-#ifdef __x86_64__
-		TEST_ASSERT(this_cpu_has(X86_FEATURE_HYPERVISOR),
+		TEST_ASSERT(idle_pages_warn_only,
 			    "vCPU%d: Too many pages still idle (%lu out of %lu)",
 			    vcpu_idx, still_idle, pages);
-#endif
+
 		printf("WARNING: vCPU%d: Too many pages still idle (%lu out of %lu), "
 		       "this will affect performance results.\n",
 		       vcpu_idx, still_idle, pages);
@@ -328,6 +332,32 @@ static void run_test(enum vm_guest_mode mode, void *arg)
 	memstress_destroy_vm(vm);
 }
 
+static int access_tracking_unreliable(void)
+{
+#ifdef __x86_64__
+	/*
+	 * When running nested, the TLB size may be effectively unlimited (for
+	 * example, this is the case when running on KVM L0), and KVM doesn't
+	 * explicitly flush the TLB when aging SPTEs. As a result, more pages
+	 * are cached and the guest won't see the "idle" bit cleared.
+	 */
+	if (this_cpu_has(X86_FEATURE_HYPERVISOR)) {
+		puts("Skipping idle page count sanity check, because the test is run nested");
+		return 1;
+	}
+#endif
+	/*
+	 * When NUMA balancing is enabled, guest memory will be unmapped to get
+	 * NUMA faults, dropping the Accessed bits.
+	 */
+	if (is_numa_balancing_enabled()) {
+		puts("Skipping idle page count sanity check, because NUMA balancing is enabled");
+		return 1;
+	}
+
+	return 0;
+}
+
 static void help(char *name)
 {
 	puts("");
@@ -342,6 +372,12 @@ static void help(char *name)
 	printf(" -v: specify the number of vCPUs to run.\n");
 	printf(" -o: Overlap guest memory accesses instead of partitioning\n"
 	       "     them into a separate region of memory for each vCPU.\n");
+	printf(" -w: Control whether the test warns or fails if more than 10%\n"
+	       "     of pages are still seen as idle/old after accessing guest\n"
+	       "     memory. >0 == warn only, 0 == fail, <0 == auto. For auto\n"
+	       "     mode, the test fails by default, but switches to warn only\n"
+	       "     if NUMA balancing is enabled or the test detects it's running\n"
+	       "     in a VM.\n");
 	backing_src_help("-s");
 	puts("");
 	exit(0);
@@ -359,7 +395,7 @@ int main(int argc, char *argv[])
 
 	guest_modes_append_default();
 
-	while ((opt = getopt(argc, argv, "hm:b:v:os:")) != -1) {
+	while ((opt = getopt(argc, argv, "hm:b:v:os:w:")) != -1) {
 		switch (opt) {
 		case 'm':
 			guest_modes_cmdline(optarg);
@@ -376,6 +412,11 @@ int main(int argc, char *argv[])
 		case 's':
 			params.backing_src = parse_backing_src_type(optarg);
 			break;
+		case 'w':
+			idle_pages_warn_only =
+				atoi_non_negative("Idle pages warning", optarg);
+			break;
 		case 'h':
 		default:
 			help(argv[0]);
@@ -388,6 +429,9 @@ int main(int argc, char *argv[])
 		    "CONFIG_IDLE_PAGE_TRACKING is not enabled");
 	close(page_idle_fd);
 
+	if (idle_pages_warn_only == -1)
+		idle_pages_warn_only = access_tracking_unreliable();
+
 	for_each_guest_mode(run_test, &params);
 
 	return 0;
diff --git a/tools/testing/selftests/kvm/include/test_util.h b/tools/testing/selftests/kvm/include/test_util.h
index 77d13d7920cb8..c6ef895fbd9ab 100644
--- a/tools/testing/selftests/kvm/include/test_util.h
+++ b/tools/testing/selftests/kvm/include/test_util.h
@@ -153,6 +153,7 @@ bool is_backing_src_hugetlb(uint32_t i);
 void backing_src_help(const char *flag);
 enum vm_mem_backing_src_type parse_backing_src_type(const char *type_name);
 long get_run_delay(void);
+bool is_numa_balancing_enabled(void);
 
 /*
  * Whether or not the given source type is shared memory (as opposed to
diff --git a/tools/testing/selftests/kvm/lib/test_util.c b/tools/testing/selftests/kvm/lib/test_util.c
index 3dc8538f5d696..03eb99af9b8de 100644
--- a/tools/testing/selftests/kvm/lib/test_util.c
+++ b/tools/testing/selftests/kvm/lib/test_util.c
@@ -176,6 +176,13 @@ size_t get_trans_hugepagesz(void)
 	return get_sysfs_val("/sys/kernel/mm/transparent_hugepage/hpage_pmd_size");
 }
 
+bool is_numa_balancing_enabled(void)
+{
+	if (!test_sysfs_path("/proc/sys/kernel/numa_balancing"))
+		return false;
+	return get_sysfs_val("/proc/sys/kernel/numa_balancing") == 1;
+}
+
 size_t get_def_hugetlb_pagesz(void)
 {
 	char buf[64];
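is_numa_balancing_enabled() treats the knob as enabled only when the file exists and reads 1. The same check in shell (the function and the temp-file demo are illustrative; the real knob is /proc/sys/kernel/numa_balancing):

```shell
# Illustrative shell version of the numa_balancing check; the demo uses a
# temp file so it runs anywhere.
numa_balancing_enabled() {
    [ -f "$1" ] && [ "$(cat "$1")" = "1" ]
}

echo 1 > /tmp/numa_demo
numa_balancing_enabled /tmp/numa_demo && echo enabled || echo disabled
rm -f /tmp/numa_demo
```

In the test itself this detection feeds idle_pages_warn_only, which the new -w flag can override (>0 warn only, 0 fail, <0 auto-detect).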
From patchwork Mon Mar 31 21:30:23 2025
Date: Mon, 31 Mar 2025 21:30:23 +0000
Message-ID: <20250331213025.3602082-4-jthoughton@google.com>
In-Reply-To: <20250331213025.3602082-1-jthoughton@google.com>
Subject: [PATCH v2 3/5] cgroup: selftests: Move cgroup_util into its own library
From: James Houghton
To: Sean Christopherson, kvm@vger.kernel.org
Cc: Maxim Levitsky, Axel Rasmussen, Tejun Heo, Johannes Weiner, mkoutny@suse.com, Yosry Ahmed, Yu Zhao, James Houghton, cgroups@vger.kernel.org, linux-kernel@vger.kernel.org, David Matlack

KVM selftests will soon need to use some of the cgroup creation and
deletion functionality from cgroup_util.

Suggested-by: David Matlack
Signed-off-by: James Houghton
Acked-by: Tejun Heo
---
 tools/testing/selftests/cgroup/Makefile       | 21 ++++++++++---------
 .../selftests/cgroup/{ => lib}/cgroup_util.c  |  2 +-
 .../cgroup/{ => lib/include}/cgroup_util.h    |  4 ++--
 .../testing/selftests/cgroup/lib/libcgroup.mk | 14 +++++++++++++
 4 files changed, 28 insertions(+), 13 deletions(-)
 rename tools/testing/selftests/cgroup/{ => lib}/cgroup_util.c (99%)
 rename tools/testing/selftests/cgroup/{ => lib/include}/cgroup_util.h (99%)
 create mode 100644 tools/testing/selftests/cgroup/lib/libcgroup.mk

diff --git a/tools/testing/selftests/cgroup/Makefile b/tools/testing/selftests/cgroup/Makefile
index 1b897152bab6e..e01584c2189ac 100644
--- a/tools/testing/selftests/cgroup/Makefile
+++ b/tools/testing/selftests/cgroup/Makefile
@@ -21,14 +21,15 @@ TEST_GEN_PROGS += test_zswap
 LOCAL_HDRS += $(selfdir)/clone3/clone3_selftests.h $(selfdir)/pidfd/pidfd.h
 
 include ../lib.mk
+include lib/libcgroup.mk
 
-$(OUTPUT)/test_core: cgroup_util.c
-$(OUTPUT)/test_cpu: cgroup_util.c
-$(OUTPUT)/test_cpuset: cgroup_util.c
-$(OUTPUT)/test_freezer: cgroup_util.c
-$(OUTPUT)/test_hugetlb_memcg: cgroup_util.c
-$(OUTPUT)/test_kill: cgroup_util.c
-$(OUTPUT)/test_kmem: cgroup_util.c
-$(OUTPUT)/test_memcontrol: cgroup_util.c
-$(OUTPUT)/test_pids: cgroup_util.c
-$(OUTPUT)/test_zswap: cgroup_util.c
+$(OUTPUT)/test_core: $(LIBCGROUP_O)
+$(OUTPUT)/test_cpu: $(LIBCGROUP_O)
+$(OUTPUT)/test_cpuset: $(LIBCGROUP_O)
+$(OUTPUT)/test_freezer: $(LIBCGROUP_O)
+$(OUTPUT)/test_hugetlb_memcg: $(LIBCGROUP_O)
+$(OUTPUT)/test_kill: $(LIBCGROUP_O)
+$(OUTPUT)/test_kmem: $(LIBCGROUP_O)
+$(OUTPUT)/test_memcontrol: $(LIBCGROUP_O)
+$(OUTPUT)/test_pids: $(LIBCGROUP_O)
+$(OUTPUT)/test_zswap: $(LIBCGROUP_O)
diff --git a/tools/testing/selftests/cgroup/cgroup_util.c b/tools/testing/selftests/cgroup/lib/cgroup_util.c
similarity index 99%
rename from tools/testing/selftests/cgroup/cgroup_util.c
rename to tools/testing/selftests/cgroup/lib/cgroup_util.c
index 1e2d46636a0ca..f047d8adaec65 100644
--- a/tools/testing/selftests/cgroup/cgroup_util.c
+++ b/tools/testing/selftests/cgroup/lib/cgroup_util.c
@@ -17,7 +17,7 @@
 #include
 
 #include "cgroup_util.h"
-#include "../clone3/clone3_selftests.h"
+#include "../../clone3/clone3_selftests.h"
 
 /* Returns read len on success, or -errno on failure. */
 static ssize_t read_text(const char *path, char *buf, size_t max_len)
diff --git a/tools/testing/selftests/cgroup/cgroup_util.h b/tools/testing/selftests/cgroup/lib/include/cgroup_util.h
similarity index 99%
rename from tools/testing/selftests/cgroup/cgroup_util.h
rename to tools/testing/selftests/cgroup/lib/include/cgroup_util.h
index 19b131ee77072..7a0441e5eb296 100644
--- a/tools/testing/selftests/cgroup/cgroup_util.h
+++ b/tools/testing/selftests/cgroup/lib/include/cgroup_util.h
@@ -2,9 +2,9 @@
 #include
 #include
 
-#include "../kselftest.h"
-
+#ifndef PAGE_SIZE
 #define PAGE_SIZE 4096
+#endif
 
 #define MB(x) (x << 20)
diff --git a/tools/testing/selftests/cgroup/lib/libcgroup.mk b/tools/testing/selftests/cgroup/lib/libcgroup.mk
new file mode 100644
index 0000000000000..12323041a5ce6
--- /dev/null
+++ b/tools/testing/selftests/cgroup/lib/libcgroup.mk
@@ -0,0 +1,14 @@
+CGROUP_DIR := $(selfdir)/cgroup
+
+LIBCGROUP_C := lib/cgroup_util.c
+
+LIBCGROUP_O := $(patsubst %.c, $(OUTPUT)/%.o, $(LIBCGROUP_C))
+
+CFLAGS += -I$(CGROUP_DIR)/lib/include
+
+EXTRA_HDRS := $(selfdir)/clone3/clone3_selftests.h
+
+$(LIBCGROUP_O): $(OUTPUT)/%.o : $(CGROUP_DIR)/%.c $(EXTRA_HDRS)
+	$(CC) $(CFLAGS) $(CPPFLAGS) $(TARGET_ARCH) -c $< -o $@
+
+EXTRA_CLEAN += $(LIBCGROUP_O)
From patchwork Mon Mar 31 21:30:24 2025
Date: Mon, 31 Mar 2025 21:30:24 +0000
Message-ID: <20250331213025.3602082-5-jthoughton@google.com>
In-Reply-To: <20250331213025.3602082-1-jthoughton@google.com>
Subject: [PATCH v2 4/5] KVM: selftests: Build and link selftests/cgroup/lib into KVM selftests
From: James Houghton
To: Sean Christopherson, kvm@vger.kernel.org
Cc: Maxim Levitsky, Axel Rasmussen, Tejun Heo, Johannes Weiner, mkoutny@suse.com, Yosry Ahmed, Yu Zhao, James Houghton, cgroups@vger.kernel.org, linux-kernel@vger.kernel.org

libcgroup.o is built separately for the KVM selftests and the cgroup
selftests, so the different compiler flags used by the two suites will
not conflict with each other.

Signed-off-by: James Houghton
---
 tools/testing/selftests/kvm/Makefile.kvm | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/tools/testing/selftests/kvm/Makefile.kvm b/tools/testing/selftests/kvm/Makefile.kvm
index f773f8f992494..c86a680f52b28 100644
--- a/tools/testing/selftests/kvm/Makefile.kvm
+++ b/tools/testing/selftests/kvm/Makefile.kvm
@@ -219,6 +219,7 @@ OVERRIDE_TARGETS = 1
 # importantly defines, i.e. overwrites, $(CC) (unless `make -e` or `make CC=`,
 # which causes the environment variable to override the makefile).
 include ../lib.mk
+include ../cgroup/lib/libcgroup.mk
 
 INSTALL_HDR_PATH = $(top_srcdir)/usr
 LINUX_HDR_PATH = $(INSTALL_HDR_PATH)/include/
@@ -272,7 +273,7 @@ LIBKVM_S := $(filter %.S,$(LIBKVM))
 LIBKVM_C_OBJ := $(patsubst %.c, $(OUTPUT)/%.o, $(LIBKVM_C))
 LIBKVM_S_OBJ := $(patsubst %.S, $(OUTPUT)/%.o, $(LIBKVM_S))
 LIBKVM_STRING_OBJ := $(patsubst %.c, $(OUTPUT)/%.o, $(LIBKVM_STRING))
-LIBKVM_OBJS = $(LIBKVM_C_OBJ) $(LIBKVM_S_OBJ) $(LIBKVM_STRING_OBJ)
+LIBKVM_OBJS = $(LIBKVM_C_OBJ) $(LIBKVM_S_OBJ) $(LIBKVM_STRING_OBJ) $(LIBCGROUP_O)
 
 SPLIT_TEST_GEN_PROGS := $(patsubst %, $(OUTPUT)/%, $(SPLIT_TESTS))
 SPLIT_TEST_GEN_OBJ := $(patsubst %, $(OUTPUT)/$(ARCH)/%.o, $(SPLIT_TESTS))
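The same include-and-link pattern generalizes to any other selftest suite that wants cgroup_util. A hypothetical consumer makefile (the suite and test names are illustrative; the variable and include paths mirror this series):

```make
# Hypothetical Makefile for another selftest suite reusing libcgroup.
# libcgroup.mk provides $(LIBCGROUP_O) and adds lib/include to CFLAGS.
include ../lib.mk
include ../cgroup/lib/libcgroup.mk

# Link the shared cgroup_util object into each test binary.
$(OUTPUT)/my_cgroup_test: $(LIBCGROUP_O)
```

Because libcgroup.mk compiles its object into the consumer's own $(OUTPUT) directory, each suite gets a copy built with its own flags, which is what keeps the KVM and cgroup builds from stepping on each other.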
smtp.subspace.kernel.org; dmarc=pass (p=reject dis=none) header.from=google.com; spf=pass smtp.mailfrom=flex--jthoughton.bounces.google.com; dkim=pass (2048-bit key) header.d=google.com header.i=@google.com header.b=Ugk02Kvr; arc=none smtp.client-ip=209.85.219.73 Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=reject dis=none) header.from=google.com Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=flex--jthoughton.bounces.google.com Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=google.com header.i=@google.com header.b="Ugk02Kvr" Received: by mail-qv1-f73.google.com with SMTP id 6a1803df08f44-6ed0526b507so73984416d6.0 for ; Mon, 31 Mar 2025 14:30:36 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20230601; t=1743456636; x=1744061436; darn=vger.kernel.org; h=cc:to:from:subject:message-id:references:mime-version:in-reply-to :date:from:to:cc:subject:date:message-id:reply-to; bh=5Sq4CMaYvrnwXwfxJaisSeaUnopobyqI3NDMrHM5jZs=; b=Ugk02KvrHlOUeS1pwZ9mdjg3i5KOYtSzQi1hp6YiAr4F1TAF4TCwHK8/owpi24I5uH uePkIAwmybBIRQofp81rkxfHTMqDcBwXtqech+7etTjlR4k62OsQIx4j+Fy4/wKxXXS5 H59+4Qt4v/XFOFPSQEuO7CzXnM3TK8qLVr0dOkAJBSSuMIVHoMayUoJXbu8fZPI0sGhn BjBMGh7K+pnvQdAyKfuM+b+eMy9XGzEz22j2wsKD1CJS2504OfH2vTOakp90ds93W+jb c012HTGzDXa4vLZMduc58BDhaoNeFP/amJ/pppnBEb17qsZo+FLgonwMRw6ooiVL2jeZ 1nFA== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20230601; t=1743456636; x=1744061436; h=cc:to:from:subject:message-id:references:mime-version:in-reply-to :date:x-gm-message-state:from:to:cc:subject:date:message-id:reply-to; bh=5Sq4CMaYvrnwXwfxJaisSeaUnopobyqI3NDMrHM5jZs=; b=j6g1TqaQI+Y+M9xqZjfivmWW4weR4GLJWF7HVMnBi5ShLSN84yhZ4IgOvMEdbOFUm4 8G4grRHy/ZuWIcLRbKYvOWCE1t9BDFwxbcKgQtrNO3gS2TAuMKTMlMoL+yae9TScUSVS 9UXcJtlt0CnEgJ+RcBKGIlYYouocb2rSs+vjSz+SZdheAfyWZZLDyyV5SuR8b1ZfVXFH 3oFuZOyqf7roEpzvPPEmsNRx2eM33NnWlqTGEeC7zYpVw8b+WmcfeKgF75feDx7JsRmV 
ZnhmAWFO0iPQ8xOqjnffFf11ztRwi9Sd/U7gjlRy3zvxiYX1XJWxP60XzrjuviRSPX9G adSQ== X-Forwarded-Encrypted: i=1; AJvYcCVuVdjXsNPNiIWMjx581TI/KCAaqdrxPExQh4ZXaQEtFGqG2Y/N72RvM11qhaEh9+MuwWA=@vger.kernel.org X-Gm-Message-State: AOJu0Yx98Y8ttsNzaFMxmTQ/nrRfNqICaAAKH1yWJg/Gq5N49qlLKvks uJS9WhcFFbeSEonkYlYbgAu9i8eX+xoy2JPCi7fnNYEhGQJSPAKnjM8Sc8OdhtMwoqoX9Vz52Ao uRUMq7Zpr9zI0uN/rbA== X-Google-Smtp-Source: AGHT+IEmgPcS3rl2NGqYjqldiwr5tRYuLarjy6S5hHY3IYfTpAjdzo/WiGs26s8YiQOqlH7IQGwXKnLWg3/VSIaE X-Received: from qvbpb3.prod.google.com ([2002:a05:6214:4843:b0:6ec:f38a:d191]) (user=jthoughton job=prod-delivery.src-stubby-dispatcher) by 2002:a05:6214:5297:b0:6e8:9a2a:145b with SMTP id 6a1803df08f44-6eed60bd410mr153091676d6.23.1743456635791; Mon, 31 Mar 2025 14:30:35 -0700 (PDT) Date: Mon, 31 Mar 2025 21:30:25 +0000 In-Reply-To: <20250331213025.3602082-1-jthoughton@google.com> Precedence: bulk X-Mailing-List: kvm@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: Mime-Version: 1.0 References: <20250331213025.3602082-1-jthoughton@google.com> X-Mailer: git-send-email 2.49.0.472.ge94155a9ec-goog Message-ID: <20250331213025.3602082-6-jthoughton@google.com> Subject: [PATCH v2 5/5] KVM: selftests: access_tracking_perf_test: Use MGLRU for access tracking From: James Houghton To: Sean Christopherson , kvm@vger.kernel.org Cc: Maxim Levitsky , Axel Rasmussen , Tejun Heo , Johannes Weiner , mkoutny@suse.com, Yosry Ahmed , Yu Zhao , James Houghton , cgroups@vger.kernel.org, linux-kernel@vger.kernel.org By using MGLRU's debugfs for invoking test_young() and clear_young(), we avoid page_idle's incompatibility with MGLRU, and we can mark pages as idle (clear_young()) much faster. The ability to use page_idle is left in, as it is useful for kernels that do not have MGLRU built in. If MGLRU is enabled but is not usable (e.g. we can't access the debugfs mount), the test will fail, as page_idle is not compatible with MGLRU. 
cgroup utility functions have been borrowed so that, when running with MGLRU, we can create a memcg in which to run our test. Other MGLRU-debugfs-specific parsing code has been added to lru_gen_util.{c,h}. Co-developed-by: Axel Rasmussen Signed-off-by: Axel Rasmussen Signed-off-by: James Houghton --- tools/testing/selftests/kvm/Makefile.kvm | 1 + .../selftests/kvm/access_tracking_perf_test.c | 207 ++++++++-- .../selftests/kvm/include/lru_gen_util.h | 51 +++ .../testing/selftests/kvm/lib/lru_gen_util.c | 383 ++++++++++++++++++ 4 files changed, 616 insertions(+), 26 deletions(-) create mode 100644 tools/testing/selftests/kvm/include/lru_gen_util.h create mode 100644 tools/testing/selftests/kvm/lib/lru_gen_util.c diff --git a/tools/testing/selftests/kvm/Makefile.kvm b/tools/testing/selftests/kvm/Makefile.kvm index c86a680f52b28..6ab0441238a7f 100644 --- a/tools/testing/selftests/kvm/Makefile.kvm +++ b/tools/testing/selftests/kvm/Makefile.kvm @@ -8,6 +8,7 @@ LIBKVM += lib/elf.c LIBKVM += lib/guest_modes.c LIBKVM += lib/io.c LIBKVM += lib/kvm_util.c +LIBKVM += lib/lru_gen_util.c LIBKVM += lib/memstress.c LIBKVM += lib/guest_sprintf.c LIBKVM += lib/rbtree.c diff --git a/tools/testing/selftests/kvm/access_tracking_perf_test.c b/tools/testing/selftests/kvm/access_tracking_perf_test.c index a2ac6fa2ba141..d4ef201b67055 100644 --- a/tools/testing/selftests/kvm/access_tracking_perf_test.c +++ b/tools/testing/selftests/kvm/access_tracking_perf_test.c @@ -7,9 +7,11 @@ * This test measures the performance effects of KVM's access tracking. * Access tracking is driven by the MMU notifiers test_young, clear_young, and * clear_flush_young. These notifiers do not have a direct userspace API, - * however the clear_young notifier can be triggered by marking a pages as idle - * in /sys/kernel/mm/page_idle/bitmap. This test leverages that mechanism to - * enable access tracking on guest memory. + * however the clear_young notifier can be triggered either by + * 1. 
marking pages as idle in /sys/kernel/mm/page_idle/bitmap OR + * 2. adding a new MGLRU generation using the lru_gen debugfs file. + * This test leverages page_idle to enable access tracking on guest memory + * unless MGLRU is enabled, in which case MGLRU is used. * * To measure performance this test runs a VM with a configurable number of * vCPUs that each touch every page in disjoint regions of memory. Performance @@ -17,10 +19,11 @@ * predefined region. * * Note that a deterministic correctness test of access tracking is not possible - * by using page_idle as it exists today. This is for a few reasons: + * by using page_idle or MGLRU aging as it exists today. This is for a few + * reasons: * - * 1. page_idle only issues clear_young notifiers, which lack a TLB flush. This - * means subsequent guest accesses are not guaranteed to see page table + * 1. page_idle and MGLRU only issue clear_young notifiers, which lack a TLB flush. + * This means subsequent guest accesses are not guaranteed to see page table + * updates made by KVM until some time in the future. * * 2. page_idle only operates on LRU pages. Newly allocated pages are not @@ -48,9 +51,17 @@ #include "guest_modes.h" #include "processor.h" +#include "cgroup_util.h" +#include "lru_gen_util.h" + +static const char *TEST_MEMCG_NAME = "access_tracking_perf_test"; + /* Global variable used to synchronize all of the vCPU threads. */ static int iteration; +/* The cgroup v2 root. Needed for lru_gen-based aging. */ +char cgroup_root[PATH_MAX]; + /* Defines what vCPU threads should do during a given iteration. */ static enum { /* Run the vCPU to access all its memory.
*/ @@ -75,6 +86,12 @@ static bool overlap_memory_access; */ static int idle_pages_warn_only = -1; +/* Whether or not to use MGLRU instead of page_idle for access tracking */ +static bool use_lru_gen; + +/* Total number of pages to expect in the memcg after touching everything */ +static long total_pages; + struct test_params { /* The backing source for the region of memory. */ enum vm_mem_backing_src_type backing_src; @@ -133,8 +150,24 @@ static void mark_page_idle(int page_idle_fd, uint64_t pfn) "Set page_idle bits for PFN 0x%" PRIx64, pfn); } -static void mark_vcpu_memory_idle(struct kvm_vm *vm, - struct memstress_vcpu_args *vcpu_args) +static void too_many_idle_pages(long idle_pages, long total_pages, int vcpu_idx) +{ + char prefix[18] = {}; + + if (vcpu_idx >= 0) + snprintf(prefix, 18, "vCPU%d: ", vcpu_idx); + + TEST_ASSERT(idle_pages_warn_only, + "%sToo many pages still idle (%lu out of %lu)", + prefix, idle_pages, total_pages); + + printf("WARNING: %sToo many pages still idle (%lu out of %lu), " + "this will affect performance results.\n", + prefix, idle_pages, total_pages); +} + +static void pageidle_mark_vcpu_memory_idle(struct kvm_vm *vm, + struct memstress_vcpu_args *vcpu_args) { int vcpu_idx = vcpu_args->vcpu_idx; uint64_t base_gva = vcpu_args->gva; @@ -188,20 +221,81 @@ static void mark_vcpu_memory_idle(struct kvm_vm *vm, * access tracking but low enough as to not make the test too brittle * over time and across architectures. */ - if (still_idle >= pages / 10) { - TEST_ASSERT(idle_pages_warn_only, - "vCPU%d: Too many pages still idle (%lu out of %lu)", - vcpu_idx, still_idle, pages); - - printf("WARNING: vCPU%d: Too many pages still idle (%lu out of %lu), " - "this will affect performance results.\n", - vcpu_idx, still_idle, pages); - } + if (still_idle >= pages / 10) + too_many_idle_pages(still_idle, pages, + overlap_memory_access ? 
-1 : vcpu_idx); close(page_idle_fd); close(pagemap_fd); } +int find_generation(struct memcg_stats *stats, long total_pages) +{ + /* + * For finding the generation that contains our pages, use the same + * 90% threshold that page_idle uses. + */ + int gen = lru_gen_find_generation(stats, total_pages * 9 / 10); + + if (gen >= 0) + return gen; + + if (!idle_pages_warn_only) { + TEST_FAIL("Could not find a generation with 90%% of guest memory (%ld pages).", + total_pages * 9 / 10); + return gen; + } + + /* + * We couldn't find a generation with 90% of guest memory, which can + * happen if access tracking is unreliable. Simply look for a majority + * of pages. + */ + puts("WARNING: Couldn't find a generation with 90% of guest memory. " + "Performance results may not be accurate."); + gen = lru_gen_find_generation(stats, total_pages / 2); + TEST_ASSERT(gen >= 0, + "Could not find a generation with 50%% of guest memory (%ld pages).", + total_pages / 2); + return gen; +} + +static void lru_gen_mark_memory_idle(struct kvm_vm *vm) +{ + struct timespec ts_start; + struct timespec ts_elapsed; + struct memcg_stats stats; + int found_gens[2]; + + /* Find current generation the pages lie in. */ + lru_gen_read_memcg_stats(&stats, TEST_MEMCG_NAME); + found_gens[0] = find_generation(&stats, total_pages); + + /* Make a new generation */ + clock_gettime(CLOCK_MONOTONIC, &ts_start); + lru_gen_do_aging(&stats, TEST_MEMCG_NAME); + ts_elapsed = timespec_elapsed(ts_start); + + /* Check the generation again */ + found_gens[1] = find_generation(&stats, total_pages); + + /* + * This function should only be invoked with newly-accessed pages, + * so pages should always move to a newer generation. + */ + if (found_gens[0] >= found_gens[1]) { + /* We did not move to a newer generation. 
*/ + long idle_pages = lru_gen_sum_memcg_stats_for_gen(found_gens[1], + &stats); + + too_many_idle_pages(min_t(long, idle_pages, total_pages), + total_pages, -1); + } + pr_info("%-30s: %ld.%09lds\n", + "Mark memory idle (lru_gen)", ts_elapsed.tv_sec, + ts_elapsed.tv_nsec); +} + static void assert_ucall(struct kvm_vcpu *vcpu, uint64_t expected_ucall) { struct ucall uc; @@ -241,7 +335,7 @@ static void vcpu_thread_main(struct memstress_vcpu_args *vcpu_args) assert_ucall(vcpu, UCALL_SYNC); break; case ITERATION_MARK_IDLE: - mark_vcpu_memory_idle(vm, vcpu_args); + pageidle_mark_vcpu_memory_idle(vm, vcpu_args); break; } @@ -293,15 +387,18 @@ static void access_memory(struct kvm_vm *vm, int nr_vcpus, static void mark_memory_idle(struct kvm_vm *vm, int nr_vcpus) { + if (use_lru_gen) + return lru_gen_mark_memory_idle(vm); + /* * Even though this parallelizes the work across vCPUs, this is still a * very slow operation because page_idle forces the test to mark one pfn - * at a time and the clear_young notifier serializes on the KVM MMU + * at a time and the clear_young notifier may serialize on the KVM MMU * lock. */ pr_debug("Marking VM memory idle (slow)...\n"); iteration_work = ITERATION_MARK_IDLE; - run_iteration(vm, nr_vcpus, "Mark memory idle"); + run_iteration(vm, nr_vcpus, "Mark memory idle (page_idle)"); } static void run_test(enum vm_guest_mode mode, void *arg) @@ -318,6 +415,14 @@ static void run_test(enum vm_guest_mode mode, void *arg) pr_info("\n"); access_memory(vm, nr_vcpus, ACCESS_WRITE, "Populating memory"); + if (use_lru_gen) { + struct memcg_stats stats; + + lru_gen_read_memcg_stats(&stats, TEST_MEMCG_NAME); + TEST_ASSERT(lru_gen_sum_memcg_stats(&stats) >= total_pages, + "Not all pages accounted for. Was the memcg set up correctly?"); + } + /* As a control, read and write to the populated memory first. 
*/ access_memory(vm, nr_vcpus, ACCESS_WRITE, "Writing to populated memory"); access_memory(vm, nr_vcpus, ACCESS_READ, "Reading from populated memory"); @@ -354,7 +459,12 @@ static int access_tracking_unreliable(void) puts("Skipping idle page count sanity check, because NUMA balancing is enabled"); return 1; } + return 0; +} +int run_test_in_cg(const char *cgroup, void *arg) +{ + for_each_guest_mode(run_test, arg); return 0; } @@ -372,7 +482,7 @@ static void help(char *name) printf(" -v: specify the number of vCPUs to run.\n"); printf(" -o: Overlap guest memory accesses instead of partitioning\n" " them into a separate region of memory for each vCPU.\n"); - printf(" -w: Control whether the test warns or fails if more than 10%\n" + printf(" -w: Control whether the test warns or fails if more than 10%%\n" " of pages are still seen as idle/old after accessing guest\n" " memory. >0 == warn only, 0 == fail, <0 == auto. For auto\n" " mode, the test fails by default, but switches to warn only\n" @@ -383,6 +493,12 @@ static void help(char *name) exit(0); } +void destroy_cgroup(char *cg) +{ + printf("Destroying cgroup: %s\n", cg); + cg_destroy(cg); +} + int main(int argc, char *argv[]) { struct test_params params = { @@ -390,6 +506,7 @@ int main(int argc, char *argv[]) .vcpu_memory_bytes = DEFAULT_PER_VCPU_MEM_SIZE, .nr_vcpus = 1, }; + char *new_cg = NULL; int page_idle_fd; int opt; @@ -424,15 +541,53 @@ int main(int argc, char *argv[]) } } - page_idle_fd = open("/sys/kernel/mm/page_idle/bitmap", O_RDWR); - __TEST_REQUIRE(page_idle_fd >= 0, - "CONFIG_IDLE_PAGE_TRACKING is not enabled"); - close(page_idle_fd); + if (lru_gen_usable()) { + if (cg_find_unified_root(cgroup_root, sizeof(cgroup_root), NULL)) + ksft_exit_skip("cgroup v2 isn't mounted\n"); + + new_cg = cg_name(cgroup_root, TEST_MEMCG_NAME); + printf("Creating cgroup: %s\n", new_cg); + if (cg_create(new_cg) && errno != EEXIST) + ksft_exit_skip("could not create new cgroup: %s\n", new_cg); + + use_lru_gen = true; + } 
else { + page_idle_fd = open("/sys/kernel/mm/page_idle/bitmap", O_RDWR); + __TEST_REQUIRE(page_idle_fd >= 0, + "Couldn't open /sys/kernel/mm/page_idle/bitmap. " + "Is CONFIG_IDLE_PAGE_TRACKING enabled?"); + + close(page_idle_fd); + } if (idle_pages_warn_only == -1) idle_pages_warn_only = access_tracking_unreliable(); - for_each_guest_mode(run_test, ¶ms); + /* + * If guest_page_size is larger than the host's page size, the + * guest (memstress) will only fault in a subset of the host's pages. + */ + total_pages = params.nr_vcpus * params.vcpu_memory_bytes / + max(memstress_args.guest_page_size, + (uint64_t)getpagesize()); + + if (use_lru_gen) { + int ret; + + puts("Using lru_gen for aging"); + /* + * This will fork off a new process to run the test within + * a new memcg, so we need to properly propagate the return + * value up. + */ + ret = cg_run(new_cg, &run_test_in_cg, ¶ms); + destroy_cgroup(new_cg); + if (ret) + return ret; + } else { + puts("Using page_idle for aging"); + for_each_guest_mode(run_test, ¶ms); + } return 0; } diff --git a/tools/testing/selftests/kvm/include/lru_gen_util.h b/tools/testing/selftests/kvm/include/lru_gen_util.h new file mode 100644 index 0000000000000..d32ff5d8ffd05 --- /dev/null +++ b/tools/testing/selftests/kvm/include/lru_gen_util.h @@ -0,0 +1,51 @@ +/* SPDX-License-Identifier: GPL-2.0-only */ +/* + * Tools for integrating with lru_gen, like parsing the lru_gen debugfs output. + * + * Copyright (C) 2025, Google LLC. 
+ */ +#ifndef SELFTEST_KVM_LRU_GEN_UTIL_H +#define SELFTEST_KVM_LRU_GEN_UTIL_H + +#include +#include +#include + +#include "test_util.h" + +#define MAX_NR_GENS 16 /* MAX_NR_GENS in include/linux/mmzone.h */ +#define MAX_NR_NODES 4 /* Maximum number of nodes supported by the test */ + +#define LRU_GEN_DEBUGFS "/sys/kernel/debug/lru_gen" +#define LRU_GEN_ENABLED_PATH "/sys/kernel/mm/lru_gen/enabled" +#define LRU_GEN_ENABLED 1 +#define LRU_GEN_MM_WALK 2 + +struct generation_stats { + int gen; + long age_ms; + long nr_anon; + long nr_file; +}; + +struct node_stats { + int node; + int nr_gens; /* Number of populated gens entries. */ + struct generation_stats gens[MAX_NR_GENS]; +}; + +struct memcg_stats { + unsigned long memcg_id; + int nr_nodes; /* Number of populated nodes entries. */ + struct node_stats nodes[MAX_NR_NODES]; +}; + +void lru_gen_read_memcg_stats(struct memcg_stats *stats, const char *memcg); +long lru_gen_sum_memcg_stats(const struct memcg_stats *stats); +long lru_gen_sum_memcg_stats_for_gen(int gen, const struct memcg_stats *stats); +void lru_gen_do_aging(struct memcg_stats *stats, const char *memcg); +int lru_gen_find_generation(const struct memcg_stats *stats, + unsigned long total_pages); +bool lru_gen_usable(void); + +#endif /* SELFTEST_KVM_LRU_GEN_UTIL_H */ diff --git a/tools/testing/selftests/kvm/lib/lru_gen_util.c b/tools/testing/selftests/kvm/lib/lru_gen_util.c new file mode 100644 index 0000000000000..783a1f1028a26 --- /dev/null +++ b/tools/testing/selftests/kvm/lib/lru_gen_util.c @@ -0,0 +1,383 @@ +// SPDX-License-Identifier: GPL-2.0-only +/* + * Copyright (C) 2025, Google LLC. + */ + +#include + +#include "lru_gen_util.h" + +/* + * Tracks state while we parse memcg lru_gen stats. 
The file we're parsing is + * structured like this (some extra whitespace elided): + * + * memcg (id) (path) + * node (id) + * (gen_nr) (age_in_ms) (nr_anon_pages) (nr_file_pages) + */ +struct memcg_stats_parse_context { + bool consumed; /* Whether or not this line was consumed */ + /* Next parse handler to invoke */ + void (*next_handler)(struct memcg_stats *, + struct memcg_stats_parse_context *, char *); + int current_node_idx; /* Current index in nodes array */ + const char *name; /* The name of the memcg we're looking for */ +}; + +static void memcg_stats_handle_searching(struct memcg_stats *stats, + struct memcg_stats_parse_context *ctx, + char *line); +static void memcg_stats_handle_in_memcg(struct memcg_stats *stats, + struct memcg_stats_parse_context *ctx, + char *line); +static void memcg_stats_handle_in_node(struct memcg_stats *stats, + struct memcg_stats_parse_context *ctx, + char *line); + +struct split_iterator { + char *str; + char *save; +}; + +static char *split_next(struct split_iterator *it) +{ + char *ret = strtok_r(it->str, " \t\n\r", &it->save); + + it->str = NULL; + return ret; +} + +static void memcg_stats_handle_searching(struct memcg_stats *stats, + struct memcg_stats_parse_context *ctx, + char *line) +{ + struct split_iterator it = { .str = line }; + char *prefix = split_next(&it); + char *memcg_id = split_next(&it); + char *memcg_name = split_next(&it); + char *end; + + ctx->consumed = true; + + if (!prefix || strcmp("memcg", prefix)) + return; /* Not a memcg line (maybe empty), skip */ + + TEST_ASSERT(memcg_id && memcg_name, + "malformed memcg line; no memcg id or memcg_name"); + + if (strcmp(memcg_name + 1, ctx->name)) + return; /* Wrong memcg, skip */ + + /* Found it! */ + + stats->memcg_id = strtoul(memcg_id, &end, 10); + TEST_ASSERT(*end == '\0', "malformed memcg id '%s'", memcg_id); + if (!stats->memcg_id) + return; /* Removed memcg? 
*/ + + ctx->next_handler = memcg_stats_handle_in_memcg; +} + +static void memcg_stats_handle_in_memcg(struct memcg_stats *stats, + struct memcg_stats_parse_context *ctx, + char *line) +{ + struct split_iterator it = { .str = line }; + char *prefix = split_next(&it); + char *id = split_next(&it); + long found_node_id; + char *end; + + ctx->consumed = true; + ctx->current_node_idx = -1; + + if (!prefix) + return; /* Skip empty lines */ + + if (!strcmp("memcg", prefix)) { + /* Memcg done, found next one; stop. */ + ctx->next_handler = NULL; + return; + } else if (strcmp("node", prefix)) + TEST_ASSERT(false, "found malformed line after 'memcg ...'," + "token: '%s'", prefix); + + /* At this point we know we have a node line. Parse the ID. */ + + TEST_ASSERT(id, "malformed node line; no node id"); + + found_node_id = strtol(id, &end, 10); + TEST_ASSERT(*end == '\0', "malformed node id '%s'", id); + + ctx->current_node_idx = stats->nr_nodes++; + TEST_ASSERT(ctx->current_node_idx < MAX_NR_NODES, + "memcg has stats for too many nodes, max is %d", + MAX_NR_NODES); + stats->nodes[ctx->current_node_idx].node = found_node_id; + + ctx->next_handler = memcg_stats_handle_in_node; +} + +static void memcg_stats_handle_in_node(struct memcg_stats *stats, + struct memcg_stats_parse_context *ctx, + char *line) +{ + char *my_line = strdup(line); + struct split_iterator it = { .str = my_line }; + char *gen, *age, *nr_anon, *nr_file; + struct node_stats *node_stats; + struct generation_stats *gen_stats; + char *end; + + TEST_ASSERT(it.str, "failed to copy input line"); + + gen = split_next(&it); + + if (!gen) + goto out_consume; /* Skip empty lines */ + + if (!strcmp("memcg", gen) || !strcmp("node", gen)) { + /* + * Reached next memcg or node section. Don't consume, let the + * other handler deal with this. 
+ */ + ctx->next_handler = memcg_stats_handle_in_memcg; + goto out; + } + + node_stats = &stats->nodes[ctx->current_node_idx]; + TEST_ASSERT(node_stats->nr_gens < MAX_NR_GENS, + "found too many generation lines; max is %d", + MAX_NR_GENS); + gen_stats = &node_stats->gens[node_stats->nr_gens++]; + + age = split_next(&it); + nr_anon = split_next(&it); + nr_file = split_next(&it); + + TEST_ASSERT(age && nr_anon && nr_file, + "malformed generation line; not enough tokens"); + + gen_stats->gen = (int)strtol(gen, &end, 10); + TEST_ASSERT(*end == '\0', "malformed generation number '%s'", gen); + + gen_stats->age_ms = strtol(age, &end, 10); + TEST_ASSERT(*end == '\0', "malformed generation age '%s'", age); + + gen_stats->nr_anon = strtol(nr_anon, &end, 10); + TEST_ASSERT(*end == '\0', "malformed anonymous page count '%s'", + nr_anon); + + gen_stats->nr_file = strtol(nr_file, &end, 10); + TEST_ASSERT(*end == '\0', "malformed file page count '%s'", nr_file); + +out_consume: + ctx->consumed = true; +out: + free(my_line); +} + +static void print_memcg_stats(const struct memcg_stats *stats, const char *name) +{ + int node, gen; + + pr_debug("stats for memcg %s (id %lu):\n", name, stats->memcg_id); + for (node = 0; node < stats->nr_nodes; ++node) { + pr_debug("\tnode %d\n", stats->nodes[node].node); + for (gen = 0; gen < stats->nodes[node].nr_gens; ++gen) { + const struct generation_stats *gstats = + &stats->nodes[node].gens[gen]; + + pr_debug("\t\tgen %d\tage_ms %ld" + "\tnr_anon %ld\tnr_file %ld\n", + gstats->gen, gstats->age_ms, gstats->nr_anon, + gstats->nr_file); + } + } +} + +/* Re-read lru_gen debugfs information for @memcg into @stats. 
*/ +void lru_gen_read_memcg_stats(struct memcg_stats *stats, const char *memcg) +{ + FILE *f; + ssize_t read = 0; + char *line = NULL; + size_t bufsz; + struct memcg_stats_parse_context ctx = { + .next_handler = memcg_stats_handle_searching, + .name = memcg, + }; + + memset(stats, 0, sizeof(struct memcg_stats)); + + f = fopen(LRU_GEN_DEBUGFS, "r"); + TEST_ASSERT(f, "fopen(%s) failed", LRU_GEN_DEBUGFS); + + while (ctx.next_handler && (read = getline(&line, &bufsz, f)) > 0) { + ctx.consumed = false; + + do { + ctx.next_handler(stats, &ctx, line); + if (!ctx.next_handler) + break; + } while (!ctx.consumed); + } + + if (read < 0 && !feof(f)) + TEST_ASSERT(false, "getline(%s) failed", LRU_GEN_DEBUGFS); + + TEST_ASSERT(stats->memcg_id > 0, "Couldn't find memcg: %s\n" + "Did the memcg get created in the proper mount?", + memcg); + if (line) + free(line); + TEST_ASSERT(!fclose(f), "fclose(%s) failed", LRU_GEN_DEBUGFS); + + print_memcg_stats(stats, memcg); +} + +/* + * Find all pages tracked by lru_gen for this memcg in generation @target_gen. + * + * If @target_gen is negative, look for all generations. + */ +long lru_gen_sum_memcg_stats_for_gen(int target_gen, + const struct memcg_stats *stats) +{ + int node, gen; + long total_nr = 0; + + for (node = 0; node < stats->nr_nodes; ++node) { + const struct node_stats *node_stats = &stats->nodes[node]; + + for (gen = 0; gen < node_stats->nr_gens; ++gen) { + const struct generation_stats *gen_stats = + &node_stats->gens[gen]; + + if (target_gen >= 0 && gen_stats->gen != target_gen) + continue; + + total_nr += gen_stats->nr_anon + gen_stats->nr_file; + } + } + + return total_nr; +} + +/* Find all pages tracked by lru_gen for this memcg. */ +long lru_gen_sum_memcg_stats(const struct memcg_stats *stats) +{ + return lru_gen_sum_memcg_stats_for_gen(-1, stats); +} + +/* + * If lru_gen aging should force page table scanning. + * + * If you want to set this to false, you will need to do eviction + * before doing extra aging passes. 
+ */ +static const bool force_scan = true; + +static void run_aging_impl(unsigned long memcg_id, int node_id, int max_gen) +{ + FILE *f = fopen(LRU_GEN_DEBUGFS, "w"); + char *command; + size_t sz; + + TEST_ASSERT(f, "fopen(%s) failed", LRU_GEN_DEBUGFS); + sz = asprintf(&command, "+ %lu %d %d 1 %d\n", + memcg_id, node_id, max_gen, force_scan); + TEST_ASSERT(sz > 0, "creating aging command failed"); + + pr_debug("Running aging command: %s", command); + if (fwrite(command, sizeof(char), sz, f) < sz) { + TEST_ASSERT(false, "writing aging command %s to %s failed", + command, LRU_GEN_DEBUGFS); + } + + TEST_ASSERT(!fclose(f), "fclose(%s) failed", LRU_GEN_DEBUGFS); +} + +void lru_gen_do_aging(struct memcg_stats *stats, const char *memcg) +{ + int node, gen; + + pr_debug("lru_gen: invoking aging...\n"); + + /* Must read memcg stats to construct the proper aging command. */ + lru_gen_read_memcg_stats(stats, memcg); + + for (node = 0; node < stats->nr_nodes; ++node) { + int max_gen = 0; + + for (gen = 0; gen < stats->nodes[node].nr_gens; ++gen) { + int this_gen = stats->nodes[node].gens[gen].gen; + + max_gen = max_gen > this_gen ? max_gen : this_gen; + } + + run_aging_impl(stats->memcg_id, stats->nodes[node].node, + max_gen); + } + + /* Re-read so callers get updated information */ + lru_gen_read_memcg_stats(stats, memcg); +} + +/* + * Find which generation contains at least @pages pages, assuming that + * such a generation exists. + */ +int lru_gen_find_generation(const struct memcg_stats *stats, + unsigned long pages) +{ + int node, gen, gen_idx, min_gen = INT_MAX, max_gen = -1; + + for (node = 0; node < stats->nr_nodes; ++node) + for (gen_idx = 0; gen_idx < stats->nodes[node].nr_gens; + ++gen_idx) { + gen = stats->nodes[node].gens[gen_idx].gen; + max_gen = gen > max_gen ? gen : max_gen; + min_gen = gen < min_gen ? gen : min_gen; + } + + for (gen = min_gen; gen < max_gen; ++gen) + /* See if this generation has enough pages. 
*/ + if (lru_gen_sum_memcg_stats_for_gen(gen, stats) > pages) + return gen; + + return -1; +} + +bool lru_gen_usable(void) +{ + long required_features = LRU_GEN_ENABLED | LRU_GEN_MM_WALK; + int lru_gen_fd, lru_gen_debug_fd; + char mglru_feature_str[8] = {}; + long mglru_features; + + lru_gen_fd = open(LRU_GEN_ENABLED_PATH, O_RDONLY); + if (lru_gen_fd < 0) { + puts("lru_gen: Could not open " LRU_GEN_ENABLED_PATH); + return false; + } + if (read(lru_gen_fd, &mglru_feature_str, 7) < 7) { + puts("lru_gen: Could not read from " LRU_GEN_ENABLED_PATH); + close(lru_gen_fd); + return false; + } + close(lru_gen_fd); + + mglru_features = strtol(mglru_feature_str, NULL, 16); + if ((mglru_features & required_features) != required_features) { + printf("lru_gen: missing features, got: %s", mglru_feature_str); + return false; + } + + lru_gen_debug_fd = open(LRU_GEN_DEBUGFS, O_RDWR); + __TEST_REQUIRE(lru_gen_debug_fd >= 0, + "lru_gen: Could not open " LRU_GEN_DEBUGFS ", " + "but lru_gen is enabled, so cannot use page_idle."); + close(lru_gen_debug_fd); + return true; +}
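[Not part of the patch: for readers unfamiliar with the lru_gen debugfs layout that the new parser in lru_gen_util.c walks, here is a minimal sketch of the same parse-and-sum logic outside the selftest. The memcg id, path, and page counts are invented sample values, not output from a real kernel.]

```python
# Sketch of the format consumed by lru_gen_read_memcg_stats():
#
#   memcg (id) (path)
#    node (id)
#     (gen_nr) (age_in_ms) (nr_anon_pages) (nr_file_pages)
#
# Sample dump with made-up values:
SAMPLE = """\
memcg 42 /access_tracking_perf_test
 node 0
  0 1500 0 2048
  1 700 0 4096
"""

def parse_memcg_stats(text, name):
    """Return {node_id: [(gen, age_ms, nr_anon, nr_file), ...]} for memcg `name`."""
    stats, in_target, node = {}, False, None
    for line in text.splitlines():
        tok = line.split()
        if not tok:
            continue
        if tok[0] == "memcg":
            # The path is printed with a leading '/', which the C parser
            # skips via `memcg_name + 1`.
            in_target = len(tok) >= 3 and tok[2][1:] == name
        elif in_target and tok[0] == "node":
            node = int(tok[1])
            stats[node] = []
        elif in_target and node is not None:
            gen, age_ms, nr_anon, nr_file = map(int, tok)
            stats[node].append((gen, age_ms, nr_anon, nr_file))
    return stats

stats = parse_memcg_stats(SAMPLE, "access_tracking_perf_test")
# Equivalent of lru_gen_sum_memcg_stats(): nr_anon + nr_file over all gens.
total = sum(anon + file for gens in stats.values()
            for _, _, anon, file in gens)
print(total)  # prints 6144
```

The aging command that run_aging_impl() writes ("+ memcg_id node_id max_gen can_swap force_scan") follows the debugfs interface described in Documentation/admin-guide/mm/multigen_lru.rst; the patch hard-codes can_swap=1 and force_scan=1 so a new generation is created with a full page table scan.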