
[v7,10/14] KVM: selftests: Report per-vcpu demand paging rate from demand paging test

Message ID: 20240215235405.368539-11-amoorthy@google.com
State: New, archived
Series: Improve KVM + userfaultfd performance via KVM_EXIT_MEMORY_FAULTs on stage-2 faults

Commit Message

Anish Moorthy Feb. 15, 2024, 11:54 p.m. UTC
Using the overall demand paging rate to measure performance can be
slightly misleading when vCPU accesses do not overlap: adding more
vCPUs will (usually) increase the overall demand paging rate even if
per-vCPU performance remains constant or degrades. As such, it makes
sense to report both the total and per-vCPU paging rates.

Signed-off-by: Anish Moorthy <amoorthy@google.com>
---
 tools/testing/selftests/kvm/demand_paging_test.c | 15 +++++++++++----
 1 file changed, 11 insertions(+), 4 deletions(-)
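
For a concrete sense of the arithmetic being added, here is a minimal
standalone sketch (the input values are invented for illustration; only
the division mirrors the patch, i.e. memstress_args.vcpu_args[0].pages
over the elapsed wall-clock time, and the rest is scaffolding that does
not exist in the selftest):

	#include <stdio.h>
	#include <time.h>

	#define NSEC_PER_SEC 1000000000L

	int main(void)
	{
		/*
		 * Stand-ins for memstress_args.vcpu_args[0].pages, nr_vcpus,
		 * and the ts_diff measured by the test (values invented).
		 */
		long pages_per_vcpu = 65536;
		int nr_vcpus = 4;
		struct timespec ts_diff = { .tv_sec = 4, .tv_nsec = 0 };

		/* Per-vCPU rate: pages touched by one vCPU over elapsed time. */
		double vcpu_paging_rate = pages_per_vcpu /
					  ((double)ts_diff.tv_sec +
					   (double)ts_diff.tv_nsec / NSEC_PER_SEC);

		/* Prints 16384.000000 pgs/sec/vcpu and 65536.000000 pgs/sec. */
		printf("Per-vcpu demand paging rate:\t%f pgs/sec/vcpu\n",
		       vcpu_paging_rate);
		printf("Overall demand paging rate:\t%f pgs/sec\n",
		       vcpu_paging_rate * nr_vcpus);
		return 0;
	}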

Comments

Sean Christopherson April 9, 2024, 10:49 p.m. UTC | #1
On Thu, Feb 15, 2024, Anish Moorthy wrote:
> Using the overall demand paging rate to measure performance can be
> slightly misleading when vCPU accesses do not overlap: adding more
> vCPUs will (usually) increase the overall demand paging rate even if
> per-vCPU performance remains constant or degrades. As such, it makes
> sense to report both the total and per-vCPU paging rates.
> 
> Signed-off-by: Anish Moorthy <amoorthy@google.com>
> ---
>  tools/testing/selftests/kvm/demand_paging_test.c | 15 +++++++++++----
>  1 file changed, 11 insertions(+), 4 deletions(-)
> 
> diff --git a/tools/testing/selftests/kvm/demand_paging_test.c b/tools/testing/selftests/kvm/demand_paging_test.c
> index 09c116a82a84..6dc823fa933a 100644
> --- a/tools/testing/selftests/kvm/demand_paging_test.c
> +++ b/tools/testing/selftests/kvm/demand_paging_test.c
> @@ -135,6 +135,7 @@ static void run_test(enum vm_guest_mode mode, void *arg)
>  	struct timespec ts_diff;
>  	struct kvm_vm *vm;
>  	int i;
> +	double vcpu_paging_rate;
>  
>  	vm = memstress_create_vm(mode, nr_vcpus, guest_percpu_mem_size, 1,
>  				 p->src_type, p->partition_vcpu_memory_access);
> @@ -191,11 +192,17 @@ static void run_test(enum vm_guest_mode mode, void *arg)
>  			uffd_stop_demand_paging(uffd_descs[i]);
>  	}
>  
> -	pr_info("Total guest execution time: %ld.%.9lds\n",
> +	pr_info("Total guest execution time:\t%ld.%.9lds\n",
>  		ts_diff.tv_sec, ts_diff.tv_nsec);
> -	pr_info("Overall demand paging rate: %f pgs/sec\n",
> -		memstress_args.vcpu_args[0].pages * nr_vcpus /
> -		((double)ts_diff.tv_sec + (double)ts_diff.tv_nsec / NSEC_PER_SEC));
> +
> +	vcpu_paging_rate =
> +		memstress_args.vcpu_args[0].pages
> +		/ ((double)ts_diff.tv_sec
> +			+ (double)ts_diff.tv_nsec / NSEC_PER_SEC);

*sigh*

For the umpteenth time, please follow kernel coding style.  Either

	vcpu_paging_rate = memstress_args.vcpu_args[0].pages /
			   ((double)ts_diff.tv_sec + (double)ts_diff.tv_nsec / NSEC_PER_SEC);

or

	vcpu_paging_rate = memstress_args.vcpu_args[0].pages /
			   ((double)ts_diff.tv_sec +
			    (double)ts_diff.tv_nsec / NSEC_PER_SEC);

I don't have a strong preference, so I'll go with the first one when applying.

Patch

diff --git a/tools/testing/selftests/kvm/demand_paging_test.c b/tools/testing/selftests/kvm/demand_paging_test.c
index 09c116a82a84..6dc823fa933a 100644
--- a/tools/testing/selftests/kvm/demand_paging_test.c
+++ b/tools/testing/selftests/kvm/demand_paging_test.c
@@ -135,6 +135,7 @@ static void run_test(enum vm_guest_mode mode, void *arg)
 	struct timespec ts_diff;
 	struct kvm_vm *vm;
 	int i;
+	double vcpu_paging_rate;
 
 	vm = memstress_create_vm(mode, nr_vcpus, guest_percpu_mem_size, 1,
 				 p->src_type, p->partition_vcpu_memory_access);
@@ -191,11 +192,17 @@ static void run_test(enum vm_guest_mode mode, void *arg)
 			uffd_stop_demand_paging(uffd_descs[i]);
 	}
 
-	pr_info("Total guest execution time: %ld.%.9lds\n",
+	pr_info("Total guest execution time:\t%ld.%.9lds\n",
 		ts_diff.tv_sec, ts_diff.tv_nsec);
-	pr_info("Overall demand paging rate: %f pgs/sec\n",
-		memstress_args.vcpu_args[0].pages * nr_vcpus /
-		((double)ts_diff.tv_sec + (double)ts_diff.tv_nsec / NSEC_PER_SEC));
+
+	vcpu_paging_rate =
+		memstress_args.vcpu_args[0].pages
+		/ ((double)ts_diff.tv_sec
+			+ (double)ts_diff.tv_nsec / NSEC_PER_SEC);
+	pr_info("Per-vcpu demand paging rate:\t%f pgs/sec/vcpu\n",
+		vcpu_paging_rate);
+	pr_info("Overall demand paging rate:\t%f pgs/sec\n",
+		vcpu_paging_rate * nr_vcpus);
 
 	memstress_destroy_vm(vm);
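
With this change the test's summary output takes the following shape
(the format strings come from the pr_info() calls above; the values are
invented for illustration, matching the sketch earlier: 65536 pages per
vCPU over exactly 4 seconds with 4 vCPUs):

	Total guest execution time:	4.000000000s
	Per-vcpu demand paging rate:	16384.000000 pgs/sec/vcpu
	Overall demand paging rate:	65536.000000 pgs/sec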