[kvm-unit-tests,v3,11/11] s390x: add sieve test

Message ID 20180213162321.20522-12-david@redhat.com (mailing list archive)
State New, archived

Commit Message

David Hildenbrand Feb. 13, 2018, 4:23 p.m. UTC
Copied from x86/sieve.c. Modifications:
- proper code formatting.
- as setup_vm() is already called, temporarily disable DAT.

The test takes fairly long, especially because we only have 128MB of RAM
and allocate ~100MB of virtual memory three times in a row.

Signed-off-by: David Hildenbrand <david@redhat.com>
---
 s390x/Makefile      |  1 +
 s390x/sieve.c       | 59 +++++++++++++++++++++++++++++++++++++++++++++++++++++
 s390x/unittests.cfg |  6 ++++++
 3 files changed, 66 insertions(+)
 create mode 100644 s390x/sieve.c

Comments

Christian Borntraeger Feb. 13, 2018, 4:26 p.m. UTC | #1
On 02/13/2018 05:23 PM, David Hildenbrand wrote:
> Copied from x86/sieve.c. Modifications:
> - proper code formatting.
> - as setup_vm() is already called, temporarily disable DAT.
> 
> The test takes fairly long, especially because we only have 128MB of RAM
> and allocate ~100MB of virtual memory three times in a row.

Does it make sense to change the memory size in the unittests.cfg file? e.g. via extra_params?
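
For illustration, such an override could look roughly like this (sketch
only; -m 512 is an arbitrary example size, not a tested value):

	[sieve]
	file = sieve.elf
	groups = selftest
	# give the ~100MB allocations more headroom than the default 128MB
	extra_params = -m 512
	timeout = 600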


Paolo Bonzini Feb. 13, 2018, 4:44 p.m. UTC | #2
On 13/02/2018 17:26, Christian Borntraeger wrote:
>> The test takes fairly long, especially because we only have 128MB of RAM
>> and allocate ~100MB of virtual memory three times in a row.
>
> Does it make sense to change the memory size in the unittests.cfg file? e.g. via extra_params?

The test takes 5 seconds on x86 KVM and about a minute on TCG (lots of
TLB misses).  Maybe it's an s390-specific bug in the test?

Thanks,

Paolo
David Hildenbrand Feb. 13, 2018, 5:02 p.m. UTC | #3
On 13.02.2018 17:44, Paolo Bonzini wrote:
> On 13/02/2018 17:26, Christian Borntraeger wrote:
>>> The test takes fairly long, especially because we only have 128MB of RAM
>>> and allocate ~100MB of virtual memory three times in a row.
>>
>> Does it make sense to change the memory size in the unittests.cfg file? e.g. via extra_params?
> 
> The test takes 5 seconds on x86 KVM and about a minute on TCG (lots of
> TLB misses).  Maybe it's an s390-specific bug in the test?

Under TCG: 1m 6.823s

I have no idea how long it takes under LPAR.  Right now, I only have a
z/VM-based system available, so we are already using nested
virtualization (implemented by z/VM). I expect things to be slow :)

Will try to find an LPAR to test with ...

> 
> Thanks,
> 
> Paolo
>
David Hildenbrand Feb. 13, 2018, 5:08 p.m. UTC | #4
On 13.02.2018 18:02, David Hildenbrand wrote:
> On 13.02.2018 17:44, Paolo Bonzini wrote:
>> On 13/02/2018 17:26, Christian Borntraeger wrote:
>>>> The test takes fairly long, especially because we only have 128MB of RAM
>>>> and allocate ~100MB of virtual memory three times in a row.
>>>
>>> Does it make sense to change the memory size in the unittests.cfg file? e.g. via extra_params?
>>
>> The test takes 5 seconds on x86 KVM and about a minute on TCG (lots of
>> TLB misses).  Maybe it's an s390-specific bug in the test?
> 
> Under TCG: 1m 6.823s
> 
> I have no idea how long it takes under LPAR.  Right now, I only have a
> z/VM-based system available, so we are already using nested
> virtualization (implemented by z/VM). I expect things to be slow :)
> 
> Will try to find an LPAR to test with ...
> 

LPAR: 0m 6.552s
z/VM: > 6m

So I don't think it's a bug. It's really nested virtualization kicking in.
David Hildenbrand Feb. 13, 2018, 5:09 p.m. UTC | #5
On 13.02.2018 17:26, Christian Borntraeger wrote:
> 
> 
> On 02/13/2018 05:23 PM, David Hildenbrand wrote:
>> Copied from x86/sieve.c. Modifications:
>> - proper code formatting.
>> - as setup_vm() is already called, temporarily disable DAT.
>>
>> The test takes fairly long, especially because we only have 128MB of RAM
>> and allocate ~100MB of virtual memory three times in a row.
> 
> Does it make sense to change the memory size in the unittests.cfg file? e.g. via extra_params?

As the long runtime only seems to occur when running nested under
z/VM, I don't think this should be a problem.
Paolo Bonzini Feb. 14, 2018, 11:51 a.m. UTC | #6
On 13/02/2018 18:08, David Hildenbrand wrote:
> On 13.02.2018 18:02, David Hildenbrand wrote:
>> On 13.02.2018 17:44, Paolo Bonzini wrote:
>>> On 13/02/2018 17:26, Christian Borntraeger wrote:
>>>>> The test takes fairly long, especially because we only have 128MB of RAM
>>>>> and allocate ~100MB of virtual memory three times in a row.
>>>>
>>>> Does it make sense to change the memory size in the unittests.cfg file? e.g. via extra_params?
>>>
>>> The test takes 5 seconds on x86 KVM and about a minute on TCG (lots of
>>> TLB misses).  Maybe it's an s390-specific bug in the test?
>>
>> Under TCG: 1m 6.823s
>>
>> I have no idea how long it takes under LPAR.  Right now, I only have a
>> z/VM-based system available, so we are already using nested
>> virtualization (implemented by z/VM). I expect things to be slow :)
>> 
>> Will try to find an LPAR to test with ...
>>
> 
> LPAR: 0m 6.552s
> z/VM: > 6m
> 
> So I don't think it's a bug. It's really nested virtualization kicking in.

Whoa.  How does KVM-on-KVM-on-LPAR fare?

I'll change the comment to

# can take fairly long when KVM is nested inside z/VM

Thanks,

Paolo
David Hildenbrand Feb. 14, 2018, 12:56 p.m. UTC | #7
On 14.02.2018 12:51, Paolo Bonzini wrote:
> On 13/02/2018 18:08, David Hildenbrand wrote:
>> On 13.02.2018 18:02, David Hildenbrand wrote:
>>> On 13.02.2018 17:44, Paolo Bonzini wrote:
>>>> On 13/02/2018 17:26, Christian Borntraeger wrote:
>>>>>> The test takes fairly long, especially because we only have 128MB of RAM
>>>>>> and allocate ~100MB of virtual memory three times in a row.
>>>>>
>>>>> Does it make sense to change the memory size in the unittests.cfg file? e.g. via extra_params?
>>>>
>>>> The test takes 5 seconds on x86 KVM and about a minute on TCG (lots of
>>>> TLB misses).  Maybe it's an s390-specific bug in the test?
>>>
>>> Under TCG: 1m 6.823s
>>>
>>> I have no idea how long it takes under LPAR.  Right now, I only have a
>>> z/VM-based system available, so we are already using nested
>>> virtualization (implemented by z/VM). I expect things to be slow :)
>>>
>>> Will try to find an LPAR to test with ...
>>>
>>
>> LPAR: 0m 6.552s
>> z/VM: > 6m
>>
>> So I don't think it's a bug. It's really nested virtualization kicking in.
> 
> Whoa.  How does KVM-on-KVM-on-LPAR fare?

We do a lot of IPTE calls, which flush the TLB on all CPUs.

But I think this could also be because the z/VM machine I am running on
is simply overloaded. Or, of course, because our nested virt
implementation in KVM is better ;)

Just tried nested under KVM (KVM-on-KVM-on-LPAR):

real    0m 9.411s

> 
> I'll change the comment to
> 
> # can take fairly long when KVM is nested inside z/VM
> 
> Thanks,
> 
> Paolo
>
David Hildenbrand Feb. 14, 2018, 1:15 p.m. UTC | #8
On 14.02.2018 13:56, David Hildenbrand wrote:
> On 14.02.2018 12:51, Paolo Bonzini wrote:
>> On 13/02/2018 18:08, David Hildenbrand wrote:
>>> On 13.02.2018 18:02, David Hildenbrand wrote:
>>>> On 13.02.2018 17:44, Paolo Bonzini wrote:
>>>>> On 13/02/2018 17:26, Christian Borntraeger wrote:
>>>>>>> The test takes fairly long, especially because we only have 128MB of RAM
>>>>>>> and allocate ~100MB of virtual memory three times in a row.
>>>>>>
>>>>>> Does it make sense to change the memory size in the unittests.cfg file? e.g. via extra_params?
>>>>>
>>>>> The test takes 5 seconds on x86 KVM and about a minute on TCG (lots of
>>>>> TLB misses).  Maybe it's an s390-specific bug in the test?
>>>>
>>>> Under TCG: 1m 6.823s
>>>>
>>>> I have no idea how long it takes under LPAR.  Right now, I only have a
>>>> z/VM-based system available, so we are already using nested
>>>> virtualization (implemented by z/VM). I expect things to be slow :)
>>>>
>>>> Will try to find an LPAR to test with ...
>>>>
>>>
>>> LPAR: 0m 6.552s
>>> z/VM: > 6m
>>>
>>> So I don't think it's a bug. It's really nested virtualization kicking in.
>>
>> Whoa.  How does KVM-on-KVM-on-LPAR fare?
> 
> We do a lot of IPTE calls, which flush the TLB on all CPUs.
> 
> But I think this could also be because the z/VM machine I am running on
> is simply overloaded. Or, of course, because our nested virt
> implementation in KVM is better ;)

Just double-checked: as we are not reusing virtual addresses, we are not
issuing any IPTE instructions right now. So there are also no TLB flushes.
This makes it very strange that z/VM performs (reproducibly, for me) that
badly.
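
On the instruction level, a PTE invalidation on s390x boils down to IPTE,
which is also what broadcasts the TLB flush to all CPUs. A rough sketch of
such an invalidation (the helper name and operand handling are illustrative,
modeled on typical s390 kernel code, not on the kvm-unit-tests sources):

	/*
	 * pto: origin of the page table containing the PTE; vaddr: the
	 * virtual address whose PTE should be invalidated. IPTE marks the
	 * PTE invalid and flushes the matching TLB entries on all CPUs,
	 * which is what makes it expensive under nested virtualization.
	 */
	static inline void pte_invalidate(unsigned long *pto, unsigned long vaddr)
	{
		asm volatile("ipte %0,%1" : : "a" (pto), "a" (vaddr) : "memory");
	}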

> 
> Just tried nested under KVM (KVM-on-KVM-on-LPAR):
> 
> real    0m 9.411s
> 
>>
>> I'll change the comment to
>>
>> # can take fairly long when KVM is nested inside z/VM
>>
>> Thanks,
>>
>> Paolo
>>
> 
>

Patch

diff --git a/s390x/Makefile b/s390x/Makefile
index d9bef37..2d3336c 100644
--- a/s390x/Makefile
+++ b/s390x/Makefile
@@ -1,6 +1,7 @@ 
 tests = $(TEST_DIR)/selftest.elf
 tests += $(TEST_DIR)/intercept.elf
 tests += $(TEST_DIR)/emulator.elf
+tests += $(TEST_DIR)/sieve.elf
 
 all: directories test_cases
 
diff --git a/s390x/sieve.c b/s390x/sieve.c
new file mode 100644
index 0000000..28d4d1e
--- /dev/null
+++ b/s390x/sieve.c
@@ -0,0 +1,59 @@ 
+/*
+ * Copied from x86/sieve.c
+ */
+
+#include <libcflat.h>
+#include <alloc.h>
+#include <asm/pgtable.h>
+
+int sieve(char* data, int size)
+{
+	int i, j, r = 0;
+
+	for (i = 0; i < size; ++i)
+		data[i] = 1;
+
+	data[0] = data[1] = 0;
+
+	for (i = 2; i < size; ++i)
+		if (data[i]) {
+			++r;
+			for (j = i * 2; j < size; j += i)
+				data[j] = 0;
+		}
+	return r;
+}
+
+void test_sieve(const char *msg, char *data, int size)
+{
+	int r;
+
+	printf("%s:", msg);
+	r = sieve(data, size);
+	printf("%d out of %d\n", r, size);
+}
+
+#define STATIC_SIZE 1000000
+#define VSIZE 100000000
+char static_data[STATIC_SIZE];
+
+int main()
+{
+	void *v;
+	int i;
+
+	printf("starting sieve\n");
+
+	configure_dat(0);
+	test_sieve("static", static_data, STATIC_SIZE);
+	configure_dat(1);
+
+	test_sieve("mapped", static_data, STATIC_SIZE);
+	for (i = 0; i < 3; ++i) {
+		v = malloc(VSIZE);
+		test_sieve("virtual", v, VSIZE);
+		free(v);
+	}
+
+	return 0;
+}
diff --git a/s390x/unittests.cfg b/s390x/unittests.cfg
index 1343a19..4a1e469 100644
--- a/s390x/unittests.cfg
+++ b/s390x/unittests.cfg
@@ -28,3 +28,9 @@  file = intercept.elf
 
 [emulator]
 file = emulator.elf
+
+[sieve]
+file = sieve.elf
+groups = selftest
+# can take fairly long even on KVM guests
+timeout = 600
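
Once built, the test can also be run standalone via the architecture's run
script, roughly like this (assuming the usual kvm-unit-tests layout and an
s390x-capable QEMU):

	./s390x/run s390x/sieve.elf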