diff mbox

[KVM_AUTOTEST] add kvm hugepage variant and test

Message ID 4A55B759.5080302@redhat.com (mailing list archive)
State New, archived
Headers show

Commit Message

Lukáš Doktor July 9, 2009, 9:24 a.m. UTC
This patch adds the kvm_hugepage variant. It prepares the host system and 
starts the VM with the -mem-path option. It does not clean up after 
itself, because it is impossible to unmount and free the hugepages 
before all guests are destroyed.

It also adds the autotest.libhugetlbfs test.

I need to ask you what to do about the change of the qemu parameter. Newer 
versions use -mempath instead of -mem-path. This is impossible to 
fix using the current config file. I can see two solutions:
1) a direct change in kvm_vm.py (parse the output and try the other param)
2) detect qemu capabilities outside and create an additional layer (better 
for future occurrences)

Tested-by: ldoktor@redhat.com on RHEL5.4 with kvm-83-72.el5

Comments

Michael Goldish July 9, 2009, 12:30 p.m. UTC | #1
I don't think you need to explicitly check for a memory allocation
failure in VM.create() ("qemu produced some output ...").
VM.create() already makes sure the VM is started successfully, and
prints informative failure messages if there's any problem.

----- "Lukáš Doktor" <ldoktor@redhat.com> wrote:

> This patch adds the kvm_hugepage variant. It prepares the host system and
> starts the VM with the -mem-path option. It does not clean up after
> itself, because 
> it is impossible to unmount and free the hugepages before all guests are 
> destroyed.
> 
> It also adds the autotest.libhugetlbfs test.
> 
> I need to ask you what to do about the change of the qemu parameter. Newer 
> versions use -mempath instead of -mem-path. This is impossible to
> fix using the current config file. I can see two solutions:
> 1) a direct change in kvm_vm.py (parse the output and try the other param)
> 2) detect qemu capabilities outside and create an additional layer
> (better 
> for future occurrences)

I'll have to think about this a little before answering.

> Tested-by: ldoktor@redhat.com on RHEL5.4 with kvm-83-72.el5
--
To unsubscribe from this list: send the line "unsubscribe kvm" in
the body of a message to majordomo@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Lukáš Doktor July 9, 2009, 12:55 p.m. UTC | #2
Hi Michael,

actually it is necessary. qemu-kvm only puts this message into the 
output and continues booting the guest without hugepage support. Autotest 
then runs all the tests, and there is no later mention of the problem in 
the output. You would have to predict that this had happened and look at 
the debug output of each individual test to see whether qemu produced 
this message.
With this check, if qemu-kvm cannot allocate the hugepage memory, the 
test fails, the information is logged, and Autotest continues with the 
next variant.

On 9.7.2009 14:30, Michael Goldish wrote:
> I don't think you need to explicitly check for a memory allocation
> failure in VM.create() ("qemu produced some output ...").
> VM.create() already makes sure the VM is started successfully, and
> prints informative failure messages if there's any problem.
>
> ----- "Lukáš Doktor"<ldoktor@redhat.com>  wrote:
>
>> This patch adds the kvm_hugepage variant. It prepares the host system and
>> starts the VM with the -mem-path option. It does not clean up after
>> itself, because
>> it is impossible to unmount and free the hugepages before all guests are
>> destroyed.
>>
>> It also adds the autotest.libhugetlbfs test.
>>
>> I need to ask you what to do about the change of the qemu parameter. Newer
>> versions use -mempath instead of -mem-path. This is impossible to
>> fix using the current config file. I can see two solutions:
>> 1) a direct change in kvm_vm.py (parse the output and try the other param)
>> 2) detect qemu capabilities outside and create an additional layer
>> (better
>> for future occurrences)
>
> I'll have to think about this a little before answering.
>
>> Tested-by: ldoktor@redhat.com on RHEL5.4 with kvm-83-72.el5

sudhir kumar July 10, 2009, 4:38 a.m. UTC | #3
Why do you want to use a control file and put libhugetlbfs as an
autotest variant in kvm? Won't just keeping the kvm_hugepages variant
serve the same purpose? I have been using the hugetlbfs variant
for a long time, though without a pre script (I have done that
manually). Am I missing something here?
The rest all looks fine to me, except you need s/enaugh/enough somewhere.

2009/7/9 Lukáš Doktor <ldoktor@redhat.com>:
> This patch adds the kvm_hugepage variant. It prepares the host system and
> starts the VM with the -mem-path option. It does not clean up after itself,
> because it is impossible to unmount and free the hugepages before all
> guests are destroyed.
>
> It also adds the autotest.libhugetlbfs test.
>
> I need to ask you what to do about the change of the qemu parameter. Newer
> versions use -mempath instead of -mem-path. This is impossible to fix using
> the current config file. I can see two solutions:
> 1) a direct change in kvm_vm.py (parse the output and try the other param)
> 2) detect qemu capabilities outside and create an additional layer (better
> for future occurrences)
>
> Tested-by: ldoktor@redhat.com on RHEL5.4 with kvm-83-72.el5
>
Lukáš Doktor July 10, 2009, 6:48 a.m. UTC | #4
- the kvm_hugepages variant enables us to test whether the (host) kvm 
use of hugepages works
- the libhugetlbfs test inside the guest proves that the (guest) system 
is able to handle hugepages (independently of whether the guest itself 
uses hugepages). This functionality is necessary, e.g., if you want to 
run an Oracle server inside the guest.

So basically these are two independent things, but somewhat connected. 
If you want, I can split the patches.

On 10.7.2009 06:38, sudhir kumar wrote:
> Why do you want to use a control file and put libhugetlbfs as an
> autotest variant in kvm? Won't just keeping the kvm_hugepages variant
> serve the same purpose? I have been using the hugetlbfs variant
> for a long time, though without a pre script (I have done that
> manually). Am I missing something here?
> The rest all looks fine to me, except you need s/enaugh/enough somewhere.
>
> 2009/7/9 Lukáš Doktor<ldoktor@redhat.com>:
>> This patch adds the kvm_hugepage variant. It prepares the host system and
>> starts the VM with the -mem-path option. It does not clean up after itself,
>> because it is impossible to unmount and free the hugepages before all
>> guests are destroyed.
>>
>> It also adds the autotest.libhugetlbfs test.
>>
>> I need to ask you what to do about the change of the qemu parameter. Newer
>> versions use -mempath instead of -mem-path. This is impossible to fix using
>> the current config file. I can see two solutions:
>> 1) a direct change in kvm_vm.py (parse the output and try the other param)
>> 2) detect qemu capabilities outside and create an additional layer (better
>> for future occurrences)
>>
>> Tested-by: ldoktor@redhat.com on RHEL5.4 with kvm-83-72.el5
>>

diff mbox

Patch

diff -Narup orig/client/tests/kvm/autotest_control/libhugetlbfs.control new/client/tests/kvm/autotest_control/libhugetlbfs.control
--- orig/client/tests/kvm/autotest_control/libhugetlbfs.control	1970-01-01 01:00:00.000000000 +0100
+++ new/client/tests/kvm/autotest_control/libhugetlbfs.control	2009-07-08 13:18:07.000000000 +0200
@@ -0,0 +1,13 @@ 
+AUTHOR = 'aganti@google.com (Ashwin Ganti)'
+TIME = 'MEDIUM'
+NAME = 'libhugetlbfs test'
+TEST_TYPE = 'client'
+TEST_CLASS = 'Kernel'
+TEST_CATEGORY = 'Functional'
+
+DOC = '''
+Tests basic huge pages functionality when using libhugetlbfs. For more info
+about libhugetlbfs see http://libhugetlbfs.ozlabs.org/
+'''
+
+job.run_test('libhugetlbfs', dir='/mnt')
diff -Narup orig/client/tests/kvm/kvm_tests.cfg.sample new/client/tests/kvm/kvm_tests.cfg.sample
--- orig/client/tests/kvm/kvm_tests.cfg.sample	2009-07-08 13:18:07.000000000 +0200
+++ new/client/tests/kvm/kvm_tests.cfg.sample	2009-07-09 10:15:58.000000000 +0200
@@ -79,6 +79,9 @@  variants:
             - bonnie:
                 test_name = bonnie
                 test_control_file = bonnie.control
+            - libhugetlbfs:
+                test_name = libhugetlbfs
+                test_control_file = libhugetlbfs.control
 
     - linux_s3:      install setup
         type = linux_s3
@@ -546,6 +549,12 @@  variants:
         only default
         image_format = raw
 
+variants:
+    - @kvm_smallpages:
+    - kvm_hugepages:
+        pre_command = "/bin/bash scripts/hugepage.sh /mnt/hugepage"
+        extra_params += " -mem-path /mnt/hugepage"
+
 
 variants:
     - @basic:
@@ -559,6 +568,7 @@  variants:
         only Fedora.8.32
         only install setup boot shutdown
         only rtl8139
+        only kvm_smallpages
     - @sample1:
         only qcow2
         only ide
diff -Narup orig/client/tests/kvm/kvm_vm.py new/client/tests/kvm/kvm_vm.py
--- orig/client/tests/kvm/kvm_vm.py	2009-07-08 13:18:07.000000000 +0200
+++ new/client/tests/kvm/kvm_vm.py	2009-07-09 10:05:19.000000000 +0200
@@ -400,6 +400,13 @@  class VM:
                 self.destroy()
                 return False
 
+            if output:
+                logging.debug("qemu produced some output:\n%s", output)
+                if "alloc_mem_area" in output:
+                    logging.error("Could not allocate hugepage memory"
+                                 " -- qemu command:\n%s", qemu_command)
+                    return False
+
             logging.debug("VM appears to be alive with PID %d", self.pid)
             return True
 
diff -Narup orig/client/tests/kvm/scripts/hugepage.sh new/client/tests/kvm/scripts/hugepage.sh
--- orig/client/tests/kvm/scripts/hugepage.sh	1970-01-01 01:00:00.000000000 +0100
+++ new/client/tests/kvm/scripts/hugepage.sh	2009-07-09 09:47:14.000000000 +0200
@@ -0,0 +1,38 @@ 
+#!/bin/bash
+# Allocates enough hugepages for the configured VMs and mounts hugetlbfs to $1.
+if [ $# -ne 1 ]; then
+	echo "USAGE: $0 mem_path"
+	exit 1
+fi
+
+Hugepagesize=$(grep Hugepagesize /proc/meminfo | cut -d':'  -f 2 | \
+		 xargs | cut -d' ' -f1)
+VMS=$(expr $(echo $KVM_TEST_vms | grep -c ' ') + 1)
+VMSM=$(expr $(expr $VMS \* $KVM_TEST_mem) + $(expr $VMS \* 64 ))
+TARGET=$(expr $VMSM \* 1024 \/ $Hugepagesize)
+
+NR=$(cat /proc/sys/vm/nr_hugepages)
+while [ "$NR" -ne "$TARGET" ]; do
+	NR_="$NR";echo $TARGET > /proc/sys/vm/nr_hugepages
+	sleep 5s
+	NR=$(cat /proc/sys/vm/nr_hugepages)
+	if [ "$NR" -eq "$NR_" ] ; then
+		echo "Cannot allocate $TARGET hugepages"
+		exit 2
+	fi
+done
+
+if [ ! "$(mount | grep "$1" | grep hugetlbfs)" ]; then
+	mkdir -p $1
+	mount -t hugetlbfs none $1 || \
+		{ echo "Cannot mount hugetlbfs filesystem to $1"; exit 3; }
+else
+	echo "hugetlbfs filesystem already mounted"
+fi