On Tue, Jul 21, 2009 at 1:04 PM, Lukáš Doktor <ldoktor@redhat.com> wrote:
> Well, thank you for notifications, I'll keep them in my mind.

Ok Lukáš, I have reviewed your patch and have some comments to make:

diff -Narup a/client/tests/kvm/kvm_tests.cfg.sample b/client/tests/kvm/kvm_tests.cfg.sample
@@ -555,6 +555,13 @@ variants:
         only default
         image_format = raw
+variants:
+    - @kvm_smallpages:
+    - kvm_hugepages:
+        hugepage_path = /mnt/hugepage
+        pre_command = "/usr/bin/python scripts/hugepage.py"
+        extra_params += " -mem-path /mnt/hugepage"
Here, I don't think it's necessary to pass /mnt/hugepage as a
parameter to the script. We can rather choose a sensible default value
and build it into the script.
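Something along these lines, as a sketch (DEFAULT_HUGEPAGE_PATH and
its value here are just an illustration, not part of the patch):

    import os

    # Sensible default built into the script; the config would only
    # need to set KVM_TEST_hugepage_path for unusual setups.
    DEFAULT_HUGEPAGE_PATH = "/mnt/kvm_hugepage"

    hugepage_path = os.environ.get("KVM_TEST_hugepage_path",
                                   DEFAULT_HUGEPAGE_PATH)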
 variants:
     - @basic:
@@ -568,6 +575,7 @@ variants:
         only Fedora.8.32
         only install setup boot shutdown
         only rtl8139
+        only kvm_smallpages
     - @sample1:
         only qcow2
         only ide
@@ -412,6 +412,13 @@ class VM:
             self.destroy()
             return False
+        if output:
+            logging.debug("qemu produced some output:\n%s", output)
+            if "alloc_mem_area" in output:
+                logging.error("Could not allocate hugepage memory"
+                              " -- qemu command:\n%s", qemu_command)
+                return False
Here it seems unnecessary to log every occasion qemu produces output;
we should log it only if it contains the pattern we're looking for.
Also, with the recent kvm_subprocess commits, there's no 'output'
variable defined anymore, so I had to change that.
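What I have in mind is roughly this sketch (keeping the names from the
hunk above, even though 'output' now has to come from the new
kvm_subprocess API):

    if output and "alloc_mem_area" in output:
        logging.error("Could not allocate hugepage memory"
                      " -- qemu command:\n%s", qemu_command)
        logging.debug("qemu output:\n%s", output)
        return False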
         logging.debug("VM appears to be alive with PID %d", self.pid)
         return True

diff -Narup a/client/tests/kvm/scripts/hugepage.py b/client/tests/kvm/scripts/hugepage.py
--- a/client/tests/kvm/scripts/hugepage.py	1970-01-01 01:00:00.000000000 +0100
+++ a/client/tests/kvm/scripts/hugepage.py	2009-07-21 16:47:00.000000000 +0200
@@ -0,0 +1,63 @@
+#!/usr/bin/python
+# -*- coding: utf-8 -*-
+# Alocates enough hugepages and mount hugetlbfs
+import os, sys, time
+
+# Variables check & set
+vms = os.environ['KVM_TEST_vms'].split().__len__()
+try:
+    max_vms = int(os.environ['KVM_TEST_max_vms'])
+except KeyError:
+    max_vms = 0
+mem = int(os.environ['KVM_TEST_mem'])
+hugepage_path = os.environ['KVM_TEST_hugepage_path']
+
+fmeminfo = open("/proc/meminfo", "r")
+while fmeminfo:
+    line = fmeminfo.readline()
+    if line.startswith("Hugepagesize"):
+        dumm, hp_size, dumm = line.split()
+        break
+fmeminfo.close()
+
+if not hp_size:
+    print "Could not get Hugepagesize from /proc/meminfo file"
+    raise ValueError

Here, it is always good to put the failure reason as one of the
parameters passed to the exception class constructor, something like:

    raise ValueError("Could not get Hugepagesize from /proc/meminfo file")

+if vms < max_vms:
+    vms = max_vms
+
+vmsm = ((vms * mem) + (vms * 64))
+target = (vmsm * 1024 / int(hp_size))
+
+# Iteratively set # of hugepages
+fhp = open("/proc/sys/vm/nr_hugepages", "r+")
+hp = fhp.readline()
+while int(hp) < target:
+    hp_ = hp
+    fhp.write(target.__str__())
+    fhp.flush()
+    time.sleep(5)
+    fhp.seek(0)
+    hp = int(fhp.readline())
+    if hp_ == hp:
+        raise MemoryError
+fhp.close()

I liked the above approach to set the hugepage number.

+# Mount hugepage filesystem, if necessarily
+fmount = open("/proc/mounts", "r")
+mount = 1
+line = fmount.readline()
+while line:
+    if line.split()[1] == os.environ['KVM_TEST_hugepage_path']:
+        mount = 0
+        break
+    line = fmount.readline()
+fmount.close()

In the above block, it looks to me that we are more interested in
whether we have a hugetlbfs mount at all, rather than in whether our
defined mount point is mounted. We need to check in /proc/mounts
whether we have a hugetlbfs mount or not instead (see the first
sketch in the PS below).

+if mount:
+    if not os.path.exists(hugepage_path):
+        os.makedirs(hugepage_path)
+    cmd = "mount -t hugetlbfs none %s" % (hugepage_path)
+    if os.system(cmd):
+        raise OSError

Same as the above comment when raising exceptions. Also, I think it's
better to have our own Error class defined for the script (see the
second sketch in the PS below).

Since I started fixing the original patch to fit the recent
kvm_subprocess commits, I ended up re-creating the patch. I will send
it to the mailing list, and you can tell me what you think.

Thanks!

Lucas
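PS: here is a sketch of the /proc/mounts check I suggested above. The
third field of each line is the filesystem type, so we look for
'hugetlbfs' there instead of matching our particular mount point
(hugetlbfs_mounted is just a name I made up):

    def hugetlbfs_mounted():
        # True if any hugetlbfs mount exists, regardless of mount point
        fmount = open("/proc/mounts", "r")
        try:
            for line in fmount:
                if line.split()[2] == "hugetlbfs":
                    return True
        finally:
            fmount.close()
        return False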
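And a minimal sketch of a script-specific error class (HugePageError
is a name for illustration only), so every failure carries its reason:

    class HugePageError(Exception):
        # Raised when hugepage allocation or hugetlbfs mounting fails
        pass

    if os.system("mount -t hugetlbfs none %s" % hugepage_path):
        raise HugePageError("Cannot mount hugetlbfs on %s" % hugepage_path)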