From patchwork Tue Nov 26 17:17:46 2019
X-Patchwork-Submitter: George Dunlap
X-Patchwork-Id: 11262909
x-sender="postmaster@mail.citrix.com"; x-conformance=sidf_compatible IronPort-SDR: ULZi1v8Xgi2aUKzfJ8oAmQEatrGwWK6IHcLMmZ6ReVKPa7E4RfHNOWU98QwcEZeZ2ilkB8xodK VvEZBLm9/tg6QRn84Z2Nficg9OtZhHocS4ORI2MmwEifbSNsFlS0IrxzOCxjodWJoq6w2syvJl 8DLoIRHFmnbNjLqnW0hMo6S75FLwfuQ1/TuswgXb2wWDNnQC50tf1tGDzkyCMLJi+hHRo9S3j2 2gTjn3DFVuEEOczblGWrHqHFlhLUXx8U89wX5mNQ+5waceMYmU8V8ga+Tu/S7VfvifFZJBx2Dp OeE= X-SBRS: 2.7 X-MesageID: 9410838 X-Ironport-Server: esa4.hc3370-68.iphmx.com X-Remote-IP: 162.221.158.21 X-Policy: $RELAYED X-IronPort-AV: E=Sophos;i="5.69,246,1571716800"; d="scan'208";a="9410838" From: George Dunlap To: Date: Tue, 26 Nov 2019 17:17:46 +0000 Message-ID: <20191126171747.3185988-1-george.dunlap@citrix.com> X-Mailer: git-send-email 2.24.0 MIME-Version: 1.0 Subject: [Xen-devel] [PATCH for-4.13 1/2] python/xc.c: Remove trailing whitespace X-BeenThere: xen-devel@lists.xenproject.org X-Mailman-Version: 2.1.23 Precedence: list List-Id: Xen developer discussion List-Unsubscribe: , List-Post: List-Help: List-Subscribe: , Cc: Juergen Gross , George Dunlap , =?utf-8?q?Marek_Marczykowski-G=C3=B3recki?= Errors-To: xen-devel-bounces@lists.xenproject.org Sender: "Xen-devel" No functional change. Signed-off-by: George Dunlap --- CC: Marek Marczykowski-Górecki CC: Juergen Gross --- tools/python/xen/lowlevel/xc/xc.c | 210 +++++++++++++++--------------- 1 file changed, 105 insertions(+), 105 deletions(-) diff --git a/tools/python/xen/lowlevel/xc/xc.c b/tools/python/xen/lowlevel/xc/xc.c index 44d3606141..6d2afd5695 100644 --- a/tools/python/xen/lowlevel/xc/xc.c +++ b/tools/python/xen/lowlevel/xc/xc.c @@ -1,6 +1,6 @@ /****************************************************************************** * Xc.c - * + * * Copyright (c) 2003-2004, K A Fraser (University of Cambridge) */ @@ -107,7 +107,7 @@ static PyObject *pyxc_domain_dumpcore(XcObject *self, PyObject *args) if ( xc_domain_dumpcore(self->xc_handle, dom, corefile) != 0 ) return pyxc_error_to_exception(self->xc_handle); - + Py_INCREF(zero); return zero; } @@ -141,7 +141,7 @@ static PyObject *pyxc_domain_create(XcObject *self, return NULL; if ( pyhandle != NULL ) { - if ( !PyList_Check(pyhandle) || + if ( !PyList_Check(pyhandle) || (PyList_Size(pyhandle) != sizeof(xen_domain_handle_t)) ) goto out_exception; @@ -188,7 +188,7 @@ static PyObject *pyxc_domain_max_vcpus(XcObject *self, PyObject *args) if (xc_domain_max_vcpus(self->xc_handle, dom, max) != 0) return pyxc_error_to_exception(self->xc_handle); - + Py_INCREF(zero); return zero; } @@ -223,7 +223,7 @@ static PyObject *pyxc_domain_shutdown(XcObject *self, PyObject *args) if ( xc_domain_shutdown(self->xc_handle, dom, reason) != 0 ) return pyxc_error_to_exception(self->xc_handle); - + Py_INCREF(zero); return zero; } @@ -255,7 +255,7 @@ static PyObject *pyxc_vcpu_setaffinity(XcObject *self, static char *kwd_list[] = { "domid", "vcpu", "cpumap", NULL }; - if ( !PyArg_ParseTupleAndKeywords(args, kwds, "i|iO", kwd_list, + if ( !PyArg_ParseTupleAndKeywords(args, kwds, "i|iO", kwd_list, &dom, &vcpu, &cpulist) ) return NULL; @@ -269,7 +269,7 @@ static PyObject *pyxc_vcpu_setaffinity(XcObject *self, if ( (cpulist != NULL) && PyList_Check(cpulist) ) { - for ( i = 0; i < PyList_Size(cpulist); i++ ) + for ( i = 0; i < PyList_Size(cpulist); i++ ) { long cpu = PyLongOrInt_AsLong(PyList_GetItem(cpulist, i)); if ( cpu < 0 || cpu >= nr_cpus ) @@ -282,7 +282,7 @@ static PyObject *pyxc_vcpu_setaffinity(XcObject *self, cpumap[cpu / 8] |= 1 << (cpu % 8); } } - + if ( xc_vcpu_setaffinity(self->xc_handle, dom, vcpu, cpumap, NULL, 
XEN_VCPUAFFINITY_HARD) != 0 ) { @@ -290,7 +290,7 @@ static PyObject *pyxc_vcpu_setaffinity(XcObject *self, return pyxc_error_to_exception(self->xc_handle); } Py_INCREF(zero); - free(cpumap); + free(cpumap); return zero; } @@ -304,7 +304,7 @@ static PyObject *pyxc_domain_sethandle(XcObject *self, PyObject *args) if (!PyArg_ParseTuple(args, "iO", &dom, &pyhandle)) return NULL; - if ( !PyList_Check(pyhandle) || + if ( !PyList_Check(pyhandle) || (PyList_Size(pyhandle) != sizeof(xen_domain_handle_t)) ) { goto out_exception; @@ -320,7 +320,7 @@ static PyObject *pyxc_domain_sethandle(XcObject *self, PyObject *args) if (xc_domain_sethandle(self->xc_handle, dom, handle) < 0) return pyxc_error_to_exception(self->xc_handle); - + Py_INCREF(zero); return zero; @@ -342,7 +342,7 @@ static PyObject *pyxc_domain_getinfo(XcObject *self, xc_dominfo_t *info; static char *kwd_list[] = { "first_dom", "max_doms", NULL }; - + if ( !PyArg_ParseTupleAndKeywords(args, kwds, "|ii", kwd_list, &first_dom, &max_doms) ) return NULL; @@ -415,7 +415,7 @@ static PyObject *pyxc_vcpu_getinfo(XcObject *self, int nr_cpus; static char *kwd_list[] = { "domid", "vcpu", NULL }; - + if ( !PyArg_ParseTupleAndKeywords(args, kwds, "i|i", kwd_list, &dom, &vcpu) ) return NULL; @@ -470,7 +470,7 @@ static PyObject *pyxc_hvm_param_get(XcObject *self, int param; uint64_t value; - static char *kwd_list[] = { "domid", "param", NULL }; + static char *kwd_list[] = { "domid", "param", NULL }; if ( !PyArg_ParseTupleAndKeywords(args, kwds, "ii", kwd_list, &dom, ¶m) ) return NULL; @@ -490,7 +490,7 @@ static PyObject *pyxc_hvm_param_set(XcObject *self, int param; uint64_t value; - static char *kwd_list[] = { "domid", "param", "value", NULL }; + static char *kwd_list[] = { "domid", "param", "value", NULL }; if ( !PyArg_ParseTupleAndKeywords(args, kwds, "iiL", kwd_list, &dom, ¶m, &value) ) return NULL; @@ -660,7 +660,7 @@ static PyObject *pyxc_get_device_group(XcObject *self, if ( rc < 0 ) { - free(sdev_array); + free(sdev_array); return pyxc_error_to_exception(self->xc_handle); } @@ -861,7 +861,7 @@ static PyObject *pyxc_physdev_pci_access_modify(XcObject *self, static char *kwd_list[] = { "domid", "bus", "dev", "func", "enable", NULL }; - if ( !PyArg_ParseTupleAndKeywords(args, kwds, "iiiii", kwd_list, + if ( !PyArg_ParseTupleAndKeywords(args, kwds, "iiiii", kwd_list, &dom, &bus, &dev, &func, &enable) ) return NULL; @@ -976,7 +976,7 @@ static PyObject *pyxc_physinfo(XcObject *self) "nr_nodes", pinfo.nr_nodes, "threads_per_core", pinfo.threads_per_core, "cores_per_socket", pinfo.cores_per_socket, - "nr_cpus", pinfo.nr_cpus, + "nr_cpus", pinfo.nr_cpus, "total_memory", pages_to_kib(pinfo.total_pages), "free_memory", pages_to_kib(pinfo.free_pages), "scrub_memory", pages_to_kib(pinfo.scrub_pages), @@ -1266,14 +1266,14 @@ static PyObject *pyxc_shadow_control(PyObject *self, static char *kwd_list[] = { "dom", "op", NULL }; - if ( !PyArg_ParseTupleAndKeywords(args, kwds, "i|i", kwd_list, + if ( !PyArg_ParseTupleAndKeywords(args, kwds, "i|i", kwd_list, &dom, &op) ) return NULL; - - if ( xc_shadow_control(xc->xc_handle, dom, op, NULL, 0, NULL, 0, NULL) + + if ( xc_shadow_control(xc->xc_handle, dom, op, NULL, 0, NULL, 0, NULL) < 0 ) return pyxc_error_to_exception(xc->xc_handle); - + Py_INCREF(zero); return zero; } @@ -1290,26 +1290,26 @@ static PyObject *pyxc_shadow_mem_control(PyObject *self, static char *kwd_list[] = { "dom", "mb", NULL }; - if ( !PyArg_ParseTupleAndKeywords(args, kwds, "i|i", kwd_list, + if ( !PyArg_ParseTupleAndKeywords(args, kwds, "i|i", 
kwd_list, &dom, &mbarg) ) return NULL; - - if ( mbarg < 0 ) + + if ( mbarg < 0 ) op = XEN_DOMCTL_SHADOW_OP_GET_ALLOCATION; - else + else { mb = mbarg; op = XEN_DOMCTL_SHADOW_OP_SET_ALLOCATION; } if ( xc_shadow_control(xc->xc_handle, dom, op, NULL, 0, &mb, 0, NULL) < 0 ) return pyxc_error_to_exception(xc->xc_handle); - + mbarg = mb; return Py_BuildValue("i", mbarg); } static PyObject *pyxc_sched_id_get(XcObject *self) { - + int sched_id; if (xc_sched_id(self->xc_handle, &sched_id) != 0) return PyErr_SetFromErrno(xc_error_obj); @@ -1327,10 +1327,10 @@ static PyObject *pyxc_sched_credit_domain_set(XcObject *self, static char *kwd_list[] = { "domid", "weight", "cap", NULL }; static char kwd_type[] = "I|HH"; struct xen_domctl_sched_credit sdom; - + weight = 0; cap = (uint16_t)~0U; - if( !PyArg_ParseTupleAndKeywords(args, kwds, kwd_type, kwd_list, + if( !PyArg_ParseTupleAndKeywords(args, kwds, kwd_type, kwd_list, &domid, &weight, &cap) ) return NULL; @@ -1348,10 +1348,10 @@ static PyObject *pyxc_sched_credit_domain_get(XcObject *self, PyObject *args) { uint32_t domid; struct xen_domctl_sched_credit sdom; - + if( !PyArg_ParseTuple(args, "I", &domid) ) return NULL; - + if ( xc_sched_credit_domain_get(self->xc_handle, domid, &sdom) != 0 ) return pyxc_error_to_exception(self->xc_handle); @@ -1412,7 +1412,7 @@ static PyObject *pyxc_domain_setmaxmem(XcObject *self, PyObject *args) if (xc_domain_setmaxmem(self->xc_handle, dom, maxmem_kb) != 0) return pyxc_error_to_exception(self->xc_handle); - + Py_INCREF(zero); return zero; } @@ -1425,12 +1425,12 @@ static PyObject *pyxc_domain_set_target_mem(XcObject *self, PyObject *args) if (!PyArg_ParseTuple(args, "ii", &dom, &mem_kb)) return NULL; - mem_pages = mem_kb / 4; + mem_pages = mem_kb / 4; if (xc_domain_set_pod_target(self->xc_handle, dom, mem_pages, NULL, NULL, NULL) != 0) return pyxc_error_to_exception(self->xc_handle); - + Py_INCREF(zero); return zero; } @@ -1445,7 +1445,7 @@ static PyObject *pyxc_domain_set_memmap_limit(XcObject *self, PyObject *args) if ( xc_domain_set_memmap_limit(self->xc_handle, dom, maplimit_kb) != 0 ) return pyxc_error_to_exception(self->xc_handle); - + Py_INCREF(zero); return zero; } @@ -1459,7 +1459,7 @@ static PyObject *pyxc_domain_ioport_permission(XcObject *self, static char *kwd_list[] = { "domid", "first_port", "nr_ports", "allow_access", NULL }; - if ( !PyArg_ParseTupleAndKeywords(args, kwds, "iiii", kwd_list, + if ( !PyArg_ParseTupleAndKeywords(args, kwds, "iiii", kwd_list, &dom, &first_port, &nr_ports, &allow_access) ) return NULL; @@ -1482,7 +1482,7 @@ static PyObject *pyxc_domain_irq_permission(PyObject *self, static char *kwd_list[] = { "domid", "pirq", "allow_access", NULL }; - if ( !PyArg_ParseTupleAndKeywords(args, kwds, "iii", kwd_list, + if ( !PyArg_ParseTupleAndKeywords(args, kwds, "iii", kwd_list, &dom, &pirq, &allow_access) ) return NULL; @@ -1505,7 +1505,7 @@ static PyObject *pyxc_domain_iomem_permission(PyObject *self, static char *kwd_list[] = { "domid", "first_pfn", "nr_pfns", "allow_access", NULL }; - if ( !PyArg_ParseTupleAndKeywords(args, kwds, "illi", kwd_list, + if ( !PyArg_ParseTupleAndKeywords(args, kwds, "illi", kwd_list, &dom, &first_pfn, &nr_pfns, &allow_access) ) return NULL; @@ -1570,7 +1570,7 @@ static PyObject *pyxc_domain_send_trigger(XcObject *self, static char *kwd_list[] = { "domid", "trigger", "vcpu", NULL }; - if ( !PyArg_ParseTupleAndKeywords(args, kwds, "ii|i", kwd_list, + if ( !PyArg_ParseTupleAndKeywords(args, kwds, "ii|i", kwd_list, &dom, &trigger, &vcpu) ) return NULL; @@ 
-1624,7 +1624,7 @@ static PyObject *pyxc_dom_set_memshr(XcObject *self, PyObject *args) if (xc_memshr_control(self->xc_handle, dom, enable) != 0) return pyxc_error_to_exception(self->xc_handle); - + Py_INCREF(zero); return zero; } @@ -1848,11 +1848,11 @@ static PyObject *pyflask_sid_to_context(PyObject *self, PyObject *args, if (!xc_handle) { return PyErr_SetFromErrno(xc_error_obj); } - + ret = xc_flask_sid_to_context(xc_handle, sid, ctx, ctx_len); - + xc_interface_close(xc_handle); - + if ( ret != 0 ) { errno = -ret; return PyErr_SetFromErrno(xc_error_obj); @@ -1869,7 +1869,7 @@ static PyObject *pyflask_load(PyObject *self, PyObject *args, PyObject *kwds) int ret; static char *kwd_list[] = { "policy", NULL }; - + if( !PyArg_ParseTupleAndKeywords(args, kwds, "s#", kwd_list, &policy, &len) ) return NULL; @@ -1899,11 +1899,11 @@ static PyObject *pyflask_getenforce(PyObject *self) if (!xc_handle) { return PyErr_SetFromErrno(xc_error_obj); } - + ret = xc_flask_getenforce(xc_handle); - + xc_interface_close(xc_handle); - + if ( ret < 0 ) { errno = -ret; return PyErr_SetFromErrno(xc_error_obj); @@ -1929,11 +1929,11 @@ static PyObject *pyflask_setenforce(PyObject *self, PyObject *args, if (!xc_handle) { return PyErr_SetFromErrno(xc_error_obj); } - + ret = xc_flask_setenforce(xc_handle, mode); - + xc_interface_close(xc_handle); - + if ( ret != 0 ) { errno = -ret; return PyErr_SetFromErrno(xc_error_obj); @@ -1951,7 +1951,7 @@ static PyObject *pyflask_access(PyObject *self, PyObject *args, uint32_t req, allowed, decided, auditallow, auditdeny, seqno; int ret; - static char *kwd_list[] = { "src_context", "tar_context", + static char *kwd_list[] = { "src_context", "tar_context", "tar_class", "req_permissions", "decided", "auditallow","auditdeny", "seqno", NULL }; @@ -1965,10 +1965,10 @@ static PyObject *pyflask_access(PyObject *self, PyObject *args, if (!xc_handle) { return PyErr_SetFromErrno(xc_error_obj); } - + ret = xc_flask_access(xc_handle, scon, tcon, tclass, req, &allowed, &decided, &auditallow, &auditdeny, &seqno); - + xc_interface_close(xc_handle); if ( ret != 0 ) { @@ -1980,14 +1980,14 @@ static PyObject *pyflask_access(PyObject *self, PyObject *args, } static PyMethodDef pyxc_methods[] = { - { "domain_create", - (PyCFunction)pyxc_domain_create, + { "domain_create", + (PyCFunction)pyxc_domain_create, METH_VARARGS | METH_KEYWORDS, "\n" "Create a new domain.\n" " dom [int, 0]: Domain identifier to use (allocated if zero).\n" "Returns: [int] new domain identifier; -1 on error.\n" }, - { "domain_max_vcpus", + { "domain_max_vcpus", (PyCFunction)pyxc_domain_max_vcpus, METH_VARARGS, "\n" "Set the maximum number of VCPUs a domain may create.\n" @@ -1995,43 +1995,43 @@ static PyMethodDef pyxc_methods[] = { " max [int, 0]: New maximum number of VCPUs in domain.\n" "Returns: [int] 0 on success; -1 on error.\n" }, - { "domain_dumpcore", - (PyCFunction)pyxc_domain_dumpcore, + { "domain_dumpcore", + (PyCFunction)pyxc_domain_dumpcore, METH_VARARGS, "\n" "Dump core of a domain.\n" " dom [int]: Identifier of domain to dump core of.\n" " corefile [string]: Name of corefile to be created.\n\n" "Returns: [int] 0 on success; -1 on error.\n" }, - { "domain_pause", - (PyCFunction)pyxc_domain_pause, + { "domain_pause", + (PyCFunction)pyxc_domain_pause, METH_VARARGS, "\n" "Temporarily pause execution of a domain.\n" " dom [int]: Identifier of domain to be paused.\n\n" "Returns: [int] 0 on success; -1 on error.\n" }, - { "domain_unpause", - (PyCFunction)pyxc_domain_unpause, + { "domain_unpause", + 
(PyCFunction)pyxc_domain_unpause, METH_VARARGS, "\n" "(Re)start execution of a domain.\n" " dom [int]: Identifier of domain to be unpaused.\n\n" "Returns: [int] 0 on success; -1 on error.\n" }, - { "domain_destroy", - (PyCFunction)pyxc_domain_destroy, + { "domain_destroy", + (PyCFunction)pyxc_domain_destroy, METH_VARARGS, "\n" "Destroy a domain.\n" " dom [int]: Identifier of domain to be destroyed.\n\n" "Returns: [int] 0 on success; -1 on error.\n" }, - { "domain_destroy_hook", - (PyCFunction)pyxc_domain_destroy_hook, + { "domain_destroy_hook", + (PyCFunction)pyxc_domain_destroy_hook, METH_VARARGS, "\n" "Add a hook for arch stuff before destroy a domain.\n" " dom [int]: Identifier of domain to be destroyed.\n\n" "Returns: [int] 0 on success; -1 on error.\n" }, - { "domain_resume", + { "domain_resume", (PyCFunction)pyxc_domain_resume, METH_VARARGS, "\n" "Resume execution of a suspended domain.\n" @@ -2039,7 +2039,7 @@ static PyMethodDef pyxc_methods[] = { " fast [int]: Use cooperative resume.\n\n" "Returns: [int] 0 on success; -1 on error.\n" }, - { "domain_shutdown", + { "domain_shutdown", (PyCFunction)pyxc_domain_shutdown, METH_VARARGS, "\n" "Shutdown a domain.\n" @@ -2047,8 +2047,8 @@ static PyMethodDef pyxc_methods[] = { " reason [int, 0]: Reason for shutdown.\n" "Returns: [int] 0 on success; -1 on error.\n" }, - { "vcpu_setaffinity", - (PyCFunction)pyxc_vcpu_setaffinity, + { "vcpu_setaffinity", + (PyCFunction)pyxc_vcpu_setaffinity, METH_VARARGS | METH_KEYWORDS, "\n" "Pin a VCPU to a specified set CPUs.\n" " dom [int]: Identifier of domain to which VCPU belongs.\n" @@ -2056,7 +2056,7 @@ static PyMethodDef pyxc_methods[] = { " cpumap [list, []]: list of usable CPUs.\n\n" "Returns: [int] 0 on success; -1 on error.\n" }, - { "domain_sethandle", + { "domain_sethandle", (PyCFunction)pyxc_domain_sethandle, METH_VARARGS, "\n" "Set domain's opaque handle.\n" @@ -2064,8 +2064,8 @@ static PyMethodDef pyxc_methods[] = { " handle [list of 16 ints]: New opaque handle.\n" "Returns: [int] 0 on success; -1 on error.\n" }, - { "domain_getinfo", - (PyCFunction)pyxc_domain_getinfo, + { "domain_getinfo", + (PyCFunction)pyxc_domain_getinfo, METH_VARARGS | METH_KEYWORDS, "\n" "Get information regarding a set of domains, in increasing id order.\n" " first_dom [int, 0]: First domain to retrieve info about.\n" @@ -2090,8 +2090,8 @@ static PyMethodDef pyxc_methods[] = { "reason why it shut itself down.\n" " cpupool [int] Id of cpupool domain is bound to.\n" }, - { "vcpu_getinfo", - (PyCFunction)pyxc_vcpu_getinfo, + { "vcpu_getinfo", + (PyCFunction)pyxc_vcpu_getinfo, METH_VARARGS | METH_KEYWORDS, "\n" "Get information regarding a VCPU.\n" " dom [int]: Domain to retrieve info about.\n" @@ -2115,7 +2115,7 @@ static PyMethodDef pyxc_methods[] = { " xenstore_domid [int]: \n" "Returns: None on success. Raises exception on error.\n" }, - { "hvm_get_param", + { "hvm_get_param", (PyCFunction)pyxc_hvm_param_get, METH_VARARGS | METH_KEYWORDS, "\n" "get a parameter of HVM guest OS.\n" @@ -2123,7 +2123,7 @@ static PyMethodDef pyxc_methods[] = { " param [int]: No. 
of HVM param.\n" "Returns: [long] value of the param.\n" }, - { "hvm_set_param", + { "hvm_set_param", (PyCFunction)pyxc_hvm_param_set, METH_VARARGS | METH_KEYWORDS, "\n" "set a parameter of HVM guest OS.\n" @@ -2166,12 +2166,12 @@ static PyMethodDef pyxc_methods[] = { " dom [int]: Domain to deassign device from.\n" " pci_str [str]: PCI devices.\n" "Returns: [int] 0 on success, or device bdf that can't be deassigned.\n" }, - + { "sched_id_get", (PyCFunction)pyxc_sched_id_get, METH_NOARGS, "\n" "Get the current scheduler type in use.\n" - "Returns: [int] sched_id.\n" }, + "Returns: [int] sched_id.\n" }, { "sched_credit_domain_set", (PyCFunction)pyxc_sched_credit_domain_set, @@ -2209,7 +2209,7 @@ static PyMethodDef pyxc_methods[] = { "Returns: [dict]\n" " weight [short]: domain's scheduling weight\n"}, - { "evtchn_alloc_unbound", + { "evtchn_alloc_unbound", (PyCFunction)pyxc_evtchn_alloc_unbound, METH_VARARGS | METH_KEYWORDS, "\n" "Allocate an unbound port that will await a remote connection.\n" @@ -2217,7 +2217,7 @@ static PyMethodDef pyxc_methods[] = { " remote_dom [int]: Remote domain to accept connections from.\n\n" "Returns: [int] Unbound event-channel port.\n" }, - { "evtchn_reset", + { "evtchn_reset", (PyCFunction)pyxc_evtchn_reset, METH_VARARGS | METH_KEYWORDS, "\n" "Reset all connections.\n" @@ -2242,9 +2242,9 @@ static PyMethodDef pyxc_methods[] = { " func [int]: PCI function\n" " enable [int]: Non-zero means enable access; else disable access\n\n" "Returns: [int] 0 on success; -1 on error.\n" }, - - { "readconsolering", - (PyCFunction)pyxc_readconsolering, + + { "readconsolering", + (PyCFunction)pyxc_readconsolering, METH_VARARGS | METH_KEYWORDS, "\n" "Read Xen's console ring.\n" " clear [int, 0]: Bool - clear the ring after reading from it?\n\n" @@ -2292,40 +2292,40 @@ static PyMethodDef pyxc_methods[] = { "Returns [str]: Xen buildid" " [None]: on failure.\n" }, - { "shadow_control", - (PyCFunction)pyxc_shadow_control, + { "shadow_control", + (PyCFunction)pyxc_shadow_control, METH_VARARGS | METH_KEYWORDS, "\n" "Set parameter for shadow pagetable interface\n" " dom [int]: Identifier of domain.\n" " op [int, 0]: operation\n\n" "Returns: [int] 0 on success; -1 on error.\n" }, - { "shadow_mem_control", - (PyCFunction)pyxc_shadow_mem_control, + { "shadow_mem_control", + (PyCFunction)pyxc_shadow_mem_control, METH_VARARGS | METH_KEYWORDS, "\n" "Set or read shadow pagetable memory use\n" " dom [int]: Identifier of domain.\n" " mb [int, -1]: MB of shadow memory this domain should have.\n\n" "Returns: [int] MB of shadow memory in use by this domain.\n" }, - { "domain_setmaxmem", - (PyCFunction)pyxc_domain_setmaxmem, + { "domain_setmaxmem", + (PyCFunction)pyxc_domain_setmaxmem, METH_VARARGS, "\n" "Set a domain's memory limit\n" " dom [int]: Identifier of domain.\n" " maxmem_kb [int]: .\n" "Returns: [int] 0 on success; -1 on error.\n" }, - { "domain_set_target_mem", - (PyCFunction)pyxc_domain_set_target_mem, + { "domain_set_target_mem", + (PyCFunction)pyxc_domain_set_target_mem, METH_VARARGS, "\n" "Set a domain's memory target\n" " dom [int]: Identifier of domain.\n" " mem_kb [int]: .\n" "Returns: [int] 0 on success; -1 on error.\n" }, - { "domain_set_memmap_limit", - (PyCFunction)pyxc_domain_set_memmap_limit, + { "domain_set_memmap_limit", + (PyCFunction)pyxc_domain_set_memmap_limit, METH_VARARGS, "\n" "Set a domain's physical memory mapping limit\n" " dom [int]: Identifier of domain.\n" @@ -2407,8 +2407,8 @@ static PyMethodDef pyxc_methods[] = { " keys [str]: String of keys to inject.\n" }, 
 #if defined(__i386__) || defined(__x86_64__)
-    { "domain_set_cpuid", 
-      (PyCFunction)pyxc_dom_set_cpuid, 
+    { "domain_set_cpuid",
+      (PyCFunction)pyxc_dom_set_cpuid,
       METH_VARARGS, "\n"
       "Set cpuid response for an input and a domain.\n"
       " dom [int]: Identifier of domain.\n"
@@ -2418,15 +2418,15 @@ static PyMethodDef pyxc_methods[] = {
       " config [dict]: Dictionary of register\n\n"
       "Returns: [int] 0 on success; exception on error.\n" },
 
-    { "domain_set_policy_cpuid", 
-      (PyCFunction)pyxc_dom_set_policy_cpuid, 
+    { "domain_set_policy_cpuid",
+      (PyCFunction)pyxc_dom_set_policy_cpuid,
       METH_VARARGS, "\n"
       "Set the default cpuid policy for a domain.\n"
       " dom [int]: Identifier of domain.\n\n"
       "Returns: [int] 0 on success; exception on error.\n" },
 #endif
 
-    { "dom_set_memshr", 
+    { "dom_set_memshr",
       (PyCFunction)pyxc_dom_set_memshr,
       METH_VARARGS, "\n"
       "Enable/disable memory sharing for the domain.\n"
@@ -2508,20 +2508,20 @@ static PyMethodDef pyxc_methods[] = {
       METH_KEYWORDS, "\n"
       "Loads a policy into the hypervisor.\n"
       " policy [str]: policy to be load\n"
-      "Returns: [int]: 0 on success; -1 on failure.\n" }, 
-    
+      "Returns: [int]: 0 on success; -1 on failure.\n" },
+
     { "flask_getenforce",
       (PyCFunction)pyflask_getenforce,
       METH_NOARGS, "\n"
       "Returns the current mode of the Flask XSM module.\n"
-      "Returns: [int]: 0 for permissive; 1 for enforcing; -1 on failure.\n" }, 
+      "Returns: [int]: 0 for permissive; 1 for enforcing; -1 on failure.\n" },
 
     { "flask_setenforce",
       (PyCFunction)pyflask_setenforce,
       METH_KEYWORDS, "\n"
       "Modifies the current mode for the Flask XSM module.\n"
       " mode [int]: mode to change to\n"
-      "Returns: [int]: 0 on success; -1 on failure.\n" }, 
+      "Returns: [int]: 0 on success; -1 on failure.\n" },
 
     { "flask_access",
       (PyCFunction)pyflask_access,
@@ -2540,7 +2540,7 @@ static PyMethodDef pyxc_methods[] = {
       " auditdeny [int] permissions set to audit on deny\n"
       " seqno [int] not used\n"
       "Returns: [int]: 0 on all permission granted; -1 if any permissions are \
-      denied\n" }, 
+      denied\n" },
 
     { NULL, NULL, 0, NULL }
 };

From patchwork Tue Nov 26 17:17:47 2019
X-Patchwork-Submitter: George Dunlap
X-Patchwork-Id: 11262907
From: George Dunlap <george.dunlap@citrix.com>
To: xen-devel@lists.xenproject.org
Date: Tue, 26 Nov 2019 17:17:47 +0000
Message-ID: <20191126171747.3185988-2-george.dunlap@citrix.com>
X-Mailer: git-send-email 2.24.0
In-Reply-To: <20191126171747.3185988-1-george.dunlap@citrix.com>
References: <20191126171747.3185988-1-george.dunlap@citrix.com>
Subject: [Xen-devel] [PATCH for-4.13 2/2] Rationalize max_grant_frames and
 max_maptrack_frames handling
Cc: Juergen Gross, Stefano Stabellini, Julien Grall, Wei Liu, Paul Durrant,
 Andrew Cooper, Konrad Rzeszutek Wilk, George Dunlap,
 Marek Marczykowski-Górecki, Jan Beulich, Ian Jackson
Xen used to have single, system-wide limits for the number of grant
frames and maptrack frames a guest was allowed to create. Increasing or
decreasing this single limit on the Xen command-line would change the
limit for all guests on the system.

Later, per-domain limits for these values were created. The system-wide
limits became strict limits: domains could not be created with higher
limits, but could be created with lower limits. However, the change also
introduced a range of different "default" values into various places in
the toolstack:

- The python libxc bindings hard-coded these values to 32 and 1024,
  respectively.
- The libxl default values are 32 and 1024, respectively.
- xl will use the libxl default for maptrack, but does its own default
  calculation for grant frames: either 32 or 64, based on the max
  possible mfn.

These defaults interact poorly with the hypervisor command-line limit:

- The hypervisor command-line limit can no longer be used to raise the
  limit for all guests, as the toolstack default will effectively
  override it.
- If you use the hypervisor command-line limit to *reduce* the limit,
  then the "default" values generated by the toolstack are too high,
  and all guest creations will fail.

In other words, with the toolstack defaults in place, the only way for
the admin to effect a change is to explicitly specify a new value in
every guest.

In order to address this, have grant_table_init treat '0' values for
max_grant_frames and max_maptrack_frames as instructions to use the
system-wide default, and have all the above toolstacks default to
passing 0 unless a different value is explicitly given.

This restores the old behavior, in that changing the hypervisor
command-line option changes the behavior for all guests, while retaining
the ability to set per-guest values. It also removes the bug that
*reducing* the system-wide max will cause all domains without explicit
limits to fail.

(The ocaml bindings require the caller to always specify a value, and
the code to start a xenstored stubdomain hard-codes these to 4 and 128
respectively; these will not be addressed here.)

Signed-off-by: George Dunlap <george.dunlap@citrix.com>
---
Release justification: this is an observed regression (albeit one that
has now spanned several releases). Compile-tested only.

NB this patch could be applied without the whitespace fixes in patch 1
(perhaps with some fix-ups); it is split out only because my editor
strips trailing whitespace automatically.
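For illustration, a sketch of how an administrator would use the
restored behaviour (values are made up; the hypervisor option names are
the gnttab_max_frames / gnttab_max_maptrack_frames options documented
for the Xen command line, and the config keys are the ones parsed in
the xl.c hunk below):

    # Xen command line: raise the system-wide maximum, which with this
    # patch is also the default for every guest that does not override it.
    gnttab_max_frames=64 gnttab_max_maptrack_frames=2048

    # /etc/xen/xl.conf: 0 (or simply unset) now means "inherit the
    # hypervisor default" rather than a toolstack-invented value.
    max_grant_frames=0
    max_maptrack_frames=0

    # Individual guest config: an explicit value still overrides the
    # default for that guest alone.
    max_grant_frames=128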
CC: Ian Jackson
CC: Wei Liu
CC: Andrew Cooper
CC: Jan Beulich
CC: Paul Durrant
CC: Julien Grall
CC: Konrad Rzeszutek Wilk
CC: Stefano Stabellini
CC: Juergen Gross
CC: Marek Marczykowski-Górecki
---
 tools/libxl/libxl.h               |  4 ++--
 tools/python/xen/lowlevel/xc/xc.c |  2 --
 tools/xl/xl.c                     | 12 ++----------
 xen/common/grant_table.c          |  7 +++++++
 xen/include/public/domctl.h       |  6 ++++--
 5 files changed, 15 insertions(+), 16 deletions(-)

diff --git a/tools/libxl/libxl.h b/tools/libxl/libxl.h
index 49b56fa1a3..1648d337e7 100644
--- a/tools/libxl/libxl.h
+++ b/tools/libxl/libxl.h
@@ -364,8 +364,8 @@
  */
 #define LIBXL_HAVE_BUILDINFO_GRANT_LIMITS 1
 
-#define LIBXL_MAX_GRANT_FRAMES_DEFAULT 32
-#define LIBXL_MAX_MAPTRACK_FRAMES_DEFAULT 1024
+#define LIBXL_MAX_GRANT_FRAMES_DEFAULT 0
+#define LIBXL_MAX_MAPTRACK_FRAMES_DEFAULT 0
 
 /*
  * LIBXL_HAVE_BUILDINFO_* indicates that libxl_domain_build_info has
diff --git a/tools/python/xen/lowlevel/xc/xc.c b/tools/python/xen/lowlevel/xc/xc.c
index 6d2afd5695..0f861872ce 100644
--- a/tools/python/xen/lowlevel/xc/xc.c
+++ b/tools/python/xen/lowlevel/xc/xc.c
@@ -127,8 +127,6 @@ static PyObject *pyxc_domain_create(XcObject *self,
         },
         .max_vcpus = 1,
         .max_evtchn_port = -1, /* No limit. */
-        .max_grant_frames = 32,
-        .max_maptrack_frames = 1024,
     };
 
     static char *kwd_list[] = { "domid", "ssidref", "handle", "flags",
diff --git a/tools/xl/xl.c b/tools/xl/xl.c
index ddd29b3f1b..b6e220184d 100644
--- a/tools/xl/xl.c
+++ b/tools/xl/xl.c
@@ -51,8 +51,8 @@ libxl_bitmap global_pv_affinity_mask;
 enum output_format default_output_format = OUTPUT_FORMAT_JSON;
 int claim_mode = 1;
 bool progress_use_cr = 0;
-int max_grant_frames = -1;
-int max_maptrack_frames = -1;
+int max_grant_frames = 0;
+int max_maptrack_frames = 0;
 
 xentoollog_level minmsglevel = minmsglevel_default;
 
@@ -96,7 +96,6 @@ static void parse_global_config(const char *configfile,
     XLU_Config *config;
     int e;
     const char *buf;
-    libxl_physinfo physinfo;
 
     config = xlu_cfg_init(stderr, configfile);
     if (!config) {
@@ -199,13 +198,6 @@ static void parse_global_config(const char *configfile,
 
     if (!xlu_cfg_get_long (config, "max_grant_frames", &l, 0))
         max_grant_frames = l;
-    else {
-        libxl_physinfo_init(&physinfo);
-        max_grant_frames = (libxl_get_physinfo(ctx, &physinfo) != 0 ||
-                            !(physinfo.max_possible_mfn >> 32))
-                           ? 32 : 64;
-        libxl_physinfo_dispose(&physinfo);
-    }
 
     if (!xlu_cfg_get_long (config, "max_maptrack_frames", &l, 0))
         max_maptrack_frames = l;
diff --git a/xen/common/grant_table.c b/xen/common/grant_table.c
index b34d520f6d..cd24029e33 100644
--- a/xen/common/grant_table.c
+++ b/xen/common/grant_table.c
@@ -1843,6 +1843,13 @@ int grant_table_init(struct domain *d, unsigned int max_grant_frames,
     struct grant_table *gt;
     int ret = -ENOMEM;
 
+    /* Default to maximum values if no lower ones are specified */
+    if ( !max_grant_frames )
+        max_grant_frames = opt_max_grant_frames;
+
+    if ( !max_maptrack_frames )
+        max_maptrack_frames = opt_max_maptrack_frames;
+
     if ( max_grant_frames < INITIAL_NR_GRANT_FRAMES ||
          max_grant_frames > opt_max_grant_frames ||
          max_maptrack_frames > opt_max_maptrack_frames )
diff --git a/xen/include/public/domctl.h b/xen/include/public/domctl.h
index 9f2cfd602c..27d04f67aa 100644
--- a/xen/include/public/domctl.h
+++ b/xen/include/public/domctl.h
@@ -82,8 +82,10 @@ struct xen_domctl_createdomain {
     uint32_t iommu_opts;
 
     /*
-     * Various domain limits, which impact the quantity of resources (global
-     * mapping space, xenheap, etc) a guest may consume.
+     * Various domain limits, which impact the quantity of resources
+     * (global mapping space, xenheap, etc) a guest may consume. For
+     * max_grant_frames and max_maptrack_frames, "0" means "use the
+     * default maximum value".
      */
     uint32_t max_vcpus;
     uint32_t max_evtchn_port;
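(Not part of the patch, just a sketch of the resulting contract: the
struct fields are the ones declared above, and the zero handling is the
grant_table_init() hunk earlier in this patch.)

    /* Toolstack side: leave the limits zeroed to accept the defaults. */
    struct xen_domctl_createdomain cfg = {
        .max_vcpus = 1,
        .max_evtchn_port = -1,    /* No limit. */
        .max_grant_frames = 0,    /* 0 => opt_max_grant_frames */
        .max_maptrack_frames = 0, /* 0 => opt_max_maptrack_frames */
    };

    /* Hypervisor side: grant_table_init() substitutes the boot-time
     * defaults for the zeroes before the existing range check runs, so
     * lowering the system-wide maximum no longer makes domains without
     * explicit limits fail to create. */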