From patchwork Tue Jan 19 09:27:58 2016
From: Yu Zhang <yu.c.zhang@linux.intel.com>
To: xen-devel@lists.xen.org
Cc: kevin.tian@intel.com, keir@xen.org, stefano.stabellini@eu.citrix.com,
    andrew.cooper3@citrix.com, Paul.Durrant@citrix.com, zhiyuan.lv@intel.com,
    jbeulich@suse.com, wei.liu2@citrix.com
Date: Tue, 19 Jan 2016 17:27:58 +0800
Message-Id: <1453195678-25944-4-git-send-email-yu.c.zhang@linux.intel.com>
In-Reply-To: <1453195678-25944-1-git-send-email-yu.c.zhang@linux.intel.com>
References: <1453195678-25944-1-git-send-email-yu.c.zhang@linux.intel.com>
Subject: [Xen-devel] [PATCH 3/3] tools: introduce parameter max_ranges.

A new parameter - max_ranges - is added to set the upper limit of ranges
to be tracked inside one ioreq server rangeset.
Ioreq server uses a group of rangesets to track the I/O or memory
resources to be emulated. The default value of this limit is set to
256. Yet there are circumstances under which the limit should exceed
the default one. E.g. in XenGT, when tracking the per-process graphic
translation tables on Intel Broadwell platforms, the number of page
tables concerned will be several thousand (normally, in this case,
8192 could be a big enough value). Users who set this item explicitly
are expected to know the specific scenarios that necessitate this
configuration.

Signed-off-by: Yu Zhang <yu.c.zhang@linux.intel.com>
---
 docs/man/xl.cfg.pod.5           | 17 +++++++++++++++++
 tools/libxl/libxl_dom.c         |  3 +++
 tools/libxl/libxl_types.idl     |  1 +
 tools/libxl/xl_cmdimpl.c        |  4 ++++
 xen/arch/x86/hvm/hvm.c          |  7 ++++++-
 xen/include/public/hvm/params.h |  5 ++++-
 6 files changed, 35 insertions(+), 2 deletions(-)

diff --git a/docs/man/xl.cfg.pod.5 b/docs/man/xl.cfg.pod.5
index 8899f75..562563d 100644
--- a/docs/man/xl.cfg.pod.5
+++ b/docs/man/xl.cfg.pod.5
@@ -962,6 +962,23 @@ FIFO-based event channel ABI support up to 131,071 event channels.
 Other guests are limited to 4095 (64-bit x86 and ARM) or 1023 (32-bit
 x86).
 
+=item B<max_ranges=N>
+
+Limit the maximum number of ranges that can be tracked inside one
+ioreq server rangeset.
+
+Ioreq server uses a group of rangesets to track the I/O or memory
+resources to be emulated. By default, this item is not set. Not
+configuring this item, or setting its value to 0, results in the
+upper limit being set to its default value of 256. Yet there are
+circumstances under which the upper limit inside one rangeset should
+exceed the default one. E.g. in XenGT, when tracking the per-process
+graphic translation tables on Intel Broadwell platforms, the number
+of page tables concerned will be several thousand (normally, in this
+case, 8192 could be a big enough value). Users who set this item
+explicitly are expected to know the specific scenarios that
+necessitate this configuration.
+
 =back
 
 =head2 Paravirtualised (PV) Guest Specific Options
diff --git a/tools/libxl/libxl_dom.c b/tools/libxl/libxl_dom.c
index 47971a9..607b0c4 100644
--- a/tools/libxl/libxl_dom.c
+++ b/tools/libxl/libxl_dom.c
@@ -288,6 +288,9 @@ static void hvm_set_conf_params(xc_interface *handle, uint32_t domid,
                     libxl_defbool_val(info->u.hvm.nested_hvm));
     xc_hvm_param_set(handle, domid, HVM_PARAM_ALTP2M,
                     libxl_defbool_val(info->u.hvm.altp2m));
+    if (info->u.hvm.max_ranges > 0)
+        xc_hvm_param_set(handle, domid, HVM_PARAM_MAX_RANGES,
+                         info->u.hvm.max_ranges);
 }
 
 int libxl__build_pre(libxl__gc *gc, uint32_t domid,
diff --git a/tools/libxl/libxl_types.idl b/tools/libxl/libxl_types.idl
index 9ad7eba..c936265 100644
--- a/tools/libxl/libxl_types.idl
+++ b/tools/libxl/libxl_types.idl
@@ -518,6 +518,7 @@ libxl_domain_build_info = Struct("domain_build_info",[
                                        ("serial_list",      libxl_string_list),
                                        ("rdm", libxl_rdm_reserve),
                                        ("rdm_mem_boundary_memkb", MemKB),
+                                       ("max_ranges", uint32),
                                        ])),
                 ("pv", Struct(None, [("kernel", string),
                                      ("slack_memkb", MemKB),
diff --git a/tools/libxl/xl_cmdimpl.c b/tools/libxl/xl_cmdimpl.c
index 25507c7..9359de7 100644
--- a/tools/libxl/xl_cmdimpl.c
+++ b/tools/libxl/xl_cmdimpl.c
@@ -1626,6 +1626,10 @@ static void parse_config_data(const char *config_source,
 
         if (!xlu_cfg_get_long (config, "rdm_mem_boundary", &l, 0))
             b_info->u.hvm.rdm_mem_boundary_memkb = l * 1024;
+
+        if (!xlu_cfg_get_long (config, "max_ranges", &l, 0))
+            b_info->u.hvm.max_ranges = l;
+
         break;
     case LIBXL_DOMAIN_TYPE_PV:
     {
diff --git a/xen/arch/x86/hvm/hvm.c b/xen/arch/x86/hvm/hvm.c
index d59e7bc..2f85089 100644
--- a/xen/arch/x86/hvm/hvm.c
+++ b/xen/arch/x86/hvm/hvm.c
@@ -943,6 +943,10 @@ static int hvm_ioreq_server_alloc_rangesets(struct hvm_ioreq_server *s,
 {
     unsigned int i;
     int rc;
+    unsigned int max_ranges =
+        ( s->domain->arch.hvm_domain.params[HVM_PARAM_MAX_RANGES] > 0 ) ?
+        s->domain->arch.hvm_domain.params[HVM_PARAM_MAX_RANGES] :
+        MAX_NR_IO_RANGES;
 
     if ( is_default )
         goto done;
@@ -965,7 +969,7 @@ static int hvm_ioreq_server_alloc_rangesets(struct hvm_ioreq_server *s,
         if ( !s->range[i] )
             goto fail;
 
-        rangeset_limit(s->range[i], MAX_NR_IO_RANGES);
+        rangeset_limit(s->range[i], max_ranges);
     }
 
  done:
@@ -6012,6 +6016,7 @@ static int hvm_allow_set_param(struct domain *d,
     case HVM_PARAM_IOREQ_SERVER_PFN:
     case HVM_PARAM_NR_IOREQ_SERVER_PAGES:
     case HVM_PARAM_ALTP2M:
+    case HVM_PARAM_MAX_RANGES:
         if ( value != 0 && a->value != value )
             rc = -EEXIST;
         break;
diff --git a/xen/include/public/hvm/params.h b/xen/include/public/hvm/params.h
index 81f9451..7732087 100644
--- a/xen/include/public/hvm/params.h
+++ b/xen/include/public/hvm/params.h
@@ -210,6 +210,9 @@
 /* Boolean: Enable altp2m */
 #define HVM_PARAM_ALTP2M                 35
 
-#define HVM_NR_PARAMS          36
+/* Maximum ranges to be tracked in one rangeset by ioreq server */
+#define HVM_PARAM_MAX_RANGES             36
+
+#define HVM_NR_PARAMS          37
 
 #endif /* __XEN_PUBLIC_HVM_PARAMS_H__ */
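
For illustration only, a minimal sketch of an xl guest configuration
fragment that exercises the new option might look as follows. The guest
name, memory size, vcpu count and builder line are placeholder values,
and 8192 simply mirrors the XenGT figure quoted in the commit message:

    # Hypothetical HVM guest config (sketch) for a XenGT-style workload.
    builder = "hvm"
    name    = "xengt-guest"
    memory  = 4096
    vcpus   = 4
    # Raise the per-rangeset limit of the ioreq server above the default
    # of 256; omitting the line, or setting 0, keeps the default.
    max_ranges = 8192

With such a configuration, parse_config_data() reads the value into
b_info->u.hvm.max_ranges, libxl forwards it via HVM_PARAM_MAX_RANGES,
and hvm_ioreq_server_alloc_rangesets() uses it in place of
MAX_NR_IO_RANGES when limiting each rangeset.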