From patchwork Thu Jan 12 14:58:36 2017
X-Patchwork-Submitter: Paul Durrant
X-Patchwork-Id: 9513363
From: Paul Durrant
Date: Thu, 12 Jan 2017 14:58:36 +0000
Message-ID: <1484233120-2015-5-git-send-email-paul.durrant@citrix.com>
X-Mailer: git-send-email 2.1.4
In-Reply-To: <1484233120-2015-1-git-send-email-paul.durrant@citrix.com>
References: <1484233120-2015-1-git-send-email-paul.durrant@citrix.com>
Cc: Andrew Cooper, Daniel De Graaf, Paul Durrant, Ian Jackson
Subject: [Xen-devel] [PATCH v3 4/8] dm_op: convert HVMOP_set_pci_intx_level, HVMOP_set_isa_irq_level, and...

... HVMOP_set_pci_link_route

These HVMOPs were exposed to guests, so their definitions need to be
preserved for compatibility. This patch therefore updates
__XEN_LATEST_INTERFACE_VERSION__ to 0x00040900 and makes the HVMOP
definitions conditional on __XEN_INTERFACE_VERSION__ being less than that
value.

NOTE: This patch also widens the 'domain' parameter of
      xc_hvm_set_pci_intx_level() from a uint8_t to a uint16_t.

Suggested-by: Jan Beulich
Signed-off-by: Paul Durrant
---
Reviewed-by: Jan Beulich
Cc: Daniel De Graaf
Cc: Ian Jackson
Acked-by: Wei Liu
Cc: Andrew Cooper

v3:
- Remove unnecessary padding.

v2:
- Interface version modification moved to this patch, where it is needed.
- Addressed several comments from Jan.
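
For reference, a consumer that still relies on the legacy definitions does
not need an immediate change: it can pin the interface version below
0x00040900 until it is converted to XEN_DMOP_*. A minimal sketch only; the
include path and build setup are illustrative and not part of this patch:

    /* Must come before any Xen public header is included. */
    #define __XEN_INTERFACE_VERSION__ 0x00040800

    #include <xen/hvm/hvm_op.h>  /* HVMOP_set_pci_intx_level et al. remain visible */

Code built with __XEN__ or __XEN_TOOLS__ defined always gets
__XEN_LATEST_INTERFACE_VERSION__ and therefore only sees the new
XEN_DMOP_* definitions in dm_op.h.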
---
 tools/flask/policy/modules/xen.if   |   8 +--
 tools/libxc/include/xenctrl.h       |   2 +-
 tools/libxc/xc_misc.c               |  83 ++++++++--------
 xen/arch/x86/hvm/dm.c               |  72 +++++++++++++++
 xen/arch/x86/hvm/hvm.c              | 136 ------------------------------
 xen/arch/x86/hvm/irq.c              |   7 +-
 xen/include/public/hvm/dm_op.h      |  42 +++++++++
 xen/include/public/hvm/hvm_op.h     |   4 ++
 xen/include/public/xen-compat.h     |   2 +-
 xen/include/xen/hvm/irq.h           |   2 +-
 xen/include/xsm/dummy.h             |  18 -----
 xen/include/xsm/xsm.h               |  18 -----
 xen/xsm/dummy.c                     |   3 -
 xen/xsm/flask/hooks.c               |  15 ----
 xen/xsm/flask/policy/access_vectors |   6 --
 15 files changed, 158 insertions(+), 260 deletions(-)

diff --git a/tools/flask/policy/modules/xen.if b/tools/flask/policy/modules/xen.if
index 45e5b5f..092a6c5 100644
--- a/tools/flask/policy/modules/xen.if
+++ b/tools/flask/policy/modules/xen.if
@@ -57,8 +57,8 @@ define(`create_domain_common', `
 	allow $1 $2:shadow enable;
 	allow $1 $2:mmu { map_read map_write adjust memorymap physmap pinpage mmuext_op updatemp };
 	allow $1 $2:grant setup;
-	allow $1 $2:hvm { cacheattr getparam hvmctl irqlevel pciroute sethvmc
-			setparam pcilevel nested altp2mhvm altp2mhvm_op send_irq };
+	allow $1 $2:hvm { cacheattr getparam hvmctl sethvmc
+			setparam nested altp2mhvm altp2mhvm_op send_irq };
 ')
 
 # create_domain(priv, target)
@@ -93,7 +93,7 @@ define(`manage_domain', `
 # (inbound migration is the same as domain creation)
 define(`migrate_domain_out', `
 	allow $1 domxen_t:mmu map_read;
-	allow $1 $2:hvm { gethvmc getparam irqlevel };
+	allow $1 $2:hvm { gethvmc getparam };
 	allow $1 $2:mmu { stat pageinfo map_read };
 	allow $1 $2:domain { getaddrsize getvcpucontext pause destroy };
 	allow $1 $2:domain2 gettsc;
@@ -151,7 +151,7 @@ define(`device_model', `
 	allow $1 $2_target:domain { getdomaininfo shutdown };
 
 	allow $1 $2_target:mmu { map_read map_write adjust physmap target_hack };
-	allow $1 $2_target:hvm { getparam setparam hvmctl irqlevel pciroute pcilevel cacheattr send_irq dm };
+	allow $1 $2_target:hvm { getparam setparam hvmctl cacheattr send_irq dm };
 ')
 
 # make_device_model(priv, dm_dom, hvm_dom)
diff --git a/tools/libxc/include/xenctrl.h b/tools/libxc/include/xenctrl.h
index c7ee412..f819bf2 100644
--- a/tools/libxc/include/xenctrl.h
+++ b/tools/libxc/include/xenctrl.h
@@ -1594,7 +1594,7 @@ int xc_physdev_unmap_pirq(xc_interface *xch,
 
 int xc_hvm_set_pci_intx_level(
     xc_interface *xch, domid_t dom,
-    uint8_t domain, uint8_t bus, uint8_t device, uint8_t intx,
+    uint16_t domain, uint8_t bus, uint8_t device, uint8_t intx,
     unsigned int level);
 int xc_hvm_set_isa_irq_level(
     xc_interface *xch, domid_t dom,
diff --git a/tools/libxc/xc_misc.c b/tools/libxc/xc_misc.c
index 4c41d41..ddea2bb 100644
--- a/tools/libxc/xc_misc.c
+++ b/tools/libxc/xc_misc.c
@@ -470,33 +470,24 @@ int xc_getcpuinfo(xc_interface *xch, int max_cpus,
 
 int xc_hvm_set_pci_intx_level(
     xc_interface *xch, domid_t dom,
-    uint8_t domain, uint8_t bus, uint8_t device, uint8_t intx,
+    uint16_t domain, uint8_t bus, uint8_t device, uint8_t intx,
     unsigned int level)
 {
-    DECLARE_HYPERCALL_BUFFER(struct xen_hvm_set_pci_intx_level, arg);
-    int rc;
-
-    arg = xc_hypercall_buffer_alloc(xch, arg, sizeof(*arg));
-    if ( arg == NULL )
-    {
-        PERROR("Could not allocate memory for xc_hvm_set_pci_intx_level hypercall");
-        return -1;
-    }
+    struct xen_dm_op op;
+    struct xen_dm_op_set_pci_intx_level *data;
 
-    arg->domid = dom;
-    arg->domain = domain;
-    arg->bus = bus;
-    arg->device = device;
-    arg->intx = intx;
-    arg->level = level;
+    memset(&op, 0, sizeof(op));
 
-    rc = xencall2(xch->xcall, __HYPERVISOR_hvm_op,
-                  HVMOP_set_pci_intx_level,
-                  HYPERCALL_BUFFER_AS_ARG(arg));
+    op.op = XEN_DMOP_set_pci_intx_level;
+    data = &op.u.set_pci_intx_level;
 
-    xc_hypercall_buffer_free(xch, arg);
+    data->domain = domain;
+    data->bus = bus;
+    data->device = device;
+    data->intx = intx;
+    data->level = level;
 
-    return rc;
+    return do_dm_op(xch, dom, 1, &op, sizeof(op));
 }
 
 int xc_hvm_set_isa_irq_level(
@@ -504,53 +495,35 @@ int xc_hvm_set_isa_irq_level(
     uint8_t isa_irq,
     unsigned int level)
 {
-    DECLARE_HYPERCALL_BUFFER(struct xen_hvm_set_isa_irq_level, arg);
-    int rc;
-
-    arg = xc_hypercall_buffer_alloc(xch, arg, sizeof(*arg));
-    if ( arg == NULL )
-    {
-        PERROR("Could not allocate memory for xc_hvm_set_isa_irq_level hypercall");
-        return -1;
-    }
+    struct xen_dm_op op;
+    struct xen_dm_op_set_isa_irq_level *data;
 
-    arg->domid = dom;
-    arg->isa_irq = isa_irq;
-    arg->level = level;
+    memset(&op, 0, sizeof(op));
 
-    rc = xencall2(xch->xcall, __HYPERVISOR_hvm_op,
-                  HVMOP_set_isa_irq_level,
-                  HYPERCALL_BUFFER_AS_ARG(arg));
+    op.op = XEN_DMOP_set_isa_irq_level;
+    data = &op.u.set_isa_irq_level;
 
-    xc_hypercall_buffer_free(xch, arg);
+    data->isa_irq = isa_irq;
+    data->level = level;
 
-    return rc;
+    return do_dm_op(xch, dom, 1, &op, sizeof(op));
 }
 
 int xc_hvm_set_pci_link_route(
     xc_interface *xch, domid_t dom, uint8_t link, uint8_t isa_irq)
 {
-    DECLARE_HYPERCALL_BUFFER(struct xen_hvm_set_pci_link_route, arg);
-    int rc;
-
-    arg = xc_hypercall_buffer_alloc(xch, arg, sizeof(*arg));
-    if ( arg == NULL )
-    {
-        PERROR("Could not allocate memory for xc_hvm_set_pci_link_route hypercall");
-        return -1;
-    }
+    struct xen_dm_op op;
+    struct xen_dm_op_set_pci_link_route *data;
 
-    arg->domid = dom;
-    arg->link = link;
-    arg->isa_irq = isa_irq;
+    memset(&op, 0, sizeof(op));
 
-    rc = xencall2(xch->xcall, __HYPERVISOR_hvm_op,
-                  HVMOP_set_pci_link_route,
-                  HYPERCALL_BUFFER_AS_ARG(arg));
+    op.op = XEN_DMOP_set_pci_link_route;
+    data = &op.u.set_pci_link_route;
 
-    xc_hypercall_buffer_free(xch, arg);
+    data->link = link;
+    data->isa_irq = isa_irq;
 
-    return rc;
+    return do_dm_op(xch, dom, 1, &op, sizeof(op));
 }
 
 int xc_hvm_inject_msi(
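
From a caller's point of view the libxc wrappers keep their names; only the
'domain' (PCI segment) argument of xc_hvm_set_pci_intx_level() is widened,
and each request is now carried by a single xen_dm_op buffer. An illustrative
usage sketch, not part of this patch, with made-up domid, device and IRQ
values:

    #include <xenctrl.h>

    /* Hypothetical helper: pulse a PCI INTx pin, then raise an ISA line. */
    static int pulse_intx_and_isa(xc_interface *xch, domid_t dom)
    {
        int rc;

        /* Assert, then deassert, INTA# of device 3 on segment 0, bus 0. */
        rc = xc_hvm_set_pci_intx_level(xch, dom, 0, 0, 3, 0, 1);
        if ( rc < 0 )
            return rc;
        rc = xc_hvm_set_pci_intx_level(xch, dom, 0, 0, 3, 0, 0);
        if ( rc < 0 )
            return rc;

        /* Route PCI link 0 to ISA IRQ 10, then raise that line. */
        rc = xc_hvm_set_pci_link_route(xch, dom, 0, 10);
        if ( rc < 0 )
            return rc;

        return xc_hvm_set_isa_irq_level(xch, dom, 10, 1);
    }

(The xc_interface handle is assumed to have been opened with
xc_interface_open() by the usual tool-stack plumbing.)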
diff --git a/xen/arch/x86/hvm/dm.c b/xen/arch/x86/hvm/dm.c
index d501d56..bcd9ea6 100644
--- a/xen/arch/x86/hvm/dm.c
+++ b/xen/arch/x86/hvm/dm.c
@@ -99,6 +99,49 @@ static int track_dirty_vram(struct domain *d,
         hap_track_dirty_vram(d, first_pfn, nr, buf.h);
 }
 
+static int set_pci_intx_level(struct domain *d, uint16_t domain,
+                              uint8_t bus, uint8_t device,
+                              uint8_t intx, uint8_t level)
+{
+    if ( domain != 0 || bus != 0 || device > 0x1f || intx > 3 )
+        return -EINVAL;
+
+    switch ( level )
+    {
+    case 0:
+        hvm_pci_intx_deassert(d, device, intx);
+        break;
+    case 1:
+        hvm_pci_intx_assert(d, device, intx);
+        break;
+    default:
+        return -EINVAL;
+    }
+
+    return 0;
+}
+
+static int set_isa_irq_level(struct domain *d, uint8_t isa_irq,
+                             uint8_t level)
+{
+    if ( isa_irq > 15 )
+        return -EINVAL;
+
+    switch ( level )
+    {
+    case 0:
+        hvm_isa_irq_deassert(d, isa_irq);
+        break;
+    case 1:
+        hvm_isa_irq_assert(d, isa_irq);
+        break;
+    default:
+        return -EINVAL;
+    }
+
+    return 0;
+}
+
 long do_dm_op(domid_t domid,
               unsigned int nr_bufs,
               XEN_GUEST_HANDLE_PARAM(xen_dm_op_buf_t) bufs)
@@ -227,6 +270,35 @@ long do_dm_op(domid_t domid,
         break;
     }
 
+    case XEN_DMOP_set_pci_intx_level:
+    {
+        const struct xen_dm_op_set_pci_intx_level *data =
+            &op.u.set_pci_intx_level;
+
+        rc = set_pci_intx_level(d, data->domain, data->bus,
+                                data->device, data->intx,
+                                data->level);
+        break;
+    }
+
+    case XEN_DMOP_set_isa_irq_level:
+    {
+        const struct xen_dm_op_set_isa_irq_level *data =
+            &op.u.set_isa_irq_level;
+
+        rc = set_isa_irq_level(d, data->isa_irq, data->level);
+        break;
+    }
+
+    case XEN_DMOP_set_pci_link_route:
+    {
+        const struct xen_dm_op_set_pci_link_route *data =
+            &op.u.set_pci_link_route;
+
+        rc = hvm_set_pci_link_route(d, data->link, data->isa_irq);
+        break;
+    }
+
     default:
         rc = -EOPNOTSUPP;
         break;
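
The bounds checked above are the same ones the removed hvmop_* handlers
enforced (segment and bus must be zero, device 0-31, intx 0-3, ISA IRQ 0-15,
level strictly 0 or 1); the per-op XSM hooks go away because the common
dm_op entry point, introduced earlier in this series, already applies a
single device-model privilege check before dispatching. A device model can
mirror the bounds to fail fast locally instead of waiting for -EINVAL; a
purely hypothetical helper, not part of this patch:

    #include <errno.h>
    #include <stdint.h>

    /* Mirrors the checks in set_pci_intx_level() above; illustrative only. */
    static int pci_intx_args_valid(uint16_t segment, uint8_t bus,
                                   uint8_t device, uint8_t intx, uint8_t level)
    {
        if ( segment != 0 || bus != 0 )   /* only segment 0, bus 0 is emulated */
            return -EINVAL;
        if ( device > 0x1f || intx > 3 )  /* 32 slots, INTA#..INTD# */
            return -EINVAL;
        if ( level > 1 )                  /* 0 = deassert, 1 = assert */
            return -EINVAL;
        return 0;
    }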
diff --git a/xen/arch/x86/hvm/hvm.c b/xen/arch/x86/hvm/hvm.c
index 80d1ad6..30d5d72 100644
--- a/xen/arch/x86/hvm/hvm.c
+++ b/xen/arch/x86/hvm/hvm.c
@@ -4427,50 +4427,6 @@ void hvm_hypercall_page_initialise(struct domain *d,
     hvm_funcs.init_hypercall_page(d, hypercall_page);
 }
 
-static int hvmop_set_pci_intx_level(
-    XEN_GUEST_HANDLE_PARAM(xen_hvm_set_pci_intx_level_t) uop)
-{
-    struct xen_hvm_set_pci_intx_level op;
-    struct domain *d;
-    int rc;
-
-    if ( copy_from_guest(&op, uop, 1) )
-        return -EFAULT;
-
-    if ( (op.domain > 0) || (op.bus > 0) || (op.device > 31) || (op.intx > 3) )
-        return -EINVAL;
-
-    rc = rcu_lock_remote_domain_by_id(op.domid, &d);
-    if ( rc != 0 )
-        return rc;
-
-    rc = -EINVAL;
-    if ( !is_hvm_domain(d) )
-        goto out;
-
-    rc = xsm_hvm_set_pci_intx_level(XSM_DM_PRIV, d);
-    if ( rc )
-        goto out;
-
-    rc = 0;
-    switch ( op.level )
-    {
-    case 0:
-        hvm_pci_intx_deassert(d, op.device, op.intx);
-        break;
-    case 1:
-        hvm_pci_intx_assert(d, op.device, op.intx);
-        break;
-    default:
-        rc = -EINVAL;
-        break;
-    }
-
- out:
-    rcu_unlock_domain(d);
-    return rc;
-}
-
 void hvm_vcpu_reset_state(struct vcpu *v, uint16_t cs, uint16_t ip)
 {
     struct domain *d = v->domain;
@@ -4614,83 +4570,6 @@ static void hvm_s3_resume(struct domain *d)
     }
 }
 
-static int hvmop_set_isa_irq_level(
-    XEN_GUEST_HANDLE_PARAM(xen_hvm_set_isa_irq_level_t) uop)
-{
-    struct xen_hvm_set_isa_irq_level op;
-    struct domain *d;
-    int rc;
-
-    if ( copy_from_guest(&op, uop, 1) )
-        return -EFAULT;
-
-    if ( op.isa_irq > 15 )
-        return -EINVAL;
-
-    rc = rcu_lock_remote_domain_by_id(op.domid, &d);
-    if ( rc != 0 )
-        return rc;
-
-    rc = -EINVAL;
-    if ( !is_hvm_domain(d) )
-        goto out;
-
-    rc = xsm_hvm_set_isa_irq_level(XSM_DM_PRIV, d);
-    if ( rc )
-        goto out;
-
-    rc = 0;
-    switch ( op.level )
-    {
-    case 0:
-        hvm_isa_irq_deassert(d, op.isa_irq);
-        break;
-    case 1:
-        hvm_isa_irq_assert(d, op.isa_irq);
-        break;
-    default:
-        rc = -EINVAL;
-        break;
-    }
-
- out:
-    rcu_unlock_domain(d);
-    return rc;
-}
-
-static int hvmop_set_pci_link_route(
-    XEN_GUEST_HANDLE_PARAM(xen_hvm_set_pci_link_route_t) uop)
-{
-    struct xen_hvm_set_pci_link_route op;
-    struct domain *d;
-    int rc;
-
-    if ( copy_from_guest(&op, uop, 1) )
-        return -EFAULT;
-
-    if ( (op.link > 3) || (op.isa_irq > 15) )
-        return -EINVAL;
-
-    rc = rcu_lock_remote_domain_by_id(op.domid, &d);
-    if ( rc != 0 )
-        return rc;
-
-    rc = -EINVAL;
-    if ( !is_hvm_domain(d) )
-        goto out;
-
-    rc = xsm_hvm_set_pci_link_route(XSM_DM_PRIV, d);
-    if ( rc )
-        goto out;
-
-    rc = 0;
-    hvm_set_pci_link_route(d, op.link, op.isa_irq);
-
- out:
-    rcu_unlock_domain(d);
-    return rc;
-}
-
 static int hvmop_inject_msi(
     XEN_GUEST_HANDLE_PARAM(xen_hvm_inject_msi_t) uop)
 {
@@ -5492,26 +5371,11 @@ long do_hvm_op(unsigned long op, XEN_GUEST_HANDLE_PARAM(void) arg)
             guest_handle_cast(arg, xen_hvm_param_t));
         break;
 
-    case HVMOP_set_pci_intx_level:
-        rc = hvmop_set_pci_intx_level(
-            guest_handle_cast(arg, xen_hvm_set_pci_intx_level_t));
-        break;
-
-    case HVMOP_set_isa_irq_level:
-        rc = hvmop_set_isa_irq_level(
-            guest_handle_cast(arg, xen_hvm_set_isa_irq_level_t));
-        break;
-
     case HVMOP_inject_msi:
         rc = hvmop_inject_msi(
             guest_handle_cast(arg, xen_hvm_inject_msi_t));
         break;
 
-    case HVMOP_set_pci_link_route:
-        rc = hvmop_set_pci_link_route(
-            guest_handle_cast(arg, xen_hvm_set_pci_link_route_t));
-        break;
-
     case HVMOP_flush_tlbs:
         rc = guest_handle_is_null(arg) ? hvmop_flush_tlb_all() : -EINVAL;
         break;
diff --git a/xen/arch/x86/hvm/irq.c b/xen/arch/x86/hvm/irq.c
index e597114..265a620 100644
--- a/xen/arch/x86/hvm/irq.c
+++ b/xen/arch/x86/hvm/irq.c
@@ -229,13 +229,14 @@ void hvm_assert_evtchn_irq(struct vcpu *v)
     hvm_set_callback_irq_level(v);
 }
 
-void hvm_set_pci_link_route(struct domain *d, u8 link, u8 isa_irq)
+int hvm_set_pci_link_route(struct domain *d, u8 link, u8 isa_irq)
 {
     struct hvm_irq *hvm_irq = &d->arch.hvm_domain.irq;
     u8 old_isa_irq;
     int i;
 
-    ASSERT((link <= 3) && (isa_irq <= 15));
+    if ( (link > 3) || (isa_irq > 15) )
+        return -EINVAL;
 
     spin_lock(&d->arch.hvm_domain.irq_lock);
 
@@ -273,6 +274,8 @@ void hvm_set_pci_link_route(struct domain *d, u8 link, u8 isa_irq)
 
     dprintk(XENLOG_G_INFO, "Dom%u PCI link %u changed %u -> %u\n",
             d->domain_id, link, old_isa_irq, isa_irq);
+
+    return 0;
 }
 
 int hvm_inject_msi(struct domain *d, uint64_t addr, uint32_t data)
diff --git a/xen/include/public/hvm/dm_op.h b/xen/include/public/hvm/dm_op.h
index fb9cf17..9743453 100644
--- a/xen/include/public/hvm/dm_op.h
+++ b/xen/include/public/hvm/dm_op.h
@@ -191,6 +191,45 @@ struct xen_dm_op_track_dirty_vram {
     uint64_aligned_t first_pfn;
 };
 
+/*
+ * XEN_DMOP_set_pci_intx_level: Set the logical level of one of a domain's
+ *                              PCI INTx pins.
+ */
+#define XEN_DMOP_set_pci_intx_level 8
+
+struct xen_dm_op_set_pci_intx_level {
+    /* IN - PCI INTx identification (domain:bus:device:intx) */
+    uint16_t domain;
+    uint8_t bus, device, intx;
+    /* IN - Level: 0 -> deasserted, 1 -> asserted */
+    uint8_t level;
+};
+
+/*
+ * XEN_DMOP_set_isa_irq_level: Set the logical level of a one of a domain's
+ *                             ISA IRQ lines.
+ */
+#define XEN_DMOP_set_isa_irq_level 9
+
+struct xen_dm_op_set_isa_irq_level {
+    /* IN - ISA IRQ (0-15) */
+    uint8_t isa_irq;
+    /* IN - Level: 0 -> deasserted, 1 -> asserted */
+    uint8_t level;
+};
+
+/*
+ * XEN_DMOP_set_pci_link_route: Map a PCI INTx line to an IRQ line.
+ */
+#define XEN_DMOP_set_pci_link_route 10
+
+struct xen_dm_op_set_pci_link_route {
+    /* PCI INTx line (0-3) */
+    uint8_t link;
+    /* ISA IRQ (1-15) or 0 -> disable link */
+    uint8_t isa_irq;
+};
+
 struct xen_dm_op {
     uint32_t op;
     uint32_t pad;
@@ -202,6 +241,9 @@ struct xen_dm_op {
         struct xen_dm_op_set_ioreq_server_state set_ioreq_server_state;
         struct xen_dm_op_destroy_ioreq_server destroy_ioreq_server;
         struct xen_dm_op_track_dirty_vram track_dirty_vram;
+        struct xen_dm_op_set_pci_intx_level set_pci_intx_level;
+        struct xen_dm_op_set_isa_irq_level set_isa_irq_level;
+        struct xen_dm_op_set_pci_link_route set_pci_link_route;
     } u;
 };
 
diff --git a/xen/include/public/hvm/hvm_op.h b/xen/include/public/hvm/hvm_op.h
index 47e836c..7cf8d4d 100644
--- a/xen/include/public/hvm/hvm_op.h
+++ b/xen/include/public/hvm/hvm_op.h
@@ -38,6 +38,8 @@ struct xen_hvm_param {
 typedef struct xen_hvm_param xen_hvm_param_t;
 DEFINE_XEN_GUEST_HANDLE(xen_hvm_param_t);
 
+#if __XEN_INTERFACE_VERSION__ < 0x00040900
+
 /* Set the logical level of one of a domain's PCI INTx wires. */
 #define HVMOP_set_pci_intx_level 2
 struct xen_hvm_set_pci_intx_level {
@@ -76,6 +78,8 @@ struct xen_hvm_set_pci_link_route {
 typedef struct xen_hvm_set_pci_link_route xen_hvm_set_pci_link_route_t;
 DEFINE_XEN_GUEST_HANDLE(xen_hvm_set_pci_link_route_t);
 
+#endif /* __XEN_INTERFACE_VERSION__ < 0x00040900 */
+
 /* Flushes all VCPU TLBs: @arg must be NULL. */
 #define HVMOP_flush_tlbs 5
 
diff --git a/xen/include/public/xen-compat.h b/xen/include/public/xen-compat.h
index dd8a5c0..b673653 100644
--- a/xen/include/public/xen-compat.h
+++ b/xen/include/public/xen-compat.h
@@ -27,7 +27,7 @@
 #ifndef __XEN_PUBLIC_XEN_COMPAT_H__
 #define __XEN_PUBLIC_XEN_COMPAT_H__
 
-#define __XEN_LATEST_INTERFACE_VERSION__ 0x00040800
+#define __XEN_LATEST_INTERFACE_VERSION__ 0x00040900
 
 #if defined(__XEN__) || defined(__XEN_TOOLS__)
 /* Xen is built with matching headers and implements the latest interface. */
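
For completeness, this is roughly what ends up in the single buffer that the
converted libxc wrappers hand to the dm_op hypercall for one of the new
sub-ops. A hedged sketch only, using the structures added above and assuming
a tools-side build (__XEN_TOOLS__ defined) so the dm_op interface is visible;
the hypercall invocation itself is left to the tools' existing plumbing:

    #include <string.h>
    #include <xen/hvm/dm_op.h>

    /* Build the XEN_DMOP_set_isa_irq_level request that raises ISA IRQ 10. */
    static void build_isa_irq_req(struct xen_dm_op *op)
    {
        memset(op, 0, sizeof(*op));          /* 'pad' and unused union bytes stay 0 */
        op->op = XEN_DMOP_set_isa_irq_level;
        op->u.set_isa_irq_level.isa_irq = 10;
        op->u.set_isa_irq_level.level = 1;   /* 1 -> asserted, 0 -> deasserted */
    }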
diff --git a/xen/include/xen/hvm/irq.h b/xen/include/xen/hvm/irq.h
index 4c9cb20..d3f8623 100644
--- a/xen/include/xen/hvm/irq.h
+++ b/xen/include/xen/hvm/irq.h
@@ -122,7 +122,7 @@ void hvm_isa_irq_assert(
 void hvm_isa_irq_deassert(
     struct domain *d, unsigned int isa_irq);
 
-void hvm_set_pci_link_route(struct domain *d, u8 link, u8 isa_irq);
+int hvm_set_pci_link_route(struct domain *d, u8 link, u8 isa_irq);
 
 int hvm_inject_msi(struct domain *d, uint64_t addr, uint32_t data);
 
diff --git a/xen/include/xsm/dummy.h b/xen/include/xsm/dummy.h
index 9f7c174..d3de4b7 100644
--- a/xen/include/xsm/dummy.h
+++ b/xen/include/xsm/dummy.h
@@ -610,24 +610,6 @@ static XSM_INLINE int xsm_shadow_control(XSM_DEFAULT_ARG struct domain *d, uint3
     return xsm_default_action(action, current->domain, d);
 }
 
-static XSM_INLINE int xsm_hvm_set_pci_intx_level(XSM_DEFAULT_ARG struct domain *d)
-{
-    XSM_ASSERT_ACTION(XSM_DM_PRIV);
-    return xsm_default_action(action, current->domain, d);
-}
-
-static XSM_INLINE int xsm_hvm_set_isa_irq_level(XSM_DEFAULT_ARG struct domain *d)
-{
-    XSM_ASSERT_ACTION(XSM_DM_PRIV);
-    return xsm_default_action(action, current->domain, d);
-}
-
-static XSM_INLINE int xsm_hvm_set_pci_link_route(XSM_DEFAULT_ARG struct domain *d)
-{
-    XSM_ASSERT_ACTION(XSM_DM_PRIV);
-    return xsm_default_action(action, current->domain, d);
-}
-
 static XSM_INLINE int xsm_hvm_inject_msi(XSM_DEFAULT_ARG struct domain *d)
 {
     XSM_ASSERT_ACTION(XSM_DM_PRIV);
diff --git a/xen/include/xsm/xsm.h b/xen/include/xsm/xsm.h
index b5845a2..2e4a3ce 100644
--- a/xen/include/xsm/xsm.h
+++ b/xen/include/xsm/xsm.h
@@ -162,9 +162,6 @@ struct xsm_operations {
 #ifdef CONFIG_X86
     int (*do_mca) (void);
     int (*shadow_control) (struct domain *d, uint32_t op);
-    int (*hvm_set_pci_intx_level) (struct domain *d);
-    int (*hvm_set_isa_irq_level) (struct domain *d);
-    int (*hvm_set_pci_link_route) (struct domain *d);
     int (*hvm_inject_msi) (struct domain *d);
     int (*mem_sharing_op) (struct domain *d, struct domain *cd, int op);
     int (*apic) (struct domain *d, int cmd);
@@ -635,21 +632,6 @@ static inline int xsm_shadow_control (xsm_default_t def, struct domain *d, uint3
     return xsm_ops->shadow_control(d, op);
 }
 
-static inline int xsm_hvm_set_pci_intx_level (xsm_default_t def, struct domain *d)
-{
-    return xsm_ops->hvm_set_pci_intx_level(d);
-}
-
-static inline int xsm_hvm_set_isa_irq_level (xsm_default_t def, struct domain *d)
-{
-    return xsm_ops->hvm_set_isa_irq_level(d);
-}
-
-static inline int xsm_hvm_set_pci_link_route (xsm_default_t def, struct domain *d)
-{
-    return xsm_ops->hvm_set_pci_link_route(d);
-}
-
 static inline int xsm_hvm_inject_msi (xsm_default_t def, struct domain *d)
 {
     return xsm_ops->hvm_inject_msi(d);
diff --git a/xen/xsm/dummy.c b/xen/xsm/dummy.c
index d544ec1..f1568dd 100644
--- a/xen/xsm/dummy.c
+++ b/xen/xsm/dummy.c
@@ -145,9 +145,6 @@ void __init xsm_fixup_ops (struct xsm_operations *ops)
 #ifdef CONFIG_X86
     set_to_dummy_if_null(ops, do_mca);
     set_to_dummy_if_null(ops, shadow_control);
-    set_to_dummy_if_null(ops, hvm_set_pci_intx_level);
-    set_to_dummy_if_null(ops, hvm_set_isa_irq_level);
-    set_to_dummy_if_null(ops, hvm_set_pci_link_route);
    set_to_dummy_if_null(ops, hvm_inject_msi);
     set_to_dummy_if_null(ops, mem_sharing_op);
     set_to_dummy_if_null(ops, apic);
diff --git a/xen/xsm/flask/hooks.c b/xen/xsm/flask/hooks.c
index a4272d7..685cbfa 100644
--- a/xen/xsm/flask/hooks.c
+++ b/xen/xsm/flask/hooks.c
@@ -1502,21 +1502,6 @@ static int flask_ioport_mapping(struct domain *d, uint32_t start, uint32_t end,
     return flask_ioport_permission(d, start, end, access);
 }
 
-static int flask_hvm_set_pci_intx_level(struct domain *d)
-{
-    return current_has_perm(d, SECCLASS_HVM, HVM__PCILEVEL);
-}
-
-static int flask_hvm_set_isa_irq_level(struct domain *d)
-{
-    return current_has_perm(d, SECCLASS_HVM, HVM__IRQLEVEL);
-}
-
-static int flask_hvm_set_pci_link_route(struct domain *d)
-{
-    return current_has_perm(d, SECCLASS_HVM, HVM__PCIROUTE);
-}
-
 static int flask_hvm_inject_msi(struct domain *d)
 {
     return current_has_perm(d, SECCLASS_HVM, HVM__SEND_IRQ);
diff --git a/xen/xsm/flask/policy/access_vectors b/xen/xsm/flask/policy/access_vectors
index 47ce589..2f0ed2d 100644
--- a/xen/xsm/flask/policy/access_vectors
+++ b/xen/xsm/flask/policy/access_vectors
@@ -259,12 +259,6 @@ class hvm
     setparam
 # HVMOP_get_param
     getparam
-# HVMOP_set_pci_intx_level (also needs hvmctl)
-    pcilevel
-# HVMOP_set_isa_irq_level
-    irqlevel
-# HVMOP_set_pci_link_route
-    pciroute
     bind_irq
 # XEN_DOMCTL_pin_mem_cacheattr
     cacheattr