From patchwork Tue Sep 12 15:08:01 2017
X-Patchwork-Submitter: Jan Beulich
X-Patchwork-Id: 9949351
Message-Id: <59B81471020000780017A44E@prv-mh.provo.novell.com>
In-Reply-To: <59B80A8B020000780017A347@prv-mh.provo.novell.com>
References: <59B80A8B020000780017A347@prv-mh.provo.novell.com>
Date: Tue, 12 Sep 2017 09:08:01 -0600
From: "Jan Beulich"
To: "xen-devel"
Cc: Stefano Stabellini, Wei Liu, Razvan Cojocaru, George Dunlap,
 Andrew Cooper, Dario Faggioli, Ian Jackson, Tim Deegan, Julien Grall,
 tamas@tklengyel.com, Meng Xu
Subject: [Xen-devel] [PATCH 1/2] public/domctl: drop unnecessary typedefs and handles
By virtue of the struct xen_domctl container structure, most of these
per-sub-operation typedefs and guest handles are really just cluttering
the name space. While dropping them, also:
- convert an enum-typed (pt_irq_type_t) structure field to a fixed-width
  type,
- make x86's paging_domctl() and descendants take a properly typed
  handle,
- add const in a few places.

Signed-off-by: Jan Beulich
Acked-by: Razvan Cojocaru
Acked-by: Dario Faggioli
Acked-by: Meng Xu
Acked-by: George Dunlap
Acked-by: Tamas K Lengyel

--- a/tools/libxc/include/xenctrl.h
+++ b/tools/libxc/include/xenctrl.h
@@ -903,7 +903,7 @@ int xc_vcpu_get_extstate(xc_interface *x
                          uint32_t vcpu,
                          xc_vcpu_extstate_t *extstate);
 
-typedef xen_domctl_getvcpuinfo_t xc_vcpuinfo_t;
+typedef struct xen_domctl_getvcpuinfo xc_vcpuinfo_t;
 int xc_vcpu_getinfo(xc_interface *xch,
                     uint32_t domid,
                     uint32_t vcpu,
@@ -916,7 +916,7 @@ long long xc_domain_get_cpu_usage(xc_int
 int xc_domain_sethandle(xc_interface *xch, uint32_t domid,
                         xen_domain_handle_t handle);
 
-typedef xen_domctl_shadow_op_stats_t xc_shadow_op_stats_t;
+typedef struct xen_domctl_shadow_op_stats xc_shadow_op_stats_t;
 int xc_shadow_control(xc_interface *xch,
                       uint32_t domid,
                       unsigned int sop,
--- a/tools/libxc/xc_domain.c
+++ b/tools/libxc/xc_domain.c
@@ -1714,8 +1714,7 @@ int xc_domain_update_msi_irq(
     uint64_t gtable)
 {
     int rc;
-    xen_domctl_bind_pt_irq_t *bind;
-
+    struct xen_domctl_bind_pt_irq *bind;
     DECLARE_DOMCTL;
 
     domctl.cmd = XEN_DOMCTL_bind_pt_irq;
@@ -1740,8 +1739,7 @@ int xc_domain_unbind_msi_irq(
     uint32_t gflags)
 {
     int rc;
-    xen_domctl_bind_pt_irq_t *bind;
-
+    struct xen_domctl_bind_pt_irq *bind;
     DECLARE_DOMCTL;
 
     domctl.cmd = XEN_DOMCTL_unbind_pt_irq;
@@ -1770,7 +1768,7 @@ static int xc_domain_bind_pt_irq_int(
     uint16_t spi)
 {
     int rc;
-    xen_domctl_bind_pt_irq_t * bind;
+    struct xen_domctl_bind_pt_irq *bind;
     DECLARE_DOMCTL;
 
     domctl.cmd = XEN_DOMCTL_bind_pt_irq;
@@ -1828,7 +1826,7 @@ static int xc_domain_unbind_pt_irq_int(
     uint8_t spi)
 {
     int rc;
-    xen_domctl_bind_pt_irq_t * bind;
+    struct xen_domctl_bind_pt_irq *bind;
     DECLARE_DOMCTL;
 
     domctl.cmd = XEN_DOMCTL_unbind_pt_irq;
--- a/xen/arch/arm/domctl.c
+++ b/xen/arch/arm/domctl.c
@@ -41,7 +41,7 @@ long arch_do_domctl(struct xen_domctl *d
    case XEN_DOMCTL_bind_pt_irq:
    {
        int rc;
-       xen_domctl_bind_pt_irq_t *bind = &domctl->u.bind_pt_irq;
+       struct xen_domctl_bind_pt_irq *bind = &domctl->u.bind_pt_irq;
        uint32_t irq = bind->u.spi.spi;
        uint32_t virq = bind->machine_irq;
 
@@ -87,7 +87,7 @@ long arch_do_domctl(struct xen_domctl *d
    case XEN_DOMCTL_unbind_pt_irq:
    {
        int rc;
-       xen_domctl_bind_pt_irq_t *bind = &domctl->u.bind_pt_irq;
+       struct xen_domctl_bind_pt_irq *bind = &domctl->u.bind_pt_irq;
        uint32_t irq = bind->u.spi.spi;
        uint32_t virq = bind->machine_irq;
--- a/xen/arch/x86/domctl.c
+++ b/xen/arch/x86/domctl.c
@@ -48,7 +48,7 @@ static int gdbsx_guest_mem_io(domid_t do
 }
 
 static int update_domain_cpuid_info(struct domain *d,
-                                    const xen_domctl_cpuid_t *ctl)
+                                    const struct xen_domctl_cpuid *ctl)
 {
     struct cpuid_policy *p = d->arch.cpuid;
     const struct cpuid_leaf leaf = { ctl->eax, ctl->ebx, ctl->ecx, ctl->edx };
@@ -363,8 +363,7 @@ long arch_do_domctl(
     {
     case XEN_DOMCTL_shadow_op:
-        ret = paging_domctl(d, &domctl->u.shadow_op,
-                            guest_handle_cast(u_domctl, void), 0);
+        ret = paging_domctl(d, &domctl->u.shadow_op, u_domctl, 0);
         if ( ret == -ERESTART )
             return hypercall_create_continuation(__HYPERVISOR_arch_1,
                                                  "h", u_domctl);
@@ -707,7 +706,7 @@ long arch_do_domctl(
     case XEN_DOMCTL_bind_pt_irq:
     {
-        xen_domctl_bind_pt_irq_t *bind = &domctl->u.bind_pt_irq;
+        struct xen_domctl_bind_pt_irq *bind = &domctl->u.bind_pt_irq;
         int irq;
 
         ret = -EINVAL;
@@ -738,7 +737,7 @@ long arch_do_domctl(
     case XEN_DOMCTL_unbind_pt_irq:
     {
-        xen_domctl_bind_pt_irq_t *bind = &domctl->u.bind_pt_irq;
+        struct xen_domctl_bind_pt_irq *bind = &domctl->u.bind_pt_irq;
         int irq = domain_pirq_to_irq(d, bind->machine_irq);
 
         ret = -EPERM;
--- a/xen/arch/x86/hvm/vioapic.c
+++ b/xen/arch/x86/hvm/vioapic.c
@@ -162,7 +162,7 @@ static int vioapic_hwdom_map_gsi(unsigne
                                  unsigned int pol)
 {
     struct domain *currd = current->domain;
-    xen_domctl_bind_pt_irq_t pt_irq_bind = {
+    struct xen_domctl_bind_pt_irq pt_irq_bind = {
         .irq_type = PT_IRQ_TYPE_PCI,
         .machine_irq = gsi,
     };
--- a/xen/arch/x86/mm/hap/hap.c
+++ b/xen/arch/x86/mm/hap/hap.c
@@ -608,8 +608,8 @@ out:
     paging_unlock(d);
 }
 
-int hap_domctl(struct domain *d, xen_domctl_shadow_op_t *sc,
-               XEN_GUEST_HANDLE_PARAM(void) u_domctl)
+int hap_domctl(struct domain *d, struct xen_domctl_shadow_op *sc,
+               XEN_GUEST_HANDLE_PARAM(xen_domctl_t) u_domctl)
 {
     int rc;
     bool preempted = false;
--- a/xen/arch/x86/mm/mem_sharing.c
+++ b/xen/arch/x86/mm/mem_sharing.c
@@ -1606,7 +1606,7 @@ out:
     return rc;
 }
 
-int mem_sharing_domctl(struct domain *d, xen_domctl_mem_sharing_op_t *mec)
+int mem_sharing_domctl(struct domain *d, struct xen_domctl_mem_sharing_op *mec)
 {
     int rc;
--- a/xen/arch/x86/mm/paging.c
+++ b/xen/arch/x86/mm/paging.c
@@ -674,8 +674,9 @@ void paging_vcpu_init(struct vcpu *v)
 }
 
-int paging_domctl(struct domain *d, xen_domctl_shadow_op_t *sc,
-                  XEN_GUEST_HANDLE_PARAM(void) u_domctl, bool_t resuming)
+int paging_domctl(struct domain *d, struct xen_domctl_shadow_op *sc,
+                  XEN_GUEST_HANDLE_PARAM(xen_domctl_t) u_domctl,
+                  bool_t resuming)
 {
     int rc;
@@ -775,8 +776,7 @@ long paging_domctl_continuation(XEN_GUES
     {
         if ( domctl_lock_acquire() )
         {
-            ret = paging_domctl(d, &op.u.shadow_op,
-                                guest_handle_cast(u_domctl, void), 1);
+            ret = paging_domctl(d, &op.u.shadow_op, u_domctl, 1);
             domctl_lock_release();
         }
--- a/xen/arch/x86/mm/shadow/common.c
+++ b/xen/arch/x86/mm/shadow/common.c
@@ -3809,8 +3809,8 @@ out:
 /* Shadow-control XEN_DOMCTL dispatcher */
 
 int shadow_domctl(struct domain *d,
-                  xen_domctl_shadow_op_t *sc,
-                  XEN_GUEST_HANDLE_PARAM(void) u_domctl)
+                  struct xen_domctl_shadow_op *sc,
+                  XEN_GUEST_HANDLE_PARAM(xen_domctl_t) u_domctl)
 {
     int rc;
     bool preempted = false;
--- a/xen/common/domctl.c
+++ b/xen/common/domctl.c
@@ -243,7 +243,7 @@ void domctl_lock_release(void)
 }
 
 static inline
-int vcpuaffinity_params_invalid(const xen_domctl_vcpuaffinity_t *vcpuaff)
+int vcpuaffinity_params_invalid(const struct xen_domctl_vcpuaffinity *vcpuaff)
 {
     return vcpuaff->flags == 0 ||
            ((vcpuaff->flags & XEN_VCPUAFFINITY_HARD) &&
@@ -690,7 +690,7 @@ long do_domctl(XEN_GUEST_HANDLE_PARAM(xe
     case XEN_DOMCTL_getvcpuaffinity:
     {
         struct vcpu *v;
-        xen_domctl_vcpuaffinity_t *vcpuaff = &op->u.vcpuaffinity;
+        struct xen_domctl_vcpuaffinity *vcpuaff = &op->u.vcpuaffinity;
 
         ret = -EINVAL;
         if ( vcpuaff->vcpu >= d->max_vcpus )
--- a/xen/common/sched_rt.c
+++ b/xen/common/sched_rt.c
@@ -1345,7 +1345,7 @@ rt_dom_cntl(
     struct vcpu *v;
     unsigned long flags;
     int rc = 0;
-    xen_domctl_schedparam_vcpu_t local_sched;
+    struct xen_domctl_schedparam_vcpu local_sched;
     s_time_t period, budget;
     uint32_t index = 0;
--- a/xen/common/vm_event.c
+++ b/xen/common/vm_event.c
@@ -41,7 +41,7 @@
 static int vm_event_enable(
     struct domain *d,
-    xen_domctl_vm_event_op_t *vec,
+    struct xen_domctl_vm_event_op *vec,
     struct vm_event_domain **ved,
     int pause_flag,
     int param,
@@ -587,7 +587,7 @@ void vm_event_cleanup(struct domain *d)
 #endif
 }
 
-int vm_event_domctl(struct domain *d, xen_domctl_vm_event_op_t *vec,
+int vm_event_domctl(struct domain *d, struct xen_domctl_vm_event_op *vec,
                     XEN_GUEST_HANDLE_PARAM(void) u_domctl)
 {
     int rc;
--- a/xen/drivers/passthrough/io.c
+++ b/xen/drivers/passthrough/io.c
@@ -276,7 +276,7 @@ static struct vcpu *vector_hashing_dest(
 }
 
 int pt_irq_create_bind(
-    struct domain *d, xen_domctl_bind_pt_irq_t *pt_irq_bind)
+    struct domain *d, const struct xen_domctl_bind_pt_irq *pt_irq_bind)
 {
     struct hvm_irq_dpci *hvm_irq_dpci;
     struct hvm_pirq_dpci *pirq_dpci;
@@ -620,7 +620,7 @@ int pt_irq_create_bind(
 }
 
 int pt_irq_destroy_bind(
-    struct domain *d, xen_domctl_bind_pt_irq_t *pt_irq_bind)
+    struct domain *d, const struct xen_domctl_bind_pt_irq *pt_irq_bind)
 {
     struct hvm_irq_dpci *hvm_irq_dpci;
     struct hvm_pirq_dpci *pirq_dpci;
--- a/xen/include/asm-x86/hap.h
+++ b/xen/include/asm-x86/hap.h
@@ -34,8 +34,8 @@
 /* hap domain level functions */
 /************************************************/
 void hap_domain_init(struct domain *d);
-int hap_domctl(struct domain *d, xen_domctl_shadow_op_t *sc,
-               XEN_GUEST_HANDLE_PARAM(void) u_domctl);
+int hap_domctl(struct domain *d, struct xen_domctl_shadow_op *sc,
+               XEN_GUEST_HANDLE_PARAM(xen_domctl_t) u_domctl);
 int hap_enable(struct domain *d, u32 mode);
 void hap_final_teardown(struct domain *d);
 void hap_teardown(struct domain *d, bool *preempted);
--- a/xen/include/asm-x86/mem_sharing.h
+++ b/xen/include/asm-x86/mem_sharing.h
@@ -87,7 +87,7 @@ int mem_sharing_notify_enomem(struct dom
                               bool_t allow_sleep);
 int mem_sharing_memop(XEN_GUEST_HANDLE_PARAM(xen_mem_sharing_op_t) arg);
 int mem_sharing_domctl(struct domain *d,
-                       xen_domctl_mem_sharing_op_t *mec);
+                       struct xen_domctl_mem_sharing_op *mec);
 void mem_sharing_init(void);
 
 /* Scans the p2m and relinquishes any shared pages, destroying
--- a/xen/include/asm-x86/paging.h
+++ b/xen/include/asm-x86/paging.h
@@ -202,8 +202,9 @@ int paging_domain_init(struct domain *d,
 /* Handler for paging-control ops: operations from user-space to enable
  * and disable ephemeral shadow modes (test mode and log-dirty mode) and
  * manipulate the log-dirty bitmap. */
-int paging_domctl(struct domain *d, xen_domctl_shadow_op_t *sc,
-                  XEN_GUEST_HANDLE_PARAM(void) u_domctl, bool_t resuming);
+int paging_domctl(struct domain *d, struct xen_domctl_shadow_op *sc,
+                  XEN_GUEST_HANDLE_PARAM(xen_domctl_t) u_domctl,
+                  bool_t resuming);
 
 /* Helper hypercall for dealing with continuations. */
 long paging_domctl_continuation(XEN_GUEST_HANDLE_PARAM(xen_domctl_t));
--- a/xen/include/asm-x86/shadow.h
+++ b/xen/include/asm-x86/shadow.h
@@ -69,8 +69,8 @@ int shadow_track_dirty_vram(struct domai
  * and disable ephemeral shadow modes (test mode and log-dirty mode) and
  * manipulate the log-dirty bitmap. */
 int shadow_domctl(struct domain *d,
-                  xen_domctl_shadow_op_t *sc,
-                  XEN_GUEST_HANDLE_PARAM(void) u_domctl);
+                  struct xen_domctl_shadow_op *sc,
+                  XEN_GUEST_HANDLE_PARAM(xen_domctl_t) u_domctl);
 
 /* Call when destroying a domain */
 void shadow_teardown(struct domain *d, bool *preempted);
@@ -106,8 +106,9 @@ static inline void sh_remove_shadows(str
 static inline void shadow_blow_tables_per_domain(struct domain *d) {}
 
-static inline int shadow_domctl(struct domain *d, xen_domctl_shadow_op_t *sc,
-                                XEN_GUEST_HANDLE_PARAM(void) u_domctl)
+static inline int shadow_domctl(struct domain *d,
+                                struct xen_domctl_shadow_op *sc,
+                                XEN_GUEST_HANDLE_PARAM(xen_domctl_t) u_domctl)
 {
     return -EINVAL;
 }
--- a/xen/include/public/domctl.h
+++ b/xen/include/public/domctl.h
@@ -66,8 +66,6 @@ struct xen_domctl_createdomain {
     uint32_t flags;
     struct xen_arch_domainconfig config;
 };
-typedef struct xen_domctl_createdomain xen_domctl_createdomain_t;
-DEFINE_XEN_GUEST_HANDLE(xen_domctl_createdomain_t);
 
 /* XEN_DOMCTL_getdomaininfo */
 struct xen_domctl_getdomaininfo {
@@ -133,8 +131,6 @@ struct xen_domctl_getmemlist {
     /* OUT variables. */
     uint64_aligned_t num_pfns;
 };
-typedef struct xen_domctl_getmemlist xen_domctl_getmemlist_t;
-DEFINE_XEN_GUEST_HANDLE(xen_domctl_getmemlist_t);
 
 /* XEN_DOMCTL_getpageframeinfo */
 
@@ -225,8 +221,6 @@ struct xen_domctl_shadow_op_stats {
     uint32_t fault_count;
     uint32_t dirty_count;
 };
-typedef struct xen_domctl_shadow_op_stats xen_domctl_shadow_op_stats_t;
-DEFINE_XEN_GUEST_HANDLE(xen_domctl_shadow_op_stats_t);
 
 struct xen_domctl_shadow_op {
     /* IN variables. */
@@ -244,8 +238,6 @@ struct xen_domctl_shadow_op {
     uint64_aligned_t pages; /* Size of buffer. Updated with actual size. */
     struct xen_domctl_shadow_op_stats stats;
 };
-typedef struct xen_domctl_shadow_op xen_domctl_shadow_op_t;
-DEFINE_XEN_GUEST_HANDLE(xen_domctl_shadow_op_t);
 
 /* XEN_DOMCTL_max_mem */
@@ -253,8 +245,6 @@ struct xen_domctl_max_mem {
     /* IN variables. */
     uint64_aligned_t max_memkb;
 };
-typedef struct xen_domctl_max_mem xen_domctl_max_mem_t;
-DEFINE_XEN_GUEST_HANDLE(xen_domctl_max_mem_t);
 
 /* XEN_DOMCTL_setvcpucontext */
@@ -263,8 +253,6 @@ struct xen_domctl_vcpucontext {
     uint32_t vcpu;                  /* IN */
     XEN_GUEST_HANDLE_64(vcpu_guest_context_t) ctxt; /* IN/OUT */
 };
-typedef struct xen_domctl_vcpucontext xen_domctl_vcpucontext_t;
-DEFINE_XEN_GUEST_HANDLE(xen_domctl_vcpucontext_t);
 
 /* XEN_DOMCTL_getvcpuinfo */
@@ -278,8 +266,6 @@ struct xen_domctl_getvcpuinfo {
     uint64_aligned_t cpu_time;  /* total cpu time consumed (ns) */
     uint32_t cpu;               /* current mapping   */
 };
-typedef struct xen_domctl_getvcpuinfo xen_domctl_getvcpuinfo_t;
-DEFINE_XEN_GUEST_HANDLE(xen_domctl_getvcpuinfo_t);
 
 /* Get/set the NUMA node(s) with which the guest has affinity with. */
@@ -288,8 +274,6 @@ DEFINE_XEN_GUEST_HANDLE(xen_domctl_getvc
 struct xen_domctl_nodeaffinity {
     struct xenctl_bitmap nodemap;/* IN */
 };
-typedef struct xen_domctl_nodeaffinity xen_domctl_nodeaffinity_t;
-DEFINE_XEN_GUEST_HANDLE(xen_domctl_nodeaffinity_t);
 
 /* Get/set which physical cpus a vcpu can execute on. */
@@ -327,16 +311,12 @@ struct xen_domctl_vcpuaffinity {
     struct xenctl_bitmap cpumap_hard;
     struct xenctl_bitmap cpumap_soft;
 };
-typedef struct xen_domctl_vcpuaffinity xen_domctl_vcpuaffinity_t;
-DEFINE_XEN_GUEST_HANDLE(xen_domctl_vcpuaffinity_t);
 
 /* XEN_DOMCTL_max_vcpus */
 struct xen_domctl_max_vcpus {
     uint32_t max;           /* maximum number of vcpus */
 };
-typedef struct xen_domctl_max_vcpus xen_domctl_max_vcpus_t;
-DEFINE_XEN_GUEST_HANDLE(xen_domctl_max_vcpus_t);
 
 /* XEN_DOMCTL_scheduler_op */
@@ -348,25 +328,25 @@ DEFINE_XEN_GUEST_HANDLE(xen_domctl_max_v
 #define XEN_SCHEDULER_RTDS     8
 #define XEN_SCHEDULER_NULL     9
 
-typedef struct xen_domctl_sched_credit {
+struct xen_domctl_sched_credit {
     uint16_t weight;
     uint16_t cap;
-} xen_domctl_sched_credit_t;
+};
 
-typedef struct xen_domctl_sched_credit2 {
+struct xen_domctl_sched_credit2 {
     uint16_t weight;
-} xen_domctl_sched_credit2_t;
+};
 
-typedef struct xen_domctl_sched_rtds {
+struct xen_domctl_sched_rtds {
     uint32_t period;
     uint32_t budget;
-} xen_domctl_sched_rtds_t;
+};
 
 typedef struct xen_domctl_schedparam_vcpu {
     union {
-        xen_domctl_sched_credit_t credit;
-        xen_domctl_sched_credit2_t credit2;
-        xen_domctl_sched_rtds_t rtds;
+        struct xen_domctl_sched_credit credit;
+        struct xen_domctl_sched_credit2 credit2;
+        struct xen_domctl_sched_rtds rtds;
     } u;
     uint32_t vcpuid;
 } xen_domctl_schedparam_vcpu_t;
@@ -393,9 +373,9 @@ struct xen_domctl_scheduler_op {
     uint32_t cmd;       /* XEN_DOMCTL_SCHEDOP_* */
     /* IN/OUT */
     union {
-        xen_domctl_sched_credit_t credit;
-        xen_domctl_sched_credit2_t credit2;
-        xen_domctl_sched_rtds_t rtds;
+        struct xen_domctl_sched_credit credit;
+        struct xen_domctl_sched_credit2 credit2;
+        struct xen_domctl_sched_rtds rtds;
         struct {
             XEN_GUEST_HANDLE_64(xen_domctl_schedparam_vcpu_t) vcpus;
             /*
@@ -407,24 +387,18 @@ struct xen_domctl_scheduler_op {
         } v;
     } u;
 };
-typedef struct xen_domctl_scheduler_op xen_domctl_scheduler_op_t;
-DEFINE_XEN_GUEST_HANDLE(xen_domctl_scheduler_op_t);
 
 /* XEN_DOMCTL_setdomainhandle */
 struct xen_domctl_setdomainhandle {
     xen_domain_handle_t handle;
 };
-typedef struct xen_domctl_setdomainhandle xen_domctl_setdomainhandle_t;
-DEFINE_XEN_GUEST_HANDLE(xen_domctl_setdomainhandle_t);
 
 /* XEN_DOMCTL_setdebugging */
 struct xen_domctl_setdebugging {
     uint8_t enable;
 };
-typedef struct xen_domctl_setdebugging xen_domctl_setdebugging_t;
-DEFINE_XEN_GUEST_HANDLE(xen_domctl_setdebugging_t);
 
 /* XEN_DOMCTL_irq_permission */
@@ -432,8 +406,6 @@ struct xen_domctl_irq_permission {
     uint8_t pirq;
     uint8_t allow_access;    /* flag to specify enable/disable of IRQ access */
 };
-typedef struct xen_domctl_irq_permission xen_domctl_irq_permission_t;
-DEFINE_XEN_GUEST_HANDLE(xen_domctl_irq_permission_t);
 
 /* XEN_DOMCTL_iomem_permission */
@@ -442,8 +414,6 @@ struct xen_domctl_iomem_permission {
     uint64_aligned_t nr_mfns;  /* number of pages in range (>0) */
     uint8_t  allow_access;     /* allow (!0) or deny (0) access to range? */
 };
-typedef struct xen_domctl_iomem_permission xen_domctl_iomem_permission_t;
-DEFINE_XEN_GUEST_HANDLE(xen_domctl_iomem_permission_t);
 
 /* XEN_DOMCTL_ioport_permission */
@@ -452,42 +422,34 @@ struct xen_domctl_ioport_permission {
     uint32_t nr_ports;               /* size of port range */
     uint8_t  allow_access;           /* allow or deny access to range? */
 };
-typedef struct xen_domctl_ioport_permission xen_domctl_ioport_permission_t;
-DEFINE_XEN_GUEST_HANDLE(xen_domctl_ioport_permission_t);
 
 /* XEN_DOMCTL_hypercall_init */
 struct xen_domctl_hypercall_init {
     uint64_aligned_t  gmfn;          /* GMFN to be initialised */
 };
-typedef struct xen_domctl_hypercall_init xen_domctl_hypercall_init_t;
-DEFINE_XEN_GUEST_HANDLE(xen_domctl_hypercall_init_t);
 
 /* XEN_DOMCTL_settimeoffset */
 struct xen_domctl_settimeoffset {
     int64_aligned_t time_offset_seconds; /* applied to domain wallclock time */
 };
-typedef struct xen_domctl_settimeoffset xen_domctl_settimeoffset_t;
-DEFINE_XEN_GUEST_HANDLE(xen_domctl_settimeoffset_t);
 
 /* XEN_DOMCTL_gethvmcontext */
 /* XEN_DOMCTL_sethvmcontext */
-typedef struct xen_domctl_hvmcontext {
+struct xen_domctl_hvmcontext {
     uint32_t size; /* IN/OUT: size of buffer / bytes filled */
     XEN_GUEST_HANDLE_64(uint8) buffer; /* IN/OUT: data, or call
                                         * gethvmcontext with NULL
                                         * buffer to get size req'd */
-} xen_domctl_hvmcontext_t;
-DEFINE_XEN_GUEST_HANDLE(xen_domctl_hvmcontext_t);
+};
 
 /* XEN_DOMCTL_set_address_size */
 /* XEN_DOMCTL_get_address_size */
-typedef struct xen_domctl_address_size {
+struct xen_domctl_address_size {
     uint32_t size;
-} xen_domctl_address_size_t;
-DEFINE_XEN_GUEST_HANDLE(xen_domctl_address_size_t);
+};
 
 /* XEN_DOMCTL_sendtrigger */
@@ -500,8 +462,6 @@ struct xen_domctl_sendtrigger {
     uint32_t trigger;  /* IN */
     uint32_t vcpu;     /* IN */
 };
-typedef struct xen_domctl_sendtrigger xen_domctl_sendtrigger_t;
-DEFINE_XEN_GUEST_HANDLE(xen_domctl_sendtrigger_t);
 
 /* Assign a device to a guest. Sets up IOMMU structures. */
@@ -536,8 +496,6 @@ struct xen_domctl_assign_device {
         } dt;
     } u;
 };
-typedef struct xen_domctl_assign_device xen_domctl_assign_device_t;
-DEFINE_XEN_GUEST_HANDLE(xen_domctl_assign_device_t);
 
 /* Retrieve sibling devices infomation of machine_sbdf */
 /* XEN_DOMCTL_get_device_group */
@@ -547,22 +505,20 @@ struct xen_domctl_get_device_group {
     uint32_t  num_sdevs;           /* OUT */
     XEN_GUEST_HANDLE_64(uint32)  sdev_array;   /* OUT */
 };
-typedef struct xen_domctl_get_device_group xen_domctl_get_device_group_t;
-DEFINE_XEN_GUEST_HANDLE(xen_domctl_get_device_group_t);
 
 /* Pass-through interrupts: bind real irq -> hvm devfn. */
 /* XEN_DOMCTL_bind_pt_irq */
 /* XEN_DOMCTL_unbind_pt_irq */
-typedef enum pt_irq_type_e {
+enum pt_irq_type {
     PT_IRQ_TYPE_PCI,
     PT_IRQ_TYPE_ISA,
     PT_IRQ_TYPE_MSI,
     PT_IRQ_TYPE_MSI_TRANSLATE,
     PT_IRQ_TYPE_SPI,    /* ARM: valid range 32-1019 */
-} pt_irq_type_t;
+};
 
 struct xen_domctl_bind_pt_irq {
     uint32_t machine_irq;
-    pt_irq_type_t irq_type;
+    uint32_t irq_type; /* enum pt_irq_type */
 
     union {
         struct {
@@ -590,8 +546,6 @@ struct xen_domctl_bind_pt_irq {
         } spi;
     } u;
 };
-typedef struct xen_domctl_bind_pt_irq xen_domctl_bind_pt_irq_t;
-DEFINE_XEN_GUEST_HANDLE(xen_domctl_bind_pt_irq_t);
 
 /* Bind machine I/O address range -> HVM address range. */
@@ -613,8 +567,6 @@ struct xen_domctl_memory_mapping {
     uint32_t add_mapping;       /* add or remove mapping */
     uint32_t padding;           /* padding for 64-bit aligned structure */
 };
-typedef struct xen_domctl_memory_mapping xen_domctl_memory_mapping_t;
-DEFINE_XEN_GUEST_HANDLE(xen_domctl_memory_mapping_t);
 
 /* Bind machine I/O port range -> HVM I/O port range. */
@@ -625,8 +577,6 @@ struct xen_domctl_ioport_mapping {
     uint32_t nr_ports;        /* size of port range */
     uint32_t add_mapping;     /* add or remove mapping */
 };
-typedef struct xen_domctl_ioport_mapping xen_domctl_ioport_mapping_t;
-DEFINE_XEN_GUEST_HANDLE(xen_domctl_ioport_mapping_t);
 
 /*
@@ -645,8 +595,6 @@ struct xen_domctl_pin_mem_cacheattr {
     uint64_aligned_t start, end;
     uint32_t type; /* XEN_DOMCTL_MEM_CACHEATTR_* */
 };
-typedef struct xen_domctl_pin_mem_cacheattr xen_domctl_pin_mem_cacheattr_t;
-DEFINE_XEN_GUEST_HANDLE(xen_domctl_pin_mem_cacheattr_t);
 
 /* XEN_DOMCTL_set_ext_vcpucontext */
@@ -678,8 +626,6 @@ struct xen_domctl_ext_vcpucontext {
 #endif
 #endif
 };
-typedef struct xen_domctl_ext_vcpucontext xen_domctl_ext_vcpucontext_t;
-DEFINE_XEN_GUEST_HANDLE(xen_domctl_ext_vcpucontext_t);
 
 /*
  * Set the target domain for a domain
@@ -688,8 +634,6 @@ DEFINE_XEN_GUEST_HANDLE(xen_domctl_ext_v
 struct xen_domctl_set_target {
     domid_t target;
 };
-typedef struct xen_domctl_set_target xen_domctl_set_target_t;
-DEFINE_XEN_GUEST_HANDLE(xen_domctl_set_target_t);
 
 #if defined(__i386__) || defined(__x86_64__)
 # define XEN_CPUID_INPUT_UNUSED  0xFFFFFFFF
@@ -701,8 +645,6 @@ struct xen_domctl_cpuid {
     uint32_t ecx;
     uint32_t edx;
 };
-typedef struct xen_domctl_cpuid xen_domctl_cpuid_t;
-DEFINE_XEN_GUEST_HANDLE(xen_domctl_cpuid_t);
 #endif
 
 /*
@@ -725,8 +667,6 @@ DEFINE_XEN_GUEST_HANDLE(xen_domctl_cpuid
 struct xen_domctl_subscribe {
     uint32_t port; /* IN */
 };
-typedef struct xen_domctl_subscribe xen_domctl_subscribe_t;
-DEFINE_XEN_GUEST_HANDLE(xen_domctl_subscribe_t);
 
 /*
  * Define the maximum machine address size which should be allocated
@@ -747,37 +687,34 @@ struct xen_domctl_debug_op {
     uint32_t op;   /* IN */
     uint32_t vcpu; /* IN */
 };
-typedef struct xen_domctl_debug_op xen_domctl_debug_op_t;
-DEFINE_XEN_GUEST_HANDLE(xen_domctl_debug_op_t);
 
 /*
  * Request a particular record from the HVM context
 */
 /* XEN_DOMCTL_gethvmcontext_partial */
-typedef struct xen_domctl_hvmcontext_partial {
+struct xen_domctl_hvmcontext_partial {
     uint32_t type;                      /* IN: Type of record required */
     uint32_t instance;                  /* IN: Instance of that type */
     uint64_aligned_t bufsz;             /* IN: size of buffer */
     XEN_GUEST_HANDLE_64(uint8) buffer;  /* OUT: buffer to write record into */
-} xen_domctl_hvmcontext_partial_t;
-DEFINE_XEN_GUEST_HANDLE(xen_domctl_hvmcontext_partial_t);
+};
 
 /* XEN_DOMCTL_disable_migrate */
-typedef struct xen_domctl_disable_migrate {
+struct xen_domctl_disable_migrate {
     uint32_t disable; /* IN: 1: disable migration and restore */
-} xen_domctl_disable_migrate_t;
+};
 
 /* XEN_DOMCTL_gettscinfo */
 /* XEN_DOMCTL_settscinfo */
-typedef struct xen_domctl_tsc_info {
+struct xen_domctl_tsc_info {
     /* IN/OUT */
     uint32_t tsc_mode;
     uint32_t gtsc_khz;
     uint32_t incarnation;
     uint32_t pad;
     uint64_aligned_t elapsed_nsec;
-} xen_domctl_tsc_info_t;
+};
 
 /* XEN_DOMCTL_gdbsx_guestmemio      guest mem io */
 struct xen_domctl_gdbsx_memio {
@@ -885,8 +822,6 @@ struct xen_domctl_vm_event_op {
     uint32_t port;              /* OUT: event channel for ring */
 };
-typedef struct xen_domctl_vm_event_op xen_domctl_vm_event_op_t;
-DEFINE_XEN_GUEST_HANDLE(xen_domctl_vm_event_op_t);
 
 /*
 * Memory sharing operations
@@ -902,8 +837,6 @@ struct xen_domctl_mem_sharing_op {
         uint8_t enable;                   /* CONTROL */
     } u;
 };
-typedef struct xen_domctl_mem_sharing_op xen_domctl_mem_sharing_op_t;
-DEFINE_XEN_GUEST_HANDLE(xen_domctl_mem_sharing_op_t);
 
 struct xen_domctl_audit_p2m {
     /* OUT error counts */
@@ -911,14 +844,10 @@ struct xen_domctl_audit_p2m {
     uint64_t m2p_bad;
     uint64_t p2m_bad;
 };
-typedef struct xen_domctl_audit_p2m xen_domctl_audit_p2m_t;
-DEFINE_XEN_GUEST_HANDLE(xen_domctl_audit_p2m_t);
 
 struct xen_domctl_set_virq_handler {
     uint32_t virq; /* IN */
 };
-typedef struct xen_domctl_set_virq_handler xen_domctl_set_virq_handler_t;
-DEFINE_XEN_GUEST_HANDLE(xen_domctl_set_virq_handler_t);
 
 #if defined(__i386__) || defined(__x86_64__)
 /* XEN_DOMCTL_setvcpuextstate */
@@ -941,8 +870,6 @@ struct xen_domctl_vcpuextstate {
     uint64_aligned_t         size;
     XEN_GUEST_HANDLE_64(uint64) buffer;
 };
-typedef struct xen_domctl_vcpuextstate xen_domctl_vcpuextstate_t;
-DEFINE_XEN_GUEST_HANDLE(xen_domctl_vcpuextstate_t);
 #endif
 
 /* XEN_DOMCTL_set_access_required: sets whether a memory event listener
@@ -952,14 +879,10 @@ DEFINE_XEN_GUEST_HANDLE(xen_domctl_vcpue
 struct xen_domctl_set_access_required {
     uint8_t access_required;
 };
-typedef struct xen_domctl_set_access_required xen_domctl_set_access_required_t;
-DEFINE_XEN_GUEST_HANDLE(xen_domctl_set_access_required_t);
 
 struct xen_domctl_set_broken_page_p2m {
     uint64_aligned_t pfn;
 };
-typedef struct xen_domctl_set_broken_page_p2m xen_domctl_set_broken_page_p2m_t;
-DEFINE_XEN_GUEST_HANDLE(xen_domctl_set_broken_page_p2m_t);
 
 /*
 * XEN_DOMCTL_set_max_evtchn: sets the maximum event channel port
@@ -969,8 +892,6 @@ DEFINE_XEN_GUEST_HANDLE(xen_domctl_set_b
 struct xen_domctl_set_max_evtchn {
     uint32_t max_port;
 };
-typedef struct xen_domctl_set_max_evtchn xen_domctl_set_max_evtchn_t;
-DEFINE_XEN_GUEST_HANDLE(xen_domctl_set_max_evtchn_t);
 
 /*
 * ARM: Clean and invalidate caches associated with given region of
@@ -980,8 +901,6 @@ struct xen_domctl_cacheflush {
     /* IN: page range to flush. */
     xen_pfn_t start_pfn, nr_pfns;
 };
-typedef struct xen_domctl_cacheflush xen_domctl_cacheflush_t;
-DEFINE_XEN_GUEST_HANDLE(xen_domctl_cacheflush_t);
 
 #if defined(__i386__) || defined(__x86_64__)
 struct xen_domctl_vcpu_msr {
@@ -1014,8 +933,6 @@ struct xen_domctl_vcpu_msrs {
     uint32_t msr_count;                              /* IN/OUT */
     XEN_GUEST_HANDLE_64(xen_domctl_vcpu_msr_t) msrs; /* IN/OUT */
 };
-typedef struct xen_domctl_vcpu_msrs xen_domctl_vcpu_msrs_t;
-DEFINE_XEN_GUEST_HANDLE(xen_domctl_vcpu_msrs_t);
 #endif
 
 /* XEN_DOMCTL_setvnumainfo: specifies a virtual NUMA topology for the guest */
@@ -1052,8 +969,6 @@ struct xen_domctl_vnuma {
     */
     XEN_GUEST_HANDLE_64(xen_vmemrange_t) vmemrange;
 };
-typedef struct xen_domctl_vnuma xen_domctl_vnuma_t;
-DEFINE_XEN_GUEST_HANDLE(xen_domctl_vnuma_t);
 
 struct xen_domctl_psr_cmt_op {
 #define XEN_DOMCTL_PSR_CMT_OP_DETACH         0
@@ -1062,8 +977,6 @@ struct xen_domctl_psr_cmt_op {
     uint32_t cmd;
     uint32_t data;
 };
-typedef struct xen_domctl_psr_cmt_op xen_domctl_psr_cmt_op_t;
-DEFINE_XEN_GUEST_HANDLE(xen_domctl_psr_cmt_op_t);
 
 /* XEN_DOMCTL_MONITOR_*
 *
@@ -1144,8 +1057,6 @@ struct xen_domctl_monitor_op {
         } debug_exception;
     } u;
 };
-typedef struct xen_domctl_monitor_op xen_domctl_monitor_op_t;
-DEFINE_XEN_GUEST_HANDLE(xen_domctl_monitor_op_t);
 
 struct xen_domctl_psr_cat_op {
 #define XEN_DOMCTL_PSR_CAT_OP_SET_L3_CBM     0
@@ -1160,8 +1071,6 @@ struct xen_domctl_psr_cat_op {
     uint32_t target;    /* IN */
     uint64_t data;      /* IN/OUT */
 };
-typedef struct xen_domctl_psr_cat_op xen_domctl_psr_cat_op_t;
-DEFINE_XEN_GUEST_HANDLE(xen_domctl_psr_cat_op_t);
 
 struct xen_domctl {
     uint32_t cmd;
--- a/xen/include/xen/iommu.h
+++ b/xen/include/xen/iommu.h
@@ -96,8 +96,8 @@ void pt_pci_init(void);
 struct pirq;
 int hvm_do_IRQ_dpci(struct domain *, struct pirq *);
 
-int pt_irq_create_bind(struct domain *, xen_domctl_bind_pt_irq_t *);
-int pt_irq_destroy_bind(struct domain *, xen_domctl_bind_pt_irq_t *);
+int pt_irq_create_bind(struct domain *, const struct xen_domctl_bind_pt_irq *);
+int pt_irq_destroy_bind(struct domain *, const struct xen_domctl_bind_pt_irq *);
 
 void hvm_dpci_isairq_eoi(struct domain *d, unsigned int isairq);
 struct hvm_irq_dpci *domain_get_irq_dpci(const struct domain *);
--- a/xen/include/xen/vm_event.h
+++ b/xen/include/xen/vm_event.h
@@ -69,7 +69,7 @@ int vm_event_get_response(struct domain
 
 void vm_event_resume(struct domain *d, struct vm_event_domain *ved);
 
-int vm_event_domctl(struct domain *d, xen_domctl_vm_event_op_t *vec,
+int vm_event_domctl(struct domain *d, struct xen_domctl_vm_event_op *vec,
                     XEN_GUEST_HANDLE_PARAM(void) u_domctl);
 
 void vm_event_vcpu_pause(struct vcpu *v);