From patchwork Tue Jul 16 17:06:15 2019
X-Patchwork-Submitter: Petre Ovidiu PIRCALABU
X-Patchwork-Id: 11046521
From: Petre Pircalabu
To: xen-devel@lists.xenproject.org
Date: Tue, 16 Jul 2019 20:06:15 +0300
X-Mailer: git-send-email 2.7.4
Subject: [Xen-devel] [PATCH v2 01/10] vm_event: Define VM_EVENT type
Cc: Petre Pircalabu, Stefano Stabellini, Razvan Cojocaru, Wei Liu,
 Konrad Rzeszutek Wilk, George Dunlap, Andrew Cooper, Ian Jackson,
 Tim Deegan, Julien Grall, Tamas K Lengyel, Jan Beulich, Alexandru Isaila

Define a type for each of the supported vm_event rings (paging, monitor
and sharing) and replace the ring "param" field with this type. Replace
XEN_DOMCTL_VM_EVENT_OP_* occurrences with their corresponding
XEN_VM_EVENT_TYPE_* counterparts.
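To illustrate the mapping this patch introduces at the top of xc_vm_event_enable(), here is a standalone sketch. The XEN_VM_EVENT_TYPE_* values are the ones defined by the patch; the PARAM_* constants are illustrative stand-ins, not the real HVM_PARAM_*_RING_PFN values from xen/include/public/hvm/params.h:

```c
#include <errno.h>

/* Type values as defined by this patch in xen/include/public/vm_event.h. */
#define XEN_VM_EVENT_TYPE_PAGING  1
#define XEN_VM_EVENT_TYPE_MONITOR 2
#define XEN_VM_EVENT_TYPE_SHARING 3

/* Illustrative stand-ins for the HVM ring-PFN params; the real values live
 * in xen/include/public/hvm/params.h. */
enum {
    PARAM_PAGING_RING  = 100,
    PARAM_MONITOR_RING = 101,
    PARAM_SHARING_RING = 102,
};

/* Mirror of the switch this patch adds at the top of xc_vm_event_enable():
 * map an event type to the HVM param naming its ring page, failing with
 * EINVAL for anything else. */
int vm_event_type_to_param(int type)
{
    switch ( type )
    {
    case XEN_VM_EVENT_TYPE_PAGING:
        return PARAM_PAGING_RING;
    case XEN_VM_EVENT_TYPE_MONITOR:
        return PARAM_MONITOR_RING;
    case XEN_VM_EVENT_TYPE_SHARING:
        return PARAM_SHARING_RING;
    default:
        errno = EINVAL;
        return -1;
    }
}
```

With the mapping hoisted to the start of the function, an invalid type fails before the domain is paused, which is why the old post-pause switch on the HVM param can be dropped.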
Signed-off-by: Petre Pircalabu
---
 tools/libxc/include/xenctrl.h |  1 +
 tools/libxc/xc_mem_paging.c   |  6 ++--
 tools/libxc/xc_memshr.c       |  6 ++--
 tools/libxc/xc_monitor.c      |  6 ++--
 tools/libxc/xc_private.h      |  8 ++---
 tools/libxc/xc_vm_event.c     | 58 +++++++++++++++------------------
 xen/common/vm_event.c         | 12 ++++----
 xen/include/public/domctl.h   | 72 +++----------------------------------------
 xen/include/public/vm_event.h | 31 +++++++++++++++++++
 9 files changed, 81 insertions(+), 119 deletions(-)

diff --git a/tools/libxc/include/xenctrl.h b/tools/libxc/include/xenctrl.h
index 538007a..f3af710 100644
--- a/tools/libxc/include/xenctrl.h
+++ b/tools/libxc/include/xenctrl.h
@@ -46,6 +46,7 @@
 #include
 #include
 #include
+#include

 #include "xentoollog.h"

diff --git a/tools/libxc/xc_mem_paging.c b/tools/libxc/xc_mem_paging.c
index a067706..a88c0cc 100644
--- a/tools/libxc/xc_mem_paging.c
+++ b/tools/libxc/xc_mem_paging.c
@@ -48,7 +48,7 @@ int xc_mem_paging_enable(xc_interface *xch, uint32_t domain_id,
     return xc_vm_event_control(xch, domain_id,
                                XEN_VM_EVENT_ENABLE,
-                               XEN_DOMCTL_VM_EVENT_OP_PAGING,
+                               XEN_VM_EVENT_TYPE_PAGING,
                                port);
 }

@@ -56,7 +56,7 @@ int xc_mem_paging_disable(xc_interface *xch, uint32_t domain_id)
 {
     return xc_vm_event_control(xch, domain_id,
                                XEN_VM_EVENT_DISABLE,
-                               XEN_DOMCTL_VM_EVENT_OP_PAGING,
+                               XEN_VM_EVENT_TYPE_PAGING,
                                NULL);
 }

@@ -64,7 +64,7 @@ int xc_mem_paging_resume(xc_interface *xch, uint32_t domain_id)
 {
     return xc_vm_event_control(xch, domain_id,
                                XEN_VM_EVENT_RESUME,
-                               XEN_DOMCTL_VM_EVENT_OP_PAGING,
+                               XEN_VM_EVENT_TYPE_PAGING,
                                NULL);
 }

diff --git a/tools/libxc/xc_memshr.c b/tools/libxc/xc_memshr.c
index d5e135e..1c4a706 100644
--- a/tools/libxc/xc_memshr.c
+++ b/tools/libxc/xc_memshr.c
@@ -53,7 +53,7 @@ int xc_memshr_ring_enable(xc_interface *xch,
     return xc_vm_event_control(xch, domid,
                                XEN_VM_EVENT_ENABLE,
-                               XEN_DOMCTL_VM_EVENT_OP_SHARING,
+                               XEN_VM_EVENT_TYPE_SHARING,
                                port);
 }

@@ -62,7 +62,7 @@ int xc_memshr_ring_disable(xc_interface *xch,
 {
     return
        xc_vm_event_control(xch, domid,
                            XEN_VM_EVENT_DISABLE,
-                           XEN_DOMCTL_VM_EVENT_OP_SHARING,
+                           XEN_VM_EVENT_TYPE_SHARING,
                            NULL);
 }

@@ -205,7 +205,7 @@ int xc_memshr_domain_resume(xc_interface *xch,
     return xc_vm_event_control(xch, domid,
                                XEN_VM_EVENT_RESUME,
-                               XEN_DOMCTL_VM_EVENT_OP_SHARING,
+                               XEN_VM_EVENT_TYPE_SHARING,
                                NULL);
 }

diff --git a/tools/libxc/xc_monitor.c b/tools/libxc/xc_monitor.c
index 4ac823e..f05b53d 100644
--- a/tools/libxc/xc_monitor.c
+++ b/tools/libxc/xc_monitor.c
@@ -24,7 +24,7 @@
 void *xc_monitor_enable(xc_interface *xch, uint32_t domain_id, uint32_t *port)
 {
-    return xc_vm_event_enable(xch, domain_id, HVM_PARAM_MONITOR_RING_PFN,
+    return xc_vm_event_enable(xch, domain_id, XEN_VM_EVENT_TYPE_MONITOR,
                               port);
 }

@@ -32,7 +32,7 @@ int xc_monitor_disable(xc_interface *xch, uint32_t domain_id)
 {
     return xc_vm_event_control(xch, domain_id,
                                XEN_VM_EVENT_DISABLE,
-                               XEN_DOMCTL_VM_EVENT_OP_MONITOR,
+                               XEN_VM_EVENT_TYPE_MONITOR,
                                NULL);
 }

@@ -40,7 +40,7 @@ int xc_monitor_resume(xc_interface *xch, uint32_t domain_id)
 {
     return xc_vm_event_control(xch, domain_id,
                                XEN_VM_EVENT_RESUME,
-                               XEN_DOMCTL_VM_EVENT_OP_MONITOR,
+                               XEN_VM_EVENT_TYPE_MONITOR,
                                NULL);
 }

diff --git a/tools/libxc/xc_private.h b/tools/libxc/xc_private.h
index adc3b6a..e4f7c3a 100644
--- a/tools/libxc/xc_private.h
+++ b/tools/libxc/xc_private.h
@@ -412,12 +412,12 @@ int xc_ffs64(uint64_t x);
  * vm_event operations. Internal use only.
  */
 int xc_vm_event_control(xc_interface *xch, uint32_t domain_id, unsigned int op,
-                        unsigned int mode, uint32_t *port);
+                        unsigned int type, uint32_t *port);

 /*
- * Enables vm_event and returns the mapped ring page indicated by param.
- * param can be HVM_PARAM_PAGING/ACCESS/SHARING_RING_PFN
+ * Enables vm_event and returns the mapped ring page indicated by type.
+ * type can be XEN_VM_EVENT_TYPE_(PAGING/MONITOR/SHARING)
  */
-void *xc_vm_event_enable(xc_interface *xch, uint32_t domain_id, int param,
+void *xc_vm_event_enable(xc_interface *xch, uint32_t domain_id, int type,
                          uint32_t *port);

 int do_dm_op(xc_interface *xch, uint32_t domid, unsigned int nr_bufs, ...);

diff --git a/tools/libxc/xc_vm_event.c b/tools/libxc/xc_vm_event.c
index a97c615..044bf71 100644
--- a/tools/libxc/xc_vm_event.c
+++ b/tools/libxc/xc_vm_event.c
@@ -23,7 +23,7 @@
 #include "xc_private.h"

 int xc_vm_event_control(xc_interface *xch, uint32_t domain_id, unsigned int op,
-                        unsigned int mode, uint32_t *port)
+                        unsigned int type, uint32_t *port)
 {
     DECLARE_DOMCTL;
     int rc;
@@ -31,7 +31,7 @@ int xc_vm_event_control(xc_interface *xch, uint32_t domain_id, unsigned int op,
     domctl.cmd = XEN_DOMCTL_vm_event_op;
     domctl.domain = domain_id;
     domctl.u.vm_event_op.op = op;
-    domctl.u.vm_event_op.mode = mode;
+    domctl.u.vm_event_op.type = type;

     rc = do_domctl(xch, &domctl);
     if ( !rc && port )
@@ -39,13 +39,13 @@ int xc_vm_event_control(xc_interface *xch, uint32_t domain_id, unsigned int op,
     return rc;
 }

-void *xc_vm_event_enable(xc_interface *xch, uint32_t domain_id, int param,
+void *xc_vm_event_enable(xc_interface *xch, uint32_t domain_id, int type,
                          uint32_t *port)
 {
     void *ring_page = NULL;
     uint64_t pfn;
     xen_pfn_t ring_pfn, mmap_pfn;
-    unsigned int op, mode;
+    unsigned int param;
     int rc1, rc2, saved_errno;

     if ( !port )
@@ -54,6 +54,25 @@ void *xc_vm_event_enable(xc_interface *xch, uint32_t domain_id, int param,
         return NULL;
     }

+    switch ( type )
+    {
+    case XEN_VM_EVENT_TYPE_PAGING:
+        param = HVM_PARAM_PAGING_RING_PFN;
+        break;
+
+    case XEN_VM_EVENT_TYPE_MONITOR:
+        param = HVM_PARAM_MONITOR_RING_PFN;
+        break;
+
+    case XEN_VM_EVENT_TYPE_SHARING:
+        param = HVM_PARAM_SHARING_RING_PFN;
+        break;
+
+    default:
+        errno = EINVAL;
+        return NULL;
+    }
+
     /* Pause the domain for ring page setup */
     rc1 = xc_domain_pause(xch, domain_id);
     if ( rc1 != 0 )
@@ -94,34 +113,7 @@ void
*xc_vm_event_enable(xc_interface *xch, uint32_t domain_id, int param,
         goto out;
     }

-    switch ( param )
-    {
-    case HVM_PARAM_PAGING_RING_PFN:
-        op = XEN_VM_EVENT_ENABLE;
-        mode = XEN_DOMCTL_VM_EVENT_OP_PAGING;
-        break;
-
-    case HVM_PARAM_MONITOR_RING_PFN:
-        op = XEN_VM_EVENT_ENABLE;
-        mode = XEN_DOMCTL_VM_EVENT_OP_MONITOR;
-        break;
-
-    case HVM_PARAM_SHARING_RING_PFN:
-        op = XEN_VM_EVENT_ENABLE;
-        mode = XEN_DOMCTL_VM_EVENT_OP_SHARING;
-        break;
-
-    /*
-     * This is for the outside chance that the HVM_PARAM is valid but is invalid
-     * as far as vm_event goes.
-     */
-    default:
-        errno = EINVAL;
-        rc1 = -1;
-        goto out;
-    }
-
-    rc1 = xc_vm_event_control(xch, domain_id, op, mode, port);
+    rc1 = xc_vm_event_control(xch, domain_id, XEN_VM_EVENT_ENABLE, type, port);
     if ( rc1 != 0 )
     {
         PERROR("Failed to enable vm_event\n");
@@ -164,7 +156,7 @@ int xc_vm_event_get_version(xc_interface *xch)
     domctl.cmd = XEN_DOMCTL_vm_event_op;
     domctl.domain = DOMID_INVALID;
     domctl.u.vm_event_op.op = XEN_VM_EVENT_GET_VERSION;
-    domctl.u.vm_event_op.mode = XEN_DOMCTL_VM_EVENT_OP_MONITOR;
+    domctl.u.vm_event_op.type = XEN_VM_EVENT_TYPE_MONITOR;

     rc = do_domctl(xch, &domctl);
     if ( !rc )

diff --git a/xen/common/vm_event.c b/xen/common/vm_event.c
index e872680..56b506a 100644
--- a/xen/common/vm_event.c
+++ b/xen/common/vm_event.c
@@ -353,7 +353,7 @@ static int vm_event_resume(struct domain *d, struct vm_event_domain *ved)
     vm_event_response_t rsp;

     /*
-     * vm_event_resume() runs in either XEN_DOMCTL_VM_EVENT_OP_*, or
+     * vm_event_resume() runs in either XEN_VM_EVENT_* domctls, or
      * EVTCHN_send context from the introspection consumer. Both contexts
      * are guaranteed not to be the subject of vm_event responses.
     * While we could ASSERT(v != current) for each VCPU in d in the loop
@@ -580,7 +580,7 @@ int vm_event_domctl(struct domain *d, struct xen_domctl_vm_event_op *vec)
         return 0;
     }

-    rc = xsm_vm_event_control(XSM_PRIV, d, vec->mode, vec->op);
+    rc = xsm_vm_event_control(XSM_PRIV, d, vec->type, vec->op);
     if ( rc )
         return rc;

@@ -607,10 +607,10 @@ int vm_event_domctl(struct domain *d, struct xen_domctl_vm_event_op *vec)
     rc = -ENOSYS;

-    switch ( vec->mode )
+    switch ( vec->type )
     {
 #ifdef CONFIG_HAS_MEM_PAGING
-    case XEN_DOMCTL_VM_EVENT_OP_PAGING:
+    case XEN_VM_EVENT_TYPE_PAGING:
     {
         rc = -EINVAL;

@@ -666,7 +666,7 @@ int vm_event_domctl(struct domain *d, struct xen_domctl_vm_event_op *vec)
         break;
 #endif

-    case XEN_DOMCTL_VM_EVENT_OP_MONITOR:
+    case XEN_VM_EVENT_TYPE_MONITOR:
     {
         rc = -EINVAL;

@@ -704,7 +704,7 @@ int vm_event_domctl(struct domain *d, struct xen_domctl_vm_event_op *vec)
         break;

 #ifdef CONFIG_HAS_MEM_SHARING
-    case XEN_DOMCTL_VM_EVENT_OP_SHARING:
+    case XEN_VM_EVENT_TYPE_SHARING:
     {
         rc = -EINVAL;

diff --git a/xen/include/public/domctl.h b/xen/include/public/domctl.h
index 19486d5..234d8c5 100644
--- a/xen/include/public/domctl.h
+++ b/xen/include/public/domctl.h
@@ -769,80 +769,18 @@ struct xen_domctl_gdbsx_domstatus {
  * VM event operations
  */

-/* XEN_DOMCTL_vm_event_op */
-
-/*
- * There are currently three rings available for VM events:
- * sharing, monitor and paging. This hypercall allows one to
- * control these rings (enable/disable), as well as to signal
- * to the hypervisor to pull responses (resume) from the given
- * ring.
+/* XEN_DOMCTL_vm_event_op.
+ * Use for teardown/setup of helper<->hypervisor interface for paging,
+ * access and sharing.
  */
 #define XEN_VM_EVENT_ENABLE               0
 #define XEN_VM_EVENT_DISABLE              1
 #define XEN_VM_EVENT_RESUME               2
 #define XEN_VM_EVENT_GET_VERSION          3

-/*
- * Domain memory paging
- * Page memory in and out.
- * Domctl interface to set up and tear down the
- * pager<->hypervisor interface.
- * Use XENMEM_paging_op*
- * to perform per-page operations.
- *
- * The XEN_VM_EVENT_PAGING_ENABLE domctl returns several
- * non-standard error codes to indicate why paging could not be enabled:
- * ENODEV - host lacks HAP support (EPT/NPT) or HAP is disabled in guest
- * EMLINK - guest has iommu passthrough enabled
- * EXDEV  - guest has PoD enabled
- * EBUSY  - guest has or had paging enabled, ring buffer still active
- */
-#define XEN_DOMCTL_VM_EVENT_OP_PAGING            1
-
-/*
- * Monitor helper.
- *
- * As with paging, use the domctl for teardown/setup of the
- * helper<->hypervisor interface.
- *
- * The monitor interface can be used to register for various VM events. For
- * example, there are HVM hypercalls to set the per-page access permissions
- * of every page in a domain. When one of these permissions--independent,
- * read, write, and execute--is violated, the VCPU is paused and a memory event
- * is sent with what happened. The memory event handler can then resume the
- * VCPU and redo the access with a XEN_VM_EVENT_RESUME option.
- *
- * See public/vm_event.h for the list of available events that can be
- * subscribed to via the monitor interface.
- *
- * The XEN_VM_EVENT_MONITOR_* domctls returns
- * non-standard error codes to indicate why access could not be enabled:
- * ENODEV - host lacks HAP support (EPT/NPT) or HAP is disabled in guest
- * EBUSY  - guest has or had access enabled, ring buffer still active
- *
- */
-#define XEN_DOMCTL_VM_EVENT_OP_MONITOR           2
-
-/*
- * Sharing ENOMEM helper.
- *
- * As with paging, use the domctl for teardown/setup of the
- * helper<->hypervisor interface.
- *
- * If setup, this ring is used to communicate failed allocations
- * in the unshare path. XENMEM_sharing_op_resume is used to wake up
- * vcpus that could not unshare.
- *
- * Note that shring can be turned on (as per the domctl below)
- * *without* this ring being setup.
- */
-#define XEN_DOMCTL_VM_EVENT_OP_SHARING           3
-
-/* Use for teardown/setup of helper<->hypervisor interface for paging,
- * access and sharing.*/
 struct xen_domctl_vm_event_op {
     uint32_t op;           /* XEN_VM_EVENT_* */
-    uint32_t mode;         /* XEN_DOMCTL_VM_EVENT_OP_* */
+    uint32_t type;         /* XEN_VM_EVENT_TYPE_* */

     union {
         struct {
@@ -1004,7 +942,7 @@ struct xen_domctl_psr_cmt_op {
  * Enable/disable monitoring various VM events.
  * This domctl configures what events will be reported to helper apps
  * via the ring buffer "MONITOR". The ring has to be first enabled
- * with the domctl XEN_DOMCTL_VM_EVENT_OP_MONITOR.
+ * with XEN_VM_EVENT_ENABLE.
  *
  * GET_CAPABILITIES can be used to determine which of these features is
  * available on a given platform.

diff --git a/xen/include/public/vm_event.h b/xen/include/public/vm_event.h
index 959083d..c48bc21 100644
--- a/xen/include/public/vm_event.h
+++ b/xen/include/public/vm_event.h
@@ -36,6 +36,37 @@
 #include "io/ring.h"

 /*
+ * There are currently three types of VM events.
+ */
+
+/*
+ * Domain memory paging
+ *
+ * Page memory in and out.
+ */
+#define XEN_VM_EVENT_TYPE_PAGING         1
+
+/*
+ * Monitor.
+ *
+ * The monitor interface can be used to register for various VM events. For
+ * example, there are HVM hypercalls to set the per-page access permissions
+ * of every page in a domain. When one of these permissions--independent,
+ * read, write, and execute--is violated, the VCPU is paused and a memory event
+ * is sent with what happened. The memory event handler can then resume the
+ * VCPU and redo the access with a XEN_VM_EVENT_RESUME option.
+ */
+#define XEN_VM_EVENT_TYPE_MONITOR        2
+
+/*
+ * Sharing ENOMEM.
+ *
+ * Used to communicate failed allocations in the unshare path.
+ * XENMEM_sharing_op_resume is used to wake up vcpus that could not unshare.
+ */
+#define XEN_VM_EVENT_TYPE_SHARING        3
+
+/*
  * Memory event flags
  */
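For completeness, a consumer-side sketch of the renamed domctl field. prepare_vm_event_op() is a hypothetical helper (not part of this patch) that fills a cut-down local model of struct xen_domctl_vm_event_op the way xc_vm_event_control() now does, using type where the old interface used mode:

```c
#include <stdint.h>

/* Op values from xen/include/public/domctl.h. */
#define XEN_VM_EVENT_ENABLE       0
#define XEN_VM_EVENT_DISABLE      1
#define XEN_VM_EVENT_RESUME       2
#define XEN_VM_EVENT_GET_VERSION  3

/* Type values introduced by this patch in xen/include/public/vm_event.h. */
#define XEN_VM_EVENT_TYPE_PAGING  1
#define XEN_VM_EVENT_TYPE_MONITOR 2
#define XEN_VM_EVENT_TYPE_SHARING 3

/* Cut-down local model of struct xen_domctl_vm_event_op after the rename;
 * the real struct also carries a union with enable parameters. */
struct vm_event_op {
    uint32_t op;   /* XEN_VM_EVENT_* */
    uint32_t type; /* XEN_VM_EVENT_TYPE_* (previously named "mode") */
};

/* Hypothetical helper: fill the op struct as a caller of the domctl would,
 * rejecting type values outside the three defined rings. */
int prepare_vm_event_op(struct vm_event_op *vec, uint32_t op, uint32_t type)
{
    if ( type < XEN_VM_EVENT_TYPE_PAGING || type > XEN_VM_EVENT_TYPE_SHARING )
        return -1;
    vec->op = op;
    vec->type = type;
    return 0;
}
```

Note that the hypervisor side performs its own validation in vm_event_domctl(); the up-front check here only mirrors the fact that exactly three type values exist after this patch.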