From patchwork Tue Jul 16 17:06:15 2019
From: Petre Pircalabu
To: xen-devel@lists.xenproject.org
Date: Tue, 16 Jul 2019 20:06:15 +0300
Subject: [Xen-devel] [PATCH v2 01/10] vm_event: Define VM_EVENT type
Cc: Petre Pircalabu, Stefano Stabellini, Razvan Cojocaru, Wei Liu, Konrad Rzeszutek Wilk, George Dunlap, Andrew Cooper, Ian Jackson, Tim Deegan, Julien Grall, Tamas K Lengyel, Jan Beulich, Alexandru Isaila

Define the type for each of the supported vm_event rings (paging, monitor and sharing) and replace the ring param field with this type. Replace XEN_DOMCTL_VM_EVENT_OP_ occurrences with their XEN_VM_EVENT_TYPE_ counterparts.
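As an illustration of the reworked libxc surface, a consumer now deals only in XEN_VM_EVENT_TYPE_* values; the HVM_PARAM_* ring PFNs become an internal detail of xc_vm_event_enable(). A minimal sketch, assuming a connected xc_interface handle and an HVM guest; monitor_round_trip is a hypothetical wrapper, with error handling and ring consumption elided:

    #include <xenctrl.h>

    /* Sketch: enable, then tear down, the monitor ring via the new API.
     * xc_monitor_enable() resolves HVM_PARAM_MONITOR_RING_PFN internally
     * from XEN_VM_EVENT_TYPE_MONITOR. */
    static int monitor_round_trip(xc_interface *xch, uint32_t domid)
    {
        uint32_t port;
        void *ring_page = xc_monitor_enable(xch, domid, &port);

        if ( !ring_page )
            return -1;

        /* ... bind "port", consume vm_event requests from ring_page ... */

        return xc_monitor_disable(xch, domid);
    }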
Signed-off-by: Petre Pircalabu --- tools/libxc/include/xenctrl.h | 1 + tools/libxc/xc_mem_paging.c | 6 ++-- tools/libxc/xc_memshr.c | 6 ++-- tools/libxc/xc_monitor.c | 6 ++-- tools/libxc/xc_private.h | 8 ++--- tools/libxc/xc_vm_event.c | 58 +++++++++++++++------------------- xen/common/vm_event.c | 12 ++++---- xen/include/public/domctl.h | 72 +++---------------------------------------- xen/include/public/vm_event.h | 31 +++++++++++++++++++ 9 files changed, 81 insertions(+), 119 deletions(-) diff --git a/tools/libxc/include/xenctrl.h b/tools/libxc/include/xenctrl.h index 538007a..f3af710 100644 --- a/tools/libxc/include/xenctrl.h +++ b/tools/libxc/include/xenctrl.h @@ -46,6 +46,7 @@ #include #include #include +#include #include "xentoollog.h" diff --git a/tools/libxc/xc_mem_paging.c b/tools/libxc/xc_mem_paging.c index a067706..a88c0cc 100644 --- a/tools/libxc/xc_mem_paging.c +++ b/tools/libxc/xc_mem_paging.c @@ -48,7 +48,7 @@ int xc_mem_paging_enable(xc_interface *xch, uint32_t domain_id, return xc_vm_event_control(xch, domain_id, XEN_VM_EVENT_ENABLE, - XEN_DOMCTL_VM_EVENT_OP_PAGING, + XEN_VM_EVENT_TYPE_PAGING, port); } @@ -56,7 +56,7 @@ int xc_mem_paging_disable(xc_interface *xch, uint32_t domain_id) { return xc_vm_event_control(xch, domain_id, XEN_VM_EVENT_DISABLE, - XEN_DOMCTL_VM_EVENT_OP_PAGING, + XEN_VM_EVENT_TYPE_PAGING, NULL); } @@ -64,7 +64,7 @@ int xc_mem_paging_resume(xc_interface *xch, uint32_t domain_id) { return xc_vm_event_control(xch, domain_id, XEN_VM_EVENT_RESUME, - XEN_DOMCTL_VM_EVENT_OP_PAGING, + XEN_VM_EVENT_TYPE_PAGING, NULL); } diff --git a/tools/libxc/xc_memshr.c b/tools/libxc/xc_memshr.c index d5e135e..1c4a706 100644 --- a/tools/libxc/xc_memshr.c +++ b/tools/libxc/xc_memshr.c @@ -53,7 +53,7 @@ int xc_memshr_ring_enable(xc_interface *xch, return xc_vm_event_control(xch, domid, XEN_VM_EVENT_ENABLE, - XEN_DOMCTL_VM_EVENT_OP_SHARING, + XEN_VM_EVENT_TYPE_SHARING, port); } @@ -62,7 +62,7 @@ int xc_memshr_ring_disable(xc_interface *xch, { return xc_vm_event_control(xch, domid, XEN_VM_EVENT_DISABLE, - XEN_DOMCTL_VM_EVENT_OP_SHARING, + XEN_VM_EVENT_TYPE_SHARING, NULL); } @@ -205,7 +205,7 @@ int xc_memshr_domain_resume(xc_interface *xch, { return xc_vm_event_control(xch, domid, XEN_VM_EVENT_RESUME, - XEN_DOMCTL_VM_EVENT_OP_SHARING, + XEN_VM_EVENT_TYPE_SHARING, NULL); } diff --git a/tools/libxc/xc_monitor.c b/tools/libxc/xc_monitor.c index 4ac823e..f05b53d 100644 --- a/tools/libxc/xc_monitor.c +++ b/tools/libxc/xc_monitor.c @@ -24,7 +24,7 @@ void *xc_monitor_enable(xc_interface *xch, uint32_t domain_id, uint32_t *port) { - return xc_vm_event_enable(xch, domain_id, HVM_PARAM_MONITOR_RING_PFN, + return xc_vm_event_enable(xch, domain_id, XEN_VM_EVENT_TYPE_MONITOR, port); } @@ -32,7 +32,7 @@ int xc_monitor_disable(xc_interface *xch, uint32_t domain_id) { return xc_vm_event_control(xch, domain_id, XEN_VM_EVENT_DISABLE, - XEN_DOMCTL_VM_EVENT_OP_MONITOR, + XEN_VM_EVENT_TYPE_MONITOR, NULL); } @@ -40,7 +40,7 @@ int xc_monitor_resume(xc_interface *xch, uint32_t domain_id) { return xc_vm_event_control(xch, domain_id, XEN_VM_EVENT_RESUME, - XEN_DOMCTL_VM_EVENT_OP_MONITOR, + XEN_VM_EVENT_TYPE_MONITOR, NULL); } diff --git a/tools/libxc/xc_private.h b/tools/libxc/xc_private.h index adc3b6a..e4f7c3a 100644 --- a/tools/libxc/xc_private.h +++ b/tools/libxc/xc_private.h @@ -412,12 +412,12 @@ int xc_ffs64(uint64_t x); * vm_event operations. Internal use only. 
*/ int xc_vm_event_control(xc_interface *xch, uint32_t domain_id, unsigned int op, - unsigned int mode, uint32_t *port); + unsigned int type, uint32_t *port); /* - * Enables vm_event and returns the mapped ring page indicated by param. - * param can be HVM_PARAM_PAGING/ACCESS/SHARING_RING_PFN + * Enables vm_event and returns the mapped ring page indicated by type. + * type can be XEN_VM_EVENT_TYPE_(PAGING/MONITOR/SHARING) */ -void *xc_vm_event_enable(xc_interface *xch, uint32_t domain_id, int param, +void *xc_vm_event_enable(xc_interface *xch, uint32_t domain_id, int type, uint32_t *port); int do_dm_op(xc_interface *xch, uint32_t domid, unsigned int nr_bufs, ...); diff --git a/tools/libxc/xc_vm_event.c b/tools/libxc/xc_vm_event.c index a97c615..044bf71 100644 --- a/tools/libxc/xc_vm_event.c +++ b/tools/libxc/xc_vm_event.c @@ -23,7 +23,7 @@ #include "xc_private.h" int xc_vm_event_control(xc_interface *xch, uint32_t domain_id, unsigned int op, - unsigned int mode, uint32_t *port) + unsigned int type, uint32_t *port) { DECLARE_DOMCTL; int rc; @@ -31,7 +31,7 @@ int xc_vm_event_control(xc_interface *xch, uint32_t domain_id, unsigned int op, domctl.cmd = XEN_DOMCTL_vm_event_op; domctl.domain = domain_id; domctl.u.vm_event_op.op = op; - domctl.u.vm_event_op.mode = mode; + domctl.u.vm_event_op.type = type; rc = do_domctl(xch, &domctl); if ( !rc && port ) @@ -39,13 +39,13 @@ int xc_vm_event_control(xc_interface *xch, uint32_t domain_id, unsigned int op, return rc; } -void *xc_vm_event_enable(xc_interface *xch, uint32_t domain_id, int param, +void *xc_vm_event_enable(xc_interface *xch, uint32_t domain_id, int type, uint32_t *port) { void *ring_page = NULL; uint64_t pfn; xen_pfn_t ring_pfn, mmap_pfn; - unsigned int op, mode; + unsigned int param; int rc1, rc2, saved_errno; if ( !port ) @@ -54,6 +54,25 @@ void *xc_vm_event_enable(xc_interface *xch, uint32_t domain_id, int param, return NULL; } + switch ( type ) + { + case XEN_VM_EVENT_TYPE_PAGING: + param = HVM_PARAM_PAGING_RING_PFN; + break; + + case XEN_VM_EVENT_TYPE_MONITOR: + param = HVM_PARAM_MONITOR_RING_PFN; + break; + + case XEN_VM_EVENT_TYPE_SHARING: + param = HVM_PARAM_SHARING_RING_PFN; + break; + + default: + errno = EINVAL; + return NULL; + } + /* Pause the domain for ring page setup */ rc1 = xc_domain_pause(xch, domain_id); if ( rc1 != 0 ) @@ -94,34 +113,7 @@ void *xc_vm_event_enable(xc_interface *xch, uint32_t domain_id, int param, goto out; } - switch ( param ) - { - case HVM_PARAM_PAGING_RING_PFN: - op = XEN_VM_EVENT_ENABLE; - mode = XEN_DOMCTL_VM_EVENT_OP_PAGING; - break; - - case HVM_PARAM_MONITOR_RING_PFN: - op = XEN_VM_EVENT_ENABLE; - mode = XEN_DOMCTL_VM_EVENT_OP_MONITOR; - break; - - case HVM_PARAM_SHARING_RING_PFN: - op = XEN_VM_EVENT_ENABLE; - mode = XEN_DOMCTL_VM_EVENT_OP_SHARING; - break; - - /* - * This is for the outside chance that the HVM_PARAM is valid but is invalid - * as far as vm_event goes. 
- */ - default: - errno = EINVAL; - rc1 = -1; - goto out; - } - - rc1 = xc_vm_event_control(xch, domain_id, op, mode, port); + rc1 = xc_vm_event_control(xch, domain_id, XEN_VM_EVENT_ENABLE, type, port); if ( rc1 != 0 ) { PERROR("Failed to enable vm_event\n"); @@ -164,7 +156,7 @@ int xc_vm_event_get_version(xc_interface *xch) domctl.cmd = XEN_DOMCTL_vm_event_op; domctl.domain = DOMID_INVALID; domctl.u.vm_event_op.op = XEN_VM_EVENT_GET_VERSION; - domctl.u.vm_event_op.mode = XEN_DOMCTL_VM_EVENT_OP_MONITOR; + domctl.u.vm_event_op.type = XEN_VM_EVENT_TYPE_MONITOR; rc = do_domctl(xch, &domctl); if ( !rc ) diff --git a/xen/common/vm_event.c b/xen/common/vm_event.c index e872680..56b506a 100644 --- a/xen/common/vm_event.c +++ b/xen/common/vm_event.c @@ -353,7 +353,7 @@ static int vm_event_resume(struct domain *d, struct vm_event_domain *ved) vm_event_response_t rsp; /* - * vm_event_resume() runs in either XEN_DOMCTL_VM_EVENT_OP_*, or + * vm_event_resume() runs in either XEN_VM_EVENT_* domctls, or * EVTCHN_send context from the introspection consumer. Both contexts * are guaranteed not to be the subject of vm_event responses. * While we could ASSERT(v != current) for each VCPU in d in the loop @@ -580,7 +580,7 @@ int vm_event_domctl(struct domain *d, struct xen_domctl_vm_event_op *vec) return 0; } - rc = xsm_vm_event_control(XSM_PRIV, d, vec->mode, vec->op); + rc = xsm_vm_event_control(XSM_PRIV, d, vec->type, vec->op); if ( rc ) return rc; @@ -607,10 +607,10 @@ int vm_event_domctl(struct domain *d, struct xen_domctl_vm_event_op *vec) rc = -ENOSYS; - switch ( vec->mode ) + switch ( vec->type ) { #ifdef CONFIG_HAS_MEM_PAGING - case XEN_DOMCTL_VM_EVENT_OP_PAGING: + case XEN_VM_EVENT_TYPE_PAGING: { rc = -EINVAL; @@ -666,7 +666,7 @@ int vm_event_domctl(struct domain *d, struct xen_domctl_vm_event_op *vec) break; #endif - case XEN_DOMCTL_VM_EVENT_OP_MONITOR: + case XEN_VM_EVENT_TYPE_MONITOR: { rc = -EINVAL; @@ -704,7 +704,7 @@ int vm_event_domctl(struct domain *d, struct xen_domctl_vm_event_op *vec) break; #ifdef CONFIG_HAS_MEM_SHARING - case XEN_DOMCTL_VM_EVENT_OP_SHARING: + case XEN_VM_EVENT_TYPE_SHARING: { rc = -EINVAL; diff --git a/xen/include/public/domctl.h b/xen/include/public/domctl.h index 19486d5..234d8c5 100644 --- a/xen/include/public/domctl.h +++ b/xen/include/public/domctl.h @@ -769,80 +769,18 @@ struct xen_domctl_gdbsx_domstatus { * VM event operations */ -/* XEN_DOMCTL_vm_event_op */ - -/* - * There are currently three rings available for VM events: - * sharing, monitor and paging. This hypercall allows one to - * control these rings (enable/disable), as well as to signal - * to the hypervisor to pull responses (resume) from the given - * ring. +/* XEN_DOMCTL_vm_event_op. + * Use for teardown/setup of helper<->hypervisor interface for paging, + * access and sharing. */ #define XEN_VM_EVENT_ENABLE 0 #define XEN_VM_EVENT_DISABLE 1 #define XEN_VM_EVENT_RESUME 2 #define XEN_VM_EVENT_GET_VERSION 3 -/* - * Domain memory paging - * Page memory in and out. - * Domctl interface to set up and tear down the - * pager<->hypervisor interface. Use XENMEM_paging_op* - * to perform per-page operations. 
- * - * The XEN_VM_EVENT_PAGING_ENABLE domctl returns several - * non-standard error codes to indicate why paging could not be enabled: - * ENODEV - host lacks HAP support (EPT/NPT) or HAP is disabled in guest - * EMLINK - guest has iommu passthrough enabled - * EXDEV - guest has PoD enabled - * EBUSY - guest has or had paging enabled, ring buffer still active - */ -#define XEN_DOMCTL_VM_EVENT_OP_PAGING 1 - -/* - * Monitor helper. - * - * As with paging, use the domctl for teardown/setup of the - * helper<->hypervisor interface. - * - * The monitor interface can be used to register for various VM events. For - * example, there are HVM hypercalls to set the per-page access permissions - * of every page in a domain. When one of these permissions--independent, - * read, write, and execute--is violated, the VCPU is paused and a memory event - * is sent with what happened. The memory event handler can then resume the - * VCPU and redo the access with a XEN_VM_EVENT_RESUME option. - * - * See public/vm_event.h for the list of available events that can be - * subscribed to via the monitor interface. - * - * The XEN_VM_EVENT_MONITOR_* domctls returns - * non-standard error codes to indicate why access could not be enabled: - * ENODEV - host lacks HAP support (EPT/NPT) or HAP is disabled in guest - * EBUSY - guest has or had access enabled, ring buffer still active - * - */ -#define XEN_DOMCTL_VM_EVENT_OP_MONITOR 2 - -/* - * Sharing ENOMEM helper. - * - * As with paging, use the domctl for teardown/setup of the - * helper<->hypervisor interface. - * - * If setup, this ring is used to communicate failed allocations - * in the unshare path. XENMEM_sharing_op_resume is used to wake up - * vcpus that could not unshare. - * - * Note that shring can be turned on (as per the domctl below) - * *without* this ring being setup. - */ -#define XEN_DOMCTL_VM_EVENT_OP_SHARING 3 - -/* Use for teardown/setup of helper<->hypervisor interface for paging, - * access and sharing.*/ struct xen_domctl_vm_event_op { uint32_t op; /* XEN_VM_EVENT_* */ - uint32_t mode; /* XEN_DOMCTL_VM_EVENT_OP_* */ + uint32_t type; /* XEN_VM_EVENT_TYPE_* */ union { struct { @@ -1004,7 +942,7 @@ struct xen_domctl_psr_cmt_op { * Enable/disable monitoring various VM events. * This domctl configures what events will be reported to helper apps * via the ring buffer "MONITOR". The ring has to be first enabled - * with the domctl XEN_DOMCTL_VM_EVENT_OP_MONITOR. + * with XEN_VM_EVENT_ENABLE. * * GET_CAPABILITIES can be used to determine which of these features is * available on a given platform. diff --git a/xen/include/public/vm_event.h b/xen/include/public/vm_event.h index 959083d..c48bc21 100644 --- a/xen/include/public/vm_event.h +++ b/xen/include/public/vm_event.h @@ -36,6 +36,37 @@ #include "io/ring.h" /* + * There are currently three types of VM events. + */ + +/* + * Domain memory paging + * + * Page memory in and out. + */ +#define XEN_VM_EVENT_TYPE_PAGING 1 + +/* + * Monitor. + * + * The monitor interface can be used to register for various VM events. For + * example, there are HVM hypercalls to set the per-page access permissions + * of every page in a domain. When one of these permissions--independent, + * read, write, and execute--is violated, the VCPU is paused and a memory event + * is sent with what happened. The memory event handler can then resume the + * VCPU and redo the access with a XEN_VM_EVENT_RESUME option. + */ +#define XEN_VM_EVENT_TYPE_MONITOR 2 + +/* + * Sharing ENOMEM. 
+ * + * Used to communicate failed allocations in the unshare path. + * XENMEM_sharing_op_resume is used to wake up vcpus that could not unshare. + */ +#define XEN_VM_EVENT_TYPE_SHARING 3 + +/* * Memory event flags */
From patchwork Tue Jul 16 17:06:16 2019
From: Petre Pircalabu
To: xen-devel@lists.xenproject.org
Date: Tue, 16 Jul 2019 20:06:16 +0300
Message-Id: <05d37a1cb32ed76fe728f5ebb296aca55455b56a.1563293545.git.ppircalabu@bitdefender.com>
Subject: [Xen-devel] [PATCH v2 02/10] vm_event: Remove "ring" suffix from vm_event_check_ring
Cc: Petre Pircalabu, Tamas K Lengyel, Wei Liu, Razvan Cojocaru, George Dunlap, Andrew Cooper, Julien Grall, Stefano Stabellini, Jan Beulich, Alexandru Isaila, Volodymyr Babchuk, Roger Pau Monné

Decouple the implementation from the interface so that vm_event_check can be used regardless of the underlying vm_event implementation.
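Callers throughout the tree then reduce to a simple predicate test. A sketch of the pattern on the hypervisor side, using the names introduced in the hunks below:

    /* Sketch: test for a connected listener without assuming the
     * transport is a ring. */
    if ( !vm_event_check(d->vm_event_monitor) )
        return; /* no monitor subscriber for this domain */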
Signed-off-by: Petre Pircalabu Acked-by: Andrew Cooper Acked-by: Tamas K Lengyel Reviewed-by: Alexandru Isaila --- xen/arch/arm/mem_access.c | 2 +- xen/arch/x86/mm/mem_access.c | 4 ++-- xen/arch/x86/mm/mem_paging.c | 2 +- xen/common/mem_access.c | 2 +- xen/common/vm_event.c | 24 ++++++++++++------------ xen/drivers/passthrough/pci.c | 2 +- xen/include/xen/vm_event.h | 4 ++-- 7 files changed, 20 insertions(+), 20 deletions(-) diff --git a/xen/arch/arm/mem_access.c b/xen/arch/arm/mem_access.c index 3e36202..d54760b 100644 --- a/xen/arch/arm/mem_access.c +++ b/xen/arch/arm/mem_access.c @@ -290,7 +290,7 @@ bool p2m_mem_access_check(paddr_t gpa, vaddr_t gla, const struct npfec npfec) } /* Otherwise, check if there is a vm_event monitor subscriber */ - if ( !vm_event_check_ring(v->domain->vm_event_monitor) ) + if ( !vm_event_check(v->domain->vm_event_monitor) ) { /* No listener */ if ( p2m->access_required ) diff --git a/xen/arch/x86/mm/mem_access.c b/xen/arch/x86/mm/mem_access.c index 0144f92..640352e 100644 --- a/xen/arch/x86/mm/mem_access.c +++ b/xen/arch/x86/mm/mem_access.c @@ -182,7 +182,7 @@ bool p2m_mem_access_check(paddr_t gpa, unsigned long gla, gfn_unlock(p2m, gfn, 0); /* Otherwise, check if there is a memory event listener, and send the message along */ - if ( !vm_event_check_ring(d->vm_event_monitor) || !req_ptr ) + if ( !vm_event_check(d->vm_event_monitor) || !req_ptr ) { /* No listener */ if ( p2m->access_required ) @@ -210,7 +210,7 @@ bool p2m_mem_access_check(paddr_t gpa, unsigned long gla, return true; } } - if ( vm_event_check_ring(d->vm_event_monitor) && + if ( vm_event_check(d->vm_event_monitor) && d->arch.monitor.inguest_pagefault_disabled && npfec.kind != npfec_kind_with_gla ) /* don't send a mem_event */ { diff --git a/xen/arch/x86/mm/mem_paging.c b/xen/arch/x86/mm/mem_paging.c index 54a94fa..dc2a59a 100644 --- a/xen/arch/x86/mm/mem_paging.c +++ b/xen/arch/x86/mm/mem_paging.c @@ -44,7 +44,7 @@ int mem_paging_memop(XEN_GUEST_HANDLE_PARAM(xen_mem_paging_op_t) arg) goto out; rc = -ENODEV; - if ( unlikely(!vm_event_check_ring(d->vm_event_paging)) ) + if ( unlikely(!vm_event_check(d->vm_event_paging)) ) goto out; switch( mpo.op ) diff --git a/xen/common/mem_access.c b/xen/common/mem_access.c index 010e6f8..51e4e2b 100644 --- a/xen/common/mem_access.c +++ b/xen/common/mem_access.c @@ -52,7 +52,7 @@ int mem_access_memop(unsigned long cmd, goto out; rc = -ENODEV; - if ( unlikely(!vm_event_check_ring(d->vm_event_monitor)) ) + if ( unlikely(!vm_event_check(d->vm_event_monitor)) ) goto out; switch ( mao.op ) diff --git a/xen/common/vm_event.c b/xen/common/vm_event.c index 56b506a..515a917 100644 --- a/xen/common/vm_event.c +++ b/xen/common/vm_event.c @@ -179,7 +179,7 @@ static int vm_event_disable(struct domain *d, struct vm_event_domain **p_ved) { struct vm_event_domain *ved = *p_ved; - if ( vm_event_check_ring(ved) ) + if ( vm_event_check(ved) ) { struct vcpu *v; @@ -259,7 +259,7 @@ void vm_event_put_request(struct domain *d, RING_IDX req_prod; struct vcpu *curr = current; - if( !vm_event_check_ring(ved) ) + if( !vm_event_check(ved) ) return; if ( curr->domain != d ) @@ -362,7 +362,7 @@ static int vm_event_resume(struct domain *d, struct vm_event_domain *ved) */ ASSERT(d != current->domain); - if ( unlikely(!vm_event_check_ring(ved)) ) + if ( unlikely(!vm_event_check(ved)) ) return -ENODEV; /* Pull all responses off the ring. 
*/ @@ -433,7 +433,7 @@ static int vm_event_resume(struct domain *d, struct vm_event_domain *ved) void vm_event_cancel_slot(struct domain *d, struct vm_event_domain *ved) { - if( !vm_event_check_ring(ved) ) + if( !vm_event_check(ved) ) return; spin_lock(&ved->lock); @@ -488,7 +488,7 @@ static int vm_event_wait_slot(struct vm_event_domain *ved) return rc; } -bool vm_event_check_ring(struct vm_event_domain *ved) +bool vm_event_check(struct vm_event_domain *ved) { return ved && ved->ring_page; } @@ -508,7 +508,7 @@ bool vm_event_check_ring(struct vm_event_domain *ved) int __vm_event_claim_slot(struct domain *d, struct vm_event_domain *ved, bool allow_sleep) { - if ( !vm_event_check_ring(ved) ) + if ( !vm_event_check(ved) ) return -EOPNOTSUPP; if ( (current->domain == d) && allow_sleep ) @@ -543,7 +543,7 @@ static void mem_sharing_notification(struct vcpu *v, unsigned int port) void vm_event_cleanup(struct domain *d) { #ifdef CONFIG_HAS_MEM_PAGING - if ( vm_event_check_ring(d->vm_event_paging) ) + if ( vm_event_check(d->vm_event_paging) ) { /* Destroying the wait queue head means waking up all * queued vcpus. This will drain the list, allowing @@ -556,13 +556,13 @@ void vm_event_cleanup(struct domain *d) (void)vm_event_disable(d, &d->vm_event_paging); } #endif - if ( vm_event_check_ring(d->vm_event_monitor) ) + if ( vm_event_check(d->vm_event_monitor) ) { destroy_waitqueue_head(&d->vm_event_monitor->wq); (void)vm_event_disable(d, &d->vm_event_monitor); } #ifdef CONFIG_HAS_MEM_SHARING - if ( vm_event_check_ring(d->vm_event_share) ) + if ( vm_event_check(d->vm_event_share) ) { destroy_waitqueue_head(&d->vm_event_share->wq); (void)vm_event_disable(d, &d->vm_event_share); @@ -646,7 +646,7 @@ int vm_event_domctl(struct domain *d, struct xen_domctl_vm_event_op *vec) break; case XEN_VM_EVENT_DISABLE: - if ( vm_event_check_ring(d->vm_event_paging) ) + if ( vm_event_check(d->vm_event_paging) ) { domain_pause(d); rc = vm_event_disable(d, &d->vm_event_paging); @@ -683,7 +683,7 @@ int vm_event_domctl(struct domain *d, struct xen_domctl_vm_event_op *vec) break; case XEN_VM_EVENT_DISABLE: - if ( vm_event_check_ring(d->vm_event_monitor) ) + if ( vm_event_check(d->vm_event_monitor) ) { domain_pause(d); rc = vm_event_disable(d, &d->vm_event_monitor); @@ -728,7 +728,7 @@ int vm_event_domctl(struct domain *d, struct xen_domctl_vm_event_op *vec) break; case XEN_VM_EVENT_DISABLE: - if ( vm_event_check_ring(d->vm_event_share) ) + if ( vm_event_check(d->vm_event_share) ) { domain_pause(d); rc = vm_event_disable(d, &d->vm_event_share); diff --git a/xen/drivers/passthrough/pci.c b/xen/drivers/passthrough/pci.c index e886894..eec7686 100644 --- a/xen/drivers/passthrough/pci.c +++ b/xen/drivers/passthrough/pci.c @@ -1451,7 +1451,7 @@ static int assign_device(struct domain *d, u16 seg, u8 bus, u8 devfn, u32 flag) /* Prevent device assign if mem paging or mem sharing have been * enabled for this domain */ if ( unlikely(d->arch.hvm.mem_sharing_enabled || - vm_event_check_ring(d->vm_event_paging) || + vm_event_check(d->vm_event_paging) || p2m_get_hostp2m(d)->global_logdirty) ) return -EXDEV; diff --git a/xen/include/xen/vm_event.h b/xen/include/xen/vm_event.h index 3cc2b20..381be0b 100644 --- a/xen/include/xen/vm_event.h +++ b/xen/include/xen/vm_event.h @@ -29,8 +29,8 @@ /* Clean up on domain destruction */ void vm_event_cleanup(struct domain *d); -/* Returns whether a ring has been set up */ -bool vm_event_check_ring(struct vm_event_domain *ved); +/* Returns whether the VM event domain has been set up */ +bool 
vm_event_check(struct vm_event_domain *ved); /* Returns 0 on success, -ENOSYS if there is no ring, -EBUSY if there is no * available space and the caller is a foreign domain. If the guest itself
From patchwork Tue Jul 16 17:06:17 2019
From: Petre Pircalabu
To: xen-devel@lists.xenproject.org
Date: Tue, 16 Jul 2019 20:06:17 +0300
Subject: [Xen-devel] [PATCH v2 03/10] vm_event: Add 'struct domain' backpointer to vm_event_domain.
Cc: Petre Pircalabu, Tamas K Lengyel, Wei Liu, Razvan Cojocaru, Konrad Rzeszutek Wilk, George Dunlap, Andrew Cooper, Ian Jackson, Tim Deegan, Julien Grall, Stefano Stabellini, Jan Beulich, Alexandru Isaila

Signed-off-by: Petre Pircalabu Acked-by: Tamas K Lengyel --- xen/common/vm_event.c | 2 ++ xen/include/xen/sched.h | 2 ++ 2 files changed, 4 insertions(+) diff --git a/xen/common/vm_event.c b/xen/common/vm_event.c index 515a917..787c61c 100644 --- a/xen/common/vm_event.c +++ b/xen/common/vm_event.c @@ -71,6 +71,8 @@ static int vm_event_enable( if ( rc < 0 ) goto err; + ved->d = d; + rc = prepare_ring_for_helper(d, ring_gfn, &ved->ring_pg_struct, &ved->ring_page); if ( rc < 0 ) diff --git a/xen/include/xen/sched.h b/xen/include/xen/sched.h index 97a3ab5..e3093d3 100644 --- a/xen/include/xen/sched.h +++ b/xen/include/xen/sched.h @@ -279,6 +279,8 @@ struct vcpu /* VM event */ struct vm_event_domain { + /* Domain reference */ + struct domain *d; spinlock_t lock; /* The ring has 64 entries */ unsigned char foreign_producers;
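With the backpointer in place, vm_event code can recover the owning domain from a vm_event_domain alone, which is what allows the domain argument to be dropped from the interface in the next patch. A minimal sketch (vm_event_is_target is a hypothetical helper, not part of this series):

    /* Sketch: ved->d stands in for the separate domain argument. */
    static bool vm_event_is_target(const struct vm_event_domain *ved)
    {
        return current->domain == ved->d;
    }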
From patchwork Tue Jul 16 17:06:18 2019
From: Petre Pircalabu
To: xen-devel@lists.xenproject.org
Date: Tue, 16 Jul 2019 20:06:18 +0300
Message-Id: <46863526d6b28433a75914399d52954c4ca19950.1563293545.git.ppircalabu@bitdefender.com>
Subject: [Xen-devel] [PATCH v2 04/10] vm_event: Simplify vm_event interface
Cc: Petre Pircalabu, Tamas K Lengyel, Razvan Cojocaru, Wei Liu, George Dunlap, Andrew Cooper, Jan Beulich, Alexandru Isaila, Roger Pau Monné

Remove the domain reference from calls to vm_event interface functions and use the backpointer from vm_event_domain.

Affected functions:
- __vm_event_claim_slot / vm_event_claim_slot / vm_event_claim_slot_nosleep
- vm_event_cancel_slot
- vm_event_put_request

Signed-off-by: Petre Pircalabu Acked-by: Tamas K Lengyel Reviewed-by: Alexandru Isaila --- xen/arch/x86/mm/mem_sharing.c | 5 ++--- xen/arch/x86/mm/p2m.c | 10 +++++----- xen/common/monitor.c | 4 ++-- xen/common/vm_event.c | 37 ++++++++++++++++++------------------- xen/include/xen/vm_event.h | 17 +++++++---------- 5 files changed, 34 insertions(+), 39 deletions(-) diff --git a/xen/arch/x86/mm/mem_sharing.c b/xen/arch/x86/mm/mem_sharing.c index f16a3f5..9d80389 100644 --- a/xen/arch/x86/mm/mem_sharing.c +++ b/xen/arch/x86/mm/mem_sharing.c @@ -557,8 +557,7 @@ int mem_sharing_notify_enomem(struct domain *d, unsigned long gfn, .u.mem_sharing.p2mt = p2m_ram_shared }; - if ( (rc = __vm_event_claim_slot(d, - d->vm_event_share, allow_sleep)) < 0 ) + if ( (rc = __vm_event_claim_slot(d->vm_event_share, allow_sleep)) < 0 ) return rc; if ( v->domain == d ) @@ -567,7 +566,7 @@ int mem_sharing_notify_enomem(struct domain *d, unsigned long gfn, vm_event_vcpu_pause(v); } - vm_event_put_request(d, d->vm_event_share, &req); + vm_event_put_request(d->vm_event_share, &req); return 0; } diff --git a/xen/arch/x86/mm/p2m.c b/xen/arch/x86/mm/p2m.c index 4c99548..85de64f 100644 --- a/xen/arch/x86/mm/p2m.c +++ b/xen/arch/x86/mm/p2m.c @@ -1652,7 +1652,7 @@ void p2m_mem_paging_drop_page(struct domain *d, unsigned long gfn, * correctness of the guest execution at this point. If this is the only * page that happens to be paged-out, we'll be okay.. but it's likely the * guest will crash shortly anyways. */ - int rc = vm_event_claim_slot(d, d->vm_event_paging); + int rc = vm_event_claim_slot(d->vm_event_paging); if ( rc < 0 ) return; @@ -1666,7 +1666,7 @@ void p2m_mem_paging_drop_page(struct domain *d, unsigned long gfn, /* Evict will fail now, tag this request for pager */ req.u.mem_paging.flags |= MEM_PAGING_EVICT_FAIL; - vm_event_put_request(d, d->vm_event_paging, &req); + vm_event_put_request(d->vm_event_paging, &req); } /** @@ -1704,7 +1704,7 @@ void p2m_mem_paging_populate(struct domain *d, unsigned long gfn_l) struct p2m_domain *p2m = p2m_get_hostp2m(d); /* We're paging.
There should be a ring */ - int rc = vm_event_claim_slot(d, d->vm_event_paging); + int rc = vm_event_claim_slot(d->vm_event_paging); if ( rc == -EOPNOTSUPP ) { @@ -1746,7 +1746,7 @@ void p2m_mem_paging_populate(struct domain *d, unsigned long gfn_l) { /* gfn is already on its way back and vcpu is not paused */ out_cancel: - vm_event_cancel_slot(d, d->vm_event_paging); + vm_event_cancel_slot(d->vm_event_paging); return; } @@ -1754,7 +1754,7 @@ void p2m_mem_paging_populate(struct domain *d, unsigned long gfn_l) req.u.mem_paging.p2mt = p2mt; req.vcpu_id = v->vcpu_id; - vm_event_put_request(d, d->vm_event_paging, &req); + vm_event_put_request(d->vm_event_paging, &req); } /** diff --git a/xen/common/monitor.c b/xen/common/monitor.c index d5c9ff1..b8d33c4 100644 --- a/xen/common/monitor.c +++ b/xen/common/monitor.c @@ -93,7 +93,7 @@ int monitor_traps(struct vcpu *v, bool sync, vm_event_request_t *req) int rc; struct domain *d = v->domain; - rc = vm_event_claim_slot(d, d->vm_event_monitor); + rc = vm_event_claim_slot(d->vm_event_monitor); switch ( rc ) { case 0: @@ -125,7 +125,7 @@ int monitor_traps(struct vcpu *v, bool sync, vm_event_request_t *req) } vm_event_fill_regs(req); - vm_event_put_request(d, d->vm_event_monitor, req); + vm_event_put_request(d->vm_event_monitor, req); return rc; } diff --git a/xen/common/vm_event.c b/xen/common/vm_event.c index 787c61c..a235d25 100644 --- a/xen/common/vm_event.c +++ b/xen/common/vm_event.c @@ -119,10 +119,11 @@ static unsigned int vm_event_ring_available(struct vm_event_domain *ved) * but need to be resumed where the ring is capable of processing at least * one event from them. */ -static void vm_event_wake_blocked(struct domain *d, struct vm_event_domain *ved) +static void vm_event_wake_blocked(struct vm_event_domain *ved) { struct vcpu *v; unsigned int i, j, k, avail_req = vm_event_ring_available(ved); + struct domain *d = ved->d; if ( avail_req == 0 || ved->blocked == 0 ) return; @@ -154,7 +155,7 @@ static void vm_event_wake_blocked(struct domain *d, struct vm_event_domain *ved) * was unable to do so, it is queued on a wait queue. These are woken as * needed, and take precedence over the blocked vCPUs. */ -static void vm_event_wake_queued(struct domain *d, struct vm_event_domain *ved) +static void vm_event_wake_queued(struct vm_event_domain *ved) { unsigned int avail_req = vm_event_ring_available(ved); @@ -169,12 +170,12 @@ static void vm_event_wake_queued(struct domain *d, struct vm_event_domain *ved) * call vm_event_wake() again, ensuring that any blocked vCPUs will get * unpaused once all the queued vCPUs have made it through. 
*/ -void vm_event_wake(struct domain *d, struct vm_event_domain *ved) +void vm_event_wake(struct vm_event_domain *ved) { if ( !list_empty(&ved->wq.list) ) - vm_event_wake_queued(d, ved); + vm_event_wake_queued(ved); else - vm_event_wake_blocked(d, ved); + vm_event_wake_blocked(ved); } static int vm_event_disable(struct domain *d, struct vm_event_domain **p_ved) @@ -219,17 +220,16 @@ static int vm_event_disable(struct domain *d, struct vm_event_domain **p_ved) return 0; } -static void vm_event_release_slot(struct domain *d, - struct vm_event_domain *ved) +static void vm_event_release_slot(struct vm_event_domain *ved) { /* Update the accounting */ - if ( current->domain == d ) + if ( current->domain == ved->d ) ved->target_producers--; else ved->foreign_producers--; /* Kick any waiters */ - vm_event_wake(d, ved); + vm_event_wake(ved); } /* @@ -251,8 +251,7 @@ static void vm_event_mark_and_pause(struct vcpu *v, struct vm_event_domain *ved) * overly full and its continued execution would cause stalling and excessive * waiting. The vCPU will be automatically unpaused when the ring clears. */ -void vm_event_put_request(struct domain *d, - struct vm_event_domain *ved, +void vm_event_put_request(struct vm_event_domain *ved, vm_event_request_t *req) { vm_event_front_ring_t *front_ring; @@ -260,6 +259,7 @@ void vm_event_put_request(struct domain *d, unsigned int avail_req; RING_IDX req_prod; struct vcpu *curr = current; + struct domain *d = ved->d; if( !vm_event_check(ved) ) return; @@ -292,7 +292,7 @@ void vm_event_put_request(struct domain *d, RING_PUSH_REQUESTS(front_ring); /* We've actually *used* our reservation, so release the slot. */ - vm_event_release_slot(d, ved); + vm_event_release_slot(ved); /* Give this vCPU a black eye if necessary, on the way out. * See the comments above wake_blocked() for more information @@ -332,7 +332,7 @@ static int vm_event_get_response(struct domain *d, struct vm_event_domain *ved, /* Kick any waiters -- since we've just consumed an event, * there may be additional space available in the ring. */ - vm_event_wake(d, ved); + vm_event_wake(ved); rc = 1; @@ -433,13 +433,13 @@ static int vm_event_resume(struct domain *d, struct vm_event_domain *ved) return 0; } -void vm_event_cancel_slot(struct domain *d, struct vm_event_domain *ved) +void vm_event_cancel_slot(struct vm_event_domain *ved) { if( !vm_event_check(ved) ) return; spin_lock(&ved->lock); - vm_event_release_slot(d, ved); + vm_event_release_slot(ved); spin_unlock(&ved->lock); } @@ -507,16 +507,15 @@ bool vm_event_check(struct vm_event_domain *ved) * 0: a spot has been reserved * */ -int __vm_event_claim_slot(struct domain *d, struct vm_event_domain *ved, - bool allow_sleep) +int __vm_event_claim_slot(struct vm_event_domain *ved, bool allow_sleep) { if ( !vm_event_check(ved) ) return -EOPNOTSUPP; - if ( (current->domain == d) && allow_sleep ) + if ( (current->domain == ved->d) && allow_sleep ) return vm_event_wait_slot(ved); else - return vm_event_grab_slot(ved, current->domain != d); + return vm_event_grab_slot(ved, current->domain != ved->d); } #ifdef CONFIG_HAS_MEM_PAGING diff --git a/xen/include/xen/vm_event.h b/xen/include/xen/vm_event.h index 381be0b..ff30999 100644 --- a/xen/include/xen/vm_event.h +++ b/xen/include/xen/vm_event.h @@ -45,23 +45,20 @@ bool vm_event_check(struct vm_event_domain *ved); * cancel_slot(), both of which are guaranteed to * succeed. 
*/ -int __vm_event_claim_slot(struct domain *d, struct vm_event_domain *ved, - bool allow_sleep); -static inline int vm_event_claim_slot(struct domain *d, - struct vm_event_domain *ved) +int __vm_event_claim_slot(struct vm_event_domain *ved, bool allow_sleep); +static inline int vm_event_claim_slot(struct vm_event_domain *ved) { - return __vm_event_claim_slot(d, ved, true); + return __vm_event_claim_slot(ved, true); } -static inline int vm_event_claim_slot_nosleep(struct domain *d, - struct vm_event_domain *ved) +static inline int vm_event_claim_slot_nosleep(struct vm_event_domain *ved) { - return __vm_event_claim_slot(d, ved, false); + return __vm_event_claim_slot(ved, false); } -void vm_event_cancel_slot(struct domain *d, struct vm_event_domain *ved); +void vm_event_cancel_slot(struct vm_event_domain *ved); -void vm_event_put_request(struct domain *d, struct vm_event_domain *ved, +void vm_event_put_request(struct vm_event_domain *ved, vm_event_request_t *req); int vm_event_domctl(struct domain *d, struct xen_domctl_vm_event_op *vec);
From patchwork Tue Jul 16 17:06:19 2019
From: Petre Pircalabu
To: xen-devel@lists.xenproject.org
Date: Tue, 16 Jul 2019 20:06:19 +0300
Message-Id: <93d50867ea8e45270a180a8f93aaed5a89619510.1563293545.git.ppircalabu@bitdefender.com>
Subject: [Xen-devel] [PATCH v2 05/10] vm_event: Move struct vm_event_domain to vm_event.c
Cc: Petre Pircalabu, Tamas K Lengyel, Wei Liu, Razvan Cojocaru, Konrad Rzeszutek Wilk, George Dunlap, Andrew Cooper, Ian Jackson, Tim Deegan, Julien Grall, Stefano Stabellini, Jan Beulich, Alexandru Isaila

The vm_event_domain members are not accessed outside vm_event.c, so it's better to hide the implementation details.

Signed-off-by: Petre Pircalabu Acked-by: Andrew Cooper Acked-by: Tamas K Lengyel --- xen/common/vm_event.c | 26 ++++++++++++++++++++++++++ xen/include/xen/sched.h | 26 +------------------------- 2 files changed, 27 insertions(+), 25 deletions(-) diff --git a/xen/common/vm_event.c b/xen/common/vm_event.c index a235d25..21895c2 100644 --- a/xen/common/vm_event.c +++ b/xen/common/vm_event.c @@ -35,6 +35,32 @@ #define xen_rmb() smp_rmb() #define xen_wmb() smp_wmb() +/* VM event */ +struct vm_event_domain +{ + /* Domain reference */ + struct domain *d; + spinlock_t lock; + /* The ring has 64 entries */ + unsigned char foreign_producers; + unsigned char target_producers; + /* shared ring page */ + void *ring_page; + struct page_info *ring_pg_struct; + /* front-end ring */ + vm_event_front_ring_t front_ring; + /* event channel port (vcpu0 only) */ + int xen_port; + /* vm_event bit for vcpu->pause_flags */ + int pause_flag; + /* list of vcpus waiting for room in the ring */ + struct waitqueue_head wq; + /* the number of vCPUs blocked */ + unsigned int blocked; + /* The last vcpu woken up */ + unsigned int last_vcpu_wake_up; +}; + static int vm_event_enable( struct domain *d, struct xen_domctl_vm_event_op *vec, diff --git a/xen/include/xen/sched.h b/xen/include/xen/sched.h index e3093d3..19980d2 100644 --- a/xen/include/xen/sched.h +++ b/xen/include/xen/sched.h @@ -276,31 +276,7 @@ struct vcpu #define domain_lock(d) spin_lock_recursive(&(d)->domain_lock) #define domain_unlock(d) spin_unlock_recursive(&(d)->domain_lock) -/* VM event */ -struct vm_event_domain -{ - /* Domain reference */ - struct domain *d; - spinlock_t lock; - /* The ring has 64 entries */ - unsigned char foreign_producers; - unsigned char target_producers; - /* shared ring page */ - void *ring_page; - struct page_info *ring_pg_struct; - /* front-end ring */ - vm_event_front_ring_t front_ring; - /* event channel port (vcpu0 only) */ - int xen_port; - /* vm_event bit for vcpu->pause_flags */ - int pause_flag; - /* list of vcpus waiting for room in the ring */ - struct waitqueue_head wq; - /* the number of vCPUs blocked */ - unsigned int blocked; - /* The last vcpu woken up */ - unsigned int last_vcpu_wake_up; -}; +struct vm_event_domain; struct evtchn_port_ops;
From patchwork Tue Jul 16 17:06:20 2019
From: Petre Pircalabu
To: xen-devel@lists.xenproject.org
Date: Tue, 16 Jul 2019 20:06:20 +0300
Message-Id: <880b61f88b9d19b3ef2bd43713caaab0528a190e.1563293545.git.ppircalabu@bitdefender.com>
Subject: [Xen-devel] [PATCH v2 06/10] vm_event: Decouple implementation details from interface.
Cc: Petre Pircalabu, Alexandru Isaila, Tamas K Lengyel, Razvan Cojocaru

To accommodate a second implementation of the vm_event subsystem, the current one (ring) should be decoupled from the xen/vm_event.h interface.
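The decoupling follows the embedded-base-struct pattern: the generic vm_event_domain is embedded in the ring-specific state, and container_of() recovers the implementation from an interface pointer. A condensed sketch of the shape, abridged from the hunks below (the full structs carry more members):

    /* Transport-agnostic part, visible through xen/vm_event.h ... */
    struct vm_event_domain
    {
        struct domain *d;               /* domain backpointer */
        const struct vm_event_ops *ops; /* per-implementation hooks */
        spinlock_t lock;
    };

    /* ... wrapped by the ring implementation, private to vm_event.c. */
    struct vm_event_ring_domain
    {
        struct vm_event_domain ved;     /* embedded base struct */
        /* ring-specific state: front_ring, xen_port, wq, ... */
    };

    /* Recover the ring implementation from an interface pointer. */
    #define to_ring(_ved) container_of((_ved), struct vm_event_ring_domain, ved)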
Signed-off-by: Petre Pircalabu --- xen/common/vm_event.c | 368 ++++++++++++++++++++++----------------------- xen/include/xen/vm_event.h | 60 +++++++- 2 files changed, 236 insertions(+), 192 deletions(-) diff --git a/xen/common/vm_event.c b/xen/common/vm_event.c index 21895c2..e6a7a29 100644 --- a/xen/common/vm_event.c +++ b/xen/common/vm_event.c @@ -35,12 +35,13 @@ #define xen_rmb() smp_rmb() #define xen_wmb() smp_wmb() -/* VM event */ -struct vm_event_domain +#define to_ring(_ved) container_of((_ved), struct vm_event_ring_domain, ved) + +/* VM event ring implementation */ +struct vm_event_ring_domain { - /* Domain reference */ - struct domain *d; - spinlock_t lock; + /* VM event domain */ + struct vm_event_domain ved; /* The ring has 64 entries */ unsigned char foreign_producers; unsigned char target_producers; @@ -61,7 +62,9 @@ struct vm_event_domain unsigned int last_vcpu_wake_up; }; -static int vm_event_enable( +static const struct vm_event_ops vm_event_ring_ops; + +static int vm_event_ring_enable( struct domain *d, struct xen_domctl_vm_event_op *vec, struct vm_event_domain **p_ved, @@ -71,7 +74,7 @@ static int vm_event_enable( { int rc; unsigned long ring_gfn = d->arch.hvm.params[param]; - struct vm_event_domain *ved; + struct vm_event_ring_domain *impl; /* * Only one connected agent at a time. If the helper crashed, the ring is @@ -84,28 +87,28 @@ static int vm_event_enable( if ( ring_gfn == 0 ) return -EOPNOTSUPP; - ved = xzalloc(struct vm_event_domain); - if ( !ved ) + impl = xzalloc(struct vm_event_ring_domain); + if ( !impl ) return -ENOMEM; /* Trivial setup. */ - spin_lock_init(&ved->lock); - init_waitqueue_head(&ved->wq); - ved->pause_flag = pause_flag; + spin_lock_init(&impl->ved.lock); + init_waitqueue_head(&impl->wq); + impl->ved.d = d; + impl->ved.ops = &vm_event_ring_ops; + impl->pause_flag = pause_flag; rc = vm_event_init_domain(d); if ( rc < 0 ) goto err; - ved->d = d; - - rc = prepare_ring_for_helper(d, ring_gfn, &ved->ring_pg_struct, - &ved->ring_page); + rc = prepare_ring_for_helper(d, ring_gfn, &impl->ring_pg_struct, + &impl->ring_page); if ( rc < 0 ) goto err; - FRONT_RING_INIT(&ved->front_ring, - (vm_event_sring_t *)ved->ring_page, + FRONT_RING_INIT(&impl->front_ring, + (vm_event_sring_t *)impl->ring_page, PAGE_SIZE); rc = alloc_unbound_xen_event_channel(d, 0, current->domain->domain_id, @@ -113,26 +116,26 @@ static int vm_event_enable( if ( rc < 0 ) goto err; - ved->xen_port = vec->u.enable.port = rc; + impl->xen_port = vec->u.enable.port = rc; /* Success. Fill in the domain's appropriate ved. */ - *p_ved = ved; + *p_ved = &impl->ved; return 0; err: - destroy_ring_for_helper(&ved->ring_page, ved->ring_pg_struct); - xfree(ved); + destroy_ring_for_helper(&impl->ring_page, impl->ring_pg_struct); + xfree(impl); return rc; } -static unsigned int vm_event_ring_available(struct vm_event_domain *ved) +static unsigned int vm_event_ring_available(struct vm_event_ring_domain *impl) { - int avail_req = RING_FREE_REQUESTS(&ved->front_ring); + int avail_req = RING_FREE_REQUESTS(&impl->front_ring); - avail_req -= ved->target_producers; - avail_req -= ved->foreign_producers; + avail_req -= impl->target_producers; + avail_req -= impl->foreign_producers; BUG_ON(avail_req < 0); @@ -140,38 +143,38 @@ static unsigned int vm_event_ring_available(struct vm_event_domain *ved) } /* - * vm_event_wake_blocked() will wakeup vcpus waiting for room in the + * vm_event_ring_wake_blocked() will wakeup vcpus waiting for room in the * ring. 
These vCPUs were paused on their way out after placing an event, * but need to be resumed where the ring is capable of processing at least * one event from them. */ -static void vm_event_wake_blocked(struct vm_event_domain *ved) +static void vm_event_ring_wake_blocked(struct vm_event_ring_domain *impl) { struct vcpu *v; - unsigned int i, j, k, avail_req = vm_event_ring_available(ved); - struct domain *d = ved->d; + unsigned int i, j, k, avail_req = vm_event_ring_available(impl); + struct domain *d = impl->ved.d; - if ( avail_req == 0 || ved->blocked == 0 ) + if ( avail_req == 0 || impl->blocked == 0 ) return; /* We remember which vcpu last woke up to avoid scanning always linearly * from zero and starving higher-numbered vcpus under high load */ - for ( i = ved->last_vcpu_wake_up + 1, j = 0; j < d->max_vcpus; i++, j++ ) + for ( i = impl->last_vcpu_wake_up + 1, j = 0; j < d->max_vcpus; i++, j++ ) { k = i % d->max_vcpus; v = d->vcpu[k]; if ( !v ) continue; - if ( !ved->blocked || avail_req == 0 ) + if ( !impl->blocked || avail_req == 0 ) break; - if ( test_and_clear_bit(ved->pause_flag, &v->pause_flags) ) + if ( test_and_clear_bit(impl->pause_flag, &v->pause_flags) ) { vcpu_unpause(v); avail_req--; - ved->blocked--; - ved->last_vcpu_wake_up = k; + impl->blocked--; + impl->last_vcpu_wake_up = k; } } } @@ -181,93 +184,90 @@ static void vm_event_wake_blocked(struct vm_event_domain *ved) * was unable to do so, it is queued on a wait queue. These are woken as * needed, and take precedence over the blocked vCPUs. */ -static void vm_event_wake_queued(struct vm_event_domain *ved) +static void vm_event_ring_wake_queued(struct vm_event_ring_domain *impl) { - unsigned int avail_req = vm_event_ring_available(ved); + unsigned int avail_req = vm_event_ring_available(impl); if ( avail_req > 0 ) - wake_up_nr(&ved->wq, avail_req); + wake_up_nr(&impl->wq, avail_req); } /* - * vm_event_wake() will wakeup all vcpus waiting for the ring to + * vm_event_ring_wake() will wakeup all vcpus waiting for the ring to * become available. If we have queued vCPUs, they get top priority. We * are guaranteed that they will go through code paths that will eventually - * call vm_event_wake() again, ensuring that any blocked vCPUs will get + * call vm_event_ring_wake() again, ensuring that any blocked vCPUs will get * unpaused once all the queued vCPUs have made it through. 
*/ -void vm_event_wake(struct vm_event_domain *ved) +static void vm_event_ring_wake(struct vm_event_ring_domain *impl) { - if ( !list_empty(&ved->wq.list) ) - vm_event_wake_queued(ved); + if ( !list_empty(&impl->wq.list) ) + vm_event_ring_wake_queued(impl); else - vm_event_wake_blocked(ved); + vm_event_ring_wake_blocked(impl); } -static int vm_event_disable(struct domain *d, struct vm_event_domain **p_ved) +static int vm_event_ring_disable(struct vm_event_domain **p_ved) { - struct vm_event_domain *ved = *p_ved; - - if ( vm_event_check(ved) ) - { - struct vcpu *v; + struct vcpu *v; + struct domain *d = (*p_ved)->d; + struct vm_event_ring_domain *impl = to_ring(*p_ved); - spin_lock(&ved->lock); + spin_lock(&impl->ved.lock); - if ( !list_empty(&ved->wq.list) ) - { - spin_unlock(&ved->lock); - return -EBUSY; - } + if ( !list_empty(&impl->wq.list) ) + { + spin_unlock(&impl->ved.lock); + return -EBUSY; + } - /* Free domU's event channel and leave the other one unbound */ - free_xen_event_channel(d, ved->xen_port); + /* Free domU's event channel and leave the other one unbound */ + free_xen_event_channel(d, impl->xen_port); - /* Unblock all vCPUs */ - for_each_vcpu ( d, v ) + /* Unblock all vCPUs */ + for_each_vcpu ( d, v ) + { + if ( test_and_clear_bit(impl->pause_flag, &v->pause_flags) ) { - if ( test_and_clear_bit(ved->pause_flag, &v->pause_flags) ) - { - vcpu_unpause(v); - ved->blocked--; - } + vcpu_unpause(v); + impl->blocked--; } + } - destroy_ring_for_helper(&ved->ring_page, ved->ring_pg_struct); + destroy_ring_for_helper(&impl->ring_page, impl->ring_pg_struct); - vm_event_cleanup_domain(d); + vm_event_cleanup_domain(d); - spin_unlock(&ved->lock); - } + spin_unlock(&impl->ved.lock); - xfree(ved); + xfree(impl); *p_ved = NULL; - return 0; } -static void vm_event_release_slot(struct vm_event_domain *ved) +static void vm_event_ring_release_slot(struct vm_event_ring_domain *impl) { /* Update the accounting */ - if ( current->domain == ved->d ) - ved->target_producers--; + if ( current->domain == impl->ved.d ) + impl->target_producers--; else - ved->foreign_producers--; + impl->foreign_producers--; /* Kick any waiters */ - vm_event_wake(ved); + vm_event_ring_wake(impl); } /* - * vm_event_mark_and_pause() tags vcpu and put it to sleep. - * The vcpu will resume execution in vm_event_wake_blocked(). + * vm_event_ring_mark_and_pause() tags vcpu and put it to sleep. + * The vcpu will resume execution in vm_event_ring_wake_blocked(). */ -static void vm_event_mark_and_pause(struct vcpu *v, struct vm_event_domain *ved) +static void vm_event_ring_mark_and_pause(struct vcpu *v, + struct vm_event_ring_domain *impl) { - if ( !test_and_set_bit(ved->pause_flag, &v->pause_flags) ) + if ( !test_and_set_bit(impl->pause_flag, &v->pause_flags) ) { vcpu_pause_nosync(v); - ved->blocked++; + impl->blocked++; } } @@ -277,34 +277,31 @@ static void vm_event_mark_and_pause(struct vcpu *v, struct vm_event_domain *ved) * overly full and its continued execution would cause stalling and excessive * waiting. The vCPU will be automatically unpaused when the ring clears. 
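 *
 * For context, the ring clears when the toolstack consumer drains it. A
 * minimal consumer-side drain loop is sketched below; "back_ring" is
 * assumed to be a vm_event_back_ring_t initialised with BACK_RING_INIT()
 * over the shared page, and the actual request handling is elided:
 *
 *     vm_event_request_t req;
 *     vm_event_response_t rsp;
 *
 *     while ( RING_HAS_UNCONSUMED_REQUESTS(&back_ring) )
 *     {
 *         req = *RING_GET_REQUEST(&back_ring, back_ring.req_cons);
 *         back_ring.req_cons++;
 *
 *         ... handle req and fill in rsp ...
 *
 *         *RING_GET_RESPONSE(&back_ring, back_ring.rsp_prod_pvt) = rsp;
 *         back_ring.rsp_prod_pvt++;
 *         RING_PUSH_RESPONSES(&back_ring);
 *     }
 *
 * followed by a notification of the event channel, which lands in the
 * resume path below and kicks any waiters via vm_event_ring_wake().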
*/ -void vm_event_put_request(struct vm_event_domain *ved, - vm_event_request_t *req) +static void vm_event_ring_put_request(struct vm_event_domain *ved, + vm_event_request_t *req) { vm_event_front_ring_t *front_ring; int free_req; unsigned int avail_req; RING_IDX req_prod; struct vcpu *curr = current; - struct domain *d = ved->d; - - if( !vm_event_check(ved) ) - return; + struct vm_event_ring_domain *impl = to_ring(ved); - if ( curr->domain != d ) + if ( curr->domain != ved->d ) { req->flags |= VM_EVENT_FLAG_FOREIGN; if ( !(req->flags & VM_EVENT_FLAG_VCPU_PAUSED) ) gdprintk(XENLOG_WARNING, "d%dv%d was not paused.\n", - d->domain_id, req->vcpu_id); + ved->d->domain_id, req->vcpu_id); } req->version = VM_EVENT_INTERFACE_VERSION; - spin_lock(&ved->lock); + spin_lock(&impl->ved.lock); /* Due to the reservations, this step must succeed. */ - front_ring = &ved->front_ring; + front_ring = &impl->front_ring; free_req = RING_FREE_REQUESTS(front_ring); ASSERT(free_req > 0); @@ -318,31 +315,31 @@ void vm_event_put_request(struct vm_event_domain *ved, RING_PUSH_REQUESTS(front_ring); /* We've actually *used* our reservation, so release the slot. */ - vm_event_release_slot(ved); + vm_event_ring_release_slot(impl); /* Give this vCPU a black eye if necessary, on the way out. * See the comments above wake_blocked() for more information * on how this mechanism works to avoid waiting. */ - avail_req = vm_event_ring_available(ved); - if( curr->domain == d && avail_req < d->max_vcpus && + avail_req = vm_event_ring_available(impl); + if( curr->domain == ved->d && avail_req < ved->d->max_vcpus && !atomic_read(&curr->vm_event_pause_count) ) - vm_event_mark_and_pause(curr, ved); + vm_event_ring_mark_and_pause(curr, impl); - spin_unlock(&ved->lock); + spin_unlock(&impl->ved.lock); - notify_via_xen_event_channel(d, ved->xen_port); + notify_via_xen_event_channel(ved->d, impl->xen_port); } -static int vm_event_get_response(struct domain *d, struct vm_event_domain *ved, - vm_event_response_t *rsp) +static int vm_event_ring_get_response(struct vm_event_ring_domain *impl, + vm_event_response_t *rsp) { vm_event_front_ring_t *front_ring; RING_IDX rsp_cons; int rc = 0; - spin_lock(&ved->lock); + spin_lock(&impl->ved.lock); - front_ring = &ved->front_ring; + front_ring = &impl->front_ring; rsp_cons = front_ring->rsp_cons; if ( !RING_HAS_UNCONSUMED_RESPONSES(front_ring) ) @@ -358,12 +355,12 @@ static int vm_event_get_response(struct domain *d, struct vm_event_domain *ved, /* Kick any waiters -- since we've just consumed an event, * there may be additional space available in the ring. */ - vm_event_wake(ved); + vm_event_ring_wake(impl); rc = 1; out: - spin_unlock(&ved->lock); + spin_unlock(&impl->ved.lock); return rc; } @@ -376,10 +373,13 @@ static int vm_event_get_response(struct domain *d, struct vm_event_domain *ved, * Note: responses are handled the same way regardless of which ring they * arrive on. */ -static int vm_event_resume(struct domain *d, struct vm_event_domain *ved) +static int vm_event_ring_resume(struct vm_event_ring_domain *impl) { vm_event_response_t rsp; + if ( unlikely(!impl || !vm_event_check(&impl->ved)) ) + return -ENODEV; + /* * vm_event_resume() runs in either XEN_VM_EVENT_* domctls, or * EVTCHN_send context from the introspection consumer. Both contexts @@ -388,13 +388,10 @@ static int vm_event_resume(struct domain *d, struct vm_event_domain *ved) * below, this covers the case where we would need to iterate over all * of them more succintly. 
*/ - ASSERT(d != current->domain); - - if ( unlikely(!vm_event_check(ved)) ) - return -ENODEV; + ASSERT(impl->ved.d != current->domain); /* Pull all responses off the ring. */ - while ( vm_event_get_response(d, ved, &rsp) ) + while ( vm_event_ring_get_response(impl, &rsp) ) { struct vcpu *v; @@ -405,7 +402,7 @@ static int vm_event_resume(struct domain *d, struct vm_event_domain *ved) } /* Validate the vcpu_id in the response. */ - v = domain_vcpu(d, rsp.vcpu_id); + v = domain_vcpu(impl->ved.d, rsp.vcpu_id); if ( !v ) continue; @@ -419,7 +416,7 @@ static int vm_event_resume(struct domain *d, struct vm_event_domain *ved) { #ifdef CONFIG_HAS_MEM_PAGING if ( rsp.reason == VM_EVENT_REASON_MEM_PAGING ) - p2m_mem_paging_resume(d, &rsp); + p2m_mem_paging_resume(impl->ved.d, &rsp); #endif /* @@ -439,7 +436,7 @@ static int vm_event_resume(struct domain *d, struct vm_event_domain *ved) * Check in arch-specific handler to avoid bitmask overhead when * not supported. */ - vm_event_toggle_singlestep(d, v, &rsp); + vm_event_toggle_singlestep(impl->ved.d, v, &rsp); /* Check for altp2m switch */ if ( rsp.flags & VM_EVENT_FLAG_ALTERNATE_P2M ) @@ -459,72 +456,69 @@ static int vm_event_resume(struct domain *d, struct vm_event_domain *ved) return 0; } -void vm_event_cancel_slot(struct vm_event_domain *ved) +static void vm_event_ring_cancel_slot(struct vm_event_domain *ved) { - if( !vm_event_check(ved) ) - return; - spin_lock(&ved->lock); - vm_event_release_slot(ved); + vm_event_ring_release_slot(to_ring(ved)); spin_unlock(&ved->lock); } -static int vm_event_grab_slot(struct vm_event_domain *ved, int foreign) +static int vm_event_ring_grab_slot(struct vm_event_ring_domain *impl, int foreign) { unsigned int avail_req; int rc; - if ( !ved->ring_page ) + if ( !impl->ring_page ) return -EOPNOTSUPP; - spin_lock(&ved->lock); + spin_lock(&impl->ved.lock); - avail_req = vm_event_ring_available(ved); + avail_req = vm_event_ring_available(impl); rc = -EBUSY; if ( avail_req == 0 ) goto out; if ( !foreign ) - ved->target_producers++; + impl->target_producers++; else - ved->foreign_producers++; + impl->foreign_producers++; rc = 0; out: - spin_unlock(&ved->lock); + spin_unlock(&impl->ved.lock); return rc; } /* Simple try_grab wrapper for use in the wait_event() macro. */ -static int vm_event_wait_try_grab(struct vm_event_domain *ved, int *rc) +static int vm_event_ring_wait_try_grab(struct vm_event_ring_domain *impl, int *rc) { - *rc = vm_event_grab_slot(ved, 0); + *rc = vm_event_ring_grab_slot(impl, 0); return *rc; } -/* Call vm_event_grab_slot() until the ring doesn't exist, or is available. */ -static int vm_event_wait_slot(struct vm_event_domain *ved) +/* Call vm_event_ring_grab_slot() until the ring doesn't exist, or is available. */ +static int vm_event_ring_wait_slot(struct vm_event_ring_domain *impl) { int rc = -EBUSY; - wait_event(ved->wq, vm_event_wait_try_grab(ved, &rc) != -EBUSY); + wait_event(impl->wq, vm_event_ring_wait_try_grab(impl, &rc) != -EBUSY); return rc; } -bool vm_event_check(struct vm_event_domain *ved) +static bool vm_event_ring_check(struct vm_event_domain *ved) { - return ved && ved->ring_page; + return to_ring(ved)->ring_page != NULL; } /* * Determines whether or not the current vCPU belongs to the target domain, * and calls the appropriate wait function. If it is a guest vCPU, then we - * use vm_event_wait_slot() to reserve a slot. As long as there is a ring, + * use vm_event_ring_wait_slot() to reserve a slot. As long as there is a ring, * this function will always return 0 for a guest. 
For a non-guest, we check * for space and return -EBUSY if the ring is not available. * @@ -533,36 +527,33 @@ bool vm_event_check(struct vm_event_domain *ved) * 0: a spot has been reserved * */ -int __vm_event_claim_slot(struct vm_event_domain *ved, bool allow_sleep) +static int vm_event_ring_claim_slot(struct vm_event_domain *ved, bool allow_sleep) { - if ( !vm_event_check(ved) ) - return -EOPNOTSUPP; - if ( (current->domain == ved->d) && allow_sleep ) - return vm_event_wait_slot(ved); + return vm_event_ring_wait_slot(to_ring(ved)); else - return vm_event_grab_slot(ved, current->domain != ved->d); + return vm_event_ring_grab_slot(to_ring(ved), current->domain != ved->d); } #ifdef CONFIG_HAS_MEM_PAGING /* Registered with Xen-bound event channel for incoming notifications. */ static void mem_paging_notification(struct vcpu *v, unsigned int port) { - vm_event_resume(v->domain, v->domain->vm_event_paging); + vm_event_ring_resume(to_ring(v->domain->vm_event_paging)); } #endif /* Registered with Xen-bound event channel for incoming notifications. */ static void monitor_notification(struct vcpu *v, unsigned int port) { - vm_event_resume(v->domain, v->domain->vm_event_monitor); + vm_event_ring_resume(to_ring(v->domain->vm_event_monitor)); } #ifdef CONFIG_HAS_MEM_SHARING /* Registered with Xen-bound event channel for incoming notifications. */ static void mem_sharing_notification(struct vcpu *v, unsigned int port) { - vm_event_resume(v->domain, v->domain->vm_event_share); + vm_event_ring_resume(to_ring(v->domain->vm_event_share)); } #endif @@ -571,32 +562,32 @@ void vm_event_cleanup(struct domain *d) { #ifdef CONFIG_HAS_MEM_PAGING if ( vm_event_check(d->vm_event_paging) ) - { - /* Destroying the wait queue head means waking up all - * queued vcpus. This will drain the list, allowing - * the disable routine to complete. It will also drop - * all domain refs the wait-queued vcpus are holding. - * Finally, because this code path involves previously - * pausing the domain (domain_kill), unpausing the - * vcpus causes no harm. */ - destroy_waitqueue_head(&d->vm_event_paging->wq); - (void)vm_event_disable(d, &d->vm_event_paging); - } + d->vm_event_paging->ops->cleanup(&d->vm_event_paging); #endif + if ( vm_event_check(d->vm_event_monitor) ) - { - destroy_waitqueue_head(&d->vm_event_monitor->wq); - (void)vm_event_disable(d, &d->vm_event_monitor); - } + d->vm_event_monitor->ops->cleanup(&d->vm_event_monitor); + #ifdef CONFIG_HAS_MEM_SHARING if ( vm_event_check(d->vm_event_share) ) - { - destroy_waitqueue_head(&d->vm_event_share->wq); - (void)vm_event_disable(d, &d->vm_event_share); - } + d->vm_event_share->ops->cleanup(&d->vm_event_share); #endif } +static void vm_event_ring_cleanup(struct vm_event_domain **_ved) +{ + struct vm_event_ring_domain *impl = to_ring(*_ved); + /* Destroying the wait queue head means waking up all + * queued vcpus. This will drain the list, allowing + * the disable routine to complete. It will also drop + * all domain refs the wait-queued vcpus are holding. + * Finally, because this code path involves previously + * pausing the domain (domain_kill), unpausing the + * vcpus causes no harm. 
*/ + destroy_waitqueue_head(&impl->wq); + (void)vm_event_ring_disable(_ved); +} + int vm_event_domctl(struct domain *d, struct xen_domctl_vm_event_op *vec) { int rc; @@ -666,23 +657,22 @@ int vm_event_domctl(struct domain *d, struct xen_domctl_vm_event_op *vec) break; /* domain_pause() not required here, see XSA-99 */ - rc = vm_event_enable(d, vec, &d->vm_event_paging, _VPF_mem_paging, + rc = vm_event_ring_enable(d, vec, &d->vm_event_paging, _VPF_mem_paging, HVM_PARAM_PAGING_RING_PFN, mem_paging_notification); } break; case XEN_VM_EVENT_DISABLE: - if ( vm_event_check(d->vm_event_paging) ) - { - domain_pause(d); - rc = vm_event_disable(d, &d->vm_event_paging); - domain_unpause(d); - } + if ( !vm_event_check(d->vm_event_paging) ) + break; + domain_pause(d); + rc = vm_event_ring_disable(&d->vm_event_paging); + domain_unpause(d); break; case XEN_VM_EVENT_RESUME: - rc = vm_event_resume(d, d->vm_event_paging); + rc = vm_event_ring_resume(to_ring(d->vm_event_paging)); break; default: @@ -704,23 +694,22 @@ int vm_event_domctl(struct domain *d, struct xen_domctl_vm_event_op *vec) rc = arch_monitor_init_domain(d); if ( rc ) break; - rc = vm_event_enable(d, vec, &d->vm_event_monitor, _VPF_mem_access, + rc = vm_event_ring_enable(d, vec, &d->vm_event_monitor, _VPF_mem_access, HVM_PARAM_MONITOR_RING_PFN, monitor_notification); break; case XEN_VM_EVENT_DISABLE: - if ( vm_event_check(d->vm_event_monitor) ) - { - domain_pause(d); - rc = vm_event_disable(d, &d->vm_event_monitor); - arch_monitor_cleanup_domain(d); - domain_unpause(d); - } + if ( !vm_event_check(d->vm_event_monitor) ) + break; + domain_pause(d); + rc = vm_event_ring_disable(&d->vm_event_monitor); + arch_monitor_cleanup_domain(d); + domain_unpause(d); break; case XEN_VM_EVENT_RESUME: - rc = vm_event_resume(d, d->vm_event_monitor); + rc = vm_event_ring_resume(to_ring(d->vm_event_monitor)); break; default: @@ -749,22 +738,21 @@ int vm_event_domctl(struct domain *d, struct xen_domctl_vm_event_op *vec) break; /* domain_pause() not required here, see XSA-99 */ - rc = vm_event_enable(d, vec, &d->vm_event_share, _VPF_mem_sharing, + rc = vm_event_ring_enable(d, vec, &d->vm_event_share, _VPF_mem_sharing, HVM_PARAM_SHARING_RING_PFN, mem_sharing_notification); break; case XEN_VM_EVENT_DISABLE: - if ( vm_event_check(d->vm_event_share) ) - { - domain_pause(d); - rc = vm_event_disable(d, &d->vm_event_share); - domain_unpause(d); - } + if ( !vm_event_check(d->vm_event_share) ) + break; + domain_pause(d); + rc = vm_event_ring_disable(&d->vm_event_share); + domain_unpause(d); break; case XEN_VM_EVENT_RESUME: - rc = vm_event_resume(d, d->vm_event_share); + rc = vm_event_ring_resume(to_ring(d->vm_event_share)); break; default: @@ -816,6 +804,14 @@ void vm_event_vcpu_unpause(struct vcpu *v) vcpu_unpause(v); } +static const struct vm_event_ops vm_event_ring_ops = { + .check = vm_event_ring_check, + .cleanup = vm_event_ring_cleanup, + .claim_slot = vm_event_ring_claim_slot, + .cancel_slot = vm_event_ring_cancel_slot, + .put_request = vm_event_ring_put_request +}; + /* * Local variables: * mode: C diff --git a/xen/include/xen/vm_event.h b/xen/include/xen/vm_event.h index ff30999..21a3f50 100644 --- a/xen/include/xen/vm_event.h +++ b/xen/include/xen/vm_event.h @@ -23,14 +23,43 @@ #ifndef __VM_EVENT_H__ #define __VM_EVENT_H__ -#include +#include +#include +#include #include +struct domain; +struct vm_event_domain; + +struct vm_event_ops +{ + bool (*check)(struct vm_event_domain *ved); + void (*cleanup)(struct vm_event_domain **_ved); + int (*claim_slot)(struct 
vm_event_domain *ved, bool allow_sleep); + void (*cancel_slot)(struct vm_event_domain *ved); + void (*put_request)(struct vm_event_domain *ved, vm_event_request_t *req); +}; + +struct vm_event_domain +{ + /* Domain reference */ + struct domain *d; + + /* vm_event_ops */ + const struct vm_event_ops *ops; + + /* vm_event domain lock */ + spinlock_t lock; +}; + /* Clean up on domain destruction */ void vm_event_cleanup(struct domain *d); /* Returns whether the VM event domain has been set up */ -bool vm_event_check(struct vm_event_domain *ved); +static inline bool vm_event_check(struct vm_event_domain *ved) +{ + return (ved) && ved->ops->check(ved); +} /* Returns 0 on success, -ENOSYS if there is no ring, -EBUSY if there is no * available space and the caller is a foreign domain. If the guest itself @@ -45,7 +74,14 @@ bool vm_event_check(struct vm_event_domain *ved); * cancel_slot(), both of which are guaranteed to * succeed. */ -int __vm_event_claim_slot(struct vm_event_domain *ved, bool allow_sleep); +static inline int __vm_event_claim_slot(struct vm_event_domain *ved, bool allow_sleep) +{ + if ( !vm_event_check(ved) ) + return -EOPNOTSUPP; + + return ved->ops->claim_slot(ved, allow_sleep); +} + static inline int vm_event_claim_slot(struct vm_event_domain *ved) { return __vm_event_claim_slot(ved, true); @@ -56,10 +92,22 @@ static inline int vm_event_claim_slot_nosleep(struct vm_event_domain *ved) return __vm_event_claim_slot(ved, false); } -void vm_event_cancel_slot(struct vm_event_domain *ved); +static inline void vm_event_cancel_slot(struct vm_event_domain *ved) +{ + if ( !vm_event_check(ved) ) + return; -void vm_event_put_request(struct vm_event_domain *ved, - vm_event_request_t *req); + ved->ops->cancel_slot(ved); +} + +static inline void vm_event_put_request(struct vm_event_domain *ved, + vm_event_request_t *req) +{ + if ( !vm_event_check(ved) ) + return; + + ved->ops->put_request(ved, req); +} int vm_event_domctl(struct domain *d, struct xen_domctl_vm_event_op *vec); From patchwork Tue Jul 16 17:06:21 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Petre Ovidiu PIRCALABU X-Patchwork-Id: 11046523 Return-Path: Received: from mail.wl.linuxfoundation.org (pdx-wl-mail.web.codeaurora.org [172.30.200.125]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id 19FC714DB for ; Tue, 16 Jul 2019 17:08:14 +0000 (UTC) Received: from mail.wl.linuxfoundation.org (localhost [127.0.0.1]) by mail.wl.linuxfoundation.org (Postfix) with ESMTP id F2CF528681 for ; Tue, 16 Jul 2019 17:08:13 +0000 (UTC) Received: by mail.wl.linuxfoundation.org (Postfix, from userid 486) id E6D3628686; Tue, 16 Jul 2019 17:08:13 +0000 (UTC) X-Spam-Checker-Version: SpamAssassin 3.3.1 (2010-03-16) on pdx-wl-mail.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-5.2 required=2.0 tests=BAYES_00,MAILING_LIST_MULTI, RCVD_IN_DNSWL_MED autolearn=ham version=3.3.1 Received: from lists.xenproject.org (lists.xenproject.org [192.237.175.120]) (using TLSv1.2 with cipher AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by mail.wl.linuxfoundation.org (Postfix) with ESMTPS id 3BAEB28681 for ; Tue, 16 Jul 2019 17:08:11 +0000 (UTC) Received: from localhost ([127.0.0.1] helo=lists.xenproject.org) by lists.xenproject.org with esmtp (Exim 4.89) (envelope-from ) id 1hnQuQ-0005pm-LO; Tue, 16 Jul 2019 17:06:34 +0000 Received: from us1-rack-dfw2.inumbo.com ([104.130.134.6]) by lists.xenproject.org with esmtp (Exim 4.89) (envelope-from ) id 
From: Petre Pircalabu
To: xen-devel@lists.xenproject.org
Date: Tue, 16 Jul 2019 20:06:21 +0300
Message-Id: <79a1e2aebc55c20f58cb8c925320de202b17d8f2.1563293545.git.ppircalabu@bitdefender.com>
Subject: [Xen-devel] [PATCH v2 07/10] vm_event: Add vm_event_ng interface
Cc: Petre Pircalabu, Stefano Stabellini, Razvan Cojocaru, Wei Liu, Konrad Rzeszutek Wilk, George Dunlap, Andrew Cooper, Ian Jackson, Tim Deegan, Julien Grall, Tamas K Lengyel, Jan Beulich, Alexandru Isaila, Roger Pau Monné

In high throughput introspection scenarios where lots of monitor vm_events are generated, the ring buffer can fill up before the monitor application gets a chance to handle all the requests, thus blocking other vcpus which will have to wait for a slot to become available.

This patch adds support for a different mechanism to handle synchronous vm_event requests / responses. As each synchronous request pauses the vcpu until the corresponding response is handled, it can be stored in a slotted memory buffer (one per vcpu) shared between the hypervisor and the controlling domain.
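For reference, a consumer of the new interface is expected to drive each slot as sketched below. This only illustrates the slot state machine added by this patch; handle_slot() is a hypothetical helper, the policy decision is elided, and the final notification of slot->port is left as a comment:

    static void handle_slot(struct vm_event_slot *slot, uint32_t vcpu_id)
    {
        vm_event_response_t rsp = {};

        /* Xen sets the slot to SUBMIT when it publishes a request. */
        if ( slot->state != STATE_VM_EVENT_SLOT_SUBMIT )
            return;

        /* ... inspect slot->u.req and take a policy decision here ... */

        rsp.version = VM_EVENT_INTERFACE_VERSION;
        rsp.vcpu_id = vcpu_id;
        rsp.reason  = slot->u.req.reason;
        rsp.flags   = slot->u.req.flags & VM_EVENT_FLAG_VCPU_PAUSED;

        /* Request and response share a union, so write the copy back last. */
        slot->u.rsp = rsp;
        slot->state = STATE_VM_EVENT_SLOT_FINISH;

        /* Notifying slot->port makes Xen consume the response, set the
         * slot back to IDLE and unpause the vCPU. */
    }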
Signed-off-by: Petre Pircalabu --- tools/libxc/include/xenctrl.h | 9 + tools/libxc/xc_mem_paging.c | 9 +- tools/libxc/xc_memshr.c | 9 +- tools/libxc/xc_monitor.c | 23 +- tools/libxc/xc_private.h | 12 +- tools/libxc/xc_vm_event.c | 100 ++++++- xen/arch/x86/mm.c | 7 + xen/common/vm_event.c | 595 +++++++++++++++++++++++++++++++++++------- xen/include/public/domctl.h | 10 +- xen/include/public/memory.h | 2 + xen/include/public/vm_event.h | 16 ++ xen/include/xen/vm_event.h | 11 +- 12 files changed, 684 insertions(+), 119 deletions(-) diff --git a/tools/libxc/include/xenctrl.h b/tools/libxc/include/xenctrl.h index f3af710..1293b0f 100644 --- a/tools/libxc/include/xenctrl.h +++ b/tools/libxc/include/xenctrl.h @@ -128,6 +128,7 @@ enum xc_error_code { typedef enum xc_error_code xc_error_code; +struct xenforeignmemory_resource_handle; /* * INITIALIZATION FUNCTIONS @@ -2007,6 +2008,14 @@ int xc_vm_event_get_version(xc_interface *xch); void *xc_monitor_enable(xc_interface *xch, uint32_t domain_id, uint32_t *port); int xc_monitor_disable(xc_interface *xch, uint32_t domain_id); int xc_monitor_resume(xc_interface *xch, uint32_t domain_id); + +/* Monitor NG interface */ +int xc_monitor_ng_enable(xc_interface *xch, uint32_t domain_id, + struct xenforeignmemory_resource_handle **fres, + int *num_channels, void **p_addr); +int xc_monitor_ng_disable(xc_interface *xch, uint32_t domain_id, + struct xenforeignmemory_resource_handle **fres); + /* * Get a bitmap of supported monitor events in the form * (1 << XEN_DOMCTL_MONITOR_EVENT_*). diff --git a/tools/libxc/xc_mem_paging.c b/tools/libxc/xc_mem_paging.c index a88c0cc..978008a 100644 --- a/tools/libxc/xc_mem_paging.c +++ b/tools/libxc/xc_mem_paging.c @@ -49,7 +49,7 @@ int xc_mem_paging_enable(xc_interface *xch, uint32_t domain_id, return xc_vm_event_control(xch, domain_id, XEN_VM_EVENT_ENABLE, XEN_VM_EVENT_TYPE_PAGING, - port); + 0, port); } int xc_mem_paging_disable(xc_interface *xch, uint32_t domain_id) @@ -57,15 +57,12 @@ int xc_mem_paging_disable(xc_interface *xch, uint32_t domain_id) return xc_vm_event_control(xch, domain_id, XEN_VM_EVENT_DISABLE, XEN_VM_EVENT_TYPE_PAGING, - NULL); + 0, NULL); } int xc_mem_paging_resume(xc_interface *xch, uint32_t domain_id) { - return xc_vm_event_control(xch, domain_id, - XEN_VM_EVENT_RESUME, - XEN_VM_EVENT_TYPE_PAGING, - NULL); + return xc_vm_event_resume(xch, domain_id, XEN_VM_EVENT_TYPE_PAGING, 0); } int xc_mem_paging_nominate(xc_interface *xch, uint32_t domain_id, uint64_t gfn) diff --git a/tools/libxc/xc_memshr.c b/tools/libxc/xc_memshr.c index 1c4a706..44d4f23 100644 --- a/tools/libxc/xc_memshr.c +++ b/tools/libxc/xc_memshr.c @@ -54,7 +54,7 @@ int xc_memshr_ring_enable(xc_interface *xch, return xc_vm_event_control(xch, domid, XEN_VM_EVENT_ENABLE, XEN_VM_EVENT_TYPE_SHARING, - port); + 0, port); } int xc_memshr_ring_disable(xc_interface *xch, @@ -63,7 +63,7 @@ int xc_memshr_ring_disable(xc_interface *xch, return xc_vm_event_control(xch, domid, XEN_VM_EVENT_DISABLE, XEN_VM_EVENT_TYPE_SHARING, - NULL); + 0, NULL); } static int xc_memshr_memop(xc_interface *xch, uint32_t domid, @@ -203,10 +203,7 @@ int xc_memshr_range_share(xc_interface *xch, int xc_memshr_domain_resume(xc_interface *xch, uint32_t domid) { - return xc_vm_event_control(xch, domid, - XEN_VM_EVENT_RESUME, - XEN_VM_EVENT_TYPE_SHARING, - NULL); + return xc_vm_event_resume(xch, domid, XEN_VM_EVENT_TYPE_SHARING, 0); } int xc_memshr_debug_gfn(xc_interface *xch, diff --git a/tools/libxc/xc_monitor.c b/tools/libxc/xc_monitor.c index f05b53d..d8d62c4 100644 --- 
a/tools/libxc/xc_monitor.c +++ b/tools/libxc/xc_monitor.c @@ -33,15 +33,12 @@ int xc_monitor_disable(xc_interface *xch, uint32_t domain_id) return xc_vm_event_control(xch, domain_id, XEN_VM_EVENT_DISABLE, XEN_VM_EVENT_TYPE_MONITOR, - NULL); + 0, NULL); } int xc_monitor_resume(xc_interface *xch, uint32_t domain_id) { - return xc_vm_event_control(xch, domain_id, - XEN_VM_EVENT_RESUME, - XEN_VM_EVENT_TYPE_MONITOR, - NULL); + return xc_vm_event_resume(xch, domain_id, XEN_VM_EVENT_TYPE_MONITOR, 0); } int xc_monitor_get_capabilities(xc_interface *xch, uint32_t domain_id, @@ -246,6 +243,22 @@ int xc_monitor_emul_unimplemented(xc_interface *xch, uint32_t domain_id, return do_domctl(xch, &domctl); } +int xc_monitor_ng_enable(xc_interface *xch, uint32_t domain_id, + xenforeignmemory_resource_handle **fres, + int *num_channels, void **p_addr) +{ + return xc_vm_event_ng_enable(xch, domain_id, XEN_VM_EVENT_TYPE_MONITOR, + fres, num_channels, p_addr); +} + + +int xc_monitor_ng_disable(xc_interface *xch, uint32_t domain_id, + xenforeignmemory_resource_handle **fres) +{ + return xc_vm_event_ng_disable(xch, domain_id, XEN_VM_EVENT_TYPE_MONITOR, + fres); +} + /* * Local variables: * mode: C diff --git a/tools/libxc/xc_private.h b/tools/libxc/xc_private.h index e4f7c3a..9cd6069 100644 --- a/tools/libxc/xc_private.h +++ b/tools/libxc/xc_private.h @@ -412,13 +412,23 @@ int xc_ffs64(uint64_t x); * vm_event operations. Internal use only. */ int xc_vm_event_control(xc_interface *xch, uint32_t domain_id, unsigned int op, - unsigned int type, uint32_t *port); + unsigned int type, unsigned int flags, uint32_t *port); +int xc_vm_event_resume(xc_interface *xch, uint32_t domain_id, unsigned int type, + unsigned int flags); /* * Enables vm_event and returns the mapped ring page indicated by type. * type can be XEN_VM_EVENT_TYPE_(PAGING/MONITOR/SHARING) */ void *xc_vm_event_enable(xc_interface *xch, uint32_t domain_id, int type, uint32_t *port); +/* + * Enables/Disables vm_event using the new interface. 
+ */ +int xc_vm_event_ng_enable(xc_interface *xch, uint32_t domain_id, int type, + xenforeignmemory_resource_handle **fres, + int *num_channels, void **p_addr); +int xc_vm_event_ng_disable(xc_interface *xch, uint32_t domain_id, int type, + xenforeignmemory_resource_handle **fres); int do_dm_op(xc_interface *xch, uint32_t domid, unsigned int nr_bufs, ...); diff --git a/tools/libxc/xc_vm_event.c b/tools/libxc/xc_vm_event.c index 044bf71..d070e64 100644 --- a/tools/libxc/xc_vm_event.c +++ b/tools/libxc/xc_vm_event.c @@ -22,8 +22,12 @@ #include "xc_private.h" +#ifndef PFN_UP +#define PFN_UP(x) (((x) + XC_PAGE_SIZE-1) >> XC_PAGE_SHIFT) +#endif /* PFN_UP */ + int xc_vm_event_control(xc_interface *xch, uint32_t domain_id, unsigned int op, - unsigned int type, uint32_t *port) + unsigned int type, unsigned int flags, uint32_t *port) { DECLARE_DOMCTL; int rc; @@ -32,6 +36,7 @@ int xc_vm_event_control(xc_interface *xch, uint32_t domain_id, unsigned int op, domctl.domain = domain_id; domctl.u.vm_event_op.op = op; domctl.u.vm_event_op.type = type; + domctl.u.vm_event_op.flags = flags; rc = do_domctl(xch, &domctl); if ( !rc && port ) @@ -113,7 +118,7 @@ void *xc_vm_event_enable(xc_interface *xch, uint32_t domain_id, int type, goto out; } - rc1 = xc_vm_event_control(xch, domain_id, XEN_VM_EVENT_ENABLE, type, port); + rc1 = xc_vm_event_control(xch, domain_id, XEN_VM_EVENT_ENABLE, type, 0, port); if ( rc1 != 0 ) { PERROR("Failed to enable vm_event\n"); @@ -164,6 +169,97 @@ int xc_vm_event_get_version(xc_interface *xch) return rc; } +int xc_vm_event_resume(xc_interface *xch, uint32_t domain_id, + unsigned int type, unsigned int flags) +{ + DECLARE_DOMCTL; + + domctl.cmd = XEN_DOMCTL_vm_event_op; + domctl.domain = domain_id; + domctl.u.vm_event_op.op = XEN_VM_EVENT_RESUME; + domctl.u.vm_event_op.type = type; + domctl.u.vm_event_op.flags = flags; + domctl.u.vm_event_op.u.resume.vcpu_id = 0; + + return do_domctl(xch, &domctl); +} + +int xc_vm_event_ng_enable(xc_interface *xch, uint32_t domain_id, int type, + xenforeignmemory_resource_handle **fres, + int *num_channels, void **p_addr) +{ + int rc1, rc2; + xc_dominfo_t info; + unsigned long nr_frames; + + if ( !fres || !num_channels || ! 
p_addr ) + return -EINVAL; + + /* Get the numbers of vcpus */ + if ( xc_domain_getinfo(xch, domain_id, 1, &info) != 1 || + info.domid != domain_id ) + { + PERROR("xc_domain_getinfo failed.\n"); + return -ESRCH; + } + + *num_channels = info.max_vcpu_id + 1; + + rc1 = xc_domain_pause(xch, domain_id); + if ( rc1 ) + { + PERROR("Unable to pause domain\n"); + return rc1; + } + + rc1 = xc_vm_event_control(xch, domain_id, XEN_VM_EVENT_ENABLE, + type, XEN_VM_EVENT_FLAGS_NG_OP, NULL); + if ( rc1 ) + { + PERROR("Failed to enable vm_event\n"); + goto out; + } + + nr_frames = PFN_UP(*num_channels * sizeof(struct vm_event_slot)); + + *fres = xenforeignmemory_map_resource(xch->fmem, domain_id, + XENMEM_resource_vm_event, + XEN_VM_EVENT_TYPE_MONITOR, 0, + nr_frames, p_addr, + PROT_READ | PROT_WRITE, 0); + if ( !*fres ) + { + xc_vm_event_control(xch, domain_id, XEN_VM_EVENT_DISABLE, + type, XEN_VM_EVENT_FLAGS_NG_OP, NULL); + ERROR("Failed to map vm_event resource"); + rc1 = -errno; + goto out; + } + +out: + rc2 = xc_domain_unpause(xch, domain_id); + if ( rc1 || rc2 ) + { + if ( rc2 ) + PERROR("Unable to pause domain\n"); + + if ( rc1 == 0 ) + rc1 = rc2; + } + + return rc1; +} + +int xc_vm_event_ng_disable(xc_interface *xch, uint32_t domain_id, int type, + xenforeignmemory_resource_handle **fres) +{ + xenforeignmemory_unmap_resource(xch->fmem, *fres); + *fres = NULL; + + return xc_vm_event_control(xch, domain_id, XEN_VM_EVENT_DISABLE, + type, XEN_VM_EVENT_FLAGS_NG_OP, NULL); +} + /* * Local variables: * mode: C diff --git a/xen/arch/x86/mm.c b/xen/arch/x86/mm.c index df2c013..768df4f 100644 --- a/xen/arch/x86/mm.c +++ b/xen/arch/x86/mm.c @@ -119,6 +119,7 @@ #include #include #include +#include #include #include #include @@ -4555,6 +4556,12 @@ int arch_acquire_resource(struct domain *d, unsigned int type, } #endif + case XENMEM_resource_vm_event: + rc = vm_event_ng_get_frames(d, id, frame, nr_frames, mfn_list); + if ( !rc ) + *flags |= XENMEM_rsrc_acq_caller_owned; + break; + default: rc = -EOPNOTSUPP; break; diff --git a/xen/common/vm_event.c b/xen/common/vm_event.c index e6a7a29..3f9be97 100644 --- a/xen/common/vm_event.c +++ b/xen/common/vm_event.c @@ -25,6 +25,7 @@ #include #include #include +#include #include #include #include @@ -35,6 +36,78 @@ #define xen_rmb() smp_rmb() #define xen_wmb() smp_wmb() +static int vm_event_ring_pfn_param(uint32_t type) +{ + switch( type ) + { +#ifdef CONFIG_HAS_MEM_PAGING + case XEN_VM_EVENT_TYPE_PAGING: + return HVM_PARAM_PAGING_RING_PFN; +#endif + case XEN_VM_EVENT_TYPE_MONITOR: + return HVM_PARAM_MONITOR_RING_PFN; +#ifdef CONFIG_HAS_MEM_SHARING + case XEN_VM_EVENT_TYPE_SHARING: + return HVM_PARAM_SHARING_RING_PFN; +#endif + }; + + ASSERT_UNREACHABLE(); + return -1; +} + +static int vm_event_pause_flag(uint32_t type) +{ + switch( type ) + { +#ifdef CONFIG_HAS_MEM_PAGING + case XEN_VM_EVENT_TYPE_PAGING: + return _VPF_mem_paging; +#endif + case XEN_VM_EVENT_TYPE_MONITOR: + return _VPF_mem_access; +#ifdef CONFIG_HAS_MEM_SHARING + case XEN_VM_EVENT_TYPE_SHARING: + return _VPF_mem_sharing; +#endif + }; + + ASSERT_UNREACHABLE(); + return -1; +} + +#ifdef CONFIG_HAS_MEM_PAGING +static void mem_paging_notification(struct vcpu *v, unsigned int port); +#endif +static void monitor_notification(struct vcpu *v, unsigned int port); +#ifdef CONFIG_HAS_MEM_SHARING +static void mem_sharing_notification(struct vcpu *v, unsigned int port); +#endif + +static xen_event_channel_notification_t vm_event_notification_fn(uint32_t type) +{ + switch( type ) + { +#ifdef CONFIG_HAS_MEM_PAGING + 
case XEN_VM_EVENT_TYPE_PAGING: + return mem_paging_notification; +#endif + case XEN_VM_EVENT_TYPE_MONITOR: + return monitor_notification; +#ifdef CONFIG_HAS_MEM_SHARING + case XEN_VM_EVENT_TYPE_SHARING: + return mem_sharing_notification; +#endif + }; + + ASSERT_UNREACHABLE(); + return NULL; +} + +/* + * VM event ring implementation; + */ + #define to_ring(_ved) container_of((_ved), struct vm_event_ring_domain, ved) /* VM event ring implementation */ @@ -67,12 +140,12 @@ static const struct vm_event_ops vm_event_ring_ops; static int vm_event_ring_enable( struct domain *d, struct xen_domctl_vm_event_op *vec, - struct vm_event_domain **p_ved, - int pause_flag, - int param, - xen_event_channel_notification_t notification_fn) + struct vm_event_domain **p_ved) { int rc; + int param = vm_event_ring_pfn_param(vec->type); + int pause_flag = vm_event_pause_flag(vec->type); + xen_event_channel_notification_t fn = vm_event_notification_fn(vec->type); unsigned long ring_gfn = d->arch.hvm.params[param]; struct vm_event_ring_domain *impl; @@ -111,8 +184,7 @@ static int vm_event_ring_enable( (vm_event_sring_t *)impl->ring_page, PAGE_SIZE); - rc = alloc_unbound_xen_event_channel(d, 0, current->domain->domain_id, - notification_fn); + rc = alloc_unbound_xen_event_channel(d, 0, current->domain->domain_id, fn); if ( rc < 0 ) goto err; @@ -242,6 +314,7 @@ static int vm_event_ring_disable(struct vm_event_domain **p_ved) xfree(impl); *p_ved = NULL; + return 0; } @@ -365,6 +438,51 @@ static int vm_event_ring_get_response(struct vm_event_ring_domain *impl, return rc; } +static void vm_event_handle_response(struct domain *d, struct vcpu *v, + vm_event_response_t *rsp) +{ + /* Check flags which apply only when the vCPU is paused */ + if ( atomic_read(&v->vm_event_pause_count) ) + { +#ifdef CONFIG_HAS_MEM_PAGING + if ( rsp->reason == VM_EVENT_REASON_MEM_PAGING ) + p2m_mem_paging_resume(d, rsp); +#endif + + /* + * Check emulation flags in the arch-specific handler only, as it + * has to set arch-specific flags when supported, and to avoid + * bitmask overhead when it isn't supported. + */ + vm_event_emulate_check(v, rsp); + + /* + * Check in arch-specific handler to avoid bitmask overhead when + * not supported. + */ + vm_event_register_write_resume(v, rsp); + + /* + * Check in arch-specific handler to avoid bitmask overhead when + * not supported. + */ + vm_event_toggle_singlestep(d, v, rsp); + + /* Check for altp2m switch */ + if ( rsp->flags & VM_EVENT_FLAG_ALTERNATE_P2M ) + p2m_altp2m_check(v, rsp->altp2m_idx); + + if ( rsp->flags & VM_EVENT_FLAG_SET_REGISTERS ) + vm_event_set_registers(v, rsp); + + if ( rsp->flags & VM_EVENT_FLAG_GET_NEXT_INTERRUPT ) + vm_event_monitor_next_interrupt(v); + + if ( rsp->flags & VM_EVENT_FLAG_VCPU_PAUSED ) + vm_event_vcpu_unpause(v); + } +} + /* * Pull all responses from the given ring and unpause the corresponding vCPU * if required. Based on the response type, here we can also call custom @@ -373,22 +491,20 @@ static int vm_event_ring_get_response(struct vm_event_ring_domain *impl, * Note: responses are handled the same way regardless of which ring they * arrive on. 
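 *
 * For illustration, the minimal well-formed response a consumer writes
 * back for a request "req" looks like this (a sketch; real handlers may
 * set further VM_EVENT_FLAG_* bits):
 *
 *     rsp.version = VM_EVENT_INTERFACE_VERSION;
 *     rsp.vcpu_id = req.vcpu_id;
 *     rsp.reason  = req.reason;
 *     rsp.flags   = req.flags & VM_EVENT_FLAG_VCPU_PAUSED;
 *
 * so that the version and vcpu_id checks below pass and a paused vCPU
 * gets unpaused again.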
*/ -static int vm_event_ring_resume(struct vm_event_ring_domain *impl) +static int vm_event_ring_resume(struct vm_event_domain *ved, struct vcpu *v) { vm_event_response_t rsp; - - if ( unlikely(!impl || !vm_event_check(&impl->ved)) ) - return -ENODEV; + struct vm_event_ring_domain *impl = to_ring(ved); /* - * vm_event_resume() runs in either XEN_VM_EVENT_* domctls, or + * vm_event_ring_resume() runs in either XEN_VM_EVENT_* domctls, or * EVTCHN_send context from the introspection consumer. Both contexts * are guaranteed not to be the subject of vm_event responses. * While we could ASSERT(v != current) for each VCPU in d in the loop * below, this covers the case where we would need to iterate over all * of them more succintly. */ - ASSERT(impl->ved.d != current->domain); + ASSERT(ved->d != current->domain); /* Pull all responses off the ring. */ while ( vm_event_ring_get_response(impl, &rsp) ) @@ -402,7 +518,7 @@ static int vm_event_ring_resume(struct vm_event_ring_domain *impl) } /* Validate the vcpu_id in the response. */ - v = domain_vcpu(impl->ved.d, rsp.vcpu_id); + v = domain_vcpu(ved->d, rsp.vcpu_id); if ( !v ) continue; @@ -410,47 +526,7 @@ static int vm_event_ring_resume(struct vm_event_ring_domain *impl) * In some cases the response type needs extra handling, so here * we call the appropriate handlers. */ - - /* Check flags which apply only when the vCPU is paused */ - if ( atomic_read(&v->vm_event_pause_count) ) - { -#ifdef CONFIG_HAS_MEM_PAGING - if ( rsp.reason == VM_EVENT_REASON_MEM_PAGING ) - p2m_mem_paging_resume(impl->ved.d, &rsp); -#endif - - /* - * Check emulation flags in the arch-specific handler only, as it - * has to set arch-specific flags when supported, and to avoid - * bitmask overhead when it isn't supported. - */ - vm_event_emulate_check(v, &rsp); - - /* - * Check in arch-specific handler to avoid bitmask overhead when - * not supported. - */ - vm_event_register_write_resume(v, &rsp); - - /* - * Check in arch-specific handler to avoid bitmask overhead when - * not supported. - */ - vm_event_toggle_singlestep(impl->ved.d, v, &rsp); - - /* Check for altp2m switch */ - if ( rsp.flags & VM_EVENT_FLAG_ALTERNATE_P2M ) - p2m_altp2m_check(v, rsp.altp2m_idx); - - if ( rsp.flags & VM_EVENT_FLAG_SET_REGISTERS ) - vm_event_set_registers(v, &rsp); - - if ( rsp.flags & VM_EVENT_FLAG_GET_NEXT_INTERRUPT ) - vm_event_monitor_next_interrupt(v); - - if ( rsp.flags & VM_EVENT_FLAG_VCPU_PAUSED ) - vm_event_vcpu_unpause(v); - } + vm_event_handle_response(ved->d, v, &rsp); } return 0; @@ -535,59 +611,361 @@ static int vm_event_ring_claim_slot(struct vm_event_domain *ved, bool allow_slee return vm_event_ring_grab_slot(to_ring(ved), current->domain != ved->d); } -#ifdef CONFIG_HAS_MEM_PAGING -/* Registered with Xen-bound event channel for incoming notifications. */ -static void mem_paging_notification(struct vcpu *v, unsigned int port) +static void vm_event_ring_cleanup(struct vm_event_domain *ved) { - vm_event_ring_resume(to_ring(v->domain->vm_event_paging)); + struct vm_event_ring_domain *impl = to_ring(ved); + /* Destroying the wait queue head means waking up all + * queued vcpus. This will drain the list, allowing + * the disable routine to complete. It will also drop + * all domain refs the wait-queued vcpus are holding. + * Finally, because this code path involves previously + * pausing the domain (domain_kill), unpausing the + * vcpus causes no harm. */ + destroy_waitqueue_head(&impl->wq); } -#endif -/* Registered with Xen-bound event channel for incoming notifications. 
*/ -static void monitor_notification(struct vcpu *v, unsigned int port) +/* + * VM event NG (new generation) + */ +#define to_channels(_ved) container_of((_ved), \ + struct vm_event_channels_domain, ved) + +struct vm_event_channels_domain +{ + /* VM event domain */ + struct vm_event_domain ved; + /* shared channels buffer */ + struct vm_event_slot *slots; + /* the buffer size (number of frames) */ + unsigned int nr_frames; + /* buffer's mnf list */ + mfn_t mfn[0]; +}; + +static const struct vm_event_ops vm_event_channels_ops; + +static void vm_event_channels_free_buffer(struct vm_event_channels_domain *impl) { - vm_event_ring_resume(to_ring(v->domain->vm_event_monitor)); + int i; + + vunmap(impl->slots); + impl->slots = NULL; + + for ( i = 0; i < impl->nr_frames; i++ ) + free_domheap_page(mfn_to_page(impl->mfn[i])); } -#ifdef CONFIG_HAS_MEM_SHARING -/* Registered with Xen-bound event channel for incoming notifications. */ -static void mem_sharing_notification(struct vcpu *v, unsigned int port) +static int vm_event_channels_alloc_buffer(struct vm_event_channels_domain *impl) { - vm_event_ring_resume(to_ring(v->domain->vm_event_share)); + int i = 0; + + impl->slots = vzalloc(impl->nr_frames * PAGE_SIZE); + if ( !impl->slots ) + return -ENOMEM; + + for ( i = 0; i < impl->nr_frames; i++ ) + impl->mfn[i] = vmap_to_mfn(impl->slots + i * PAGE_SIZE); + + for ( i = 0; i < impl->nr_frames; i++ ) + share_xen_page_with_guest(mfn_to_page(impl->mfn[i]), current->domain, + SHARE_rw); + + return 0; } + +static int vm_event_channels_enable( + struct domain *d, + struct xen_domctl_vm_event_op *vec, + struct vm_event_domain **p_ved) +{ + int rc, i = 0; + xen_event_channel_notification_t fn = vm_event_notification_fn(vec->type); + unsigned int nr_frames = PFN_UP(d->max_vcpus * sizeof(struct vm_event_slot)); + struct vm_event_channels_domain *impl; + + if ( *p_ved ) + return -EBUSY; + + impl = _xzalloc(sizeof(struct vm_event_channels_domain) + + nr_frames * sizeof(mfn_t), + __alignof__(struct vm_event_channels_domain)); + if ( unlikely(!impl) ) + return -ENOMEM; + + spin_lock_init(&impl->ved.lock); + + impl->nr_frames = nr_frames; + impl->ved.d = d; + impl->ved.ops = &vm_event_channels_ops; + + rc = vm_event_init_domain(d); + if ( rc < 0 ) + goto err; + + rc = vm_event_channels_alloc_buffer(impl); + if ( rc ) + goto err; + + for ( i = 0; i < d->max_vcpus; i++ ) + { + rc = alloc_unbound_xen_event_channel(d, i, current->domain->domain_id, fn); + if ( rc < 0 ) + goto err; + + impl->slots[i].port = rc; + impl->slots[i].state = STATE_VM_EVENT_SLOT_IDLE; + } + + *p_ved = &impl->ved; + + return 0; + +err: + while ( --i >= 0 ) + evtchn_close(d, impl->slots[i].port, 0); + xfree(impl); + + return rc; +} + +static int vm_event_channels_disable(struct vm_event_domain **p_ved) +{ + struct vcpu *v; + struct domain *d = (*p_ved)->d; + struct vm_event_channels_domain *impl = to_channels(*p_ved); + int i; + + spin_lock(&impl->ved.lock); + + for_each_vcpu( impl->ved.d, v ) + { + if ( atomic_read(&v->vm_event_pause_count) ) + vm_event_vcpu_unpause(v); + } + + for ( i = 0; i < impl->ved.d->max_vcpus; i++ ) + evtchn_close(impl->ved.d, impl->slots[i].port, 0); + + vm_event_channels_free_buffer(impl); + + vm_event_cleanup_domain(d); + + spin_unlock(&impl->ved.lock); + + xfree(impl); + *p_ved = NULL; + + return 0; +} + +static bool vm_event_channels_check(struct vm_event_domain *ved) +{ + return to_channels(ved)->slots != NULL; +} + +static void vm_event_channels_cleanup(struct vm_event_domain *ved) +{ +} + +static int 
vm_event_channels_claim_slot(struct vm_event_domain *ved, + bool allow_sleep) +{ + return 0; +} + +static void vm_event_channels_cancel_slot(struct vm_event_domain *ved) +{ +} + +static void vm_event_channels_put_request(struct vm_event_domain *ved, + vm_event_request_t *req) +{ + struct vm_event_channels_domain *impl = to_channels(ved); + struct vm_event_slot *slot; + + ASSERT( req->vcpu_id >= 0 && req->vcpu_id < ved->d->max_vcpus ); + + slot = &impl->slots[req->vcpu_id]; + + if ( current->domain != ved->d ) + { + req->flags |= VM_EVENT_FLAG_FOREIGN; +#ifndef NDEBUG + if ( !(req->flags & VM_EVENT_FLAG_VCPU_PAUSED) ) + gdprintk(XENLOG_G_WARNING, "d%dv%d was not paused.\n", + ved->d->domain_id, req->vcpu_id); #endif + } + + req->version = VM_EVENT_INTERFACE_VERSION; + + spin_lock(&impl->ved.lock); + if ( slot->state != STATE_VM_EVENT_SLOT_IDLE ) + { + gdprintk(XENLOG_G_WARNING, "The VM event slot for d%dv%d is not IDLE.\n", + impl->ved.d->domain_id, req->vcpu_id); + spin_unlock(&impl->ved.lock); + return; + } + + slot->u.req = *req; + slot->state = STATE_VM_EVENT_SLOT_SUBMIT; + spin_unlock(&impl->ved.lock); + notify_via_xen_event_channel(impl->ved.d, slot->port); +} + +static int vm_event_channels_get_response(struct vm_event_channels_domain *impl, + struct vcpu *v, vm_event_response_t *rsp) +{ + struct vm_event_slot *slot = &impl->slots[v->vcpu_id]; + int rc = 0; + + ASSERT( slot != NULL ); + spin_lock(&impl->ved.lock); + + if ( slot->state != STATE_VM_EVENT_SLOT_FINISH ) + { + gdprintk(XENLOG_G_WARNING, "The VM event slot state for d%dv%d is invalid.\n", + impl->ved.d->domain_id, v->vcpu_id); + rc = -1; + goto out; + } + + *rsp = slot->u.rsp; + slot->state = STATE_VM_EVENT_SLOT_IDLE; + +out: + spin_unlock(&impl->ved.lock); + + return rc; +} + +static int vm_event_channels_resume(struct vm_event_domain *ved, struct vcpu *v) +{ + vm_event_response_t rsp; + struct vm_event_channels_domain *impl = to_channels(ved); + + ASSERT(ved->d != current->domain); + + if ( vm_event_channels_get_response(impl, v, &rsp) || + rsp.version != VM_EVENT_INTERFACE_VERSION || + rsp.vcpu_id != v->vcpu_id ) + return -1; + + vm_event_handle_response(ved->d, v, &rsp); + + return 0; +} + +int vm_event_ng_get_frames(struct domain *d, unsigned int id, + unsigned long frame, unsigned int nr_frames, + xen_pfn_t mfn_list[]) +{ + struct vm_event_domain *ved; + int i; + + switch (id ) + { + case XEN_VM_EVENT_TYPE_MONITOR: + ved = d->vm_event_monitor; + break; + + default: + return -ENOSYS; + } + + if ( !vm_event_check(ved) ) + return -EINVAL; + + if ( frame != 0 || nr_frames != to_channels(ved)->nr_frames ) + return -EINVAL; + + spin_lock(&ved->lock); + + for ( i = 0; i < to_channels(ved)->nr_frames; i++ ) + mfn_list[i] = mfn_x(to_channels(ved)->mfn[i]); + + spin_unlock(&ved->lock); + + return 0; +} + +/* + * vm_event implementation agnostic functions + */ /* Clean up on domain destruction */ void vm_event_cleanup(struct domain *d) { #ifdef CONFIG_HAS_MEM_PAGING if ( vm_event_check(d->vm_event_paging) ) - d->vm_event_paging->ops->cleanup(&d->vm_event_paging); + { + d->vm_event_paging->ops->cleanup(d->vm_event_paging); + d->vm_event_paging->ops->disable(&d->vm_event_paging); + } #endif if ( vm_event_check(d->vm_event_monitor) ) - d->vm_event_monitor->ops->cleanup(&d->vm_event_monitor); + { + d->vm_event_monitor->ops->cleanup(d->vm_event_monitor); + d->vm_event_monitor->ops->disable(&d->vm_event_monitor); + } #ifdef CONFIG_HAS_MEM_SHARING if ( vm_event_check(d->vm_event_share) ) - 
d->vm_event_share->ops->cleanup(&d->vm_event_share); + { + d->vm_event_share->ops->cleanup(d->vm_event_share); + d->vm_event_share->ops->disable(&d->vm_event_share); + } #endif } -static void vm_event_ring_cleanup(struct vm_event_domain **_ved) +static int vm_event_enable(struct domain *d, + struct xen_domctl_vm_event_op *vec, + struct vm_event_domain **p_ved) { - struct vm_event_ring_domain *impl = to_ring(*_ved); - /* Destroying the wait queue head means waking up all - * queued vcpus. This will drain the list, allowing - * the disable routine to complete. It will also drop - * all domain refs the wait-queued vcpus are holding. - * Finally, because this code path involves previously - * pausing the domain (domain_kill), unpausing the - * vcpus causes no harm. */ - destroy_waitqueue_head(&impl->wq); - (void)vm_event_ring_disable(_ved); + return ( vec->flags & XEN_VM_EVENT_FLAGS_NG_OP ) ? + vm_event_channels_enable(d, vec, p_ved) : + vm_event_ring_enable(d, vec, p_ved); } +static int vm_event_resume(struct vm_event_domain *ved, struct vcpu *v) +{ + if ( !vm_event_check(ved) ) + return -ENODEV; + + if ( !v ) + return -EINVAL; + + return ved->ops->resume(ved, v); +} + +#ifdef CONFIG_HAS_MEM_PAGING +/* Registered with Xen-bound event channel for incoming notifications. */ +static void mem_paging_notification(struct vcpu *v, unsigned int port) +{ + vm_event_resume(v->domain->vm_event_paging, v); +} +#endif + +/* Registered with Xen-bound event channel for incoming notifications. */ +static void monitor_notification(struct vcpu *v, unsigned int port) +{ + vm_event_resume(v->domain->vm_event_monitor, v); +} + +#ifdef CONFIG_HAS_MEM_SHARING +/* Registered with Xen-bound event channel for incoming notifications. */ +static void mem_sharing_notification(struct vcpu *v, unsigned int port) +{ + vm_event_resume(v->domain->vm_event_share, v); +} +#endif + +/* + * vm_event domctl interface + */ + int vm_event_domctl(struct domain *d, struct xen_domctl_vm_event_op *vec) { int rc; @@ -632,6 +1010,13 @@ int vm_event_domctl(struct domain *d, struct xen_domctl_vm_event_op *vec) { rc = -EINVAL; + /* + * The NG interface is only supported by XEN_VM_EVENT_TYPE_MONITOR + * for now. 
+ */ + if ( vec->flags & XEN_VM_EVENT_FLAGS_NG_OP ) + break; + switch( vec->op ) { case XEN_VM_EVENT_ENABLE: @@ -657,9 +1042,7 @@ int vm_event_domctl(struct domain *d, struct xen_domctl_vm_event_op *vec) break; /* domain_pause() not required here, see XSA-99 */ - rc = vm_event_ring_enable(d, vec, &d->vm_event_paging, _VPF_mem_paging, - HVM_PARAM_PAGING_RING_PFN, - mem_paging_notification); + rc = vm_event_enable(d, vec, &d->vm_event_paging); } break; @@ -667,12 +1050,13 @@ int vm_event_domctl(struct domain *d, struct xen_domctl_vm_event_op *vec) if ( !vm_event_check(d->vm_event_paging) ) break; domain_pause(d); - rc = vm_event_ring_disable(&d->vm_event_paging); + rc = d->vm_event_paging->ops->disable(&d->vm_event_paging); domain_unpause(d); break; case XEN_VM_EVENT_RESUME: - rc = vm_event_ring_resume(to_ring(d->vm_event_paging)); + rc = vm_event_resume(d->vm_event_paging, + domain_vcpu(d, vec->u.resume.vcpu_id)); break; default: @@ -694,22 +1078,23 @@ int vm_event_domctl(struct domain *d, struct xen_domctl_vm_event_op *vec) rc = arch_monitor_init_domain(d); if ( rc ) break; - rc = vm_event_ring_enable(d, vec, &d->vm_event_monitor, _VPF_mem_access, - HVM_PARAM_MONITOR_RING_PFN, - monitor_notification); + + rc = vm_event_enable(d, vec, &d->vm_event_monitor); + break; case XEN_VM_EVENT_DISABLE: if ( !vm_event_check(d->vm_event_monitor) ) break; domain_pause(d); - rc = vm_event_ring_disable(&d->vm_event_monitor); + rc = d->vm_event_monitor->ops->disable(&d->vm_event_monitor); arch_monitor_cleanup_domain(d); domain_unpause(d); break; case XEN_VM_EVENT_RESUME: - rc = vm_event_ring_resume(to_ring(d->vm_event_monitor)); + rc = vm_event_resume(d->vm_event_monitor, + domain_vcpu(d, vec->u.resume.vcpu_id)); break; default: @@ -724,6 +1109,13 @@ int vm_event_domctl(struct domain *d, struct xen_domctl_vm_event_op *vec) { rc = -EINVAL; + /* + * The NG interface is only supported by XEN_VM_EVENT_TYPE_MONITOR + * for now. 
+ */ + if ( vec->flags & XEN_VM_EVENT_FLAGS_NG_OP ) + break; + switch( vec->op ) { case XEN_VM_EVENT_ENABLE: @@ -738,21 +1130,20 @@ int vm_event_domctl(struct domain *d, struct xen_domctl_vm_event_op *vec) break; /* domain_pause() not required here, see XSA-99 */ - rc = vm_event_ring_enable(d, vec, &d->vm_event_share, _VPF_mem_sharing, - HVM_PARAM_SHARING_RING_PFN, - mem_sharing_notification); + rc = vm_event_enable(d, vec, &d->vm_event_share); break; case XEN_VM_EVENT_DISABLE: if ( !vm_event_check(d->vm_event_share) ) break; domain_pause(d); - rc = vm_event_ring_disable(&d->vm_event_share); + rc = d->vm_event_share->ops->disable(&d->vm_event_share); domain_unpause(d); break; case XEN_VM_EVENT_RESUME: - rc = vm_event_ring_resume(to_ring(d->vm_event_share)); + rc = vm_event_resume(d->vm_event_share, + domain_vcpu(d, vec->u.resume.vcpu_id)); break; default: @@ -809,7 +1200,19 @@ static const struct vm_event_ops vm_event_ring_ops = { .cleanup = vm_event_ring_cleanup, .claim_slot = vm_event_ring_claim_slot, .cancel_slot = vm_event_ring_cancel_slot, - .put_request = vm_event_ring_put_request + .disable = vm_event_ring_disable, + .put_request = vm_event_ring_put_request, + .resume = vm_event_ring_resume, +}; + +static const struct vm_event_ops vm_event_channels_ops = { + .check = vm_event_channels_check, + .cleanup = vm_event_channels_cleanup, + .claim_slot = vm_event_channels_claim_slot, + .cancel_slot = vm_event_channels_cancel_slot, + .disable = vm_event_channels_disable, + .put_request = vm_event_channels_put_request, + .resume = vm_event_channels_resume, }; /* diff --git a/xen/include/public/domctl.h b/xen/include/public/domctl.h index 234d8c5..fc7420c 100644 --- a/xen/include/public/domctl.h +++ b/xen/include/public/domctl.h @@ -38,7 +38,7 @@ #include "hvm/save.h" #include "memory.h" -#define XEN_DOMCTL_INTERFACE_VERSION 0x00000011 +#define XEN_DOMCTL_INTERFACE_VERSION 0x00000012 /* * NB. xen_domctl.domain is an IN/OUT parameter for this operation. 
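Putting the toolstack pieces together, a monitor application would drive the NG interface roughly as follows (a sketch with error handling trimmed; the variable names are illustrative):

    xenforeignmemory_resource_handle *fres;
    struct vm_event_slot *slots;
    int num_channels;

    if ( !xc_monitor_ng_enable(xch, domain_id, &fres, &num_channels,
                               (void **)&slots) )
    {
        /* slots[0..num_channels-1] now maps one vm_event_slot per vCPU,
         * each carrying its own event channel port in slots[i].port. */

        /* ... bind the ports, poll them, answer events via the slots ... */

        xc_monitor_ng_disable(xch, domain_id, &fres);
    }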
@@ -781,12 +781,20 @@ struct xen_domctl_gdbsx_domstatus { struct xen_domctl_vm_event_op { uint32_t op; /* XEN_VM_EVENT_* */ uint32_t type; /* XEN_VM_EVENT_TYPE_* */
+ /* Use the NG interface */
+#define _XEN_VM_EVENT_FLAGS_NG_OP 0
+#define XEN_VM_EVENT_FLAGS_NG_OP (1U << _XEN_VM_EVENT_FLAGS_NG_OP)
+ uint32_t flags;
 union { struct { uint32_t port; /* OUT: event channel for ring */ } enable;
+ struct {
+ uint32_t vcpu_id; /* IN: vcpu_id */
+ } resume;
+ uint32_t version;
 } u; };
diff --git a/xen/include/public/memory.h b/xen/include/public/memory.h index 68ddadb..2e8912e 100644
--- a/xen/include/public/memory.h
+++ b/xen/include/public/memory.h
@@ -612,6 +612,7 @@ struct xen_mem_acquire_resource {
 #define XENMEM_resource_ioreq_server 0
 #define XENMEM_resource_grant_table 1
+#define XENMEM_resource_vm_event 2
 /* * IN - a type-specific resource identifier, which must be zero
@@ -619,6 +620,7 @@ struct xen_mem_acquire_resource {
 * * type == XENMEM_resource_ioreq_server -> id == ioreq server id
 * type == XENMEM_resource_grant_table -> id defined below
+ * type == XENMEM_resource_vm_event -> id == vm_event type
 */ uint32_t id;
diff --git a/xen/include/public/vm_event.h b/xen/include/public/vm_event.h index c48bc21..2f2160b 100644
--- a/xen/include/public/vm_event.h
+++ b/xen/include/public/vm_event.h
@@ -421,6 +421,22 @@ typedef struct vm_event_st { DEFINE_RING_TYPES(vm_event, vm_event_request_t, vm_event_response_t);
+/* VM Event slot state */
+#define STATE_VM_EVENT_SLOT_IDLE 0 /* the slot data is invalid */
+#define STATE_VM_EVENT_SLOT_SUBMIT 1 /* a request was submitted */
+#define STATE_VM_EVENT_SLOT_FINISH 2 /* a response was issued */
+
+struct vm_event_slot
+{
+ uint32_t port; /* evtchn for notifications to/from helper */
+ uint32_t state:4;
+ uint32_t pad:28;
+ union {
+ vm_event_request_t req;
+ vm_event_response_t rsp;
+ } u;
+};
+
 #endif /* defined(__XEN__) || defined(__XEN_TOOLS__) */ #endif /* _XEN_PUBLIC_VM_EVENT_H */
diff --git a/xen/include/xen/vm_event.h b/xen/include/xen/vm_event.h index 21a3f50..0468269 100644
--- a/xen/include/xen/vm_event.h
+++ b/xen/include/xen/vm_event.h
@@ -30,14 +30,17 @@ struct domain; struct vm_event_domain;
+struct xen_domctl_vm_event_op;
 struct vm_event_ops { bool (*check)(struct vm_event_domain *ved);
- void (*cleanup)(struct vm_event_domain **_ved);
- int (*claim_slot)(struct vm_event_domain *ved, bool allow_sleep);
+ void (*cleanup)(struct vm_event_domain *p_ved);
+ int (*claim_slot)(struct vm_event_domain *ved, bool allow_sleep);
 void (*cancel_slot)(struct vm_event_domain *ved);
+ int (*disable)(struct vm_event_domain **p_ved);
 void (*put_request)(struct vm_event_domain *ved, vm_event_request_t *req);
+ int (*resume)(struct vm_event_domain *ved, struct vcpu *v);
 };
 struct vm_event_domain
@@ -111,6 +114,10 @@ static inline void vm_event_put_request(struct vm_event_domain *ved, int vm_event_domctl(struct domain *d, struct xen_domctl_vm_event_op *vec);
+int vm_event_ng_get_frames(struct domain *d, unsigned int id,
+ unsigned long frame, unsigned int nr_frames,
+ xen_pfn_t mfn_list[]);
+
 void vm_event_vcpu_pause(struct vcpu *v);
 void vm_event_vcpu_unpause(struct vcpu *v);

From patchwork Tue Jul 16 17:06:22 2019
X-Patchwork-Submitter: Petre Ovidiu PIRCALABU
X-Patchwork-Id: 11046533
From: Petre Pircalabu
To: xen-devel@lists.xenproject.org
Date: Tue, 16 Jul 2019 20:06:22 +0300
Message-Id: <2f0d996d9fde04c1a12cee7a1cb58486cf7788d6.1563293545.git.ppircalabu@bitdefender.com>
Subject: [Xen-devel] [PATCH v2 08/10] xen-access: Use getopt_long for cmdline parsing
Cc: Petre Pircalabu, Tamas K Lengyel, Razvan Cojocaru, Wei Liu, Ian Jackson, Alexandru Isaila

This simplifies the command line parsing logic and makes it easier to add new test parameters.
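For illustration, the same pattern reduced to a standalone program. The option table mirrors the one in the diff below; the program around it is an editor's sketch, not code from the patch:

    #include <getopt.h>
    #include <stdio.h>

    int main(int argc, char *argv[])
    {
        int c, required = 0, option_index = 0;
        struct option long_options[] =
        {
            { "mem-access-listener", no_argument, 0, 'm' },
            { 0, 0, 0, 0 },   /* getopt_long(3) requires a zero-filled terminator */
        };

        /* getopt_long() returns -1 once all options are consumed. */
        while ( (c = getopt_long(argc, argv, "m", long_options, &option_index)) != -1 )
        {
            switch ( c )
            {
            case 'm':
                required = 1;
                break;

            default:
                return -1;
            }
        }

        /* Positional arguments (e.g. <domain_id> <command>) start at argv[optind]. */
        printf("required=%d, positional args=%d\n", required, argc - optind);
        return 0;
    }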
Signed-off-by: Petre Pircalabu Acked-by: Tamas K Lengyel Reviewed-by: Alexandru Isaila
---
tools/tests/xen-access/xen-access.c | 60 +++++++++++++++++++++----------------
1 file changed, 36 insertions(+), 25 deletions(-)

diff --git a/tools/tests/xen-access/xen-access.c b/tools/tests/xen-access/xen-access.c index 6aaee16..8a3eea5 100644
--- a/tools/tests/xen-access/xen-access.c
+++ b/tools/tests/xen-access/xen-access.c
@@ -37,6 +37,7 @@
 #include #include #include
+#include
 #include #include
@@ -397,93 +398,103 @@ int main(int argc, char *argv[]) uint16_t altp2m_view_id = 0; char* progname = argv[0];
- argv++;
- argc--;
+ char* command;
+ int c;
+ int option_index;
+ struct option long_options[] =
+ {
+ { "mem-access-listener", no_argument, 0, 'm' },
+ { 0, 0, 0, 0 }, /* terminator required by getopt_long(3) */
+ };
- if ( argc == 3 && argv[0][0] == '-' )
+ while ( 1 )
 {
- if ( !strcmp(argv[0], "-m") )
- required = 1;
- else
+ c = getopt_long(argc, argv, "m", long_options, &option_index);
+ if ( c == -1 )
+ break;
+
+ switch ( c )
 {
+ case 'm':
+ required = 1;
+ break;
+
+ default:
 usage(progname); return -1; }
- argv++;
- argc--;
 }
- if ( argc != 2 )
+ if ( argc - optind != 2 )
 { usage(progname); return -1; }
- domain_id = atoi(argv[0]);
- argv++;
- argc--;
+ domain_id = atoi(argv[optind++]);
+ command = argv[optind];
- if ( !strcmp(argv[0], "write") )
+ if ( !strcmp(command, "write") )
 { default_access = XENMEM_access_rx; after_first_access = XENMEM_access_rwx; memaccess = 1; }
- else if ( !strcmp(argv[0], "exec") )
+ else if ( !strcmp(command, "exec") )
 { default_access = XENMEM_access_rw; after_first_access = XENMEM_access_rwx; memaccess = 1; }
 #if defined(__i386__) || defined(__x86_64__)
- else if ( !strcmp(argv[0], "breakpoint") )
+ else if ( !strcmp(command, "breakpoint") )
 { breakpoint = 1; }
- else if ( !strcmp(argv[0], "altp2m_write") )
+ else if ( !strcmp(command, "altp2m_write") )
 { default_access = XENMEM_access_rx; altp2m = 1; memaccess = 1; }
- else if ( !strcmp(argv[0], "altp2m_exec") )
+ else if ( !strcmp(command, "altp2m_exec") )
 { default_access = XENMEM_access_rw; altp2m = 1; memaccess = 1; }
- else if ( !strcmp(argv[0], "altp2m_write_no_gpt") )
+ else if ( !strcmp(command, "altp2m_write_no_gpt") )
 { default_access = XENMEM_access_rw; altp2m_write_no_gpt = 1; memaccess = 1; altp2m = 1; }
- else if ( !strcmp(argv[0], "debug") )
+ else if ( !strcmp(command, "debug") )
 { debug = 1; }
- else if ( !strcmp(argv[0], "cpuid") )
+ else if ( !strcmp(command, "cpuid") )
 { cpuid = 1; }
- else if ( !strcmp(argv[0], "desc_access") )
+ else if ( !strcmp(command, "desc_access") )
 { desc_access = 1; }
- else if ( !strcmp(argv[0], "write_ctrlreg_cr4") )
+ else if ( !strcmp(command, "write_ctrlreg_cr4") )
 { write_ctrlreg_cr4 = 1; }
 #elif defined(__arm__) || defined(__aarch64__)
- else if ( !strcmp(argv[0], "privcall") )
+ else if ( !strcmp(command, "privcall") )
 { privcall = 1; }
 #endif
 else {
- usage(argv[0]);
+ usage(progname);
 return -1; }
@@ -494,7 +505,7 @@ int main(int argc, char *argv[]) return 1; }
- DPRINTF("starting %s %u\n", argv[0], domain_id);
+ DPRINTF("starting %s %u\n", command, domain_id);
 /* ensure that if we get a signal, we'll do cleanup, then exit */ act.sa_handler = close_handler;

From patchwork Tue Jul 16 17:06:23 2019
X-Patchwork-Submitter: Petre Ovidiu PIRCALABU
X-Patchwork-Id: 11046515
From: Petre Pircalabu
To: xen-devel@lists.xenproject.org
Date: Tue, 16 Jul 2019 20:06:23 +0300
Message-Id: <96ce48a99eb224291d99c946d19f051b4ab668b6.1563293545.git.ppircalabu@bitdefender.com>
Subject: [Xen-devel] [PATCH v2 09/10] xen-access: Code cleanup
Cc: Petre Pircalabu, Tamas K Lengyel, Razvan Cojocaru, Wei Liu, Ian Jackson, Alexandru Isaila

Clean up the xen-access code in accordance with the Xen style guide.
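The conventions in question are mostly mechanical. A condensed illustration of the rules this diff applies (explicit NULL comparisons, spaces inside the parentheses of control statements, the switch brace on its own line, a blank line between case blocks); the function itself is an editor's sketch, not code from the patch:

    static void style_example(int *p, int reason)
    {
        if ( p == NULL )            /* explicit comparison against NULL */
            return;

        switch ( reason )           /* spaces inside ( ), brace on its own line */
        {
        case 1:
            break;

        case 2:                     /* blank line between case blocks */
            break;

        default:
            break;
        }

        for ( ; *p != 0; p++ )      /* space after '(' and before ')' */
            ;
    }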
Signed-off-by: Petre Pircalabu Acked-by: Tamas K Lengyel Reviewed-by: Alexandru Isaila
---
tools/tests/xen-access/xen-access.c | 57 +++++++++++++++++++++----------------
1 file changed, 33 insertions(+), 24 deletions(-)

diff --git a/tools/tests/xen-access/xen-access.c b/tools/tests/xen-access/xen-access.c index 8a3eea5..abf17a2 100644
--- a/tools/tests/xen-access/xen-access.c
+++ b/tools/tests/xen-access/xen-access.c
@@ -137,7 +137,7 @@ int xenaccess_teardown(xc_interface *xch, xenaccess_t *xenaccess) return 0; /* Tear down domain xenaccess in Xen */
- if ( xenaccess->vm_event.ring_page )
+ if ( xenaccess->vm_event.ring_page != NULL )
 munmap(xenaccess->vm_event.ring_page, XC_PAGE_SIZE); if ( mem_access_enable )
@@ -195,7 +195,7 @@ xenaccess_t *xenaccess_init(xc_interface **xch_r, domid_t domain_id) int rc; xch = xc_interface_open(NULL, NULL, 0);
- if ( !xch )
+ if ( xch == NULL )
 goto err_iface; DPRINTF("xenaccess init\n");
@@ -218,16 +218,17 @@ xenaccess_t *xenaccess_init(xc_interface **xch_r, domid_t domain_id) &xenaccess->vm_event.evtchn_port); if ( xenaccess->vm_event.ring_page == NULL ) {
- switch ( errno ) {
- case EBUSY:
- ERROR("xenaccess is (or was) active on this domain");
- break;
- case ENODEV:
- ERROR("EPT not supported for this guest");
- break;
- default:
- perror("Error enabling mem_access");
- break;
+ switch ( errno )
+ {
+ case EBUSY:
+ ERROR("xenaccess is (or was) active on this domain");
+ break;
+ case ENODEV:
+ ERROR("EPT not supported for this guest");
+ break;
+ default:
+ perror("Error enabling mem_access");
+ break;
 } goto err; }
@@ -283,15 +284,12 @@ xenaccess_t *xenaccess_init(xc_interface **xch_r, domid_t domain_id) } err_iface:
+ return NULL;
 }
-static inline
-int control_singlestep(
- xc_interface *xch,
- domid_t domain_id,
- unsigned long vcpu,
- bool enable)
+static inline int control_singlestep(xc_interface *xch, domid_t domain_id,
+ unsigned long vcpu, bool enable)
 { uint32_t op = enable ? XEN_DOMCTL_DEBUG_OP_SINGLE_STEP_ON : XEN_DOMCTL_DEBUG_OP_SINGLE_STEP_OFF;
@@ -361,11 +359,11 @@ void usage(char* progname) { fprintf(stderr, "Usage: %s [-m] write|exec", progname); #if defined(__i386__) || defined(__x86_64__)
- fprintf(stderr, "|breakpoint|altp2m_write|altp2m_exec|debug|cpuid|desc_access|write_ctrlreg_cr4|altp2m_write_no_gpt");
+ fprintf(stderr, "|breakpoint|altp2m_write|altp2m_exec|debug|cpuid|desc_access|write_ctrlreg_cr4|altp2m_write_no_gpt");
 #elif defined(__arm__) || defined(__aarch64__)
- fprintf(stderr, "|privcall");
+ fprintf(stderr, "|privcall");
 #endif
- fprintf(stderr,
+ fprintf(stderr,
 "\n" "Logs first page writes, execs, or breakpoint traps that occur on the domain.\n" "\n"
@@ -562,7 +560,7 @@ int main(int argc, char *argv[]) DPRINTF("altp2m view created with id %u\n", altp2m_view_id); DPRINTF("Setting altp2m mem_access permissions.. ");
- for(; gfn < xenaccess->max_gpfn; ++gfn)
+ for( ; gfn < xenaccess->max_gpfn; ++gfn )
 { rc = xc_altp2m_set_mem_access( xch, domain_id, altp2m_view_id, gfn, default_access);
@@ -671,7 +669,7 @@ int main(int argc, char *argv[]) } /* Wait for access */
- for (;;)
+ for ( ; ; )
 { if ( interrupted ) {
@@ -736,7 +734,8 @@ int main(int argc, char *argv[]) rsp.flags = (req.flags & VM_EVENT_FLAG_VCPU_PAUSED); rsp.reason = req.reason;
- switch (req.reason) {
+ switch ( req.reason )
+ {
 case VM_EVENT_REASON_MEM_ACCESS: if ( !shutting_down ) {
@@ -791,6 +790,7 @@ int main(int argc, char *argv[]) rsp.u.mem_access = req.u.mem_access; break;
+
 case VM_EVENT_REASON_SOFTWARE_BREAKPOINT: printf("Breakpoint: rip=%016"PRIx64", gfn=%"PRIx64" (vcpu %d)\n", req.data.regs.x86.rip,
@@ -809,6 +809,7 @@ int main(int argc, char *argv[]) continue; } break;
+
 case VM_EVENT_REASON_PRIVILEGED_CALL: printf("Privileged call: pc=%"PRIx64" (vcpu %d)\n", req.data.regs.arm.pc,
@@ -818,6 +819,7 @@ int main(int argc, char *argv[]) rsp.data.regs.arm.pc += 4; rsp.flags |= VM_EVENT_FLAG_SET_REGISTERS; break;
+
 case VM_EVENT_REASON_SINGLESTEP: printf("Singlestep: rip=%016"PRIx64", vcpu %d, altp2m %u\n", req.data.regs.x86.rip,
@@ -835,6 +837,7 @@ rsp.flags |= VM_EVENT_FLAG_TOGGLE_SINGLESTEP; break;
+
 case VM_EVENT_REASON_DEBUG_EXCEPTION: printf("Debug exception: rip=%016"PRIx64", vcpu %d. Type: %u. Length: %u\n", req.data.regs.x86.rip,
@@ -856,6 +859,7 @@ } break;
+
 case VM_EVENT_REASON_CPUID: printf("CPUID executed: rip=%016"PRIx64", vcpu %d. Insn length: %"PRIu32" " \ "0x%"PRIx32" 0x%"PRIx32": EAX=0x%"PRIx64" EBX=0x%"PRIx64" ECX=0x%"PRIx64" EDX=0x%"PRIx64"\n",
@@ -872,6 +876,7 @@ rsp.data = req.data; rsp.data.regs.x86.rip += req.u.cpuid.insn_length; break;
+
 case VM_EVENT_REASON_DESCRIPTOR_ACCESS: printf("Descriptor access: rip=%016"PRIx64", vcpu %d: "\ "VMExit info=0x%"PRIx32", descriptor=%d, is write=%d\n",
@@ -882,6 +887,7 @@ req.u.desc_access.is_write); rsp.flags |= VM_EVENT_FLAG_EMULATE; break;
+
 case VM_EVENT_REASON_WRITE_CTRLREG: printf("Control register written: rip=%016"PRIx64", vcpu %d: " "reg=%s, old_value=%016"PRIx64", new_value=%016"PRIx64"\n",
@@ -891,6 +897,7 @@ req.u.write_ctrlreg.old_value, req.u.write_ctrlreg.new_value); break;
+
 case VM_EVENT_REASON_EMUL_UNIMPLEMENTED: if ( altp2m_write_no_gpt && req.flags & VM_EVENT_FLAG_ALTERNATE_P2M ) {
@@ -901,6 +908,7 @@ rsp.altp2m_idx = 0; } break;
+
 default: fprintf(stderr, "UNKNOWN REASON CODE %d\n", req.reason); }
@@ -941,6 +949,7 @@ exit: rc = rc1; DPRINTF("xenaccess exit code %d\n", rc);
+
 return rc; }

From patchwork Tue Jul 16 17:06:24 2019
X-Patchwork-Submitter: Petre Ovidiu PIRCALABU
X-Patchwork-Id: 11046537
From: Petre Pircalabu
To: xen-devel@lists.xenproject.org
Date: Tue, 16 Jul 2019 20:06:24 +0300
Subject: [Xen-devel] [PATCH v2 10/10] xen-access: Add support for vm_event_ng interface
Cc: Petre Pircalabu, Tamas K Lengyel, Razvan Cojocaru, Wei Liu, Ian Jackson, Alexandru Isaila

Split xen-access in order to accommodate both vm_event interfaces (legacy and NG). By default, the legacy vm_event is selected but this can be changed by adding the '-n' flag in the command line.
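For reference, one service turn of a per-vCPU channel as the monitor application would drive it, assuming the vm_event_slot layout and slot states introduced earlier in this series; handle_request() is a hypothetical stand-in for xen-access' per-reason switch:

    /* Sketch only: assumes the public vm_event.h definitions are in scope. */
    extern void handle_request(const vm_event_request_t *req,
                               vm_event_response_t *rsp);  /* hypothetical */

    static void service_slot(struct vm_event_slot *slot, xenevtchn_handle *xce,
                             int local_port)
    {
        vm_event_response_t rsp;

        if ( slot->state != STATE_VM_EVENT_SLOT_SUBMIT )
            return;                       /* no request pending on this vCPU */

        handle_request(&slot->u.req, &rsp);

        slot->u.rsp = rsp;                /* the response overlays the request */
        slot->state = STATE_VM_EVENT_SLOT_FINISH;
        xenevtchn_notify(xce, local_port);  /* kick Xen on the slot's channel */
    }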
Signed-off-by: Petre Pircalabu --- tools/tests/xen-access/Makefile | 7 +- tools/tests/xen-access/vm-event-ng.c | 183 ++++++++++++++++++ tools/tests/xen-access/vm-event.c | 194 +++++++++++++++++++ tools/tests/xen-access/xen-access.c | 348 +++++++++++------------------------ tools/tests/xen-access/xen-access.h | 91 +++++++++ 5 files changed, 582 insertions(+), 241 deletions(-) create mode 100644 tools/tests/xen-access/vm-event-ng.c create mode 100644 tools/tests/xen-access/vm-event.c create mode 100644 tools/tests/xen-access/xen-access.h diff --git a/tools/tests/xen-access/Makefile b/tools/tests/xen-access/Makefile index 131c9f3..17760d8 100644 --- a/tools/tests/xen-access/Makefile +++ b/tools/tests/xen-access/Makefile @@ -7,6 +7,7 @@ CFLAGS += -DXC_WANT_COMPAT_DEVICEMODEL_API CFLAGS += $(CFLAGS_libxenctrl) CFLAGS += $(CFLAGS_libxenguest) CFLAGS += $(CFLAGS_libxenevtchn) +CFLAGS += $(CFLAGS_libxenforeignmemory) CFLAGS += $(CFLAGS_xeninclude) TARGETS-y := xen-access @@ -25,8 +26,10 @@ clean: .PHONY: distclean distclean: clean -xen-access: xen-access.o Makefile - $(CC) -o $@ $< $(LDFLAGS) $(LDLIBS_libxenctrl) $(LDLIBS_libxenguest) $(LDLIBS_libxenevtchn) +OBJS = xen-access.o vm-event.o vm-event-ng.o + +xen-access: $(OBJS) Makefile + $(CC) -o $@ $(OBJS) $(LDFLAGS) $(LDLIBS_libxenctrl) $(LDLIBS_libxenguest) $(LDLIBS_libxenevtchn) $(LDLIBS_libxenforeignmemory) install uninstall: diff --git a/tools/tests/xen-access/vm-event-ng.c b/tools/tests/xen-access/vm-event-ng.c new file mode 100644 index 0000000..2c79f61 --- /dev/null +++ b/tools/tests/xen-access/vm-event-ng.c @@ -0,0 +1,183 @@ +/* + * vm-event-ng.c + * + * Copyright (c) 2019 Bitdefender S.R.L. + * + * Permission is hereby granted, free of charge, to any person obtaining a copy + * of this software and associated documentation files (the "Software"), to + * deal in the Software without restriction, including without limitation the + * rights to use, copy, modify, merge, publish, distribute, sublicense, and/or + * sell copies of the Software, and to permit persons to whom the Software is + * furnished to do so, subject to the following conditions: + * + * The above copyright notice and this permission notice shall be included in + * all copies or substantial portions of the Software. + * + * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR + * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, + * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE + * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER + * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING + * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER + * DEALINGS IN THE SOFTWARE. 
+ */
+
+#include
+#include
+#include
+#include
+#include
+#include
+#include "xen-access.h"
+
+#ifndef PFN_UP
+#define PFN_UP(x) (((x) + XC_PAGE_SIZE-1) >> XC_PAGE_SHIFT)
+#endif /* PFN_UP */
+
+typedef struct vm_event_channels
+{
+    vm_event_t vme;
+    int num_channels;
+    xenforeignmemory_resource_handle *fres;
+    struct vm_event_slot *slots;
+    int *ports;
+} vm_event_channels_t;
+
+#define to_channels(_vme) container_of((_vme), vm_event_channels_t, vme)
+
+static int vm_event_channels_init(xc_interface *xch, xenevtchn_handle *xce,
+                                  domid_t domain_id, vm_event_ops_t *ops,
+                                  vm_event_t **vm_event)
+{
+    vm_event_channels_t *impl = NULL;
+    int rc, i = 0; /* start at 0 so the unwind loop below is a no-op until a port is bound */
+
+    impl = (vm_event_channels_t *)calloc(1, sizeof(vm_event_channels_t));
+    if ( impl == NULL )
+        return -ENOMEM;
+
+    rc = xc_monitor_ng_enable(xch, domain_id, &impl->fres, &impl->num_channels, (void**)&impl->slots);
+    if ( rc )
+    {
+        ERROR("Failed to enable monitor");
+        free(impl); /* nothing was enabled yet, just drop the allocation */
+        return rc;
+    }
+
+    impl->ports = calloc(impl->num_channels, sizeof(int));
+    if ( impl->ports == NULL )
+    {
+        rc = -ENOMEM;
+        goto err;
+    }
+
+    for ( i = 0; i < impl->num_channels; i++ )
+    {
+        rc = xenevtchn_bind_interdomain(xce, domain_id, impl->slots[i].port);
+        if ( rc < 0 )
+        {
+            ERROR("Failed to bind vm_event_slot port for vcpu %d", i);
+            rc = -errno;
+            goto err;
+        }
+
+        impl->ports[i] = rc;
+    }
+
+    *vm_event = (vm_event_t*) impl;
+    return 0;
+
+err:
+    while ( --i >= 0 )
+        xenevtchn_unbind(xce, impl->ports[i]);
+    free(impl->ports);
+    xc_monitor_ng_disable(xch, domain_id, &impl->fres);
+    free(impl);
+    return rc;
+}
+
+static int vcpu_id_by_port(vm_event_channels_t *impl, int port, int *vcpu_id)
+{
+    int i;
+
+    for ( i = 0; i < impl->num_channels; i++ )
+    {
+        if ( port == impl->ports[i] )
+        {
+            *vcpu_id = i;
+            return 0;
+        }
+    }
+
+    return -EINVAL;
+}
+
+static int vm_event_channels_teardown(vm_event_t *vm_event)
+{
+    vm_event_channels_t *impl = to_channels(vm_event);
+    int rc, i;
+
+    if ( impl == NULL || impl->ports == NULL )
+        return -EINVAL;
+
+    for ( i = 0; i < impl->num_channels; i++ )
+    {
+        rc = xenevtchn_unbind(vm_event->xce, impl->ports[i]);
+        if ( rc != 0 )
+        {
+            ERROR("Error unbinding event port");
+            return rc;
+        }
+    }
+
+    return xc_monitor_ng_disable(impl->vme.xch, impl->vme.domain_id, &impl->fres);
+}
+
+static bool vm_event_channels_get_request(vm_event_t *vm_event, vm_event_request_t *req, int *port)
+{
+    int vcpu_id;
+    vm_event_channels_t *impl = to_channels(vm_event);
+
+    if ( vcpu_id_by_port(impl, *port, &vcpu_id) != 0 )
+        return false;
+
+    if ( impl->slots[vcpu_id].state != STATE_VM_EVENT_SLOT_SUBMIT )
+        return false;
+
+    memcpy(req, &impl->slots[vcpu_id].u.req, sizeof(*req));
+
+    return true;
+}
+
+static void vm_event_channels_put_response(vm_event_t *vm_event, vm_event_response_t *rsp, int port)
+{
+    int vcpu_id;
+    vm_event_channels_t *impl = to_channels(vm_event);
+
+    if ( vcpu_id_by_port(impl, port, &vcpu_id) != 0 )
+        return;
+
+    memcpy(&impl->slots[vcpu_id].u.rsp, rsp, sizeof(*rsp));
+    impl->slots[vcpu_id].state = STATE_VM_EVENT_SLOT_FINISH;
+}
+
+static int vm_event_channels_notify_port(vm_event_t *vm_event, int port)
+{
+    return xenevtchn_notify(vm_event->xce, port);
+}
+
+vm_event_ops_t channel_ops = {
+    .get_request = vm_event_channels_get_request,
+    .put_response = vm_event_channels_put_response,
+    .notify_port = vm_event_channels_notify_port,
+    .init = vm_event_channels_init,
+    .teardown = vm_event_channels_teardown,
+};
+
+/*
+ * Local variables:
+ * mode: C
+ * c-file-style: "BSD"
+ * c-basic-offset: 4
+ * indent-tabs-mode: nil
+ * End:
+ */
diff
--git a/tools/tests/xen-access/vm-event.c b/tools/tests/xen-access/vm-event.c new file mode 100644 index 0000000..e6b20ce --- /dev/null +++ b/tools/tests/xen-access/vm-event.c @@ -0,0 +1,194 @@ +/* + * vm-event.c + * + * Copyright (c) 2019 Bitdefender S.R.L. + * + * Permission is hereby granted, free of charge, to any person obtaining a copy + * of this software and associated documentation files (the "Software"), to + * deal in the Software without restriction, including without limitation the + * rights to use, copy, modify, merge, publish, distribute, sublicense, and/or + * sell copies of the Software, and to permit persons to whom the Software is + * furnished to do so, subject to the following conditions: + * + * The above copyright notice and this permission notice shall be included in + * all copies or substantial portions of the Software. + * + * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR + * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, + * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE + * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER + * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING + * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER + * DEALINGS IN THE SOFTWARE. + */ + +#include +#include +#include +#include +#include +#include "xen-access.h" + +typedef struct vm_event_ring { + vm_event_t vme; + int port; + vm_event_back_ring_t back_ring; + uint32_t evtchn_port; + void *ring_page; +} vm_event_ring_t; + +#define to_ring(_vme) container_of((_vme), vm_event_ring_t, vme) + +static int vm_event_ring_init(xc_interface *xch, xenevtchn_handle *xce, + domid_t domain_id, vm_event_ops_t *ops, + vm_event_t **vm_event) +{ + vm_event_ring_t *impl; + int rc; + + impl = (vm_event_ring_t*) calloc (1, sizeof(vm_event_ring_t)); + if ( impl == NULL ) + return -ENOMEM; + + /* Enable mem_access */ + impl->ring_page = xc_monitor_enable(xch, domain_id, &impl->evtchn_port); + if ( impl->ring_page == NULL ) + { + switch ( errno ) + { + case EBUSY: + ERROR("xenaccess is (or was) active on this domain"); + break; + case ENODEV: + ERROR("EPT not supported for this guest"); + break; + default: + perror("Error enabling mem_access"); + break; + } + rc = -errno; + goto err; + } + + /* Bind event notification */ + rc = xenevtchn_bind_interdomain(xce, domain_id, impl->evtchn_port); + if ( rc < 0 ) + { + ERROR("Failed to bind event channel"); + munmap(impl->ring_page, XC_PAGE_SIZE); + xc_monitor_disable(xch, domain_id); + goto err; + } + + impl->port = rc; + + /* Initialise ring */ + SHARED_RING_INIT((vm_event_sring_t *)impl->ring_page); + BACK_RING_INIT(&impl->back_ring, (vm_event_sring_t *)impl->ring_page, + XC_PAGE_SIZE); + + *vm_event = (vm_event_t*) impl; + return 0; + +err: + free(impl); + return rc; +} + +static int vm_event_ring_teardown(vm_event_t *vm_event) +{ + vm_event_ring_t *impl = to_ring(vm_event); + int rc; + + if ( impl->ring_page != NULL ) + munmap(impl->ring_page, XC_PAGE_SIZE); + + /* Tear down domain xenaccess in Xen */ + rc = xc_monitor_disable(vm_event->xch, vm_event->domain_id); + if ( rc != 0 ) + { + ERROR("Error tearing down domain xenaccess in xen"); + return rc; + } + + /* Unbind VIRQ */ + rc = xenevtchn_unbind(vm_event->xce, impl->port); + if ( rc != 0 ) + { + ERROR("Error unbinding event port"); + return rc; + } + + return 0; +} + +/* + * Note that this function is not thread safe. 
+ */ +static bool vm_event_ring_get_request(vm_event_t *vm_event, vm_event_request_t *req, int *port) +{ + vm_event_back_ring_t *back_ring; + RING_IDX req_cons; + vm_event_ring_t *impl = to_ring(vm_event); + + if ( !RING_HAS_UNCONSUMED_REQUESTS(&impl->back_ring) ) + return false; + + back_ring = &impl->back_ring; + req_cons = back_ring->req_cons; + + /* Copy request */ + memcpy(req, RING_GET_REQUEST(back_ring, req_cons), sizeof(*req)); + req_cons++; + + /* Update ring */ + back_ring->req_cons = req_cons; + back_ring->sring->req_event = req_cons + 1; + + *port = impl->port; + + return true; +} + +/* + * Note that this function is not thread safe. + */ +static void vm_event_ring_put_response(vm_event_t *vm_event, vm_event_response_t *rsp, int port) +{ + vm_event_back_ring_t *back_ring; + RING_IDX rsp_prod; + vm_event_ring_t *impl = to_ring(vm_event); + + back_ring = &impl->back_ring; + rsp_prod = back_ring->rsp_prod_pvt; + + /* Copy response */ + memcpy(RING_GET_RESPONSE(back_ring, rsp_prod), rsp, sizeof(*rsp)); + rsp_prod++; + + /* Update ring */ + back_ring->rsp_prod_pvt = rsp_prod; + RING_PUSH_RESPONSES(back_ring); +} + +static int vm_event_ring_notify_port(vm_event_t *vm_event, int port) +{ + return xenevtchn_notify(vm_event->xce, port); +} + +vm_event_ops_t ring_ops = { + .get_request = vm_event_ring_get_request, + .put_response = vm_event_ring_put_response, + .notify_port = vm_event_ring_notify_port, + .init = vm_event_ring_init, + .teardown = vm_event_ring_teardown, +}; + +/* + * Local variables: + * mode: C + * c-file-style: "BSD" + * c-basic-offset: 4 + * indent-tabs-mode: nil + * End: + */ diff --git a/tools/tests/xen-access/xen-access.c b/tools/tests/xen-access/xen-access.c index abf17a2..9157f23 100644 --- a/tools/tests/xen-access/xen-access.c +++ b/tools/tests/xen-access/xen-access.c @@ -35,14 +35,9 @@ #include #include #include -#include #include #include -#include -#include -#include - #include #if defined(__arm__) || defined(__aarch64__) @@ -52,9 +47,7 @@ #define START_PFN 0ULL #endif -#define DPRINTF(a, b...) fprintf(stderr, a, ## b) -#define ERROR(a, b...) fprintf(stderr, a "\n", ## b) -#define PERROR(a, b...) 
fprintf(stderr, a ": %s\n", ## b, strerror(errno)) +#include "xen-access.h" /* From xen/include/asm-x86/processor.h */ #define X86_TRAP_DEBUG 1 @@ -63,32 +56,14 @@ /* From xen/include/asm-x86/x86-defns.h */ #define X86_CR4_PGE 0x00000080 /* enable global pages */ -typedef struct vm_event { - domid_t domain_id; - xenevtchn_handle *xce_handle; - int port; - vm_event_back_ring_t back_ring; - uint32_t evtchn_port; - void *ring_page; -} vm_event_t; - -typedef struct xenaccess { - xc_interface *xc_handle; - - xen_pfn_t max_gpfn; - - vm_event_t vm_event; -} xenaccess_t; - static int interrupted; -bool evtchn_bind = 0, evtchn_open = 0, mem_access_enable = 0; static void close_handler(int sig) { interrupted = sig; } -int xc_wait_for_event_or_timeout(xc_interface *xch, xenevtchn_handle *xce, unsigned long ms) +static int xc_wait_for_event_or_timeout(xc_interface *xch, xenevtchn_handle *xce, unsigned long ms) { struct pollfd fd = { .fd = xenevtchn_fd(xce), .events = POLLIN | POLLERR }; int port; @@ -129,161 +104,85 @@ int xc_wait_for_event_or_timeout(xc_interface *xch, xenevtchn_handle *xce, unsig return -errno; } -int xenaccess_teardown(xc_interface *xch, xenaccess_t *xenaccess) +static int vm_event_teardown(vm_event_t *vm_event) { int rc; - if ( xenaccess == NULL ) + if ( vm_event == NULL ) return 0; - /* Tear down domain xenaccess in Xen */ - if ( xenaccess->vm_event.ring_page != NULL ) - munmap(xenaccess->vm_event.ring_page, XC_PAGE_SIZE); - - if ( mem_access_enable ) - { - rc = xc_monitor_disable(xenaccess->xc_handle, - xenaccess->vm_event.domain_id); - if ( rc != 0 ) - { - ERROR("Error tearing down domain xenaccess in xen"); - return rc; - } - } - - /* Unbind VIRQ */ - if ( evtchn_bind ) - { - rc = xenevtchn_unbind(xenaccess->vm_event.xce_handle, - xenaccess->vm_event.port); - if ( rc != 0 ) - { - ERROR("Error unbinding event port"); - return rc; - } - } + rc = vm_event->ops->teardown(vm_event); + if ( rc != 0 ) + return rc; /* Close event channel */ - if ( evtchn_open ) + rc = xenevtchn_close(vm_event->xce); + if ( rc != 0 ) { - rc = xenevtchn_close(xenaccess->vm_event.xce_handle); - if ( rc != 0 ) - { - ERROR("Error closing event channel"); - return rc; - } + ERROR("Error closing event channel"); + return rc; } /* Close connection to Xen */ - rc = xc_interface_close(xenaccess->xc_handle); + rc = xc_interface_close(vm_event->xch); if ( rc != 0 ) { ERROR("Error closing connection to xen"); return rc; } - xenaccess->xc_handle = NULL; - - free(xenaccess); return 0; } -xenaccess_t *xenaccess_init(xc_interface **xch_r, domid_t domain_id) +static vm_event_t *vm_event_init(domid_t domain_id, vm_event_ops_t *ops) { - xenaccess_t *xenaccess = 0; + vm_event_t *vm_event; xc_interface *xch; + xenevtchn_handle *xce; + xen_pfn_t max_gpfn; int rc; + if ( ops == NULL ) + return NULL; + xch = xc_interface_open(NULL, NULL, 0); if ( xch == NULL ) - goto err_iface; + goto err; DPRINTF("xenaccess init\n"); - *xch_r = xch; - - /* Allocate memory */ - xenaccess = malloc(sizeof(xenaccess_t)); - memset(xenaccess, 0, sizeof(xenaccess_t)); - - /* Open connection to xen */ - xenaccess->xc_handle = xch; - - /* Set domain id */ - xenaccess->vm_event.domain_id = domain_id; - - /* Enable mem_access */ - xenaccess->vm_event.ring_page = - xc_monitor_enable(xenaccess->xc_handle, - xenaccess->vm_event.domain_id, - &xenaccess->vm_event.evtchn_port); - if ( xenaccess->vm_event.ring_page == NULL ) - { - switch ( errno ) - { - case EBUSY: - ERROR("xenaccess is (or was) active on this domain"); - break; - case ENODEV: - ERROR("EPT 
not supported for this guest"); - break; - default: - perror("Error enabling mem_access"); - break; - } - goto err; - } - mem_access_enable = 1; /* Open event channel */ - xenaccess->vm_event.xce_handle = xenevtchn_open(NULL, 0); - if ( xenaccess->vm_event.xce_handle == NULL ) + xce = xenevtchn_open(NULL, 0); + if ( xce == NULL ) { ERROR("Failed to open event channel"); goto err; } - evtchn_open = 1; - - /* Bind event notification */ - rc = xenevtchn_bind_interdomain(xenaccess->vm_event.xce_handle, - xenaccess->vm_event.domain_id, - xenaccess->vm_event.evtchn_port); - if ( rc < 0 ) - { - ERROR("Failed to bind event channel"); - goto err; - } - evtchn_bind = 1; - xenaccess->vm_event.port = rc; - - /* Initialise ring */ - SHARED_RING_INIT((vm_event_sring_t *)xenaccess->vm_event.ring_page); - BACK_RING_INIT(&xenaccess->vm_event.back_ring, - (vm_event_sring_t *)xenaccess->vm_event.ring_page, - XC_PAGE_SIZE); /* Get max_gpfn */ - rc = xc_domain_maximum_gpfn(xenaccess->xc_handle, - xenaccess->vm_event.domain_id, - &xenaccess->max_gpfn); - + rc = xc_domain_maximum_gpfn(xch, domain_id, &max_gpfn); if ( rc ) { ERROR("Failed to get max gpfn"); goto err; } + DPRINTF("max_gpfn = %"PRI_xen_pfn"\n", max_gpfn); - DPRINTF("max_gpfn = %"PRI_xen_pfn"\n", xenaccess->max_gpfn); + rc = ops->init(xch, xce, domain_id, ops, &vm_event); + if ( rc < 0 ) + goto err; - return xenaccess; + vm_event->xch = xch; + vm_event->xce = xce; + vm_event->domain_id = domain_id; + vm_event->ops = ops; + vm_event->max_gpfn = max_gpfn; - err: - rc = xenaccess_teardown(xch, xenaccess); - if ( rc ) - { - ERROR("Failed to teardown xenaccess structure!\n"); - } + return vm_event; - err_iface: + err: + xenevtchn_close(xce); + xc_interface_close(xch); return NULL; } @@ -298,26 +197,6 @@ static inline int control_singlestep(xc_interface *xch, domid_t domain_id, } /* - * Note that this function is not thread safe. - */ -static void get_request(vm_event_t *vm_event, vm_event_request_t *req) -{ - vm_event_back_ring_t *back_ring; - RING_IDX req_cons; - - back_ring = &vm_event->back_ring; - req_cons = back_ring->req_cons; - - /* Copy request */ - memcpy(req, RING_GET_REQUEST(back_ring, req_cons), sizeof(*req)); - req_cons++; - - /* Update ring */ - back_ring->req_cons = req_cons; - back_ring->sring->req_event = req_cons + 1; -} - -/* * X86 control register names */ static const char* get_x86_ctrl_reg_name(uint32_t index) @@ -335,29 +214,9 @@ static const char* get_x86_ctrl_reg_name(uint32_t index) return names[index]; } -/* - * Note that this function is not thread safe. 
- */ -static void put_response(vm_event_t *vm_event, vm_event_response_t *rsp) -{ - vm_event_back_ring_t *back_ring; - RING_IDX rsp_prod; - - back_ring = &vm_event->back_ring; - rsp_prod = back_ring->rsp_prod_pvt; - - /* Copy response */ - memcpy(RING_GET_RESPONSE(back_ring, rsp_prod), rsp, sizeof(*rsp)); - rsp_prod++; - - /* Update ring */ - back_ring->rsp_prod_pvt = rsp_prod; - RING_PUSH_RESPONSES(back_ring); -} - void usage(char* progname) { - fprintf(stderr, "Usage: %s [-m] write|exec", progname); + fprintf(stderr, "Usage: %s [-m] [-n] write|exec", progname); #if defined(__i386__) || defined(__x86_64__) fprintf(stderr, "|breakpoint|altp2m_write|altp2m_exec|debug|cpuid|desc_access|write_ctrlreg_cr4|altp2m_write_no_gpt"); #elif defined(__arm__) || defined(__aarch64__) @@ -367,19 +226,22 @@ void usage(char* progname) "\n" "Logs first page writes, execs, or breakpoint traps that occur on the domain.\n" "\n" - "-m requires this program to run, or else the domain may pause\n"); + "-m requires this program to run, or else the domain may pause\n" + "-n uses the per-vcpu channels vm_event interface\n"); } +extern vm_event_ops_t ring_ops; +extern vm_event_ops_t channel_ops; + int main(int argc, char *argv[]) { struct sigaction act; domid_t domain_id; - xenaccess_t *xenaccess; + vm_event_t *vm_event; vm_event_request_t req; vm_event_response_t rsp; int rc = -1; int rc1; - xc_interface *xch; xenmem_access_t default_access = XENMEM_access_rwx; xenmem_access_t after_first_access = XENMEM_access_rwx; int memaccess = 0; @@ -394,6 +256,7 @@ int main(int argc, char *argv[]) int write_ctrlreg_cr4 = 0; int altp2m_write_no_gpt = 0; uint16_t altp2m_view_id = 0; + int new_interface = 0; char* progname = argv[0]; char* command; @@ -402,11 +265,12 @@ int main(int argc, char *argv[]) struct option long_options[] = { { "mem-access-listener", no_argument, 0, 'm' }, + { "new-interface", no_argument, 0, 'n' }, }; while ( 1 ) { - c = getopt_long(argc, argv, "m", long_options, &option_index); + c = getopt_long(argc, argv, "mn", long_options, &option_index); if ( c == -1 ) break; @@ -416,6 +280,10 @@ int main(int argc, char *argv[]) required = 1; break; + case 'n': + new_interface = 1; + break; + default: usage(progname); return -1; @@ -495,10 +363,11 @@ int main(int argc, char *argv[]) return -1; } - xenaccess = xenaccess_init(&xch, domain_id); - if ( xenaccess == NULL ) + vm_event = vm_event_init(domain_id, + (new_interface) ? 
&channel_ops : &ring_ops); + if ( vm_event == NULL ) { - ERROR("Error initialising xenaccess"); + ERROR("Error initialising vm_event"); return 1; } @@ -514,7 +383,7 @@ int main(int argc, char *argv[]) sigaction(SIGALRM, &act, NULL); /* Set whether the access listener is required */ - rc = xc_domain_set_access_required(xch, domain_id, required); + rc = xc_domain_set_access_required(vm_event->xch, domain_id, required); if ( rc < 0 ) { ERROR("Error %d setting mem_access listener required\n", rc); @@ -529,13 +398,13 @@ int main(int argc, char *argv[]) if( altp2m_write_no_gpt ) { - rc = xc_monitor_inguest_pagefault(xch, domain_id, 1); + rc = xc_monitor_inguest_pagefault(vm_event->xch, domain_id, 1); if ( rc < 0 ) { ERROR("Error %d setting inguest pagefault\n", rc); goto exit; } - rc = xc_monitor_emul_unimplemented(xch, domain_id, 1); + rc = xc_monitor_emul_unimplemented(vm_event->xch, domain_id, 1); if ( rc < 0 ) { ERROR("Error %d failed to enable emul unimplemented\n", rc); @@ -543,14 +412,15 @@ int main(int argc, char *argv[]) } } - rc = xc_altp2m_set_domain_state( xch, domain_id, 1 ); + rc = xc_altp2m_set_domain_state( vm_event->xch, domain_id, 1 ); if ( rc < 0 ) { ERROR("Error %d enabling altp2m on domain!\n", rc); goto exit; } - rc = xc_altp2m_create_view( xch, domain_id, default_access, &altp2m_view_id ); + rc = xc_altp2m_create_view( vm_event->xch, domain_id, default_access, + &altp2m_view_id ); if ( rc < 0 ) { ERROR("Error %d creating altp2m view!\n", rc); @@ -560,24 +430,24 @@ int main(int argc, char *argv[]) DPRINTF("altp2m view created with id %u\n", altp2m_view_id); DPRINTF("Setting altp2m mem_access permissions.. "); - for( ; gfn < xenaccess->max_gpfn; ++gfn ) + for( ; gfn < vm_event->max_gpfn; ++gfn ) { - rc = xc_altp2m_set_mem_access( xch, domain_id, altp2m_view_id, gfn, - default_access); + rc = xc_altp2m_set_mem_access( vm_event->xch, domain_id, + altp2m_view_id, gfn, default_access); if ( !rc ) perm_set++; } DPRINTF("done! 
Permissions set on %lu pages.\n", perm_set); - rc = xc_altp2m_switch_to_view( xch, domain_id, altp2m_view_id ); + rc = xc_altp2m_switch_to_view( vm_event->xch, domain_id, altp2m_view_id ); if ( rc < 0 ) { ERROR("Error %d switching to altp2m view!\n", rc); goto exit; } - rc = xc_monitor_singlestep( xch, domain_id, 1 ); + rc = xc_monitor_singlestep( vm_event->xch, domain_id, 1 ); if ( rc < 0 ) { ERROR("Error %d failed to enable singlestep monitoring!\n", rc); @@ -588,15 +458,15 @@ int main(int argc, char *argv[]) if ( memaccess && !altp2m ) { /* Set the default access type and convert all pages to it */ - rc = xc_set_mem_access(xch, domain_id, default_access, ~0ull, 0); + rc = xc_set_mem_access(vm_event->xch, domain_id, default_access, ~0ull, 0); if ( rc < 0 ) { ERROR("Error %d setting default mem access type\n", rc); goto exit; } - rc = xc_set_mem_access(xch, domain_id, default_access, START_PFN, - (xenaccess->max_gpfn - START_PFN) ); + rc = xc_set_mem_access(vm_event->xch, domain_id, default_access, START_PFN, + (vm_event->max_gpfn - START_PFN) ); if ( rc < 0 ) { @@ -608,7 +478,7 @@ int main(int argc, char *argv[]) if ( breakpoint ) { - rc = xc_monitor_software_breakpoint(xch, domain_id, 1); + rc = xc_monitor_software_breakpoint(vm_event->xch, domain_id, 1); if ( rc < 0 ) { ERROR("Error %d setting breakpoint trapping with vm_event\n", rc); @@ -618,7 +488,7 @@ int main(int argc, char *argv[]) if ( debug ) { - rc = xc_monitor_debug_exceptions(xch, domain_id, 1, 1); + rc = xc_monitor_debug_exceptions(vm_event->xch, domain_id, 1, 1); if ( rc < 0 ) { ERROR("Error %d setting debug exception listener with vm_event\n", rc); @@ -628,7 +498,7 @@ int main(int argc, char *argv[]) if ( cpuid ) { - rc = xc_monitor_cpuid(xch, domain_id, 1); + rc = xc_monitor_cpuid(vm_event->xch, domain_id, 1); if ( rc < 0 ) { ERROR("Error %d setting cpuid listener with vm_event\n", rc); @@ -638,7 +508,7 @@ int main(int argc, char *argv[]) if ( desc_access ) { - rc = xc_monitor_descriptor_access(xch, domain_id, 1); + rc = xc_monitor_descriptor_access(vm_event->xch, domain_id, 1); if ( rc < 0 ) { ERROR("Error %d setting descriptor access listener with vm_event\n", rc); @@ -648,7 +518,7 @@ int main(int argc, char *argv[]) if ( privcall ) { - rc = xc_monitor_privileged_call(xch, domain_id, 1); + rc = xc_monitor_privileged_call(vm_event->xch, domain_id, 1); if ( rc < 0 ) { ERROR("Error %d setting privileged call trapping with vm_event\n", rc); @@ -659,7 +529,7 @@ int main(int argc, char *argv[]) if ( write_ctrlreg_cr4 ) { /* Mask the CR4.PGE bit so no events will be generated for global TLB flushes. 
*/ - rc = xc_monitor_write_ctrlreg(xch, domain_id, VM_EVENT_X86_CR4, 1, 1, + rc = xc_monitor_write_ctrlreg(vm_event->xch, domain_id, VM_EVENT_X86_CR4, 1, 1, X86_CR4_PGE, 1); if ( rc < 0 ) { @@ -671,41 +541,43 @@ int main(int argc, char *argv[]) /* Wait for access */ for ( ; ; ) { + int port = 0; + if ( interrupted ) { /* Unregister for every event */ DPRINTF("xenaccess shutting down on signal %d\n", interrupted); if ( breakpoint ) - rc = xc_monitor_software_breakpoint(xch, domain_id, 0); + rc = xc_monitor_software_breakpoint(vm_event->xch, domain_id, 0); if ( debug ) - rc = xc_monitor_debug_exceptions(xch, domain_id, 0, 0); + rc = xc_monitor_debug_exceptions(vm_event->xch, domain_id, 0, 0); if ( cpuid ) - rc = xc_monitor_cpuid(xch, domain_id, 0); + rc = xc_monitor_cpuid(vm_event->xch, domain_id, 0); if ( desc_access ) - rc = xc_monitor_descriptor_access(xch, domain_id, 0); + rc = xc_monitor_descriptor_access(vm_event->xch, domain_id, 0); if ( write_ctrlreg_cr4 ) - rc = xc_monitor_write_ctrlreg(xch, domain_id, VM_EVENT_X86_CR4, 0, 0, 0, 0); + rc = xc_monitor_write_ctrlreg(vm_event->xch, domain_id, VM_EVENT_X86_CR4, 0, 0, 0, 0); if ( privcall ) - rc = xc_monitor_privileged_call(xch, domain_id, 0); + rc = xc_monitor_privileged_call(vm_event->xch, domain_id, 0); if ( altp2m ) { - rc = xc_altp2m_switch_to_view( xch, domain_id, 0 ); - rc = xc_altp2m_destroy_view(xch, domain_id, altp2m_view_id); - rc = xc_altp2m_set_domain_state(xch, domain_id, 0); - rc = xc_monitor_singlestep(xch, domain_id, 0); + rc = xc_altp2m_switch_to_view( vm_event->xch, domain_id, 0 ); + rc = xc_altp2m_destroy_view(vm_event->xch, domain_id, altp2m_view_id); + rc = xc_altp2m_set_domain_state(vm_event->xch, domain_id, 0); + rc = xc_monitor_singlestep(vm_event->xch, domain_id, 0); } else { - rc = xc_set_mem_access(xch, domain_id, XENMEM_access_rwx, ~0ull, 0); - rc = xc_set_mem_access(xch, domain_id, XENMEM_access_rwx, START_PFN, - (xenaccess->max_gpfn - START_PFN) ); + rc = xc_set_mem_access(vm_event->xch, domain_id, XENMEM_access_rwx, ~0ull, 0); + rc = xc_set_mem_access(vm_event->xch, domain_id, XENMEM_access_rwx, START_PFN, + (vm_event->max_gpfn - START_PFN) ); } shutting_down = 1; } - rc = xc_wait_for_event_or_timeout(xch, xenaccess->vm_event.xce_handle, 100); + rc = xc_wait_for_event_or_timeout(vm_event->xch, vm_event->xce, 100); if ( rc < -1 ) { ERROR("Error getting event"); @@ -717,10 +589,10 @@ int main(int argc, char *argv[]) DPRINTF("Got event from Xen\n"); } - while ( RING_HAS_UNCONSUMED_REQUESTS(&xenaccess->vm_event.back_ring) ) - { - get_request(&xenaccess->vm_event, &req); + port = rc; + while ( vm_event->ops->get_request(vm_event, &req, &port) ) + { if ( req.version != VM_EVENT_INTERFACE_VERSION ) { ERROR("Error: vm_event interface version mismatch!\n"); @@ -744,7 +616,7 @@ int main(int argc, char *argv[]) * At shutdown we have already reset all the permissions so really no use getting it again. 
*/ xenmem_access_t access; - rc = xc_get_mem_access(xch, domain_id, req.u.mem_access.gfn, &access); + rc = xc_get_mem_access(vm_event->xch, domain_id, req.u.mem_access.gfn, &access); if (rc < 0) { ERROR("Error %d getting mem_access event\n", rc); @@ -777,7 +649,7 @@ int main(int argc, char *argv[]) } else if ( default_access != after_first_access ) { - rc = xc_set_mem_access(xch, domain_id, after_first_access, + rc = xc_set_mem_access(vm_event->xch, domain_id, after_first_access, req.u.mem_access.gfn, 1); if (rc < 0) { @@ -798,7 +670,7 @@ int main(int argc, char *argv[]) req.vcpu_id); /* Reinject */ - rc = xc_hvm_inject_trap(xch, domain_id, req.vcpu_id, + rc = xc_hvm_inject_trap(vm_event->xch, domain_id, req.vcpu_id, X86_TRAP_INT3, req.u.software_breakpoint.type, -1, req.u.software_breakpoint.insn_length, 0); @@ -846,7 +718,7 @@ int main(int argc, char *argv[]) req.u.debug_exception.insn_length); /* Reinject */ - rc = xc_hvm_inject_trap(xch, domain_id, req.vcpu_id, + rc = xc_hvm_inject_trap(vm_event->xch, domain_id, req.vcpu_id, X86_TRAP_DEBUG, req.u.debug_exception.type, -1, req.u.debug_exception.insn_length, @@ -914,17 +786,15 @@ int main(int argc, char *argv[]) } /* Put the response on the ring */ - put_response(&xenaccess->vm_event, &rsp); - } - - /* Tell Xen page is ready */ - rc = xenevtchn_notify(xenaccess->vm_event.xce_handle, - xenaccess->vm_event.port); + put_response(vm_event, &rsp, port); - if ( rc != 0 ) - { - ERROR("Error resuming page"); - interrupted = -1; + /* Tell Xen page is ready */ + rc = notify_port(vm_event, port); + if ( rc != 0 ) + { + ERROR("Error resuming page"); + interrupted = -1; + } } if ( shutting_down ) @@ -937,13 +807,13 @@ exit: { uint32_t vcpu_id; for ( vcpu_id = 0; vcpu_idxch, domain_id, vcpu_id, 0); } - /* Tear down domain xenaccess */ - rc1 = xenaccess_teardown(xch, xenaccess); + /* Tear down domain */ + rc1 = vm_event_teardown(vm_event); if ( rc1 != 0 ) - ERROR("Error tearing down xenaccess"); + ERROR("Error tearing down vm_event"); if ( rc == 0 ) rc = rc1; diff --git a/tools/tests/xen-access/xen-access.h b/tools/tests/xen-access/xen-access.h new file mode 100644 index 0000000..9fc640c --- /dev/null +++ b/tools/tests/xen-access/xen-access.h @@ -0,0 +1,91 @@ +/* + * xen-access.h + * + * Copyright (c) 2019 Bitdefender S.R.L. + * + * Permission is hereby granted, free of charge, to any person obtaining a copy + * of this software and associated documentation files (the "Software"), to + * deal in the Software without restriction, including without limitation the + * rights to use, copy, modify, merge, publish, distribute, sublicense, and/or + * sell copies of the Software, and to permit persons to whom the Software is + * furnished to do so, subject to the following conditions: + * + * The above copyright notice and this permission notice shall be included in + * all copies or substantial portions of the Software. + * + * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR + * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, + * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE + * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER + * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING + * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER + * DEALINGS IN THE SOFTWARE. 
+ */ + +#ifndef XEN_ACCESS_H +#define XEN_ACCESS_H + +#include +#include +#include + +#ifndef container_of +#define container_of(ptr, type, member) ({ \ + const typeof( ((type *)0)->member ) *__mptr = (ptr); \ + (type *)( (char *)__mptr - offsetof(type,member) );}) +#endif /* container_of */ + +#define DPRINTF(a, b...) fprintf(stderr, a, ## b) +#define ERROR(a, b...) fprintf(stderr, a "\n", ## b) +#define PERROR(a, b...) fprintf(stderr, a ": %s\n", ## b, strerror(errno)) + +struct vm_event_ops; + +typedef struct vm_event { + xc_interface *xch; + domid_t domain_id; + xenevtchn_handle *xce; + xen_pfn_t max_gpfn; + struct vm_event_ops *ops; +} vm_event_t; + +typedef struct vm_event_ops { + int (*init)(xc_interface *, xenevtchn_handle *, domid_t, + struct vm_event_ops *, vm_event_t **); + int (*teardown)(vm_event_t *); + bool (*get_request)(vm_event_t *, vm_event_request_t *, int *); + void (*put_response)(vm_event_t *, vm_event_response_t *, int); + int (*notify_port)(vm_event_t *, int port); +} vm_event_ops_t; + +static inline bool get_request(vm_event_t *vm_event, vm_event_request_t *req, + int *port) +{ + return ( vm_event ) ? vm_event->ops->get_request(vm_event, req, port) : + false; +} + +static inline void put_response(vm_event_t *vm_event, vm_event_response_t *rsp, int port) +{ + if ( vm_event ) + vm_event->ops->put_response(vm_event, rsp, port); +} + +static inline int notify_port(vm_event_t *vm_event, int port) +{ + if ( !vm_event ) + return -EINVAL; + + return vm_event->ops->notify_port(vm_event, port); +} + +#endif /* XEN_ACCESS_H */ + +/* + * Local variables: + * mode: C + * c-file-style: "BSD" + * c-basic-offset: 4 + * indent-tabs-mode: nil + * End: + */
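For reference, the shape of the consumer loop both backends plug into, condensed from xen-access.c's main(); process_request() is a hypothetical stand-in for the per-reason switch, and error handling is omitted:

    /* Sketch only: xc_wait_for_event_or_timeout() is the poll helper
     * defined in xen-access.c. */
    extern void process_request(const vm_event_request_t *req,
                                vm_event_response_t *rsp);   /* hypothetical */

    static void event_loop(vm_event_t *vm_event)
    {
        vm_event_request_t req;
        vm_event_response_t rsp;
        int port;

        for ( ; ; )
        {
            port = xc_wait_for_event_or_timeout(vm_event->xch, vm_event->xce, 100);
            if ( port < -1 )
                break;              /* -1 is just a timeout; < -1 is an error */

            /* The ring backend drains all queued requests and fills in the
             * port itself; the channel backend reads the slot keyed by port. */
            while ( get_request(vm_event, &req, &port) )
            {
                process_request(&req, &rsp);
                put_response(vm_event, &rsp, port);
                notify_port(vm_event, port);
            }
        }
    }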