From patchwork Mon Oct 5 09:49:01 2020
From: Paul Durrant
To: xen-devel@lists.xenproject.org
Cc: Paul Durrant, Ian Jackson, Wei Liu, Anthony PERARD
Subject: [PATCH 1/5] libxl: remove separate calculation of IOMMU memory overhead
Date: Mon, 5 Oct 2020 10:49:01 +0100
Message-Id: <20201005094905.2929-2-paul@xen.org>
In-Reply-To: <20201005094905.2929-1-paul@xen.org>
References: <20201005094905.2929-1-paul@xen.org>

In 'shared_pt' mode the IOMMU uses the EPT PTEs directly. In 'sync_pt' mode
those PTEs are instead replicated for the IOMMU's use, so the memory overhead
in that mode is essentially another copy of the P2M.

This patch removes the independent calculation done in
libxl__get_required_iommu_memory() and instead simply uses 'shadow_memkb' as
the IOMMU overhead, since that is the estimated size of the P2M.

Signed-off-by: Paul Durrant
Acked-by: Wei Liu
---
Cc: Ian Jackson
Cc: Wei Liu
Cc: Anthony PERARD
---
 tools/libs/light/libxl_create.c | 22 +++++-----------------
 1 file changed, 5 insertions(+), 17 deletions(-)

diff --git a/tools/libs/light/libxl_create.c b/tools/libs/light/libxl_create.c
index 9a6e92b3a5..f07ba84850 100644
--- a/tools/libs/light/libxl_create.c
+++ b/tools/libs/light/libxl_create.c
@@ -1001,21 +1001,6 @@ static bool ok_to_default_memkb_in_create(libxl__gc *gc)
      */
 }
 
-static unsigned long libxl__get_required_iommu_memory(unsigned long maxmem_kb)
-{
-    unsigned long iommu_pages = 0, mem_pages = maxmem_kb / 4;
-    unsigned int level;
-
-    /* Assume a 4 level page table with 512 entries per level */
-    for (level = 0; level < 4; level++)
-    {
-        mem_pages = DIV_ROUNDUP(mem_pages, 512);
-        iommu_pages += mem_pages;
-    }
-
-    return iommu_pages * 4;
-}
-
 int libxl__domain_config_setdefault(libxl__gc *gc,
                                     libxl_domain_config *d_config,
                                     uint32_t domid /* for logging, only */)
@@ -1168,12 +1153,15 @@ int libxl__domain_config_setdefault(libxl__gc *gc,
             libxl_get_required_shadow_memory(d_config->b_info.max_memkb,
                                              d_config->b_info.max_vcpus);
 
-    /* No IOMMU reservation is needed if passthrough mode is not 'sync_pt' */
+    /* No IOMMU reservation is needed if passthrough mode is not 'sync_pt'
+     * otherwise we need a reservation sufficient to accommodate a copy of
+     * the P2M.
+     */
     if (d_config->b_info.iommu_memkb == LIBXL_MEMKB_DEFAULT
         && ok_to_default_memkb_in_create(gc))
         d_config->b_info.iommu_memkb =
             (d_config->c_info.passthrough == LIBXL_PASSTHROUGH_SYNC_PT)
-            ? libxl__get_required_iommu_memory(d_config->b_info.max_memkb)
+            ? d_config->b_info.shadow_memkb
             : 0;
 
     ret = libxl__domain_build_info_setdefault(gc, &d_config->b_info);
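For reference, the estimate that is being removed here is easy to reproduce in
isolation. The following standalone sketch reimplements the arithmetic of the
old libxl__get_required_iommu_memory() outside of libxl; the 4 GiB guest size
used in main() is just an illustrative value, not taken from the patch.

/* Standalone model of the removed libxl__get_required_iommu_memory(). */
#include <stdio.h>

#define DIV_ROUNDUP(n, d) (((n) + (d) - 1) / (d))

static unsigned long required_iommu_memory_kb(unsigned long maxmem_kb)
{
    unsigned long iommu_pages = 0, mem_pages = maxmem_kb / 4; /* 4k pages */
    unsigned int level;

    /* Assume a 4 level page table with 512 entries per level. */
    for (level = 0; level < 4; level++)
    {
        mem_pages = DIV_ROUNDUP(mem_pages, 512);
        iommu_pages += mem_pages;
    }

    return iommu_pages * 4; /* back to kB */
}

int main(void)
{
    unsigned long maxmem_kb = 4ul * 1024 * 1024; /* e.g. a 4 GiB guest */

    printf("old IOMMU overhead estimate: %lu kB\n",
           required_iommu_memory_kb(maxmem_kb));
    return 0;
}

For a 4 GiB guest this comes to 2054 page-table pages, i.e. 8216 kB. After
this patch the 'sync_pt' reservation instead simply tracks whatever
'shadow_memkb' value libxl_get_required_shadow_memory() produced for the same
guest.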
From patchwork Mon Oct 5 09:49:02 2020
From: Paul Durrant
To: xen-devel@lists.xenproject.org
Cc: Paul Durrant, Daniel De Graaf, Ian Jackson, Wei Liu, Andrew Cooper, George Dunlap, Jan Beulich, Julien Grall, Stefano Stabellini
Subject: [PATCH 2/5] iommu / domctl: introduce XEN_DOMCTL_iommu_ctl
Date: Mon, 5 Oct 2020 10:49:02 +0100
Message-Id: <20201005094905.2929-3-paul@xen.org>
In-Reply-To: <20201005094905.2929-1-paul@xen.org>
References: <20201005094905.2929-1-paul@xen.org>

A subsequent patch will introduce code to apply a cap to the memory used for
a domain's IOMMU page-tables. This patch simply introduces the boilerplate
for the new domctl.

The implementation of new sub-operations will be added to iommu_ctl() and
hence, whilst iommu_ctl() initially only returns -EOPNOTSUPP, the code in
iommu_do_domctl() is modified to set up a hypercall continuation should
iommu_ctl() return -ERESTART.

NOTE: The op code is passed into the newly introduced xsm_iommu_ctl()
      function, but for the moment only a single default 'ctl' perm is
      defined in the new 'iommu' class.
Signed-off-by: Paul Durrant
---
Cc: Daniel De Graaf
Cc: Ian Jackson
Cc: Wei Liu
Cc: Andrew Cooper
Cc: George Dunlap
Cc: Jan Beulich
Cc: Julien Grall
Cc: Stefano Stabellini
---
 tools/flask/policy/modules/dom0.te    |  2 ++
 xen/drivers/passthrough/iommu.c       | 39 ++++++++++++++++++++++++---
 xen/include/public/domctl.h           | 14 ++++++++++
 xen/include/xsm/dummy.h               | 17 +++++++++---
 xen/include/xsm/xsm.h                 | 26 ++++++++++++------
 xen/xsm/dummy.c                       |  6 +++--
 xen/xsm/flask/hooks.c                 | 26 +++++++++++++-----
 xen/xsm/flask/policy/access_vectors   |  7 +++++
 xen/xsm/flask/policy/security_classes |  1 +
 9 files changed, 114 insertions(+), 24 deletions(-)

diff --git a/tools/flask/policy/modules/dom0.te b/tools/flask/policy/modules/dom0.te
index 0a63ce15b6..ab5eb682c7 100644
--- a/tools/flask/policy/modules/dom0.te
+++ b/tools/flask/policy/modules/dom0.te
@@ -69,6 +69,8 @@ auditallow dom0_t security_t:security { load_policy setenforce setbool };
 # Allow dom0 to report platform configuration changes back to the hypervisor
 allow dom0_t xen_t:resource setup;
 
+allow dom0_t xen_t:iommu ctl;
+
 admin_device(dom0_t, device_t)
 admin_device(dom0_t, irq_t)
 admin_device(dom0_t, ioport_t)
diff --git a/xen/drivers/passthrough/iommu.c b/xen/drivers/passthrough/iommu.c
index 87f9a857bb..bef0405984 100644
--- a/xen/drivers/passthrough/iommu.c
+++ b/xen/drivers/passthrough/iommu.c
@@ -502,6 +502,27 @@ void iommu_resume()
         iommu_get_ops()->resume();
 }
 
+static int iommu_ctl(
+    struct xen_domctl *domctl, struct domain *d,
+    XEN_GUEST_HANDLE_PARAM(xen_domctl_t) u_domctl)
+{
+    struct xen_domctl_iommu_ctl *ctl = &domctl->u.iommu_ctl;
+    int rc;
+
+    rc = xsm_iommu_ctl(XSM_HOOK, d, ctl->op);
+    if ( rc )
+        return rc;
+
+    switch ( ctl->op )
+    {
+    default:
+        rc = -EOPNOTSUPP;
+        break;
+    }
+
+    return rc;
+}
+
 int iommu_do_domctl(
     struct xen_domctl *domctl, struct domain *d,
     XEN_GUEST_HANDLE_PARAM(xen_domctl_t) u_domctl)
@@ -511,14 +532,26 @@ int iommu_do_domctl(
     if ( !is_iommu_enabled(d) )
         return -EOPNOTSUPP;
 
+    switch ( domctl->cmd )
+    {
+    case XEN_DOMCTL_iommu_ctl:
+        ret = iommu_ctl(domctl, d, u_domctl);
+        if ( ret == -ERESTART )
+            ret = hypercall_create_continuation(__HYPERVISOR_domctl,
+                                                "h", u_domctl);
+        break;
+
+    default:
 #ifdef CONFIG_HAS_PCI
-    ret = iommu_do_pci_domctl(domctl, d, u_domctl);
+        ret = iommu_do_pci_domctl(domctl, d, u_domctl);
 #endif
 
 #ifdef CONFIG_HAS_DEVICE_TREE
-    if ( ret == -ENODEV )
-        ret = iommu_do_dt_domctl(domctl, d, u_domctl);
+        if ( ret == -ENODEV )
+            ret = iommu_do_dt_domctl(domctl, d, u_domctl);
 #endif
+        break;
+    }
 
     return ret;
 }
diff --git a/xen/include/public/domctl.h b/xen/include/public/domctl.h
index 791f0a2592..75e855625a 100644
--- a/xen/include/public/domctl.h
+++ b/xen/include/public/domctl.h
@@ -1130,6 +1130,18 @@ struct xen_domctl_vuart_op {
  */
 };
 
+/*
+ * XEN_DOMCTL_iommu_ctl
+ *
+ * Control of VM IOMMU settings
+ */
+
+#define XEN_DOMCTL_IOMMU_INVALID 0
+
+struct xen_domctl_iommu_ctl {
+    uint32_t op; /* XEN_DOMCTL_IOMMU_* */
+};
+
 struct xen_domctl {
     uint32_t cmd;
 #define XEN_DOMCTL_createdomain 1
@@ -1214,6 +1226,7 @@ struct xen_domctl {
 #define XEN_DOMCTL_vuart_op 81
 #define XEN_DOMCTL_get_cpu_policy 82
 #define XEN_DOMCTL_set_cpu_policy 83
+#define XEN_DOMCTL_iommu_ctl 84
 #define XEN_DOMCTL_gdbsx_guestmemio 1000
 #define XEN_DOMCTL_gdbsx_pausevcpu 1001
 #define XEN_DOMCTL_gdbsx_unpausevcpu 1002
@@ -1274,6 +1287,7 @@ struct xen_domctl {
         struct xen_domctl_monitor_op monitor_op;
         struct xen_domctl_psr_alloc psr_alloc;
         struct xen_domctl_vuart_op vuart_op;
+        struct xen_domctl_iommu_ctl iommu_ctl;
         uint8_t pad[128];
     } u;
 };
diff --git a/xen/include/xsm/dummy.h b/xen/include/xsm/dummy.h
index 2368acebed..9825533c75 100644
--- a/xen/include/xsm/dummy.h
+++ b/xen/include/xsm/dummy.h
@@ -348,7 +348,15 @@ static XSM_INLINE int xsm_get_vnumainfo(XSM_DEFAULT_ARG struct domain *d)
     return xsm_default_action(action, current->domain, d);
 }
 
-#if defined(CONFIG_HAS_PASSTHROUGH) && defined(CONFIG_HAS_PCI)
+#if defined(CONFIG_HAS_PASSTHROUGH)
+static XSM_INLINE int xsm_iommu_ctl(XSM_DEFAULT_ARG struct domain *d,
+                                    unsigned int op)
+{
+    XSM_ASSERT_ACTION(XSM_PRIV);
+    return xsm_default_action(action, current->domain, d);
+}
+
+#if defined(CONFIG_HAS_PCI)
 static XSM_INLINE int xsm_get_device_group(XSM_DEFAULT_ARG uint32_t machine_bdf)
 {
     XSM_ASSERT_ACTION(XSM_HOOK);
@@ -367,9 +375,9 @@ static XSM_INLINE int xsm_deassign_device(XSM_DEFAULT_ARG struct domain *d, uint
     return xsm_default_action(action, current->domain, d);
 }
 
-#endif /* HAS_PASSTHROUGH && HAS_PCI */
+#endif /* HAS_PCI */
 
-#if defined(CONFIG_HAS_PASSTHROUGH) && defined(CONFIG_HAS_DEVICE_TREE)
+#if defined(CONFIG_HAS_DEVICE_TREE)
 static XSM_INLINE int xsm_assign_dtdevice(XSM_DEFAULT_ARG struct domain *d,
                                           const char *dtpath)
 {
@@ -384,7 +392,8 @@ static XSM_INLINE int xsm_deassign_dtdevice(XSM_DEFAULT_ARG struct domain *d,
     return xsm_default_action(action, current->domain, d);
 }
 
-#endif /* HAS_PASSTHROUGH && HAS_DEVICE_TREE */
+#endif /* HAS_DEVICE_TREE */
+#endif /* HAS_PASSTHROUGH */
 
 static XSM_INLINE int xsm_resource_plug_core(XSM_DEFAULT_VOID)
 {
diff --git a/xen/include/xsm/xsm.h b/xen/include/xsm/xsm.h
index b21c3783d3..1a96d3502c 100644
--- a/xen/include/xsm/xsm.h
+++ b/xen/include/xsm/xsm.h
@@ -106,17 +106,19 @@ struct xsm_operations {
     int (*irq_permission) (struct domain *d, int pirq, uint8_t allow);
     int (*iomem_permission) (struct domain *d, uint64_t s, uint64_t e, uint8_t allow);
     int (*iomem_mapping) (struct domain *d, uint64_t s, uint64_t e, uint8_t allow);
-    int (*pci_config_permission) (struct domain *d, uint32_t machine_bdf, uint16_t start, uint16_t end, uint8_t access);
+    int (*pci_config_permission) (struct domain *d, uint32_t machine_bdf, uint16_t start, uint16_t end, uint8_t access);
 
-#if defined(CONFIG_HAS_PASSTHROUGH) && defined(CONFIG_HAS_PCI)
+#if defined(CONFIG_HAS_PASSTHROUGH)
+    int (*iommu_ctl) (struct domain *d, unsigned int op);
+#if defined(CONFIG_HAS_PCI)
     int (*get_device_group) (uint32_t machine_bdf);
     int (*assign_device) (struct domain *d, uint32_t machine_bdf);
     int (*deassign_device) (struct domain *d, uint32_t machine_bdf);
 #endif
-
-#if defined(CONFIG_HAS_PASSTHROUGH) && defined(CONFIG_HAS_DEVICE_TREE)
+#if defined(CONFIG_HAS_DEVICE_TREE)
     int (*assign_dtdevice) (struct domain *d, const char *dtpath);
     int (*deassign_dtdevice) (struct domain *d, const char *dtpath);
+#endif
 #endif
 
     int (*resource_plug_core) (void);
@@ -466,7 +468,14 @@ static inline int xsm_pci_config_permission (xsm_default_t def, struct domain *d
     return xsm_ops->pci_config_permission(d, machine_bdf, start, end, access);
 }
 
-#if defined(CONFIG_HAS_PASSTHROUGH) && defined(CONFIG_HAS_PCI)
+#if defined(CONFIG_HAS_PASSTHROUGH)
+static inline int xsm_iommu_ctl(xsm_default_t def, struct domain *d,
+                                unsigned int op)
+{
+    return xsm_ops->iommu_ctl(d, op);
+}
+
+#if defined(CONFIG_HAS_PCI)
 static inline int xsm_get_device_group(xsm_default_t def, uint32_t machine_bdf)
 {
     return xsm_ops->get_device_group(machine_bdf);
@@ -481,9 +490,9 @@ static inline int xsm_deassign_device(xsm_default_t def, struct domain *d, uint3
 {
     return xsm_ops->deassign_device(d, machine_bdf);
 }
-#endif /* HAS_PASSTHROUGH && HAS_PCI) */
+#endif /* HAS_PCI */
 
-#if defined(CONFIG_HAS_PASSTHROUGH) && defined(CONFIG_HAS_DEVICE_TREE)
+#if defined(CONFIG_HAS_DEVICE_TREE)
 static inline int xsm_assign_dtdevice(xsm_default_t def, struct domain *d,
                                       const char *dtpath)
 {
@@ -496,7 +505,8 @@ static inline int xsm_deassign_dtdevice(xsm_default_t def, struct domain *d,
     return xsm_ops->deassign_dtdevice(d, dtpath);
 }
 
-#endif /* HAS_PASSTHROUGH && HAS_DEVICE_TREE */
+#endif /* HAS_DEVICE_TREE */
+#endif /* HAS_PASSTHROUGH */
 
 static inline int xsm_resource_plug_pci (xsm_default_t def, uint32_t machine_bdf)
 {
diff --git a/xen/xsm/dummy.c b/xen/xsm/dummy.c
index d4cce68089..a924f1dfd1 100644
--- a/xen/xsm/dummy.c
+++ b/xen/xsm/dummy.c
@@ -84,14 +84,16 @@ void __init xsm_fixup_ops (struct xsm_operations *ops)
     set_to_dummy_if_null(ops, get_vnumainfo);
 
 #if defined(CONFIG_HAS_PASSTHROUGH) && defined(CONFIG_HAS_PCI)
+    set_to_dummy_if_null(ops, iommu_ctl);
+#if defined(CONFIG_HAS_PCI)
     set_to_dummy_if_null(ops, get_device_group);
     set_to_dummy_if_null(ops, assign_device);
     set_to_dummy_if_null(ops, deassign_device);
 #endif
-
-#if defined(CONFIG_HAS_PASSTHROUGH) && defined(CONFIG_HAS_DEVICE_TREE)
+#if defined(CONFIG_HAS_DEVICE_TREE)
     set_to_dummy_if_null(ops, assign_dtdevice);
     set_to_dummy_if_null(ops, deassign_dtdevice);
+#endif
 #endif
 
     set_to_dummy_if_null(ops, resource_plug_core);
diff --git a/xen/xsm/flask/hooks.c b/xen/xsm/flask/hooks.c
index b3addbf701..a2858fb0c0 100644
--- a/xen/xsm/flask/hooks.c
+++ b/xen/xsm/flask/hooks.c
@@ -624,6 +624,7 @@ static int flask_domctl(struct domain *d, int cmd)
      * These have individual XSM hooks
      * (drivers/passthrough/{pci,device_tree.c)
      */
+    case XEN_DOMCTL_iommu_ctl:
     case XEN_DOMCTL_get_device_group:
     case XEN_DOMCTL_test_assign_device:
     case XEN_DOMCTL_assign_device:
@@ -1278,7 +1279,15 @@ static int flask_mem_sharing(struct domain *d)
 }
 #endif
 
-#if defined(CONFIG_HAS_PASSTHROUGH) && defined(CONFIG_HAS_PCI)
+#if defined(CONFIG_HAS_PASSTHROUGH)
+
+static int flask_iommu_ctl(struct domain *d, unsigned int op)
+{
+    /* All current ops are subject to default 'ctl' perm */
+    return current_has_perm(d, SECCLASS_IOMMU, IOMMU__CTL);
+}
+
+#if defined(CONFIG_HAS_PCI)
 static int flask_get_device_group(uint32_t machine_bdf)
 {
     u32 rsid;
@@ -1348,9 +1357,9 @@ static int flask_deassign_device(struct domain *d, uint32_t machine_bdf)
     return avc_current_has_perm(rsid, SECCLASS_RESOURCE, RESOURCE__REMOVE_DEVICE, NULL);
 }
 
-#endif /* HAS_PASSTHROUGH && HAS_PCI */
+#endif /* HAS_PCI */
 
-#if defined(CONFIG_HAS_PASSTHROUGH) && defined(CONFIG_HAS_DEVICE_TREE)
+#if defined(CONFIG_HAS_DEVICE_TREE)
 static int flask_test_assign_dtdevice(const char *dtpath)
 {
     u32 rsid;
@@ -1410,7 +1419,8 @@ static int flask_deassign_dtdevice(struct domain *d, const char *dtpath)
     return avc_current_has_perm(rsid, SECCLASS_RESOURCE, RESOURCE__REMOVE_DEVICE, NULL);
 }
 
-#endif /* HAS_PASSTHROUGH && HAS_DEVICE_TREE */
+#endif /* HAS_DEVICE_TREE */
+#endif /* HAS_PASSTHROUGH */
 
 static int flask_platform_op(uint32_t op)
 {
@@ -1855,15 +1865,17 @@ static struct xsm_operations flask_ops = {
     .remove_from_physmap = flask_remove_from_physmap,
     .map_gmfn_foreign = flask_map_gmfn_foreign,
 
-#if defined(CONFIG_HAS_PASSTHROUGH) && defined(CONFIG_HAS_PCI)
+#if defined(CONFIG_HAS_PASSTHROUGH)
+    .iommu_ctl = flask_iommu_ctl,
+#if defined(CONFIG_HAS_PCI)
     .get_device_group = flask_get_device_group,
     .assign_device = flask_assign_device,
     .deassign_device = flask_deassign_device,
 #endif
-
-#if defined(CONFIG_HAS_PASSTHROUGH) && defined(CONFIG_HAS_DEVICE_TREE)
+#if defined(CONFIG_HAS_DEVICE_TREE)
     .assign_dtdevice = flask_assign_dtdevice,
     .deassign_dtdevice = flask_deassign_dtdevice,
+#endif
 #endif
 
     .platform_op = flask_platform_op,
diff --git a/xen/xsm/flask/policy/access_vectors b/xen/xsm/flask/policy/access_vectors
index fde5162c7e..c017a38666 100644
--- a/xen/xsm/flask/policy/access_vectors
+++ b/xen/xsm/flask/policy/access_vectors
@@ -542,3 +542,10 @@ class argo
 # Domain sending a message to another domain.
     send
 }
+
+# Class iommu describes operations on the IOMMU resources of a domain
+class iommu
+{
+    # Miscellaneous control
+    ctl
+}
diff --git a/xen/xsm/flask/policy/security_classes b/xen/xsm/flask/policy/security_classes
index 50ecbabc5c..882968e79c 100644
--- a/xen/xsm/flask/policy/security_classes
+++ b/xen/xsm/flask/policy/security_classes
@@ -20,5 +20,6 @@ class grant
 class security
 class version
 class argo
+class iommu
 
 # FLASK
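The control flow added above is easiest to see in isolation. The following
standalone sketch models the sub-op dispatch and the restart handling; the
RC_RESTART constant and the retry loop are local stand-ins for Xen's
-ERESTART and the hypercall continuation set up in iommu_do_domctl(), and are
not part of the patch.

/* Standalone model of the iommu_ctl() sub-op dispatch pattern. */
#include <errno.h>
#include <stdio.h>

#define XEN_DOMCTL_IOMMU_INVALID 0

#define RC_RESTART (-85)   /* local stand-in for Xen's -ERESTART */

struct iommu_ctl_model {
    unsigned int op;       /* XEN_DOMCTL_IOMMU_* */
};

static int iommu_ctl(const struct iommu_ctl_model *ctl)
{
    switch ( ctl->op )
    {
    /* Sub-operations are added here by subsequent patches. */
    default:
        return -EOPNOTSUPP;
    }
}

int main(void)
{
    struct iommu_ctl_model ctl = { .op = XEN_DOMCTL_IOMMU_INVALID };
    int rc;

    /* The caller re-issues the operation while it reports 'restart',
     * mirroring the continuation behaviour of iommu_do_domctl(). */
    do {
        rc = iommu_ctl(&ctl);
    } while ( rc == RC_RESTART );

    printf("iommu_ctl() returned %d\n", rc); /* -EOPNOTSUPP for now */
    return 0;
}

Until the next patch adds a sub-operation, any op value falls through to the
default case and fails with -EOPNOTSUPP.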
From patchwork Mon Oct 5 09:49:03 2020
From: Paul Durrant
To: xen-devel@lists.xenproject.org
Cc: Paul Durrant, Ian Jackson, Wei Liu, Andrew Cooper, George Dunlap, Jan Beulich, Julien Grall, Stefano Stabellini, Anthony PERARD, Volodymyr Babchuk, Roger Pau Monné
Subject: [PATCH 3/5] libxl / iommu / domctl: introduce XEN_DOMCTL_IOMMU_SET_ALLOCATION...
Date: Mon, 5 Oct 2020 10:49:03 +0100
Message-Id: <20201005094905.2929-4-paul@xen.org>
In-Reply-To: <20201005094905.2929-1-paul@xen.org>
References: <20201005094905.2929-1-paul@xen.org>

... sub-operation of XEN_DOMCTL_iommu_ctl.

This patch adds the new sub-operation to the domctl. The code in iommu_ctl()
is extended to call a new arch-specific iommu_set_allocation() function,
which is passed the IOMMU page-table overhead (in 4k pages) when libxl issues
the new domctl via the xc_iommu_set_allocation() helper function.

The helper function is only called in the x86 implementation of
libxl__arch_domain_create(), when the calculated 'iommu_memkb' value is
non-zero. Hence the ARM implementation of iommu_set_allocation() simply
returns -EOPNOTSUPP.

NOTE: The implementation of the IOMMU page-table memory pool will be added in
      a subsequent patch, so the x86 implementation of iommu_set_allocation()
      currently does nothing other than return 0 (to indicate success),
      thereby ensuring that the new call in libxl__arch_domain_create()
      always succeeds.
Signed-off-by: Paul Durrant
---
Cc: Ian Jackson
Cc: Wei Liu
Cc: Andrew Cooper
Cc: George Dunlap
Cc: Jan Beulich
Cc: Julien Grall
Cc: Stefano Stabellini
Cc: Anthony PERARD
Cc: Volodymyr Babchuk
Cc: "Roger Pau Monné"
---
 tools/libs/ctrl/include/xenctrl.h   |  5 +++++
 tools/libs/ctrl/xc_domain.c         | 16 ++++++++++++++++
 tools/libs/light/libxl_x86.c        | 10 ++++++++++
 xen/drivers/passthrough/iommu.c     |  8 ++++++++
 xen/drivers/passthrough/x86/iommu.c |  5 +++++
 xen/include/asm-arm/iommu.h         |  6 ++++++
 xen/include/asm-x86/iommu.h         |  2 ++
 xen/include/public/domctl.h         |  8 ++++++++
 8 files changed, 60 insertions(+)

diff --git a/tools/libs/ctrl/include/xenctrl.h b/tools/libs/ctrl/include/xenctrl.h
index 3796425e1e..4d6c9d44bc 100644
--- a/tools/libs/ctrl/include/xenctrl.h
+++ b/tools/libs/ctrl/include/xenctrl.h
@@ -2650,6 +2650,11 @@ int xc_livepatch_replace(xc_interface *xch, char *name, uint32_t timeout, uint32
 int xc_domain_cacheflush(xc_interface *xch, uint32_t domid,
                          xen_pfn_t start_pfn, xen_pfn_t nr_pfns);
 
+/* IOMMU control operations */
+
+int xc_iommu_set_allocation(xc_interface *xch, uint32_t domid,
+                            unsigned int nr_pages);
+
 /* Compat shims */
 #include "xenctrl_compat.h"
 
diff --git a/tools/libs/ctrl/xc_domain.c b/tools/libs/ctrl/xc_domain.c
index e7cea4a17d..0b20a8f2ee 100644
--- a/tools/libs/ctrl/xc_domain.c
+++ b/tools/libs/ctrl/xc_domain.c
@@ -2185,6 +2185,22 @@ int xc_domain_soft_reset(xc_interface *xch,
     domctl.domain = domid;
     return do_domctl(xch, &domctl);
 }
+
+int xc_iommu_set_allocation(xc_interface *xch, uint32_t domid,
+                            unsigned int nr_pages)
+{
+    DECLARE_DOMCTL;
+
+    memset(&domctl, 0, sizeof(domctl));
+
+    domctl.cmd = XEN_DOMCTL_iommu_ctl;
+    domctl.domain = domid;
+    domctl.u.iommu_ctl.op = XEN_DOMCTL_IOMMU_SET_ALLOCATION;
+    domctl.u.iommu_ctl.u.set_allocation.nr_pages = nr_pages;
+
+    return do_domctl(xch, &domctl);
+}
+
 /*
  * Local variables:
  * mode: C
diff --git a/tools/libs/light/libxl_x86.c b/tools/libs/light/libxl_x86.c
index 6ec6c27c83..9631974dd6 100644
--- a/tools/libs/light/libxl_x86.c
+++ b/tools/libs/light/libxl_x86.c
@@ -520,6 +520,16 @@ int libxl__arch_domain_create(libxl__gc *gc, libxl_domain_config *d_config,
                            NULL, 0, &shadow, 0, NULL);
     }
 
+    if (d_config->b_info.iommu_memkb) {
+        unsigned int nr_pages = DIV_ROUNDUP(d_config->b_info.iommu_memkb, 4);
+
+        ret = xc_iommu_set_allocation(ctx->xch, domid, nr_pages);
+        if (ret) {
+            LOGED(ERROR, domid, "Failed to set IOMMU allocation");
+            goto out;
+        }
+    }
+
     if (d_config->c_info.type == LIBXL_DOMAIN_TYPE_PV &&
         libxl_defbool_val(d_config->b_info.u.pv.e820_host)) {
         ret = libxl__e820_alloc(gc, domid, d_config);
diff --git a/xen/drivers/passthrough/iommu.c b/xen/drivers/passthrough/iommu.c
index bef0405984..642d5c8331 100644
--- a/xen/drivers/passthrough/iommu.c
+++ b/xen/drivers/passthrough/iommu.c
@@ -515,6 +515,14 @@ static int iommu_ctl(
 
     switch ( ctl->op )
     {
+    case XEN_DOMCTL_IOMMU_SET_ALLOCATION:
+    {
+        struct xen_domctl_iommu_set_allocation *set_allocation =
+            &ctl->u.set_allocation;
+
+        rc = iommu_set_allocation(d, set_allocation->nr_pages);
+        break;
+    }
     default:
         rc = -EOPNOTSUPP;
         break;
diff --git a/xen/drivers/passthrough/x86/iommu.c b/xen/drivers/passthrough/x86/iommu.c
index f17b1820f4..b168073f10 100644
--- a/xen/drivers/passthrough/x86/iommu.c
+++ b/xen/drivers/passthrough/x86/iommu.c
@@ -134,6 +134,11 @@ void __hwdom_init arch_iommu_check_autotranslated_hwdom(struct domain *d)
         panic("PVH hardware domain iommu must be set in 'strict' mode\n");
 }
 
+int iommu_set_allocation(struct domain *d, unsigned nr_pages)
+{
+    return 0;
+}
+
 int arch_iommu_domain_init(struct domain *d)
 {
     struct domain_iommu *hd = dom_iommu(d);
diff --git a/xen/include/asm-arm/iommu.h b/xen/include/asm-arm/iommu.h
index 937edc8373..2e4735bace 100644
--- a/xen/include/asm-arm/iommu.h
+++ b/xen/include/asm-arm/iommu.h
@@ -33,6 +33,12 @@ int __must_check arm_iommu_map_page(struct domain *d, dfn_t dfn, mfn_t mfn,
 int __must_check arm_iommu_unmap_page(struct domain *d, dfn_t dfn,
                                       unsigned int *flush_flags);
 
+static inline int iommu_set_allocation(struct domain *d,
+                                       unsigned int nr_pages)
+{
+    return -EOPNOTSUPP;
+}
+
 #endif /* __ARCH_ARM_IOMMU_H__ */
 
 /*
diff --git a/xen/include/asm-x86/iommu.h b/xen/include/asm-x86/iommu.h
index 970eb06ffa..d086f564af 100644
--- a/xen/include/asm-x86/iommu.h
+++ b/xen/include/asm-x86/iommu.h
@@ -138,6 +138,8 @@ int pi_update_irte(const struct pi_desc *pi_desc, const struct pirq *pirq,
 int __must_check iommu_free_pgtables(struct domain *d);
 struct page_info *__must_check iommu_alloc_pgtable(struct domain *d);
 
+int __must_check iommu_set_allocation(struct domain *d, unsigned int nr_pages);
+
 #endif /* !__ARCH_X86_IOMMU_H__ */
 /*
  * Local variables:
diff --git a/xen/include/public/domctl.h b/xen/include/public/domctl.h
index 75e855625a..6402678838 100644
--- a/xen/include/public/domctl.h
+++ b/xen/include/public/domctl.h
@@ -1138,8 +1138,16 @@ struct xen_domctl_vuart_op {
 
 #define XEN_DOMCTL_IOMMU_INVALID 0
 
+#define XEN_DOMCTL_IOMMU_SET_ALLOCATION 1
+struct xen_domctl_iommu_set_allocation {
+    uint32_t nr_pages;
+};
+
 struct xen_domctl_iommu_ctl {
     uint32_t op; /* XEN_DOMCTL_IOMMU_* */
+    union {
+        struct xen_domctl_iommu_set_allocation set_allocation;
+    } u;
 };
 
 struct xen_domctl {
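To illustrate the toolstack side of the new sub-operation, the following
standalone sketch mirrors what libxl__arch_domain_create() and
xc_iommu_set_allocation() now do: convert 'iommu_memkb' into 4k pages and
populate the domctl payload. The struct definitions are trimmed models of the
public interface and the memkb/domid values are arbitrary examples.

/* Standalone model of filling in the XEN_DOMCTL_IOMMU_SET_ALLOCATION op. */
#include <stdint.h>
#include <stdio.h>
#include <string.h>

#define DIV_ROUNDUP(n, d) (((n) + (d) - 1) / (d))

#define XEN_DOMCTL_iommu_ctl            84
#define XEN_DOMCTL_IOMMU_SET_ALLOCATION 1

struct xen_domctl_iommu_set_allocation {
    uint32_t nr_pages;
};

struct xen_domctl_iommu_ctl {
    uint32_t op;                       /* XEN_DOMCTL_IOMMU_* */
    union {
        struct xen_domctl_iommu_set_allocation set_allocation;
    } u;
};

struct xen_domctl_model {              /* trimmed stand-in for struct xen_domctl */
    uint32_t cmd;
    uint32_t domain;
    union {
        struct xen_domctl_iommu_ctl iommu_ctl;
    } u;
};

int main(void)
{
    unsigned long iommu_memkb = 8216;  /* example IOMMU reservation in kB */
    unsigned int nr_pages = DIV_ROUNDUP(iommu_memkb, 4);
    struct xen_domctl_model domctl;

    memset(&domctl, 0, sizeof(domctl));
    domctl.cmd = XEN_DOMCTL_iommu_ctl;
    domctl.domain = 1;                 /* example domid */
    domctl.u.iommu_ctl.op = XEN_DOMCTL_IOMMU_SET_ALLOCATION;
    domctl.u.iommu_ctl.u.set_allocation.nr_pages = nr_pages;

    printf("%lu kB -> %u page-table pages\n", iommu_memkb, nr_pages);
    return 0;
}

An example reservation of 8216 kB rounds up to 2054 pages; in the real code
the resulting domctl is then handed to Xen via do_domctl().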
From patchwork Mon Oct 5 09:49:04 2020
From: Paul Durrant
To: xen-devel@lists.xenproject.org
Cc: Paul Durrant, Jan Beulich
Subject: [PATCH 4/5] iommu: set 'hap_pt_share' and 'need_sync' flags earlier in iommu_domain_init()
Date: Mon, 5 Oct 2020 10:49:04 +0100
Message-Id: <20201005094905.2929-5-paul@xen.org>
In-Reply-To: <20201005094905.2929-1-paul@xen.org>
References: <20201005094905.2929-1-paul@xen.org>

Set these flags prior to the calls to arch_iommu_domain_init() and the
implementation's init() method. There is no reason to hide this information
from those functions, and the value of 'hap_pt_share' will be needed by a
modification made to arch_iommu_domain_init() in a subsequent patch.
Signed-off-by: Paul Durrant
Reviewed-by: Julien Grall
---
Cc: Jan Beulich
---
 xen/drivers/passthrough/iommu.c | 16 ++++++----------
 1 file changed, 6 insertions(+), 10 deletions(-)

diff --git a/xen/drivers/passthrough/iommu.c b/xen/drivers/passthrough/iommu.c
index 642d5c8331..fd9705b3a9 100644
--- a/xen/drivers/passthrough/iommu.c
+++ b/xen/drivers/passthrough/iommu.c
@@ -174,15 +174,6 @@ int iommu_domain_init(struct domain *d, unsigned int opts)
     hd->node = NUMA_NO_NODE;
 #endif
 
-    ret = arch_iommu_domain_init(d);
-    if ( ret )
-        return ret;
-
-    hd->platform_ops = iommu_get_ops();
-    ret = hd->platform_ops->init(d);
-    if ( ret || is_system_domain(d) )
-        return ret;
-
     /*
      * Use shared page tables for HAP and IOMMU if the global option
      * is enabled (from which we can infer the h/w is capable) and
@@ -202,7 +193,12 @@ int iommu_domain_init(struct domain *d, unsigned int opts)
 
     ASSERT(!(hd->need_sync && hd->hap_pt_share));
 
-    return 0;
+    ret = arch_iommu_domain_init(d);
+    if ( ret )
+        return ret;
+
+    hd->platform_ops = iommu_get_ops();
+    return hd->platform_ops->init(d);
 }
 
 void __hwdom_init iommu_hwdom_init(struct domain *d)
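A minimal standalone model of why the ordering matters is shown below. It
deliberately simplifies the derivation of the flags (the real logic in
iommu_domain_init() considers more conditions); the point is only that once
'hap_pt_share' is settled before arch_iommu_domain_init() runs, that function
can key its behaviour off the flag, which is what the next patch relies on
via iommu_use_hap_pt().

/* Standalone model: flags are derived first, then the arch init consumes them. */
#include <stdbool.h>
#include <stdio.h>

struct domain_iommu_model {
    bool hap_pt_share;
    bool need_sync;
};

static int arch_iommu_domain_init(const struct domain_iommu_model *hd)
{
    /* With the reordering this test sees the final flag values. */
    if ( hd->hap_pt_share )
        printf("sharing EPT: no separate IOMMU page-table pool needed\n");
    else
        printf("not sharing EPT: size an IOMMU page-table pool\n");

    return 0;
}

static int iommu_domain_init(bool hap, bool global_share_opt)
{
    struct domain_iommu_model hd = {
        /* Simplified stand-in for the real flag derivation. */
        .hap_pt_share = hap && global_share_opt,
    };

    hd.need_sync = !hd.hap_pt_share;

    /* Flags are settled *before* the arch init runs. */
    return arch_iommu_domain_init(&hd);
}

int main(void)
{
    iommu_domain_init(true, true);
    iommu_domain_init(false, true);
    return 0;
}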
From patchwork Mon Oct 5 09:49:05 2020
From: Paul Durrant
To: xen-devel@lists.xenproject.org
Cc: Paul Durrant, Jan Beulich, Andrew Cooper, Roger Pau Monné, Wei Liu
Subject: [PATCH 5/5] x86 / iommu: create a dedicated pool of page-table pages
Date: Mon, 5 Oct 2020 10:49:05 +0100
Message-Id: <20201005094905.2929-6-paul@xen.org>
In-Reply-To: <20201005094905.2929-1-paul@xen.org>
References: <20201005094905.2929-1-paul@xen.org>

This patch, in a way analogous to HAP page allocation, creates a pool of
pages for use as IOMMU page-tables, the size of which is configured by a call
to iommu_set_allocation(). That call occurs once the tool-stack has
calculated the size of the pool and issues the XEN_DOMCTL_IOMMU_SET_ALLOCATION
sub-operation of the XEN_DOMCTL_iommu_ctl domctl.

However, some IOMMU mappings must be set up during domain_create(), before
the tool-stack has had a chance to make its calculation and issue the domctl.
Hence an initial hard-coded pool size of 256 pages is set for domains not
sharing EPT during the call to arch_iommu_domain_init().

NOTE: No pool is configured for the hardware or quarantine domains. They
      continue to allocate page-table pages on demand.
      The prototype of iommu_free_pgtables() is changed to return void, as it
      no longer needs to be re-startable.
Signed-off-by: Paul Durrant
---
Cc: Jan Beulich
Cc: Andrew Cooper
Cc: "Roger Pau Monné"
Cc: Wei Liu
---
 xen/arch/x86/domain.c               |   4 +-
 xen/drivers/passthrough/x86/iommu.c | 129 ++++++++++++++++++++++++----
 xen/include/asm-x86/iommu.h         |   5 +-
 3 files changed, 116 insertions(+), 22 deletions(-)

diff --git a/xen/arch/x86/domain.c b/xen/arch/x86/domain.c
index 7e16d49bfd..101ef4aba5 100644
--- a/xen/arch/x86/domain.c
+++ b/xen/arch/x86/domain.c
@@ -2304,7 +2304,9 @@ int domain_relinquish_resources(struct domain *d)
 
     PROGRESS(iommu_pagetables):
 
-        ret = iommu_free_pgtables(d);
+        iommu_free_pgtables(d);
+
+        ret = iommu_set_allocation(d, 0);
         if ( ret )
             return ret;
 
diff --git a/xen/drivers/passthrough/x86/iommu.c b/xen/drivers/passthrough/x86/iommu.c
index b168073f10..de0cc52489 100644
--- a/xen/drivers/passthrough/x86/iommu.c
+++ b/xen/drivers/passthrough/x86/iommu.c
@@ -134,21 +134,106 @@ void __hwdom_init arch_iommu_check_autotranslated_hwdom(struct domain *d)
         panic("PVH hardware domain iommu must be set in 'strict' mode\n");
 }
 
-int iommu_set_allocation(struct domain *d, unsigned nr_pages)
+static int destroy_pgtable(struct domain *d)
 {
+    struct domain_iommu *hd = dom_iommu(d);
+    struct page_info *pg;
+
+    if ( !hd->arch.pgtables.nr )
+    {
+        ASSERT_UNREACHABLE();
+        return -ENOENT;
+    }
+
+    pg = page_list_remove_head(&hd->arch.pgtables.free_list);
+    if ( !pg )
+        return -EBUSY;
+
+    hd->arch.pgtables.nr--;
+    free_domheap_page(pg);
+
     return 0;
 }
 
+static int create_pgtable(struct domain *d)
+{
+    struct domain_iommu *hd = dom_iommu(d);
+    unsigned int memflags = 0;
+    struct page_info *pg;
+
+#ifdef CONFIG_NUMA
+    if ( hd->node != NUMA_NO_NODE )
+        memflags = MEMF_node(hd->node);
+#endif
+
+    pg = alloc_domheap_page(NULL, memflags);
+    if ( !pg )
+        return -ENOMEM;
+
+    page_list_add(pg, &hd->arch.pgtables.free_list);
+    hd->arch.pgtables.nr++;
+
+    return 0;
+}
+
+static int set_allocation(struct domain *d, unsigned int nr_pages,
+                          bool allow_preempt)
+{
+    struct domain_iommu *hd = dom_iommu(d);
+    unsigned int done = 0;
+    int rc = 0;
+
+    spin_lock(&hd->arch.pgtables.lock);
+
+    while ( !rc )
+    {
+        if ( hd->arch.pgtables.nr < nr_pages )
+            rc = create_pgtable(d);
+        else if ( hd->arch.pgtables.nr > nr_pages )
+            rc = destroy_pgtable(d);
+        else
+            break;
+
+        if ( allow_preempt && !rc && !(++done & 0xff) &&
+             general_preempt_check() )
+            rc = -ERESTART;
+    }
+
+    spin_unlock(&hd->arch.pgtables.lock);
+
+    return rc;
+}
+
+int iommu_set_allocation(struct domain *d, unsigned int nr_pages)
+{
+    return set_allocation(d, nr_pages, true);
+}
+
+/*
+ * Some IOMMU mappings are set up during domain_create() before the tool-
+ * stack has a chance to calculate and set the appropriate page-table
+ * allocation. A hard-coded initial allocation covers this gap.
+ */
+#define INITIAL_ALLOCATION 256
+
 int arch_iommu_domain_init(struct domain *d)
 {
     struct domain_iommu *hd = dom_iommu(d);
 
     spin_lock_init(&hd->arch.mapping_lock);
 
+    INIT_PAGE_LIST_HEAD(&hd->arch.pgtables.free_list);
     INIT_PAGE_LIST_HEAD(&hd->arch.pgtables.list);
     spin_lock_init(&hd->arch.pgtables.lock);
 
-    return 0;
+    /*
+     * The hardware and quarantine domains are not subject to a quota
+     * and domains sharing EPT do not require any allocation.
+     */
+    if ( is_hardware_domain(d) || d == dom_io || iommu_use_hap_pt(d) )
+        return 0;
+
+    return set_allocation(d, INITIAL_ALLOCATION, false);
 }
 
 void arch_iommu_domain_destroy(struct domain *d)
@@ -265,38 +350,45 @@ void __hwdom_init arch_iommu_hwdom_init(struct domain *d)
         return;
 }
 
-int iommu_free_pgtables(struct domain *d)
+void iommu_free_pgtables(struct domain *d)
 {
     struct domain_iommu *hd = dom_iommu(d);
-    struct page_info *pg;
-    unsigned int done = 0;
 
-    while ( (pg = page_list_remove_head(&hd->arch.pgtables.list)) )
-    {
-        free_domheap_page(pg);
+    spin_lock(&hd->arch.pgtables.lock);
 
-        if ( !(++done & 0xff) && general_preempt_check() )
-            return -ERESTART;
-    }
+    page_list_splice(&hd->arch.pgtables.list, &hd->arch.pgtables.free_list);
+    INIT_PAGE_LIST_HEAD(&hd->arch.pgtables.list);
 
-    return 0;
+    spin_unlock(&hd->arch.pgtables.lock);
 }
 
 struct page_info *iommu_alloc_pgtable(struct domain *d)
 {
     struct domain_iommu *hd = dom_iommu(d);
-    unsigned int memflags = 0;
     struct page_info *pg;
     void *p;
 
-#ifdef CONFIG_NUMA
-    if ( hd->node != NUMA_NO_NODE )
-        memflags = MEMF_node(hd->node);
-#endif
+    spin_lock(&hd->arch.pgtables.lock);
 
-    pg = alloc_domheap_page(NULL, memflags);
+ again:
+    pg = page_list_remove_head(&hd->arch.pgtables.free_list);
     if ( !pg )
+    {
+        /*
+         * The hardware and quarantine domains are not subject to a quota
+         * so create page-table pages on demand.
+         */
+        if ( is_hardware_domain(d) || d == dom_io )
+        {
+            int rc = create_pgtable(d);
+
+            if ( !rc )
+                goto again;
+        }
+
+        spin_unlock(&hd->arch.pgtables.lock);
         return NULL;
+    }
 
     p = __map_domain_page(pg);
     clear_page(p);
@@ -306,7 +398,6 @@ struct page_info *iommu_alloc_pgtable(struct domain *d)
 
     unmap_domain_page(p);
 
-    spin_lock(&hd->arch.pgtables.lock);
     page_list_add(pg, &hd->arch.pgtables.list);
     spin_unlock(&hd->arch.pgtables.lock);
 
diff --git a/xen/include/asm-x86/iommu.h b/xen/include/asm-x86/iommu.h
index d086f564af..3991b5601b 100644
--- a/xen/include/asm-x86/iommu.h
+++ b/xen/include/asm-x86/iommu.h
@@ -47,7 +47,8 @@ struct arch_iommu
 {
     spinlock_t mapping_lock; /* io page table lock */
     struct {
-        struct page_list_head list;
+        struct page_list_head list, free_list;
+        unsigned int nr;
         spinlock_t lock;
     } pgtables;
 
@@ -135,7 +136,7 @@ int pi_update_irte(const struct pi_desc *pi_desc, const struct pirq *pirq,
         iommu_vcall(ops, sync_cache, addr, size);       \
 })
 
-int __must_check iommu_free_pgtables(struct domain *d);
+void iommu_free_pgtables(struct domain *d);
 struct page_info *__must_check iommu_alloc_pgtable(struct domain *d);
 
 int __must_check iommu_set_allocation(struct domain *d, unsigned int nr_pages);
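The pool semantics introduced above can be modelled in a few lines of
standalone C. The sketch below mirrors set_allocation(): the pool is grown or
shrunk towards the requested quota, with a periodic check standing in for
general_preempt_check() and the caller re-issuing the operation on restart.
RC_RESTART, RC_NOENT and the simplified pool structure are local stand-ins,
not part of the patch.

/* Standalone model of the IOMMU page-table pool quota logic. */
#include <stdbool.h>
#include <stdio.h>

#define RC_RESTART (-85)  /* stand-in for Xen's -ERESTART */
#define RC_NOENT   (-2)   /* stand-in for -ENOENT */

struct pgtable_pool {
    unsigned int nr;      /* pages currently in the pool */
};

static int create_pgtable(struct pgtable_pool *pool)
{
    pool->nr++;           /* models alloc_domheap_page() + free-list add */
    return 0;
}

static int destroy_pgtable(struct pgtable_pool *pool)
{
    if ( !pool->nr )
        return RC_NOENT;
    pool->nr--;           /* models free-list remove + free_domheap_page() */
    return 0;
}

static int set_allocation(struct pgtable_pool *pool, unsigned int nr_pages,
                          bool allow_preempt)
{
    unsigned int done = 0;
    int rc = 0;

    while ( !rc )
    {
        if ( pool->nr < nr_pages )
            rc = create_pgtable(pool);
        else if ( pool->nr > nr_pages )
            rc = destroy_pgtable(pool);
        else
            break;

        /* Every 256 iterations, offer to restart (continuation point). */
        if ( allow_preempt && !rc && !(++done & 0xff) )
            rc = RC_RESTART;
    }

    return rc;
}

int main(void)
{
    struct pgtable_pool pool = { .nr = 0 };
    int rc;

    /* Initial hard-coded allocation, as made from arch_iommu_domain_init(). */
    set_allocation(&pool, 256, false);

    /* Later, the toolstack-provided quota; re-issue on restart. */
    do {
        rc = set_allocation(&pool, 2054, true);
    } while ( rc == RC_RESTART );

    printf("pool size: %u pages (rc %d)\n", pool.nr, rc);
    return 0;
}

In the real code the initial 256-page allocation is made without preemption
during domain creation, and the toolstack-sized quota arrives later via the
XEN_DOMCTL_IOMMU_SET_ALLOCATION domctl, with -ERESTART turned into a
hypercall continuation.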