From patchwork Tue Sep 24 15:30:20 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Oleksandr Tyshchenko X-Patchwork-Id: 11159127 Return-Path: Received: from mail.kernel.org (pdx-korg-mail-1.web.codeaurora.org [172.30.200.123]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id 857B913B1 for ; Tue, 24 Sep 2019 15:32:44 +0000 (UTC) Received: from lists.xenproject.org (lists.xenproject.org [192.237.175.120]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by mail.kernel.org (Postfix) with ESMTPS id 563062168B for ; Tue, 24 Sep 2019 15:32:44 +0000 (UTC) Authentication-Results: mail.kernel.org; dkim=fail reason="signature verification failed" (2048-bit key) header.d=gmail.com header.i=@gmail.com header.b="jCBvhVL5" DMARC-Filter: OpenDMARC Filter v1.3.2 mail.kernel.org 563062168B Authentication-Results: mail.kernel.org; dmarc=fail (p=none dis=none) header.from=gmail.com Authentication-Results: mail.kernel.org; spf=none smtp.mailfrom=xen-devel-bounces@lists.xenproject.org Received: from localhost ([127.0.0.1] helo=lists.xenproject.org) by lists.xenproject.org with esmtp (Exim 4.89) (envelope-from ) id 1iCmmG-0007mz-Nw; Tue, 24 Sep 2019 15:30:56 +0000 Received: from us1-rack-iad1.inumbo.com ([172.99.69.81]) by lists.xenproject.org with esmtp (Exim 4.89) (envelope-from ) id 1iCmmF-0007mk-Vj for xen-devel@lists.xenproject.org; Tue, 24 Sep 2019 15:30:56 +0000 X-Inumbo-ID: 49af3b80-dee0-11e9-8628-bc764e2007e4 Received: from mail-lf1-x144.google.com (unknown [2a00:1450:4864:20::144]) by localhost (Halon) with ESMTPS id 49af3b80-dee0-11e9-8628-bc764e2007e4; Tue, 24 Sep 2019 15:30:50 +0000 (UTC) Received: by mail-lf1-x144.google.com with SMTP id t8so1732209lfc.13 for ; Tue, 24 Sep 2019 08:30:50 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025; h=from:to:cc:subject:date:message-id:in-reply-to:references; bh=jwXiyhMbIwZuEAyCZXPBQ4ZYbNBEOMdJ8KLzMT9+Tn4=; b=jCBvhVL5Ca3EPviq1hF8wK5fDyp/4Qmjatxv42rP/6hd1hShmR83HrVfJVfpdMq3tz CzP6kguUfbEdEv0+U3L2eWN6MAfFHRkCQ1groD5KiZqxpSbSFo8F1nOAjMj7866E2209 b6lf39kIQt06KEt3UfwPNNEr26lvWQHOqqquoUHjAlS9db8fV5zeeRchgnfNYpvirfsh dBsvnsTleNDk56yghd9ge8nJPiCriyeOrIbQ3JdNNLdm/MYj96iHQN0vy5wHGECJP96p ySHol7d2eTz9z9dZzGoeU3eRNVG1PMyzgwMZ7REvJjH6bH1CmIeetJmRzmjoQI+hgr5+ zjoA== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to :references; bh=jwXiyhMbIwZuEAyCZXPBQ4ZYbNBEOMdJ8KLzMT9+Tn4=; b=Du1cUL6BoriYvaL/uSw/gdYKJu0isRIlhmbvoJEwCNJYFJaMp7OEswGhXJXd6YNgkL Yf6IeBrbacUxnYONcPoDeP2v/odNOTNWsq4xO8NeVSq2w6FeWFO2uA6Z/irSa8dolOPk x1XdupqUWAft/cHi+6ksWEF9lYlYA/zGPrvIW8z0j10prye6IMtPAMEp8/8nV7+ByGdr 8G6voFCTgv9546Mij8Yx1UP8YvgFybUSflwd2NVxs/xp1QREU72yKiElU7S/I+MVgSYD IULFoH3VnKGbqZXYsvrMuAy1kORKdKYyKmEvvNok3TxITEoBgk8K6EKDzMExWbMQyFl4 2qOQ== X-Gm-Message-State: APjAAAUgj3xAnRlwChI3W9DQL0mytyaFAiBfGHRe2FQfDST4sprNvvPQ 7xuEyGZyG1Ajo06PJzimx0+giP7tzxo= X-Google-Smtp-Source: APXvYqyXjFTMeIjPtvc5DfU9Xwf+Z0jqaOtA/6PaG1qMMesmZTGF8D0kNrl49Fkzkd53oUISpX5hRQ== X-Received: by 2002:a05:6512:419:: with SMTP id u25mr2238873lfk.165.1569339049600; Tue, 24 Sep 2019 08:30:49 -0700 (PDT) Received: from otyshchenko.kyiv.epam.com (ll-74.141.223.85.sovam.net.ua. 
[85.223.141.74]) by smtp.gmail.com with ESMTPSA id z128sm525818lfa.1.2019.09.24.08.30.48 (version=TLS1_2 cipher=ECDHE-RSA-AES128-SHA bits=128/128); Tue, 24 Sep 2019 08:30:49 -0700 (PDT) From: Oleksandr Tyshchenko To: xen-devel@lists.xenproject.org Date: Tue, 24 Sep 2019 18:30:20 +0300 Message-Id: <1569339027-19484-2-git-send-email-olekstysh@gmail.com> X-Mailer: git-send-email 2.7.4 In-Reply-To: <1569339027-19484-1-git-send-email-olekstysh@gmail.com> References: <1569339027-19484-1-git-send-email-olekstysh@gmail.com> Subject: [Xen-devel] [PATCH V5 1/8] iommu/arm: Add iommu_helpers.c file to keep common for IOMMUs stuff X-BeenThere: xen-devel@lists.xenproject.org X-Mailman-Version: 2.1.23 Precedence: list List-Id: Xen developer discussion List-Unsubscribe: , List-Post: List-Help: List-Subscribe: , Cc: Oleksandr Tyshchenko , julien.grall@arm.com, sstabellini@kernel.org, volodymyr_babchuk@epam.com MIME-Version: 1.0 Errors-To: xen-devel-bounces@lists.xenproject.org Sender: "Xen-devel" From: Oleksandr Tyshchenko Introduce a separate file to keep various helpers which could be used by more than one IOMMU driver in order not to duplicate code. The first candidates to be moved to the new file are SMMU driver's "map_page/unmap_page" callbacks. These callbacks neither contain any SMMU specific info nor perform any SMMU specific actions and are going to be the same across all IOMMU drivers which H/W IP shares P2M with the CPU like SMMU does. So, move callbacks to iommu_helpers.c for the upcoming IPMMU driver to be able to re-use them. Signed-off-by: Oleksandr Tyshchenko Acked-by: Julien Grall --- Changes V3 -> V4: - added Julien's A-b - fixed Grammatical error - clarified comment in a header Changes V2 -> V3: - put headers in alphabetical order - retained Copyright from the SMMU code --- xen/drivers/passthrough/arm/Makefile | 2 +- xen/drivers/passthrough/arm/iommu_helpers.c | 84 +++++++++++++++++++++++++++++ xen/drivers/passthrough/arm/smmu.c | 48 +---------------- xen/include/asm-arm/iommu.h | 10 ++++ 4 files changed, 97 insertions(+), 47 deletions(-) create mode 100644 xen/drivers/passthrough/arm/iommu_helpers.c diff --git a/xen/drivers/passthrough/arm/Makefile b/xen/drivers/passthrough/arm/Makefile index b3efcfd..4abb87a 100644 --- a/xen/drivers/passthrough/arm/Makefile +++ b/xen/drivers/passthrough/arm/Makefile @@ -1,2 +1,2 @@ -obj-y += iommu.o +obj-y += iommu.o iommu_helpers.o obj-$(CONFIG_ARM_SMMU) += smmu.o diff --git a/xen/drivers/passthrough/arm/iommu_helpers.c b/xen/drivers/passthrough/arm/iommu_helpers.c new file mode 100644 index 0000000..a36e2b8 --- /dev/null +++ b/xen/drivers/passthrough/arm/iommu_helpers.c @@ -0,0 +1,84 @@ +/* + * xen/drivers/passthrough/arm/iommu_helpers.c + * + * Contains various helpers to be used by IOMMU drivers. + * + * Based on Xen's SMMU driver: + * xen/drivers/passthrough/arm/smmu.c + * + * Copyright (C) 2014 Linaro Limited. + * + * Copyright (C) 2019 EPAM Systems Inc. + * + * This program is free software; you can redistribute it and/or + * modify it under the terms and conditions of the GNU General Public + * License, version 2, as published by the Free Software Foundation. + * + * This program is distributed in the hope that it will be useful, + * but WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU + * General Public License for more details. + * + * You should have received a copy of the GNU General Public + * License along with this program; If not, see . 
+ */ + +#include +#include +#include + +#include + +/* Should only be used if P2M Table is shared between the CPU and the IOMMU. */ +int __must_check arm_iommu_map_page(struct domain *d, dfn_t dfn, mfn_t mfn, + unsigned int flags, + unsigned int *flush_flags) +{ + p2m_type_t t; + + /* + * Grant mappings can be used for DMA requests. The dev_bus_addr + * returned by the hypercall is the MFN (not the IPA). For device + * protected by an IOMMU, Xen needs to add a 1:1 mapping in the domain + * p2m to allow DMA request to work. + * This is only valid when the domain is directed mapped. Hence this + * function should only be used by gnttab code with gfn == mfn == dfn. + */ + BUG_ON(!is_domain_direct_mapped(d)); + BUG_ON(mfn_x(mfn) != dfn_x(dfn)); + + /* We only support readable and writable flags */ + if ( !(flags & (IOMMUF_readable | IOMMUF_writable)) ) + return -EINVAL; + + t = (flags & IOMMUF_writable) ? p2m_iommu_map_rw : p2m_iommu_map_ro; + + /* + * The function guest_physmap_add_entry replaces the current mapping + * if there is already one... + */ + return guest_physmap_add_entry(d, _gfn(dfn_x(dfn)), _mfn(dfn_x(dfn)), 0, t); +} + +/* Should only be used if P2M Table is shared between the CPU and the IOMMU. */ +int __must_check arm_iommu_unmap_page(struct domain *d, dfn_t dfn, + unsigned int *flush_flags) +{ + /* + * This function should only be used by gnttab code when the domain + * is direct mapped (i.e. gfn == mfn == dfn). + */ + if ( !is_domain_direct_mapped(d) ) + return -EINVAL; + + return guest_physmap_remove_page(d, _gfn(dfn_x(dfn)), _mfn(dfn_x(dfn)), 0); +} + +/* + * Local variables: + * mode: C + * c-file-style: "BSD" + * c-basic-offset: 4 + * indent-tabs-mode: nil + * End: + */ diff --git a/xen/drivers/passthrough/arm/smmu.c b/xen/drivers/passthrough/arm/smmu.c index f151b9f..8ae986a 100644 --- a/xen/drivers/passthrough/arm/smmu.c +++ b/xen/drivers/passthrough/arm/smmu.c @@ -2734,50 +2734,6 @@ static void arm_smmu_iommu_domain_teardown(struct domain *d) xfree(xen_domain); } -static int __must_check arm_smmu_map_page(struct domain *d, dfn_t dfn, - mfn_t mfn, unsigned int flags, - unsigned int *flush_flags) -{ - p2m_type_t t; - - /* - * Grant mappings can be used for DMA requests. The dev_bus_addr - * returned by the hypercall is the MFN (not the IPA). For device - * protected by an IOMMU, Xen needs to add a 1:1 mapping in the domain - * p2m to allow DMA request to work. - * This is only valid when the domain is directed mapped. Hence this - * function should only be used by gnttab code with gfn == mfn == dfn. - */ - BUG_ON(!is_domain_direct_mapped(d)); - BUG_ON(mfn_x(mfn) != dfn_x(dfn)); - - /* We only support readable and writable flags */ - if (!(flags & (IOMMUF_readable | IOMMUF_writable))) - return -EINVAL; - - t = (flags & IOMMUF_writable) ? p2m_iommu_map_rw : p2m_iommu_map_ro; - - /* - * The function guest_physmap_add_entry replaces the current mapping - * if there is already one... - */ - return guest_physmap_add_entry(d, _gfn(dfn_x(dfn)), _mfn(dfn_x(dfn)), - 0, t); -} - -static int __must_check arm_smmu_unmap_page(struct domain *d, dfn_t dfn, - unsigned int *flush_flags) -{ - /* - * This function should only be used by gnttab code when the domain - * is direct mapped (i.e. gfn == mfn == dfn). 
- */ - if ( !is_domain_direct_mapped(d) ) - return -EINVAL; - - return guest_physmap_remove_page(d, _gfn(dfn_x(dfn)), _mfn(dfn_x(dfn)), 0); -} - static const struct iommu_ops arm_smmu_iommu_ops = { .init = arm_smmu_iommu_domain_init, .hwdom_init = arm_smmu_iommu_hwdom_init, @@ -2786,8 +2742,8 @@ static const struct iommu_ops arm_smmu_iommu_ops = { .iotlb_flush_all = arm_smmu_iotlb_flush_all, .assign_device = arm_smmu_assign_dev, .reassign_device = arm_smmu_reassign_dev, - .map_page = arm_smmu_map_page, - .unmap_page = arm_smmu_unmap_page, + .map_page = arm_iommu_map_page, + .unmap_page = arm_iommu_unmap_page, }; static __init const struct arm_smmu_device *find_smmu(const struct device *dev) diff --git a/xen/include/asm-arm/iommu.h b/xen/include/asm-arm/iommu.h index 904c9ae..11dedba 100644 --- a/xen/include/asm-arm/iommu.h +++ b/xen/include/asm-arm/iommu.h @@ -26,6 +26,16 @@ struct arch_iommu const struct iommu_ops *iommu_get_ops(void); void iommu_set_ops(const struct iommu_ops *ops); +/* + * The mapping helpers below should only be used if P2M Table is shared + * between the CPU and the IOMMU. + */ +int __must_check arm_iommu_map_page(struct domain *d, dfn_t dfn, mfn_t mfn, + unsigned int flags, + unsigned int *flush_flags); +int __must_check arm_iommu_unmap_page(struct domain *d, dfn_t dfn, + unsigned int *flush_flags); + #endif /* __ARCH_ARM_IOMMU_H__ */ /* From patchwork Tue Sep 24 15:30:21 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Oleksandr Tyshchenko X-Patchwork-Id: 11159125 Return-Path: Received: from mail.kernel.org (pdx-korg-mail-1.web.codeaurora.org [172.30.200.123]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id AE11517EE for ; Tue, 24 Sep 2019 15:32:29 +0000 (UTC) Received: from lists.xenproject.org (lists.xenproject.org [192.237.175.120]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by mail.kernel.org (Postfix) with ESMTPS id 89A002168B for ; Tue, 24 Sep 2019 15:32:29 +0000 (UTC) Authentication-Results: mail.kernel.org; dkim=fail reason="signature verification failed" (2048-bit key) header.d=gmail.com header.i=@gmail.com header.b="n2G6jw8L" DMARC-Filter: OpenDMARC Filter v1.3.2 mail.kernel.org 89A002168B Authentication-Results: mail.kernel.org; dmarc=fail (p=none dis=none) header.from=gmail.com Authentication-Results: mail.kernel.org; spf=none smtp.mailfrom=xen-devel-bounces@lists.xenproject.org Received: from localhost ([127.0.0.1] helo=lists.xenproject.org) by lists.xenproject.org with esmtp (Exim 4.89) (envelope-from ) id 1iCmmM-0007oZ-1K; Tue, 24 Sep 2019 15:31:02 +0000 Received: from us1-rack-iad1.inumbo.com ([172.99.69.81]) by lists.xenproject.org with esmtp (Exim 4.89) (envelope-from ) id 1iCmmK-0007oI-VV for xen-devel@lists.xenproject.org; Tue, 24 Sep 2019 15:31:00 +0000 X-Inumbo-ID: 4a50ba46-dee0-11e9-97fb-bc764e2007e4 Received: from mail-lf1-x141.google.com (unknown [2a00:1450:4864:20::141]) by localhost (Halon) with ESMTPS id 4a50ba46-dee0-11e9-97fb-bc764e2007e4; Tue, 24 Sep 2019 15:30:51 +0000 (UTC) Received: by mail-lf1-x141.google.com with SMTP id y127so1770733lfc.0 for ; Tue, 24 Sep 2019 08:30:51 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025; h=from:to:cc:subject:date:message-id:in-reply-to:references; bh=xDq40Au2Mh7HyP1hj5EVpnkLX/Wo7KofiZYfsJMVXjI=; b=n2G6jw8Ln7O7S1FaAg88uaPetKqNjc3LUqU1/u77mIGZ5hBQ45/Uu5jz4qHwC2zzWT 
bCgQvrou2/C7+kTNiNn6KqnVbe2HjN1DSxJJ5Zbqy4CGfSXaDrzjWBEI0tyRbUph7QQ2 rtjkZA05cj5ZMmWehZcNI0FhpmAseboczMUlChcYM49UIrasTYFWhroqdwGQ6vGN0V9Y FrQ6INdDRC3wVJq8zivzDgGPzTF5YX8RsKTGm3Dj5hi99Hfjx4c5tk6g6Dffmk2vnjaA /zlQiE1cMQJ0Ym5fhrPpM1Kjeoj/2DqXiUBygXV/yzRueDKa+MSj85gva7FZygZS7XvP VBoQ== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to :references; bh=xDq40Au2Mh7HyP1hj5EVpnkLX/Wo7KofiZYfsJMVXjI=; b=fxto3CJVv/f036Jp6Ew9aAu7RsSRvNkD5y43VrdI+B2P0fqs2vabavwnOnnKq7d7XO PKkKSyvTfqR7LQKavHUzxlDuGaPAz37+TIpgkkiKBfpsmhwpPviTYiX7Sq7ktTXveLrx roggza4PHoqhJ3SkK9M7qg9PdBYV6mlN7ldjFtwdJyVhvv8xKGnxw4fCIf72W6Cws4oZ XdJxtxu21t7sukyxWqEHatMEfryqwvu/X7t6RSqxK2Np1T6UoMSN9DAh6V0vtONfc7ZP /dhMvhbqz5LmRWsYnaIvnPSOgw9dVsV/qkJTLucBA36p5bfuFk6GtPOX9G3C0IkWT8l1 ph6w== X-Gm-Message-State: APjAAAVLpzH59uwKh1iMP6oFxlgM0TYg/HxtkiyLupuhD2BlBnafy1hs nmcDndXUFCNx1pda8R7RdAuRUuXMl/Q= X-Google-Smtp-Source: APXvYqzWnom2pf5wI0dodtFvA0qnCGVcPInR0foBKtLLIvWuqw82toki2gmmIWzOA2werorF22P5Pg== X-Received: by 2002:ac2:483a:: with SMTP id 26mr2240195lft.188.1569339050626; Tue, 24 Sep 2019 08:30:50 -0700 (PDT) Received: from otyshchenko.kyiv.epam.com (ll-74.141.223.85.sovam.net.ua. [85.223.141.74]) by smtp.gmail.com with ESMTPSA id z128sm525818lfa.1.2019.09.24.08.30.49 (version=TLS1_2 cipher=ECDHE-RSA-AES128-SHA bits=128/128); Tue, 24 Sep 2019 08:30:50 -0700 (PDT) From: Oleksandr Tyshchenko To: xen-devel@lists.xenproject.org Date: Tue, 24 Sep 2019 18:30:21 +0300 Message-Id: <1569339027-19484-3-git-send-email-olekstysh@gmail.com> X-Mailer: git-send-email 2.7.4 In-Reply-To: <1569339027-19484-1-git-send-email-olekstysh@gmail.com> References: <1569339027-19484-1-git-send-email-olekstysh@gmail.com> Subject: [Xen-devel] [PATCH V5 2/8] iommu/arm: Add ability to handle deferred probing request X-BeenThere: xen-devel@lists.xenproject.org X-Mailman-Version: 2.1.23 Precedence: list List-Id: Xen developer discussion List-Unsubscribe: , List-Post: List-Help: List-Subscribe: , Cc: Oleksandr Tyshchenko , julien.grall@arm.com, sstabellini@kernel.org, volodymyr_babchuk@epam.com MIME-Version: 1.0 Errors-To: xen-devel-bounces@lists.xenproject.org Sender: "Xen-devel" From: Oleksandr Tyshchenko This patch adds minimal required support to General IOMMU framework to be able to handle a case when IOMMU driver requesting deferred probing for a device. In order not to pull Linux's error code (-EPROBE_DEFER) to Xen we have chosen -EAGAIN to be used for indicating that device probing is deferred. This is needed for the upcoming IPMMU driver which may request deferred probing depending on what device will be probed the first (there is some dependency between these devices, Root device must be registered before Cache devices. If not the case, driver will deny further Cache device probes until Root device is registered). As we can't guarantee a fixed pre-defined order for the device nodes in DT, we need to be ready for the situation where devices being probed in "any" order. 
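(Illustration only, not part of this patch: roughly how a driver's init callback could request deferred probing with the -EAGAIN convention. The hyp_iommu_* names and the Root/Cache check helpers are made up for the sketch.)

/* Hypothetical driver init callback requesting deferred probing. */
static bool root_device_registered;

static int hyp_iommu_init(struct dt_device_node *np, const void *data)
{
    if ( hyp_iommu_is_cache_device(np) && !root_device_registered )
        /* Root device is not probed yet, ask the framework to retry later. */
        return -EAGAIN;

    /* ... usual probing goes here ... */

    if ( hyp_iommu_is_root_device(np) )
        root_device_registered = true;

    return 0;
}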
Signed-off-by: Oleksandr Tyshchenko Reviewed-by: Julien Grall --- Changes V4 -> V5: - added Julien's R-b - fixed Grammatical error Changes V3 -> V4: - moved changes related to the "headers ordering" to a separate patch - added explaination why domain_list is re-used - removed the unnecessary pair of outermost parentheses when checking the list_empty Changes V2 -> V3: - removed deferred_probe field from struct dt_device_node, re-used domain_list instead - documented domain_list usage - added ASSERT to check that np->domain_list is empty before re-using it - put deferred_probe_list to init section - used more strict logic regarding processing devices in the deferred list - added more comments to code - put headers in alphabetical order --- xen/drivers/passthrough/arm/iommu.c | 56 +++++++++++++++++++++++++++++++++++-- xen/include/asm-arm/device.h | 6 +++- xen/include/xen/device_tree.h | 7 +++++ 3 files changed, 66 insertions(+), 3 deletions(-) diff --git a/xen/drivers/passthrough/arm/iommu.c b/xen/drivers/passthrough/arm/iommu.c index 2678ef5..76e1152 100644 --- a/xen/drivers/passthrough/arm/iommu.c +++ b/xen/drivers/passthrough/arm/iommu.c @@ -21,6 +21,12 @@ #include +/* + * Deferred probe list is used to keep track of devices for which driver + * requested deferred probing (returned -EAGAIN). + */ +static __initdata LIST_HEAD(deferred_probe_list); + static const struct iommu_ops *iommu_ops; const struct iommu_ops *iommu_get_ops(void) @@ -43,7 +49,7 @@ void __init iommu_set_ops(const struct iommu_ops *ops) int __init iommu_hardware_setup(void) { - struct dt_device_node *np; + struct dt_device_node *np, *tmp; int rc; unsigned int num_iommus = 0; @@ -52,6 +58,21 @@ int __init iommu_hardware_setup(void) rc = device_init(np, DEVICE_IOMMU, NULL); if ( !rc ) num_iommus++; + else if ( rc == -EAGAIN ) + { + /* + * Nobody should use device's domain_list at such early stage, + * so we can re-use it to link the device in the deferred list to + * avoid introducing extra list_head field in struct dt_device_node. + */ + ASSERT(list_empty(&np->domain_list)); + + /* + * Driver requested deferred probing, so add this device to + * the deferred list for further processing. + */ + list_add(&np->domain_list, &deferred_probe_list); + } /* * Ignore the following error codes: * - EBADF: Indicate the current is not an IOMMU @@ -62,7 +83,38 @@ int __init iommu_hardware_setup(void) return rc; } - return ( num_iommus > 0 ) ? 0 : -ENODEV; + /* Return immediately if there are no initialized devices. */ + if ( !num_iommus ) + return list_empty(&deferred_probe_list) ? -ENODEV : -EAGAIN; + + rc = 0; + + /* + * Process devices in the deferred list if it is not empty. + * Check that at least one device is initialized at each loop, otherwise + * we may get an infinite loop. Also stop processing if we got an error + * other than -EAGAIN. + */ + while ( !list_empty(&deferred_probe_list) && num_iommus ) + { + num_iommus = 0; + + list_for_each_entry_safe ( np, tmp, &deferred_probe_list, domain_list ) + { + rc = device_init(np, DEVICE_IOMMU, NULL); + if ( !rc ) + { + num_iommus++; + + /* Remove initialized device from the deferred list. 
*/ + list_del_init(&np->domain_list); + } + else if ( rc != -EAGAIN ) + return rc; + } + } + + return rc; } void __hwdom_init arch_iommu_check_autotranslated_hwdom(struct domain *d) diff --git a/xen/include/asm-arm/device.h b/xen/include/asm-arm/device.h index 63a0f36..ee1c3bc 100644 --- a/xen/include/asm-arm/device.h +++ b/xen/include/asm-arm/device.h @@ -44,7 +44,11 @@ struct device_desc { enum device_class class; /* List of devices supported by this driver */ const struct dt_device_match *dt_match; - /* Device initialization */ + /* + * Device initialization. + * + * -EAGAIN is used to indicate that device probing is deferred. + */ int (*init)(struct dt_device_node *dev, const void *data); }; diff --git a/xen/include/xen/device_tree.h b/xen/include/xen/device_tree.h index 9a7a8f2..f2ad22b 100644 --- a/xen/include/xen/device_tree.h +++ b/xen/include/xen/device_tree.h @@ -92,6 +92,13 @@ struct dt_device_node { /* IOMMU specific fields */ bool is_protected; + /* + * The main purpose of this list is to link the structure in the list + * of devices assigned to domain. + * + * Boot code (iommu_hardware_setup) re-uses this list to link the structure + * in the list of devices for which driver requested deferred probing. + */ struct list_head domain_list; struct device dev; From patchwork Tue Sep 24 15:30:22 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Oleksandr Tyshchenko X-Patchwork-Id: 11159133 Return-Path: Received: from mail.kernel.org (pdx-korg-mail-1.web.codeaurora.org [172.30.200.123]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id AE8DF13B1 for ; Tue, 24 Sep 2019 15:32:58 +0000 (UTC) Received: from lists.xenproject.org (lists.xenproject.org [192.237.175.120]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by mail.kernel.org (Postfix) with ESMTPS id 7D0CD207FD for ; Tue, 24 Sep 2019 15:32:58 +0000 (UTC) Authentication-Results: mail.kernel.org; dkim=fail reason="signature verification failed" (2048-bit key) header.d=gmail.com header.i=@gmail.com header.b="dN7PTuPE" DMARC-Filter: OpenDMARC Filter v1.3.2 mail.kernel.org 7D0CD207FD Authentication-Results: mail.kernel.org; dmarc=fail (p=none dis=none) header.from=gmail.com Authentication-Results: mail.kernel.org; spf=none smtp.mailfrom=xen-devel-bounces@lists.xenproject.org Received: from localhost ([127.0.0.1] helo=lists.xenproject.org) by lists.xenproject.org with esmtp (Exim 4.89) (envelope-from ) id 1iCmmR-0007rW-Cu; Tue, 24 Sep 2019 15:31:07 +0000 Received: from us1-rack-iad1.inumbo.com ([172.99.69.81]) by lists.xenproject.org with esmtp (Exim 4.89) (envelope-from ) id 1iCmmP-0007r4-W3 for xen-devel@lists.xenproject.org; Tue, 24 Sep 2019 15:31:06 +0000 X-Inumbo-ID: 4af7dce0-dee0-11e9-bf31-bc764e2007e4 Received: from mail-lj1-x244.google.com (unknown [2a00:1450:4864:20::244]) by localhost (Halon) with ESMTPS id 4af7dce0-dee0-11e9-bf31-bc764e2007e4; Tue, 24 Sep 2019 15:30:53 +0000 (UTC) Received: by mail-lj1-x244.google.com with SMTP id y23so2348234lje.9 for ; Tue, 24 Sep 2019 08:30:52 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025; h=from:to:cc:subject:date:message-id:in-reply-to:references; bh=sQRBacY7IWJeeQXPRyg+OlUCm3S/btufzJlQQaOIwrY=; b=dN7PTuPEfZu/swbJ4BENv+xHvySzwTp57pMW5yXhIP/EBvompkuSw0Xi1pOlNd25nb VhXsrJluOg/OOPlEkcvtVf8k74XXeWOMbAeV8bZLNu9ZI5oCF1iokNr5zHDMZhuK7IJ9 CNfINWtFCYLF4vpwr0TKaVYZ60gUZshIMt1aRD7UMV/imydgHdPv11gPXAlhPk2EWRX6 
YXYcqem83i6YY/faCp9JM2vIHHQKFMW4G4nMiZ9dOF3ZV2BQkaBbyVf7WVktNX3l/gGk rNR4OPv4lWZVtnc6hx3Ur9uyaSFtDC2uZc9UU1Pbe7/mne2G2nBEMi/RxdPav0oG+po2 fuNw== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to :references; bh=sQRBacY7IWJeeQXPRyg+OlUCm3S/btufzJlQQaOIwrY=; b=QB9odalJUBNlM1KgTkxHPeRpgYJXqImaT57qKiknjwN/eqpv5FxrAu4OTyUeCuj5UW PgkP39PnsV1sice3gNdIASPT94UNLKevyGtLHSPjQJW4sZRlAw/o++biooA1P31/WlfY Ugji4GnxWKwGWT642hJF8m4DNq/fid9425+cxOPmdPGMeS6quQGmDVVY7mCoHVZERsxt Mx2VGKUpNUu0XQwSDFfa0TYbRPdpLM/gJz8Jx7kWrLjjZqZMzqxvYP+W2fOgq7eVc67B n6NIHlub5gua6EfIG0JslVXuATWZk3pe3naKmTPqT0kb08Ywh8S9iu6G69Cu7e6FX3pi zVzw== X-Gm-Message-State: APjAAAXQZpgpE3Jl4wfXhW8HyOR3tAdE99yFO7ceNPh68RF6a6E6mpuW JEou+7Xx4xLMxM+fCF1J3bhQOQAbbWE= X-Google-Smtp-Source: APXvYqyNenFd6tU2ijaWmAxnjP/Z4c+kge27jfzseo4ya1NikrUCP9JxciXWb2r6vJM50l8bBXFlKg== X-Received: by 2002:a2e:654a:: with SMTP id z71mr2458830ljb.37.1569339051795; Tue, 24 Sep 2019 08:30:51 -0700 (PDT) Received: from otyshchenko.kyiv.epam.com (ll-74.141.223.85.sovam.net.ua. [85.223.141.74]) by smtp.gmail.com with ESMTPSA id z128sm525818lfa.1.2019.09.24.08.30.50 (version=TLS1_2 cipher=ECDHE-RSA-AES128-SHA bits=128/128); Tue, 24 Sep 2019 08:30:51 -0700 (PDT) From: Oleksandr Tyshchenko To: xen-devel@lists.xenproject.org Date: Tue, 24 Sep 2019 18:30:22 +0300 Message-Id: <1569339027-19484-4-git-send-email-olekstysh@gmail.com> X-Mailer: git-send-email 2.7.4 In-Reply-To: <1569339027-19484-1-git-send-email-olekstysh@gmail.com> References: <1569339027-19484-1-git-send-email-olekstysh@gmail.com> Subject: [Xen-devel] [PATCH V5 3/8] xen/common: Introduce _xrealloc function X-BeenThere: xen-devel@lists.xenproject.org X-Mailman-Version: 2.1.23 Precedence: list List-Id: Xen developer discussion List-Unsubscribe: , List-Post: List-Help: List-Subscribe: , Cc: sstabellini@kernel.org, Wei Liu , Konrad Rzeszutek Wilk , George Dunlap , Andrew Cooper , Ian Jackson , Tim Deegan , Oleksandr Tyshchenko , julien.grall@arm.com, Paul Durrant , Jan Beulich , volodymyr_babchuk@epam.com MIME-Version: 1.0 Errors-To: xen-devel-bounces@lists.xenproject.org Sender: "Xen-devel" From: Oleksandr Tyshchenko This patch introduces type-unsafe function which besides re-allocation handles the following corner cases: 1. if requested size is zero, it will behave like xfree 2. if incoming pointer is not valid (NULL or ZERO_BLOCK_PTR), it will behave like xmalloc If both pointer and size are valid the function will re-allocate and copy only if requested size and alignment don't fit in already allocated space. Subsequent patch will add type-safe helper macros. 
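(Illustration only, not part of this patch: a minimal usage sketch of the new function, following the corner cases described above.)

/* Grow a buffer of uint32_t elements; buf may start out as NULL. */
uint32_t *buf = NULL, *tmp;

tmp = _xrealloc(buf, 16 * sizeof(*buf), __alignof__(*buf));
if ( !tmp )
    return -ENOMEM;     /* on failure the original allocation, if any, is kept */
buf = tmp;

/* Requesting size 0 frees the block and returns ZERO_BLOCK_PTR. */
buf = _xrealloc(buf, 0, __alignof__(*buf));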
Signed-off-by: Oleksandr Tyshchenko CC: Andrew Cooper CC: George Dunlap CC: Ian Jackson CC: Jan Beulich CC: Julien Grall CC: Konrad Rzeszutek Wilk CC: Stefano Stabellini CC: Tim Deegan CC: Wei Liu CC: Paul Durrant --- Changes V4 -> V5: - avoid possible truncation with allocations of 4GiB or above - introduce helper functions add(strip)_padding to avoid duplicating the code - omit the unnecessary casts, change u32 to uint32_t when moving the code - use _xzalloc instead of _xmalloc to get the tail portion zeroed - update pointer according to the requsted alignment - compared against "size" instead of "tmp_size" for the allocations above PAGE_SIZE Changes V3 -> V4: - add check for the alignment compatibility - properly detect current size (take into the account a possible fake alignment header) - update comment in code/patch description Changes RFC -> V3: - behave like xmalloc if incoming pointer is ZERO_BLOCK_PTR or NULL - return ZERO_BLOCK_PTR after xfree if requested size is zero - add patch description - use allocator internals to recognize current size of the incoming pointer - do not re-allocate and copy if requested size fits in already allocated space ... Original patch was initially posted by Sameer Goel: https://lists.xen.org/archives/html/xen-devel/2017-06/msg00858.html This could be considered as another attempt to add it: https://www.mail-archive.com/kexec@lists.infradead.org/msg21335.html [As it was previously discussed with Julien in IRC] The reason for this patch to be an RFC is that patch itself is not completely correct and I don't fully understand what/how should be done for this patch to be accepted. Or whether community even wants this to go in. So, to avoid bike shedding, the first target is to collect feedback. For everyone who wants more details why this is needed and where used, please see next patch of this thread: "iommu/arm: Add lightweight iommu_fwspec support" In a nutshell, the upcoming "iommu_fwspec" support on ARM is going to use xrealloc to expand an array for device IDs. We really want to have "iommu_fwspec" support which will give us a generic abstract way to add new device to the IOMMU based on the generic IOMMU DT binding. --- xen/common/xmalloc_tlsf.c | 113 ++++++++++++++++++++++++++++++++++++++-------- xen/include/xen/xmalloc.h | 1 + 2 files changed, 96 insertions(+), 18 deletions(-) diff --git a/xen/common/xmalloc_tlsf.c b/xen/common/xmalloc_tlsf.c index e98ad65..a797019 100644 --- a/xen/common/xmalloc_tlsf.c +++ b/xen/common/xmalloc_tlsf.c @@ -554,10 +554,40 @@ static void tlsf_init(void) #define ZERO_BLOCK_PTR ((void *)-1L) #endif +static void *strip_padding(void *p) +{ + struct bhdr *b = p - BHDR_OVERHEAD; + + if ( b->size & FREE_BLOCK ) + { + p -= b->size & ~FREE_BLOCK; + b = p - BHDR_OVERHEAD; + ASSERT(!(b->size & FREE_BLOCK)); + } + + return p; +} + +static void *add_padding(void *p, unsigned long align) +{ + uint32_t pad; + + if ( (pad = -(long)p & (align - 1)) != 0 ) + { + void *q = p + pad; + struct bhdr *b = q - BHDR_OVERHEAD; + + ASSERT(q > p); + b->size = pad | FREE_BLOCK; + p = q; + } + + return p; +} + void *_xmalloc(unsigned long size, unsigned long align) { void *p = NULL; - u32 pad; ASSERT(!in_irq()); @@ -578,14 +608,7 @@ void *_xmalloc(unsigned long size, unsigned long align) return xmalloc_whole_pages(size - align + MEM_ALIGN, align); /* Add alignment padding. 
*/ - if ( (pad = -(long)p & (align - 1)) != 0 ) - { - char *q = (char *)p + pad; - struct bhdr *b = (struct bhdr *)(q - BHDR_OVERHEAD); - ASSERT(q > (char *)p); - b->size = pad | FREE_BLOCK; - p = q; - } + p = add_padding(p, align); ASSERT(((unsigned long)p & (align - 1)) == 0); return p; @@ -598,10 +621,70 @@ void *_xzalloc(unsigned long size, unsigned long align) return p ? memset(p, 0, size) : p; } -void xfree(void *p) +void *_xrealloc(void *ptr, unsigned long size, unsigned long align) { - struct bhdr *b; + unsigned long curr_size, tmp_size; + void *p; + + if ( !size ) + { + xfree(ptr); + return ZERO_BLOCK_PTR; + } + if ( ptr == NULL || ptr == ZERO_BLOCK_PTR ) + return _xmalloc(size, align); + + ASSERT((align & (align - 1)) == 0); + if ( align < MEM_ALIGN ) + align = MEM_ALIGN; + + tmp_size = size + align - MEM_ALIGN; + + if ( tmp_size < PAGE_SIZE ) + tmp_size = (tmp_size < MIN_BLOCK_SIZE) ? MIN_BLOCK_SIZE : + ROUNDUP_SIZE(tmp_size); + + if ( !((unsigned long)ptr & (PAGE_SIZE - 1)) ) + { + curr_size = (unsigned long)PFN_ORDER(virt_to_page(ptr)) << PAGE_SHIFT; + + if ( size <= curr_size && ((unsigned long)ptr & (align - 1)) == 0 ) + return ptr; + } + else + { + struct bhdr *b; + + /* Strip alignment padding. */ + p = strip_padding(ptr); + + b = p - BHDR_OVERHEAD; + curr_size = b->size & BLOCK_SIZE_MASK; + + if ( tmp_size <= curr_size ) + { + /* Add alignment padding. */ + p = add_padding(p, align); + + ASSERT(((unsigned long)p & (align - 1)) == 0); + + return p; + } + } + + p = _xzalloc(size, align); + if ( p ) + { + memcpy(p, ptr, min(curr_size, size)); + xfree(ptr); + } + + return p; +} + +void xfree(void *p) +{ if ( p == NULL || p == ZERO_BLOCK_PTR ) return; @@ -626,13 +709,7 @@ void xfree(void *p) } /* Strip alignment padding. */ - b = (struct bhdr *)((char *)p - BHDR_OVERHEAD); - if ( b->size & FREE_BLOCK ) - { - p = (char *)p - (b->size & ~FREE_BLOCK); - b = (struct bhdr *)((char *)p - BHDR_OVERHEAD); - ASSERT(!(b->size & FREE_BLOCK)); - } + p = strip_padding(p); xmem_pool_free(p, xenpool); } diff --git a/xen/include/xen/xmalloc.h b/xen/include/xen/xmalloc.h index f075d2d..831152f 100644 --- a/xen/include/xen/xmalloc.h +++ b/xen/include/xen/xmalloc.h @@ -51,6 +51,7 @@ extern void xfree(void *); /* Underlying functions */ extern void *_xmalloc(unsigned long size, unsigned long align); extern void *_xzalloc(unsigned long size, unsigned long align); +extern void *_xrealloc(void *ptr, unsigned long size, unsigned long align); static inline void *_xmalloc_array( unsigned long size, unsigned long align, unsigned long num) From patchwork Tue Sep 24 15:30:23 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Oleksandr Tyshchenko X-Patchwork-Id: 11159123 Return-Path: Received: from mail.kernel.org (pdx-korg-mail-1.web.codeaurora.org [172.30.200.123]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id 9FC4F1668 for ; Tue, 24 Sep 2019 15:32:29 +0000 (UTC) Received: from lists.xenproject.org (lists.xenproject.org [192.237.175.120]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by mail.kernel.org (Postfix) with ESMTPS id 7B847207FD for ; Tue, 24 Sep 2019 15:32:29 +0000 (UTC) Authentication-Results: mail.kernel.org; dkim=fail reason="signature verification failed" (2048-bit key) header.d=gmail.com header.i=@gmail.com header.b="c9rxzKKS" DMARC-Filter: OpenDMARC Filter v1.3.2 mail.kernel.org 7B847207FD Authentication-Results: mail.kernel.org; dmarc=fail 
(p=none dis=none) header.from=gmail.com Authentication-Results: mail.kernel.org; spf=none smtp.mailfrom=xen-devel-bounces@lists.xenproject.org Received: from localhost ([127.0.0.1] helo=lists.xenproject.org) by lists.xenproject.org with esmtp (Exim 4.89) (envelope-from ) id 1iCmmV-0007ts-TI; Tue, 24 Sep 2019 15:31:11 +0000 Received: from us1-rack-iad1.inumbo.com ([172.99.69.81]) by lists.xenproject.org with esmtp (Exim 4.89) (envelope-from ) id 1iCmmU-0007tR-Vr for xen-devel@lists.xenproject.org; Tue, 24 Sep 2019 15:31:11 +0000 X-Inumbo-ID: 4c5337d8-dee0-11e9-97fb-bc764e2007e4 Received: from mail-lf1-x142.google.com (unknown [2a00:1450:4864:20::142]) by localhost (Halon) with ESMTPS id 4c5337d8-dee0-11e9-97fb-bc764e2007e4; Tue, 24 Sep 2019 15:30:55 +0000 (UTC) Received: by mail-lf1-x142.google.com with SMTP id q11so1739458lfc.11 for ; Tue, 24 Sep 2019 08:30:55 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025; h=from:to:cc:subject:date:message-id:in-reply-to:references; bh=dAXIdOzd5atuBvsXDzswi8p3w2Qe44gCP0DnzcVW2iI=; b=c9rxzKKSk0cCCaf04TMT7q8FmKtllw9FcvMS0MsrP2lO/cH92sh3wuI4ogVvd8fntN g+eKwqaDkxNnPUBPvQHu8MhtP4LgmF5Wp7wTaWC8uHf39Wu21KzoKm4Rt9IubIQ4mfpS M7Kt+v1UVA1m6pzDWPt1HXxPWs25rybA1BHtYiB+JHRHB9fmWRnqUdOW8fTboa5nQ6Dl 5Wp86FTv7tMefJ8sAGHRSpAnIE692bzk5Ia0+IxY25oR9pOa9lTl8mLj+4rsA+TW08i+ 4U6HrsGLbdNn0Ybv9zrYLjBYOteQsUo1oGn4eQ3ZvC7Ba1dKzh/nHjvPie9ysAPCUoU1 TuAQ== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to :references; bh=dAXIdOzd5atuBvsXDzswi8p3w2Qe44gCP0DnzcVW2iI=; b=m4znBmj0gSML6a+PtX1qAClpufr48L73rR6EJ0ZaURVOgmmbuyfaaizQc2J+JXZECr SPpApIiJUoI95sawUEizattpMzRBhPaChctPjP8mO+8d9NJzhDbAAiS/E6u02L1Vj81x 7SV6dilBoAAc+dxET4A6UXawibbqiSDtzaRGX+zw3k+whe6H//afYpIxIMp5+/wFz+Cu 8lTpw/IaIR1VqKbgXCfg5gpTMOdTXleuaE5ryT3WZ4KnmRGT6IEqaulet/b9nDTGUDw9 Tz1S2Yr77EJCGpjlhcT8e0jwz/E3a1EXy6wFfprG8DdVrzV+MQIRyhTHdI0IBrX8g3ZN o4NQ== X-Gm-Message-State: APjAAAVpkClwLE4QhHlKmJjiurTE2lei/N9r7OQISFHZdgTu+odV0RC2 ZIpEzOK9u1smtxQZGAZwIX6zERlS28s= X-Google-Smtp-Source: APXvYqyTopsiIiak5OUxOOlpbxeMPwSJ/QGpMYAc5YuXNte2/WoJY0PWJEINJqb+/gUKPk3/I7pfpg== X-Received: by 2002:ac2:5dd0:: with SMTP id x16mr2354993lfq.38.1569339053076; Tue, 24 Sep 2019 08:30:53 -0700 (PDT) Received: from otyshchenko.kyiv.epam.com (ll-74.141.223.85.sovam.net.ua. 
[85.223.141.74]) by smtp.gmail.com with ESMTPSA id z128sm525818lfa.1.2019.09.24.08.30.51 (version=TLS1_2 cipher=ECDHE-RSA-AES128-SHA bits=128/128); Tue, 24 Sep 2019 08:30:52 -0700 (PDT) From: Oleksandr Tyshchenko To: xen-devel@lists.xenproject.org Date: Tue, 24 Sep 2019 18:30:23 +0300 Message-Id: <1569339027-19484-5-git-send-email-olekstysh@gmail.com> X-Mailer: git-send-email 2.7.4 In-Reply-To: <1569339027-19484-1-git-send-email-olekstysh@gmail.com> References: <1569339027-19484-1-git-send-email-olekstysh@gmail.com> Subject: [Xen-devel] [PATCH V5 4/8] xen/common: Introduce xrealloc_flex_struct() helper macros X-BeenThere: xen-devel@lists.xenproject.org X-Mailman-Version: 2.1.23 Precedence: list List-Id: Xen developer discussion List-Unsubscribe: , List-Post: List-Help: List-Subscribe: , Cc: sstabellini@kernel.org, Wei Liu , Konrad Rzeszutek Wilk , George Dunlap , Andrew Cooper , Ian Jackson , Tim Deegan , Oleksandr Tyshchenko , julien.grall@arm.com, Jan Beulich , volodymyr_babchuk@epam.com MIME-Version: 1.0 Errors-To: xen-devel-bounces@lists.xenproject.org Sender: "Xen-devel" From: Oleksandr Tyshchenko This patch introduces type-safe helper macros to re-allocate space for a structure with a flexible array of typed objects. For example, if we need to re-size the "data" array: struct arrlen { size_t len; int data[]; }; We can use the proposed macros in the following way: new_ptr = realloc_flex_struct(old_ptr, data, nr_elem); where nr_elem is the desired number of elements. Subsequent patch will use this macros. Also, while here, introduce xmalloc(xzalloc)_flex_struct() to allocate space for a structure with a flexible array of typed objects. Suggested-by: Volodymyr Babchuk Signed-off-by: Oleksandr Tyshchenko Reviewed-by: Jan Beulich CC: Andrew Cooper CC: George Dunlap CC: Ian Jackson CC: Julien Grall CC: Konrad Rzeszutek Wilk CC: Stefano Stabellini CC: Tim Deegan CC: Wei Liu --- Changes V4 -> V5: - clarified patch description (data[] instead of data[1]) - introduced xzalloc_flex_struct() - added Jan's R-b - added missing parentheses around the entire constructs Changes V3 -> V4: - clarified patch description - modified to not use leading underscores - removed unnecessary pair of outermost parentheses - modified to use "nr" instead of "len" - placed xmalloc_flex_struct before xrealloc_flex_struct - simplified xrealloc_flex_struct macros --- xen/include/xen/xmalloc.h | 12 ++++++++++++ 1 file changed, 12 insertions(+) diff --git a/xen/include/xen/xmalloc.h b/xen/include/xen/xmalloc.h index 831152f..f515cee 100644 --- a/xen/include/xen/xmalloc.h +++ b/xen/include/xen/xmalloc.h @@ -35,6 +35,18 @@ #define xzalloc_array(_type, _num) \ ((_type *)_xzalloc_array(sizeof(_type), __alignof__(_type), _num)) +/* Allocate space for a structure with a flexible array of typed objects. */ +#define xzalloc_flex_struct(type, field, nr) \ + ((type *)_xzalloc(offsetof(type, field[nr]), __alignof__(type))) + +#define xmalloc_flex_struct(type, field, nr) \ + ((type *)_xmalloc(offsetof(type, field[nr]), __alignof__(type))) + +/* Re-allocate space for a structure with a flexible array of typed objects. */ +#define xrealloc_flex_struct(ptr, field, nr) \ + ((typeof(ptr))_xrealloc(ptr, offsetof(typeof(*(ptr)), field[nr]), \ + __alignof__(typeof(*(ptr))))) + /* Allocate untyped storage. 
*/ #define xmalloc_bytes(_bytes) _xmalloc(_bytes, SMP_CACHE_BYTES) #define xzalloc_bytes(_bytes) _xzalloc(_bytes, SMP_CACHE_BYTES) From patchwork Tue Sep 24 15:30:24 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Oleksandr Tyshchenko X-Patchwork-Id: 11159135 Return-Path: Received: from mail.kernel.org (pdx-korg-mail-1.web.codeaurora.org [172.30.200.123]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id B6E0913B1 for ; Tue, 24 Sep 2019 15:33:04 +0000 (UTC) Received: from lists.xenproject.org (lists.xenproject.org [192.237.175.120]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by mail.kernel.org (Postfix) with ESMTPS id 868B720665 for ; Tue, 24 Sep 2019 15:33:04 +0000 (UTC) Authentication-Results: mail.kernel.org; dkim=fail reason="signature verification failed" (2048-bit key) header.d=gmail.com header.i=@gmail.com header.b="dO+KRciA" DMARC-Filter: OpenDMARC Filter v1.3.2 mail.kernel.org 868B720665 Authentication-Results: mail.kernel.org; dmarc=fail (p=none dis=none) header.from=gmail.com Authentication-Results: mail.kernel.org; spf=none smtp.mailfrom=xen-devel-bounces@lists.xenproject.org Received: from localhost ([127.0.0.1] helo=lists.xenproject.org) by lists.xenproject.org with esmtp (Exim 4.89) (envelope-from ) id 1iCmmb-0007wh-7g; Tue, 24 Sep 2019 15:31:17 +0000 Received: from us1-rack-iad1.inumbo.com ([172.99.69.81]) by lists.xenproject.org with esmtp (Exim 4.89) (envelope-from ) id 1iCmmZ-0007w5-VM for xen-devel@lists.xenproject.org; Tue, 24 Sep 2019 15:31:15 +0000 X-Inumbo-ID: 4c6f58e6-dee0-11e9-bf31-bc764e2007e4 Received: from mail-lj1-x241.google.com (unknown [2a00:1450:4864:20::241]) by localhost (Halon) with ESMTPS id 4c6f58e6-dee0-11e9-bf31-bc764e2007e4; Tue, 24 Sep 2019 15:30:55 +0000 (UTC) Received: by mail-lj1-x241.google.com with SMTP id l21so2358121lje.4 for ; Tue, 24 Sep 2019 08:30:55 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025; h=from:to:cc:subject:date:message-id:in-reply-to:references; bh=DrzTrZ/qnICiIH+wPomi5xqO8UIxGT+AUn3v3IM2ulg=; b=dO+KRciAASWiYHfB13nbiaml9Xjn4GpuQ5UDxN8pjZluEksYWjbLnve/VoApSK1Su3 DsuXD/vOaRz+7oYQIbf8/Fl/U7AkzKGjrJxnqXDYyVFV7FKsRXNvqJ2Rk9hJbG5fIKjR 6ZJjfcp/zeuzEmRDwcfo/hjTHDLgxrlU57eBaNpnjr84ZtYoUeYZq0wkWxZP0pvtlVxX YhfZmEL8guJoheFzzqj2p+0m/1cxDcW7WIIAvEDQLFubNTmX/h9/WjwH+c1asbemQkLf zalnLbmDnPYj/U2x1HlOkSWIv7CxV+wO6hQhjCnYSBUwfGCIxPOaT5Mop49o1VQiOjIZ 6fPg== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to :references; bh=DrzTrZ/qnICiIH+wPomi5xqO8UIxGT+AUn3v3IM2ulg=; b=bDuhTJevJucqLmYm+yGc/TMqufghVNKXn1SMB33Odhcd2qb3ij6xSg4xeRP1eeKz6g Kmwu4ZwRuPDXeMUcnkgqNrFPFgVo9b5UkQmy9uMirRrYRTLjMXJSSLYAPAJxy8MRWzkV 4gHdb0KNhJGJyPCBF5PH2ELZT2kV+kJVvIyGPA8KYQoBiX+njuG2mz5Ow1Ee0MzXzFoM l1MzCXUnL++qNUC18/oMi/Kgcl72CFYXXPuHvm2DtjQUF/E8heUjdmf3FQg6k+PRe1sc aV7DpD3puJv49rrxa0m7Hq4sadPrKKxw8udeKISOF7UmrF9W7aymFVqqm7LjhnV/uK+l 6rfA== X-Gm-Message-State: APjAAAVCRB/2vIPKBarE0SkpqDzCD2hvl1+5S0o/gPr/oHbtR5RSrJdN ouN3IP9MG/Ih8M2MD6dXVtMrWCcWki8= X-Google-Smtp-Source: APXvYqykQIRaDNSy/2QhVoqUOcT4TJ1HacgneLc/5LdyKjHU9Z738bm99dRaesHhfSDIulUjxoV1JQ== X-Received: by 2002:a05:651c:1132:: with SMTP id e18mr2445509ljo.33.1569339054185; Tue, 24 Sep 2019 08:30:54 -0700 (PDT) Received: from otyshchenko.kyiv.epam.com (ll-74.141.223.85.sovam.net.ua. 
[85.223.141.74]) by smtp.gmail.com with ESMTPSA id z128sm525818lfa.1.2019.09.24.08.30.53 (version=TLS1_2 cipher=ECDHE-RSA-AES128-SHA bits=128/128); Tue, 24 Sep 2019 08:30:53 -0700 (PDT) From: Oleksandr Tyshchenko To: xen-devel@lists.xenproject.org Date: Tue, 24 Sep 2019 18:30:24 +0300 Message-Id: <1569339027-19484-6-git-send-email-olekstysh@gmail.com> X-Mailer: git-send-email 2.7.4 In-Reply-To: <1569339027-19484-1-git-send-email-olekstysh@gmail.com> References: <1569339027-19484-1-git-send-email-olekstysh@gmail.com> Subject: [Xen-devel] [PATCH V5 5/8] iommu/arm: Add lightweight iommu_fwspec support X-BeenThere: xen-devel@lists.xenproject.org X-Mailman-Version: 2.1.23 Precedence: list List-Id: Xen developer discussion List-Unsubscribe: , List-Post: List-Help: List-Subscribe: , Cc: Oleksandr Tyshchenko , julien.grall@arm.com, sstabellini@kernel.org, volodymyr_babchuk@epam.com, Jan Beulich MIME-Version: 1.0 Errors-To: xen-devel-bounces@lists.xenproject.org Sender: "Xen-devel" From: Oleksandr Tyshchenko We need to have some abstract way to add new device to the IOMMU based on the generic IOMMU DT bindings [1] which can be used for both DT (right now) and ACPI (in future). For that reason we can borrow the idea used in Linux these days called "iommu_fwspec". Having this in, it will be possible to configure IOMMU master interfaces of the device (device IDs) from a single common place and avoid keeping almost identical look-up implementations in each IOMMU driver. There is no need to port the whole implementation of "iommu_fwspec" to Xen, we could, probably, end up with a much simpler solution, some "stripped down" version which fits our requirements. So, this patch adds the following: 1. A common structure "iommu_fwspec" to hold the the per-device firmware data 2. New member "iommu_fwspec" of struct device 3. Functions/helpers to deal with "dev->iommu_fwspec" It should be noted that in comparison of the original "iommu_fwspec" Xen's variant doesn't contain some fields, which are not really needed at the moment (ops, flag) and "iommu_fwnode" field was replaced by "iommu_dev" to avoid porting a lot of code (to support "fwnode_handle") with little benefit. The "iommu_fwspec" support is based on the Linux's commit: f74c2bb98776e2de508f4d607cd519873065118e "Linux 5.3-rc8" Subsequent patches will use of that support. 
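(Illustration only: roughly how a driver is expected to use these helpers when a master device is registered. The hyp_iommu_register_master() wrapper is made up; the real users appear in subsequent patches.)

/*
 * Hypothetical driver helper: remember which IOMMU protects a master
 * device together with one of its device IDs.
 */
static int hyp_iommu_register_master(struct device *dev,
                                     struct device *iommu_dev,
                                     uint32_t id)
{
    int ret = iommu_fwspec_init(dev, iommu_dev);

    if ( ret )
        return ret;

    /* IDs can be retrieved later via dev_iommu_fwspec_get(dev)->ids[]. */
    return iommu_fwspec_add_ids(dev, &id, 1);
}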
[1] https://www.kernel.org/doc/Documentation/devicetree/bindings/iommu/iommu.txt Signed-off-by: Oleksandr Tyshchenko CC: Julien Grall CC: Jan Beulich Acked-by: Julien Grall --- Changes V4 -> V5: - modified struct iommu_fwspec to use flexible array (uint32_t ids[]) - modified to use xzalloc_flex_struct() on the first allocation - added "const" to uint32_t *ids in iommu_fwspec_add_ids() Changes V3 -> V4: - modified iommu_fwspec_add_ids() to use new implementation of xrealloc_flex_struct() - mentioned exact Linux version we are based on - fixed Grammatical error Changes V2 -> V3: - added Copyright from Linux - ordered the headers alphabetically - removed check for not a NULL before calling xfree() - used unsigned for variables which can't be negative - removed #include from iommu.h - added check to iommu_fwspec_init() to not allow overriding - clarified comments in code - modified iommu_fwspec_add_ids() to use type-safe xrealloc_flex_struct() --- xen/drivers/passthrough/arm/Makefile | 2 +- xen/drivers/passthrough/arm/iommu_fwspec.c | 97 ++++++++++++++++++++++++++++++ xen/include/asm-arm/device.h | 1 + xen/include/asm-arm/iommu_fwspec.h | 68 +++++++++++++++++++++ 4 files changed, 167 insertions(+), 1 deletion(-) create mode 100644 xen/drivers/passthrough/arm/iommu_fwspec.c create mode 100644 xen/include/asm-arm/iommu_fwspec.h diff --git a/xen/drivers/passthrough/arm/Makefile b/xen/drivers/passthrough/arm/Makefile index 4abb87a..5fbad45 100644 --- a/xen/drivers/passthrough/arm/Makefile +++ b/xen/drivers/passthrough/arm/Makefile @@ -1,2 +1,2 @@ -obj-y += iommu.o iommu_helpers.o +obj-y += iommu.o iommu_helpers.o iommu_fwspec.o obj-$(CONFIG_ARM_SMMU) += smmu.o diff --git a/xen/drivers/passthrough/arm/iommu_fwspec.c b/xen/drivers/passthrough/arm/iommu_fwspec.c new file mode 100644 index 0000000..8c97c7c --- /dev/null +++ b/xen/drivers/passthrough/arm/iommu_fwspec.c @@ -0,0 +1,97 @@ +/* + * xen/drivers/passthrough/arm/iommu_fwspec.c + * + * Contains functions to maintain per-device firmware data + * + * Based on Linux's iommu_fwspec support you can find at: + * drivers/iommu/iommu.c + * + * Copyright (C) 2007-2008 Advanced Micro Devices, Inc. + * + * Copyright (C) 2019 EPAM Systems Inc. + * + * This program is free software; you can redistribute it and/or + * modify it under the terms and conditions of the GNU General Public + * License, version 2, as published by the Free Software Foundation. + * + * This program is distributed in the hope that it will be useful, + * but WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU + * General Public License for more details. + * + * You should have received a copy of the GNU General Public + * License along with this program; If not, see . + */ + +#include +#include + +#include +#include + +int iommu_fwspec_init(struct device *dev, struct device *iommu_dev) +{ + struct iommu_fwspec *fwspec = dev_iommu_fwspec_get(dev); + + if ( fwspec ) + { + /* We expect the device to be protected by only one IOMMU. */ + if ( fwspec->iommu_dev != iommu_dev ) + return -EINVAL; + + return 0; + } + + /* + * Allocate with ids[1] to avoid the re-allocation in the common case + * where a device has a single device ID. 
+ */ + fwspec = xzalloc_flex_struct(struct iommu_fwspec, ids, 1); + if ( !fwspec ) + return -ENOMEM; + + fwspec->iommu_dev = iommu_dev; + dev_iommu_fwspec_set(dev, fwspec); + + return 0; +} + +void iommu_fwspec_free(struct device *dev) +{ + struct iommu_fwspec *fwspec = dev_iommu_fwspec_get(dev); + + xfree(fwspec); + dev_iommu_fwspec_set(dev, NULL); +} + +int iommu_fwspec_add_ids(struct device *dev, const uint32_t *ids, + unsigned int num_ids) +{ + struct iommu_fwspec *fwspec = dev_iommu_fwspec_get(dev); + unsigned int i; + + if ( !fwspec ) + return -EINVAL; + + fwspec = xrealloc_flex_struct(fwspec, ids, fwspec->num_ids + num_ids); + if ( !fwspec ) + return -ENOMEM; + + dev_iommu_fwspec_set(dev, fwspec); + + for ( i = 0; i < num_ids; i++ ) + fwspec->ids[fwspec->num_ids + i] = ids[i]; + + fwspec->num_ids += num_ids; + + return 0; +} + +/* + * Local variables: + * mode: C + * c-file-style: "BSD" + * c-basic-offset: 4 + * indent-tabs-mode: nil + * End: + */ diff --git a/xen/include/asm-arm/device.h b/xen/include/asm-arm/device.h index ee1c3bc..ee7cff2 100644 --- a/xen/include/asm-arm/device.h +++ b/xen/include/asm-arm/device.h @@ -18,6 +18,7 @@ struct device struct dt_device_node *of_node; /* Used by drivers imported from Linux */ #endif struct dev_archdata archdata; + struct iommu_fwspec *iommu_fwspec; /* per-device IOMMU instance data */ }; typedef struct device device_t; diff --git a/xen/include/asm-arm/iommu_fwspec.h b/xen/include/asm-arm/iommu_fwspec.h new file mode 100644 index 0000000..5cdb53f --- /dev/null +++ b/xen/include/asm-arm/iommu_fwspec.h @@ -0,0 +1,68 @@ +/* + * xen/include/asm-arm/iommu_fwspec.h + * + * Contains a common structure to hold the per-device firmware data and + * declaration of functions used to maintain that data + * + * Based on Linux's iommu_fwspec support you can find at: + * include/linux/iommu.h + * + * Copyright (C) 2007-2008 Advanced Micro Devices, Inc. + * + * Copyright (C) 2019 EPAM Systems Inc. + * + * This program is free software; you can redistribute it and/or + * modify it under the terms and conditions of the GNU General Public + * License, version 2, as published by the Free Software Foundation. + * + * This program is distributed in the hope that it will be useful, + * but WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU + * General Public License for more details. + * + * You should have received a copy of the GNU General Public + * License along with this program; If not, see . 
+ */ + +#ifndef __ARCH_ARM_IOMMU_FWSPEC_H__ +#define __ARCH_ARM_IOMMU_FWSPEC_H__ + +/* per-device IOMMU instance data */ +struct iommu_fwspec { + /* this device's IOMMU */ + struct device *iommu_dev; + /* IOMMU driver private data for this device */ + void *iommu_priv; + /* number of associated device IDs */ + unsigned int num_ids; + /* IDs which this device may present to the IOMMU */ + uint32_t ids[]; +}; + +int iommu_fwspec_init(struct device *dev, struct device *iommu_dev); +void iommu_fwspec_free(struct device *dev); +int iommu_fwspec_add_ids(struct device *dev, const uint32_t *ids, + unsigned int num_ids); + +static inline struct iommu_fwspec *dev_iommu_fwspec_get(struct device *dev) +{ + return dev->iommu_fwspec; +} + +static inline void dev_iommu_fwspec_set(struct device *dev, + struct iommu_fwspec *fwspec) +{ + dev->iommu_fwspec = fwspec; +} + +#endif /* __ARCH_ARM_IOMMU_FWSPEC_H__ */ + +/* + * Local variables: + * mode: C + * c-file-style: "BSD" + * c-basic-offset: 4 + * tab-width: 4 + * indent-tabs-mode: nil + * End: + */ From patchwork Tue Sep 24 15:30:25 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Oleksandr Tyshchenko X-Patchwork-Id: 11159131 Return-Path: Received: from mail.kernel.org (pdx-korg-mail-1.web.codeaurora.org [172.30.200.123]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id 61BF413B1 for ; Tue, 24 Sep 2019 15:32:49 +0000 (UTC) Received: from lists.xenproject.org (lists.xenproject.org [192.237.175.120]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by mail.kernel.org (Postfix) with ESMTPS id 3CE08207FD for ; Tue, 24 Sep 2019 15:32:49 +0000 (UTC) Authentication-Results: mail.kernel.org; dkim=fail reason="signature verification failed" (2048-bit key) header.d=gmail.com header.i=@gmail.com header.b="FANQB+i9" DMARC-Filter: OpenDMARC Filter v1.3.2 mail.kernel.org 3CE08207FD Authentication-Results: mail.kernel.org; dmarc=fail (p=none dis=none) header.from=gmail.com Authentication-Results: mail.kernel.org; spf=none smtp.mailfrom=xen-devel-bounces@lists.xenproject.org Received: from localhost ([127.0.0.1] helo=lists.xenproject.org) by lists.xenproject.org with esmtp (Exim 4.89) (envelope-from ) id 1iCmmg-0007zM-Is; Tue, 24 Sep 2019 15:31:22 +0000 Received: from us1-rack-iad1.inumbo.com ([172.99.69.81]) by lists.xenproject.org with esmtp (Exim 4.89) (envelope-from ) id 1iCmme-0007yg-W4 for xen-devel@lists.xenproject.org; Tue, 24 Sep 2019 15:31:21 +0000 X-Inumbo-ID: 4d537486-dee0-11e9-8628-bc764e2007e4 Received: from mail-lf1-x142.google.com (unknown [2a00:1450:4864:20::142]) by localhost (Halon) with ESMTPS id 4d537486-dee0-11e9-8628-bc764e2007e4; Tue, 24 Sep 2019 15:30:56 +0000 (UTC) Received: by mail-lf1-x142.google.com with SMTP id r134so1740772lff.12 for ; Tue, 24 Sep 2019 08:30:56 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025; h=from:to:cc:subject:date:message-id:in-reply-to:references; bh=V6XOG/FGgPceH4HTM4sWnOAAklkg5JIV8JYjvkApJ8c=; b=FANQB+i9aEOe+stHUUD7tyJp9ibY8fuBk3gZ4WEp+5DUihyMVQLnPS6B3qrNz4wHRY vqj4cX13kP6WV8pKVfnY6cXXvZKTNMZn2BoiZ7qy0YUdEQUqO2fZV0IOZ9Ng+ceMj7m5 MH4bX4Tiyv0uzuW+zA9FNri+WJPqmvz7pPoWybh6g+3/G/3+BjFlFXYKXlKieZQBaXP5 ExNDCcAxKYKYpfWg98v+Bn15RcGx1aL4P1iyj767z1oVBZCYn+AsvHRDxFtXSXQEJ2fd ZyuX4XNSVxsBTzmhReG4YMz0HiZZ8l5qoHsUYxZqxsiJMvgbppkrbma94ep+qITCizY9 f5tg== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; 
h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to :references; bh=V6XOG/FGgPceH4HTM4sWnOAAklkg5JIV8JYjvkApJ8c=; b=hH041laIlI+bIotwOeU2xicksCOYyikP6EyKGMeQr5na9b9MkuwBhnN7YxEk2FlLAs RtL4uxqIy8yWs8+tJBEABmhSwJE3mRcZkKj/hPsdRPdgG5ytwNzhBg7HDAwEd9Ay07Yk Z1YqvtFcdGR9F4yECDf1Qbx4vXPmgwaZLhU5nR+tIC3OVfilniaHVAnLZ3IaWJU9CtaY FPozAX0Pf1645NBSy73nK5PWa/a52fhLUHZRpkMvWuK9HC6cYWYH9L4mwITChhlVo5NT 2BFhvYwKB3Kb5B4g4MiEyBzk+4QrW+Zdz+G8vBDpYl/gfi1LQ4J4z8q58YpDDFCHWcJd fjcA== X-Gm-Message-State: APjAAAUTQ5XJFUDlNEqRvTiBB6W9O1WIJL8hvvkfLsiZe9Qz8f64bTnn ucTJFuBSYF5KTaeZaAvzZHBvIr1NGXc= X-Google-Smtp-Source: APXvYqxPzMIKt3hU1+5rdmyp+simhX5pnXT+OfmtN+dGWFSSGs2JxlUOG0k+EiVGc5ISwVc9ozOefQ== X-Received: by 2002:ac2:44b9:: with SMTP id c25mr2460096lfm.112.1569339055698; Tue, 24 Sep 2019 08:30:55 -0700 (PDT) Received: from otyshchenko.kyiv.epam.com (ll-74.141.223.85.sovam.net.ua. [85.223.141.74]) by smtp.gmail.com with ESMTPSA id z128sm525818lfa.1.2019.09.24.08.30.54 (version=TLS1_2 cipher=ECDHE-RSA-AES128-SHA bits=128/128); Tue, 24 Sep 2019 08:30:55 -0700 (PDT) From: Oleksandr Tyshchenko To: xen-devel@lists.xenproject.org Date: Tue, 24 Sep 2019 18:30:25 +0300 Message-Id: <1569339027-19484-7-git-send-email-olekstysh@gmail.com> X-Mailer: git-send-email 2.7.4 In-Reply-To: <1569339027-19484-1-git-send-email-olekstysh@gmail.com> References: <1569339027-19484-1-git-send-email-olekstysh@gmail.com> Subject: [Xen-devel] [PATCH V5 6/8] iommu: Order the headers alphabetically in device_tree.c X-BeenThere: xen-devel@lists.xenproject.org X-Mailman-Version: 2.1.23 Precedence: list List-Id: Xen developer discussion List-Unsubscribe: , List-Post: List-Help: List-Subscribe: , Cc: Oleksandr Tyshchenko , julien.grall@arm.com, sstabellini@kernel.org, volodymyr_babchuk@epam.com MIME-Version: 1.0 Errors-To: xen-devel-bounces@lists.xenproject.org Sender: "Xen-devel" From: Oleksandr Tyshchenko Clean up the code a bit by putting the headers in alphabetical order. Signed-off-by: Oleksandr Tyshchenko Acked-by: Julien Grall --- xen/drivers/passthrough/device_tree.c | 6 +++--- 1 file changed, 3 insertions(+), 3 deletions(-) diff --git a/xen/drivers/passthrough/device_tree.c b/xen/drivers/passthrough/device_tree.c index 12f2c4c..3f328f4 100644 --- a/xen/drivers/passthrough/device_tree.c +++ b/xen/drivers/passthrough/device_tree.c @@ -15,11 +15,11 @@ * GNU General Public License for more details. 
*/ -#include -#include +#include #include #include -#include +#include +#include #include static spinlock_t dtdevs_lock = SPIN_LOCK_UNLOCKED; From patchwork Tue Sep 24 15:30:26 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Oleksandr Tyshchenko X-Patchwork-Id: 11159129 Return-Path: Received: from mail.kernel.org (pdx-korg-mail-1.web.codeaurora.org [172.30.200.123]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id 20CA417EE for ; Tue, 24 Sep 2019 15:32:48 +0000 (UTC) Received: from lists.xenproject.org (lists.xenproject.org [192.237.175.120]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by mail.kernel.org (Postfix) with ESMTPS id E55EC217D7 for ; Tue, 24 Sep 2019 15:32:47 +0000 (UTC) Authentication-Results: mail.kernel.org; dkim=fail reason="signature verification failed" (2048-bit key) header.d=gmail.com header.i=@gmail.com header.b="pAhklr1x" DMARC-Filter: OpenDMARC Filter v1.3.2 mail.kernel.org E55EC217D7 Authentication-Results: mail.kernel.org; dmarc=fail (p=none dis=none) header.from=gmail.com Authentication-Results: mail.kernel.org; spf=none smtp.mailfrom=xen-devel-bounces@lists.xenproject.org Received: from localhost ([127.0.0.1] helo=lists.xenproject.org) by lists.xenproject.org with esmtp (Exim 4.89) (envelope-from ) id 1iCmml-00082S-U5; Tue, 24 Sep 2019 15:31:27 +0000 Received: from us1-rack-iad1.inumbo.com ([172.99.69.81]) by lists.xenproject.org with esmtp (Exim 4.89) (envelope-from ) id 1iCmmj-00081e-Vh for xen-devel@lists.xenproject.org; Tue, 24 Sep 2019 15:31:26 +0000 X-Inumbo-ID: 4e0bfc04-dee0-11e9-bf31-bc764e2007e4 Received: from mail-lf1-x141.google.com (unknown [2a00:1450:4864:20::141]) by localhost (Halon) with ESMTPS id 4e0bfc04-dee0-11e9-bf31-bc764e2007e4; Tue, 24 Sep 2019 15:30:58 +0000 (UTC) Received: by mail-lf1-x141.google.com with SMTP id u3so1739657lfl.10 for ; Tue, 24 Sep 2019 08:30:58 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025; h=from:to:cc:subject:date:message-id:in-reply-to:references; bh=fx31G+xE30gvwkp/PyhDSOqp0zx0pQGOVGh0hjTeTJQ=; b=pAhklr1xt+wXGNd+UeI1wUFdonF0bkm/xD7OXtEBWfcQv+Pw63rDEkpXKvFdOc+HSQ tusC46G163cMTrPe3K5Udtex0qoG33XfOkPzQ6DDFbz3ZWdXcC8tXYVf/RzLvWWgysEA r1t/W/7LuNuVTn0F6/n7TkMA6fynvAr5ngijtDsSjYJFL8uAW2g/YzkvlHi4q66jgYHh Vw+QgPDO5bJFwucbubO2HtiHWBl3Bby3qygK/tJ2w6HE1ZLIiH5EL6QWK7E/ieOhzGS7 nrx/g2KD7iN0H6ZJN5wA48dIO93kY3dDNBTNitspm1wF6s2+u19c9TJIvdvX4nBAehqk BOjg== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to :references; bh=fx31G+xE30gvwkp/PyhDSOqp0zx0pQGOVGh0hjTeTJQ=; b=DEMyXBRtmEROYO4deUgCakhWceCnETIiv6ZTMxiamBb2iuDNtS15o5qrVTbhjYcQRy KFpblZYRHXdoVjiy3NJaotbf0ZM3HpTJr4sfreDM6jvSaZvf0OkpBjb8XAiFh1OPCRuu wwD0Y30u+lQFVNNniefgEjq09CcmN70S4QGbXi49PglMM0VzcNOZltlta5SLrrmQCXAK IoPjadxNduRhK0H9PaKz1fss68/J0126FAx1JA+E+ABbEOA/5r1k1RLCWhL9agiGksnv sUmv5j/eGXMJjRIvbJXP3ZbCqfCqraNE6EDr9WoBeeqJOQaJkcZ1Ut73T4k+s6kMIUSZ dJRw== X-Gm-Message-State: APjAAAU0CqtAX3Uvi+DQJMU7GxIVq7dNK8RXNDAl7Pgx+EJqhim0kNG0 uVLVYxf0p6j4fcjc7FOR34/l/rX3Erc= X-Google-Smtp-Source: APXvYqyooF/bRI+8Rcptb8sjQ+KkxYLORSBaLT7tYBi9TTv04to6CsenaRgBQTvPMz8bYxCwYmHvDg== X-Received: by 2002:a05:6512:488:: with SMTP id v8mr2329190lfq.37.1569339056923; Tue, 24 Sep 2019 08:30:56 -0700 (PDT) Received: from otyshchenko.kyiv.epam.com (ll-74.141.223.85.sovam.net.ua. 
[85.223.141.74]) by smtp.gmail.com with ESMTPSA id z128sm525818lfa.1.2019.09.24.08.30.55 (version=TLS1_2 cipher=ECDHE-RSA-AES128-SHA bits=128/128); Tue, 24 Sep 2019 08:30:56 -0700 (PDT) From: Oleksandr Tyshchenko To: xen-devel@lists.xenproject.org Date: Tue, 24 Sep 2019 18:30:26 +0300 Message-Id: <1569339027-19484-8-git-send-email-olekstysh@gmail.com> X-Mailer: git-send-email 2.7.4 In-Reply-To: <1569339027-19484-1-git-send-email-olekstysh@gmail.com> References: <1569339027-19484-1-git-send-email-olekstysh@gmail.com> Subject: [Xen-devel] [PATCH V5 7/8] iommu/arm: Introduce iommu_add_dt_device API X-BeenThere: xen-devel@lists.xenproject.org X-Mailman-Version: 2.1.23 Precedence: list List-Id: Xen developer discussion List-Unsubscribe: , List-Post: List-Help: List-Subscribe: , Cc: Oleksandr Tyshchenko , julien.grall@arm.com, sstabellini@kernel.org, volodymyr_babchuk@epam.com, Jan Beulich MIME-Version: 1.0 Errors-To: xen-devel-bounces@lists.xenproject.org Sender: "Xen-devel" From: Oleksandr Tyshchenko The main purpose of this patch is to add a way to register a DT device (which is behind the IOMMU) using the generic IOMMU DT bindings [1] before assigning that device to a domain. So, this patch adds a new "iommu_add_dt_device" API for adding a DT device to the IOMMU using the generic IOMMU DT bindings and the previously added "iommu_fwspec" support. As devices can be assigned to the hardware domain as well as to other domains, this function is called from two places: handle_device() and iommu_do_dt_domctl(). Besides that, this patch adds a new "dt_xlate" callback (borrowed from Linux's "of_xlate") for providing the driver with the DT IOMMU specifier which describes the IOMMU master interfaces of that device (device IDs, etc). According to the generic IOMMU DT bindings, the content of the required properties for the IOMMU device/master nodes (#iommu-cells, iommus) depends on many factors and is really a driver-specific matter. Please note that all IOMMU drivers which support the generic IOMMU DT bindings should implement both the "dt_xlate" and "add_device" callbacks.
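For illustration only (not part of the patch): a minimal sketch of how a caller could consume the new API, following the return convention documented in iommu.h below (0 = device is protected, >0 = protection is not needed, <0 = error). The wrapper name example_protect_device() is made up; iommu_add_dt_device() and iommu_assign_dt_device() are the real functions.

#include <xen/device_tree.h>
#include <xen/iommu.h>

/* Hypothetical caller, shown only to illustrate the return convention. */
static int example_protect_device(struct domain *d, struct dt_device_node *dev)
{
    int rc = iommu_add_dt_device(dev);

    if ( rc < 0 )
        return rc;      /* Device must be protected, but could not be added. */

    if ( rc > 0 )
        return 0;       /* No IOMMU (or device not behind one): nothing to do. */

    /* rc == 0: the driver has marked the device as protected. */
    return iommu_assign_dt_device(d, dev);
}

The actual handle_device() change below takes a slightly different route: it ignores the return value and instead checks dt_device_is_protected() after the call.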
[1] https://www.kernel.org/doc/Documentation/devicetree/bindings/iommu/iommu.txt Signed-off-by: Oleksandr Tyshchenko CC: Julien Grall CC: Jan Beulich --- Changes V4 -> V5: - added "const" to struct dt_phandle_args *args in dt_xlate - moved iommu_add_dt_device() to xen/passthrough/device_tree.c - modified logic, don't try to add "all" devices to the IOMMU when constructing Dom0, but only devices which are going to be assigned to hwdom - updated iommu_do_dt_domctl() to call iommu_add_dt_device() - clarified patch description - removed "__init" from iommu_add_dt_device() Changes V3 -> V4: - squashed with "iommu: Add of_xlate callback" patch - renamed "of_xlate" to "dt_xlate" - reworked patch description - clarified comments in code, removed confusing word "initialize device", etc - updated debug message in handle_device() - modified to check ops->of_xlate and ops->add_device only if "iommus" property is exists Changes V2 -> V3: - clarified patch description - clarified comments in code - modified to provide DT IOMMU specifier to the driver using "of_xlate" callback - documented function usage - modified to return an error if ops is not present/implemented, - added ability to return a possitive value to indicate that device doesn't need to be protected - removed check for the "iommu" property presence in the common code - included directly --- xen/arch/arm/domain_build.c | 22 +++++++---- xen/drivers/passthrough/device_tree.c | 71 +++++++++++++++++++++++++++++++++++ xen/include/xen/iommu.h | 21 +++++++++++ 3 files changed, 107 insertions(+), 7 deletions(-) diff --git a/xen/arch/arm/domain_build.c b/xen/arch/arm/domain_build.c index a0fee1e..8048789 100644 --- a/xen/arch/arm/domain_build.c +++ b/xen/arch/arm/domain_build.c @@ -1243,6 +1243,7 @@ static int __init map_device_children(struct domain *d, * - Give permission to the guest to manage IRQ and MMIO range * - Retrieve the IRQ configuration (i.e edge/level) from device tree * When the device is not marked for guest passthrough: + * - Try to call iommu_add_dt_device to protect the device by an IOMMU * - Assign the device to the guest if it's protected by an IOMMU * - Map the IRQs and iomem regions to DOM0 */ @@ -1263,15 +1264,22 @@ static int __init handle_device(struct domain *d, struct dt_device_node *dev, dt_dprintk("%s passthrough = %d nirq = %d naddr = %u\n", dt_node_full_name(dev), need_mapping, nirq, naddr); - if ( dt_device_is_protected(dev) && need_mapping ) + if ( need_mapping ) { - dt_dprintk("%s setup iommu\n", dt_node_full_name(dev)); - res = iommu_assign_dt_device(d, dev); - if ( res ) - { - printk(XENLOG_ERR "Failed to setup the IOMMU for %s\n", + dt_dprintk("Check if %s is behind the IOMMU and add it\n", dt_node_full_name(dev)); - return res; + + iommu_add_dt_device(dev); + if ( dt_device_is_protected(dev) ) + { + dt_dprintk("%s setup iommu\n", dt_node_full_name(dev)); + res = iommu_assign_dt_device(d, dev); + if ( res ) + { + printk(XENLOG_ERR "Failed to setup the IOMMU for %s\n", + dt_node_full_name(dev)); + return res; + } } } diff --git a/xen/drivers/passthrough/device_tree.c b/xen/drivers/passthrough/device_tree.c index 3f328f4..0fd2488 100644 --- a/xen/drivers/passthrough/device_tree.c +++ b/xen/drivers/passthrough/device_tree.c @@ -22,6 +22,8 @@ #include #include +#include + static spinlock_t dtdevs_lock = SPIN_LOCK_UNLOCKED; int iommu_assign_dt_device(struct domain *d, struct dt_device_node *dev) @@ -136,6 +138,68 @@ int iommu_release_dt_devices(struct domain *d) return 0; } +int iommu_add_dt_device(struct 
dt_device_node *np) +{ + const struct iommu_ops *ops = iommu_get_ops(); + struct dt_phandle_args iommu_spec; + struct device *dev = dt_to_dev(np); + int rc = 1, index = 0; + + if ( !iommu_enabled ) + return 1; + + if ( !ops ) + return -EINVAL; + + if ( dev_iommu_fwspec_get(dev) ) + return -EEXIST; + + /* + * According to the Documentation/devicetree/bindings/iommu/iommu.txt + * from Linux. + */ + while ( !dt_parse_phandle_with_args(np, "iommus", "#iommu-cells", + index, &iommu_spec) ) + { + /* + * The driver which supports generic IOMMU DT bindings must have + * these callback implemented. + */ + if ( !ops->add_device || !ops->dt_xlate ) + return -EINVAL; + + if ( !dt_device_is_available(iommu_spec.np) ) + break; + + rc = iommu_fwspec_init(dev, &iommu_spec.np->dev); + if ( rc ) + break; + + /* + * Provide DT IOMMU specifier which describes the IOMMU master + * interfaces of that device (device IDs, etc) to the driver. + * The driver is responsible to decide how to interpret them. + */ + rc = ops->dt_xlate(dev, &iommu_spec); + if ( rc ) + break; + + index++; + } + + /* + * Add master device to the IOMMU if latter is present and available. + * The driver is responsible to mark that device as protected. + */ + if ( !rc ) + rc = ops->add_device(0, dev); + + if ( rc < 0 ) + iommu_fwspec_free(dev); + + return rc; +} + int iommu_do_dt_domctl(struct xen_domctl *domctl, struct domain *d, XEN_GUEST_HANDLE_PARAM(xen_domctl_t) u_domctl) { @@ -177,6 +241,13 @@ int iommu_do_dt_domctl(struct xen_domctl *domctl, struct domain *d, break; } + iommu_add_dt_device(dev); + if ( !dt_device_is_protected(dev) ) + { + ret = -EINVAL; + break; + } + ret = iommu_assign_dt_device(d, dev); if ( ret ) diff --git a/xen/include/xen/iommu.h b/xen/include/xen/iommu.h index c5ed7ef..2d1cc6e 100644 --- a/xen/include/xen/iommu.h +++ b/xen/include/xen/iommu.h @@ -186,6 +186,17 @@ int iommu_deassign_dt_device(struct domain *d, struct dt_device_node *dev); int iommu_dt_domain_init(struct domain *d); int iommu_release_dt_devices(struct domain *d); +/* + * Helper to add master device to the IOMMU using generic IOMMU DT bindings. + * + * Return values: + * 0 : device is protected by an IOMMU + * <0 : device is not protected by an IOMMU, but must be (error condition) + * >0 : device doesn't need to be protected by an IOMMU + * (IOMMU is not enabled/present or device is not connected to it). + */ +int iommu_add_dt_device(struct dt_device_node *np); + int iommu_do_dt_domctl(struct xen_domctl *, struct domain *, XEN_GUEST_HANDLE_PARAM(xen_domctl_t)); @@ -254,6 +265,16 @@ struct iommu_ops { int __must_check (*iotlb_flush_all)(struct domain *d); int (*get_reserved_device_memory)(iommu_grdm_t *, void *); void (*dump_p2m_table)(struct domain *d); + +#ifdef CONFIG_HAS_DEVICE_TREE + /* + * All IOMMU drivers which support generic IOMMU DT bindings should use + * this callback. This is a way for the framework to provide the driver + * with DT IOMMU specifier which describes the IOMMU master interfaces of + * that device (device IDs, etc). 
+ */ + int (*dt_xlate)(device_t *dev, const struct dt_phandle_args *args); +#endif }; #include From patchwork Tue Sep 24 15:30:27 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Oleksandr Tyshchenko X-Patchwork-Id: 11159137 Return-Path: Received: from mail.kernel.org (pdx-korg-mail-1.web.codeaurora.org [172.30.200.123]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id 6840317EE for ; Tue, 24 Sep 2019 15:33:32 +0000 (UTC) Received: from lists.xenproject.org (lists.xenproject.org [192.237.175.120]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by mail.kernel.org (Postfix) with ESMTPS id 2CB3420665 for ; Tue, 24 Sep 2019 15:33:32 +0000 (UTC) Authentication-Results: mail.kernel.org; dkim=fail reason="signature verification failed" (2048-bit key) header.d=gmail.com header.i=@gmail.com header.b="K5nbYjBs" DMARC-Filter: OpenDMARC Filter v1.3.2 mail.kernel.org 2CB3420665 Authentication-Results: mail.kernel.org; dmarc=fail (p=none dis=none) header.from=gmail.com Authentication-Results: mail.kernel.org; spf=none smtp.mailfrom=xen-devel-bounces@lists.xenproject.org Received: from localhost ([127.0.0.1] helo=lists.xenproject.org) by lists.xenproject.org with esmtp (Exim 4.89) (envelope-from ) id 1iCmmq-00085q-Df; Tue, 24 Sep 2019 15:31:32 +0000 Received: from us1-rack-iad1.inumbo.com ([172.99.69.81]) by lists.xenproject.org with esmtp (Exim 4.89) (envelope-from ) id 1iCmmo-00084o-WB for xen-devel@lists.xenproject.org; Tue, 24 Sep 2019 15:31:31 +0000 X-Inumbo-ID: 4f26611a-dee0-11e9-b588-bc764e2007e4 Received: from mail-lj1-x243.google.com (unknown [2a00:1450:4864:20::243]) by localhost (Halon) with ESMTPS id 4f26611a-dee0-11e9-b588-bc764e2007e4; Tue, 24 Sep 2019 15:31:00 +0000 (UTC) Received: by mail-lj1-x243.google.com with SMTP id v24so2364910ljj.3 for ; Tue, 24 Sep 2019 08:30:59 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025; h=from:to:cc:subject:date:message-id:in-reply-to:references; bh=g/AM/w//wq1KzJmBr6XyL1KuKQ+WAJLP+iZCkYy2m3E=; b=K5nbYjBsYbpMSrOw5SpFZJCFr54FUhqlg7BH9wJ4chgBw5FpVa32vVarQQV3ON30ib X8+zPwYSouWjWUy67rR/D73mZmJKnkhh+58eWu9bcmBxNKJeHjrPRLiBzsmlR4z9SJQB e1ND6lRaehGDX5MbwcViI+f3RZbkhTkVKxK1dun+3HD3QNuMWdNt+GOhHlrEzLWVZSVL WHndHjhAh51+RcX7d6LNoA1MiM0A/GsMgckJ9ZCbcaudzeIpVnwQxa4mKJ7n0diP42kp 6IjSv22pQwQlZ54YGPSv+3SIdaksgLb7i1AcpRVROYbQhiiT0EQg8EDiNrld8ER68tOb wF7w== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to :references; bh=g/AM/w//wq1KzJmBr6XyL1KuKQ+WAJLP+iZCkYy2m3E=; b=Jm7WWrnG7Db4c1XruYnulWVtSCJmDJ502xtLVhHapO6eh+BIUqnXay5GVO3Meay3Hh Xv2/BK68/UA2RY/iXpKfM05RNp+0qB5A8anzEqQvz/eI7Gg5fhT/9QJUhD1xK6Pr3p96 qRMUT7u/ebqRGJeh4R192Ak56zw5s3uXD50e2s6EL3WH6uB1vrAsB91WW8YUaREbuwmz I1EY7yEmTdZBsh6t04nnRyAk+hEWsiJQ67eJ68K28lFSer4fHah+46Lw476E/g2e9sUe WEzlo/1ikVeazkpVWM7kwvH4wTEsmWbq4O6AkeMacHDTCxVILFEYMLpjhlL0E2hBb9VN yHVw== X-Gm-Message-State: APjAAAXtdoGVZjFOkBqE0tbo0wSLPP4F8Wynmiaw66exiHObWdIKlP6S 5ANghzal0RwnElYvtGDe2oo4HjiHLaM= X-Google-Smtp-Source: APXvYqwn3HBGLTl5wTVkW8Jl4Htnh5g6wsnRg1hyxTt25qQYc8Ox+Wz8iWkYbez3FJkmf2Swm258wA== X-Received: by 2002:a2e:9a88:: with SMTP id p8mr2362545lji.86.1569339058065; Tue, 24 Sep 2019 08:30:58 -0700 (PDT) Received: from otyshchenko.kyiv.epam.com (ll-74.141.223.85.sovam.net.ua. 
[85.223.141.74]) by smtp.gmail.com with ESMTPSA id z128sm525818lfa.1.2019.09.24.08.30.56 (version=TLS1_2 cipher=ECDHE-RSA-AES128-SHA bits=128/128); Tue, 24 Sep 2019 08:30:57 -0700 (PDT) From: Oleksandr Tyshchenko To: xen-devel@lists.xenproject.org Date: Tue, 24 Sep 2019 18:30:27 +0300 Message-Id: <1569339027-19484-9-git-send-email-olekstysh@gmail.com> X-Mailer: git-send-email 2.7.4 In-Reply-To: <1569339027-19484-1-git-send-email-olekstysh@gmail.com> References: <1569339027-19484-1-git-send-email-olekstysh@gmail.com> Subject: [Xen-devel] [PATCH V5 8/8] iommu/arm: Add Renesas IPMMU-VMSA support X-BeenThere: xen-devel@lists.xenproject.org X-Mailman-Version: 2.1.23 Precedence: list List-Id: Xen developer discussion List-Unsubscribe: , List-Post: List-Help: List-Subscribe: , Cc: Oleksandr Tyshchenko , julien.grall@arm.com, sstabellini@kernel.org, volodymyr_babchuk@epam.com, Yoshihiro Shimoda MIME-Version: 1.0 Errors-To: xen-devel-bounces@lists.xenproject.org Sender: "Xen-devel" From: Oleksandr Tyshchenko The IPMMU-VMSA is a VMSA-compatible I/O Memory Management Unit (IOMMU) which provides address translation and access protection functionality to processing units and interconnect networks. Please note that the current driver is supposed to work only with the newest R-Car Gen3 SoC revisions, whose IPMMU hardware supports the stage 2 translation table format and is therefore able to use the CPU's P2M table as is, provided it is a 3-level page table (up to 40-bit IPA). The major differences compared to the Linux driver are:
1. Stage 1/Stage 2 translation. The Linux driver supports Stage 1 translation only (with the Stage 1 translation table format) and manages the page table by itself. The Xen driver, by contrast, supports Stage 2 translation (with the Stage 2 translation table format) so that the P2M can be shared with the CPU; Stage 1 translation is always bypassed. Consequently, the Xen driver is supposed to be used only with the newest R-Car Gen3 SoC revisions (H3 ES3.0, M3-W+, etc.) whose IPMMU H/W supports the stage 2 translation table format.
2. AArch64 support. The Linux driver uses VMSAv8-32 mode, while the Xen driver enables Armv8 VMSAv8-64 mode to cover up to a 40-bit input address.
3. Context bank (set of page tables) usage. In Xen, each context bank is mapped to one Xen domain, so all devices passed through to the same Xen domain share the same context bank.
4. IPMMU device tracking. In Xen, all IOMMU devices are managed by a single driver instance, so the driver uses a global list to keep track of registered IPMMU devices.
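Before reading the full driver, a condensed, illustrative sketch of the P2M-sharing step described above may help. It is not a separate function in the patch; it merely summarises what ipmmu_domain_init_context() (added below) does with the context registers. All identifiers except the made-up example_share_p2m() come from this file.

static void example_share_p2m(struct ipmmu_vmsa_domain *domain)
{
    /* Point the context's TTBR0 at this Xen domain's P2M root. */
    uint64_t ttbr = page_to_maddr(domain->d->arch.p2m.root);

    ipmmu_ctx_write_root(domain, IMTTLBR0, ttbr & IMTTLBR0_TTBR_MASK);
    ipmmu_ctx_write_root(domain, IMTTUBR0, (ttbr >> 32) & IMTTUBR0_TTBR_MASK);

    /* Cover the whole p2m_ipa_bits IPA space; walks start at the first level. */
    ipmmu_ctx_write_root(domain, IMTTBCR, IMTTBCR_EAE | IMTTBCR_PMB |
                         IMTTBCR_SL0_LVL_1 |
                         ((64 - p2m_ipa_bits) << IMTTBCR_TSZ0_SHIFT));

    /* Enable the context in VMSAv8-64 mode; stage 1 is always bypassed. */
    ipmmu_ctx_write_root(domain, IMCTR,
                         IMCTR_VA64 | IMCTR_INTEN | IMCTR_FLUSH | IMCTR_MMUEN);
}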
Signed-off-by: Oleksandr Tyshchenko [for the IPMMU H/W bits] Reviewed-by: Yoshihiro Shimoda Acked-by: Julien Grall --- Changes V4 -> V5: - added R-b/A-b - added "const" to struct dt_phandle_args *spec in ipmmu_dt_xlate() - added "const" to struct dt_phandle_args *args in ipmmu_init_platform_device() Changes V3 -> V4: - changed "Gen3" to "R-Car Gen3" in commit description - renamed "of_xlate" to "dt_xlate" - removed check for the iommu_hap_pt_share flag in ipmmu_init() - modified to be able to select IPMMU_VMSA option only if EXPERT is set - removed an error message when deferring the probe - moved check for the P2M sharing possibility to the Root IPMMU path Changes V2 -> V3: - added Copyright from Linux - included and directly - removed pointless spin_lock from ipmmu_iommu_domain_teardown() - changed "Gen3" to "R-Car Gen3", "M3 ES3.0" to "M3-W+" - changed RCAR_PRODUCT_M3 to RCAR_PRODUCT_M3W - clarified IMTTBCR_TSZ1_MASK - removed TLB flush from ipmmu_domain_irq() until clarified the purpose - returned -ENODEV in ipmmu_init() if P2M sharing or P2M IPA size is not supported to show that IOMMU device is not usable in Xen - implemented new callback "of_xlate" to parse DT IOMMU specifier and add device IDs to "iommu_fwspec" - reworked "add_device" callback - updated patch description Changes V1 -> V2: - rewrited driver to use iommu_fwspec - removed DT parsing code for micro-TLBs - removed struct ipmmu_vmsa_master_cfg, dev_archdata macro - added ipmmu_find_mmu_by_dev(), various helpers to access fwspec->iommu_priv - implemented new callback "add_device" to add master device to IPMMU - removed ipmmu_protect_masters() - removed code to locate Root device in the first place, used EAGAIN to request deferred probing - used printk_once for the system wide error messages in ipmmu_init() which don't need to be shown for every device being probed - removed map_page/unmap_page implementation, reused them from iommu_helpers.c - used %pd for printing domaid id - performed various cosmetic fixes - changed u32 -> uint32_t, u64 -> uint64_t, unsigned int -> uint32_t where needed - clarified TODOs - clafiried supported SoC versions in config IPMMU_VMSA, set default to "n" - updated comments in code, provided more accurate description, added new comments where needed - updated patch description by providing differences between Linux/Xen implementations - removed fields for cache snoop transaction when configuring IMTTBCR (update from Renesas BSP) --- xen/arch/arm/platforms/Kconfig | 1 + xen/drivers/passthrough/Kconfig | 13 + xen/drivers/passthrough/arm/Makefile | 1 + xen/drivers/passthrough/arm/ipmmu-vmsa.c | 1333 ++++++++++++++++++++++++++++++ 4 files changed, 1348 insertions(+) create mode 100644 xen/drivers/passthrough/arm/ipmmu-vmsa.c diff --git a/xen/arch/arm/platforms/Kconfig b/xen/arch/arm/platforms/Kconfig index bc0e9cd..4bb7319 100644 --- a/xen/arch/arm/platforms/Kconfig +++ b/xen/arch/arm/platforms/Kconfig @@ -25,6 +25,7 @@ config RCAR3 bool "Renesas RCar3 support" depends on ARM_64 select HAS_SCIF + select IPMMU_VMSA if EXPERT ---help--- Enable all the required drivers for Renesas RCar3 diff --git a/xen/drivers/passthrough/Kconfig b/xen/drivers/passthrough/Kconfig index a3c0649..e22301c 100644 --- a/xen/drivers/passthrough/Kconfig +++ b/xen/drivers/passthrough/Kconfig @@ -12,4 +12,17 @@ config ARM_SMMU Say Y here if your SoC includes an IOMMU device implementing the ARM SMMU architecture. 
+ +config IPMMU_VMSA + bool "Renesas IPMMU-VMSA found in R-Car Gen3 SoCs" if EXPERT = "y" + default n + depends on ARM_64 + ---help--- + Support for implementations of the Renesas IPMMU-VMSA found + in R-Car Gen3 SoCs. + + Say Y here if you are using newest R-Car Gen3 SoCs revisions + (H3 ES3.0, M3-W+, etc) which IPMMU hardware supports stage 2 + translation table format and is able to use CPU's P2M table as is. + endif diff --git a/xen/drivers/passthrough/arm/Makefile b/xen/drivers/passthrough/arm/Makefile index 5fbad45..fcd918e 100644 --- a/xen/drivers/passthrough/arm/Makefile +++ b/xen/drivers/passthrough/arm/Makefile @@ -1,2 +1,3 @@ obj-y += iommu.o iommu_helpers.o iommu_fwspec.o obj-$(CONFIG_ARM_SMMU) += smmu.o +obj-$(CONFIG_IPMMU_VMSA) += ipmmu-vmsa.o diff --git a/xen/drivers/passthrough/arm/ipmmu-vmsa.c b/xen/drivers/passthrough/arm/ipmmu-vmsa.c new file mode 100644 index 0000000..f2fb4a2 --- /dev/null +++ b/xen/drivers/passthrough/arm/ipmmu-vmsa.c @@ -0,0 +1,1333 @@ +/* + * xen/drivers/passthrough/arm/ipmmu-vmsa.c + * + * Driver for the Renesas IPMMU-VMSA found in R-Car Gen3 SoCs. + * + * The IPMMU-VMSA is VMSA-compatible I/O Memory Management Unit (IOMMU) + * which provides address translation and access protection functionalities + * to processing units and interconnect networks. + * + * Please note, current driver is supposed to work only with newest + * R-Car Gen3 SoCs revisions which IPMMU hardware supports stage 2 translation + * table format and is able to use CPU's P2M table as is. + * + * Based on Linux's IPMMU-VMSA driver from Renesas BSP: + * drivers/iommu/ipmmu-vmsa.c + * you can found at: + * url: git://git.kernel.org/pub/scm/linux/kernel/git/horms/renesas-bsp.git + * branch: v4.14.75-ltsi/rcar-3.9.6 + * commit: e206eb5b81a60e64c35fbc3a999b1a0db2b98044 + * and Xen's SMMU driver: + * xen/drivers/passthrough/arm/smmu.c + * + * Copyright (C) 2014-2019 Renesas Electronics Corporation + * + * Copyright (C) 2016-2019 EPAM Systems Inc. + * + * This program is free software; you can redistribute it and/or + * modify it under the terms and conditions of the GNU General Public + * License, version 2, as published by the Free Software Foundation. + * + * This program is distributed in the hope that it will be useful, + * but WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU + * General Public License for more details. + * + * You should have received a copy of the GNU General Public + * License along with this program; If not, see . + */ + +#include +#include +#include +#include +#include +#include +#include +#include +#include + +#include +#include +#include +#include + +#define dev_name(dev) dt_node_full_name(dev_to_dt(dev)) + +/* Device logger functions */ +#define dev_print(dev, lvl, fmt, ...) \ + printk(lvl "ipmmu: %s: " fmt, dev_name(dev), ## __VA_ARGS__) + +#define dev_info(dev, fmt, ...) \ + dev_print(dev, XENLOG_INFO, fmt, ## __VA_ARGS__) +#define dev_warn(dev, fmt, ...) \ + dev_print(dev, XENLOG_WARNING, fmt, ## __VA_ARGS__) +#define dev_err(dev, fmt, ...) \ + dev_print(dev, XENLOG_ERR, fmt, ## __VA_ARGS__) +#define dev_err_ratelimited(dev, fmt, ...) \ + dev_print(dev, XENLOG_ERR, fmt, ## __VA_ARGS__) + +/* + * R-Car Gen3 SoCs make use of up to 8 IPMMU contexts (sets of page table) and + * these can be managed independently. Each context is mapped to one Xen domain. + */ +#define IPMMU_CTX_MAX 8 +/* R-Car Gen3 SoCs make use of up to 48 micro-TLBs per IPMMU device. 
*/ +#define IPMMU_UTLB_MAX 48 + +/* IPMMU context supports IPA size up to 40 bit. */ +#define IPMMU_MAX_P2M_IPA_BITS 40 + +/* + * Xen's domain IPMMU information stored in dom_iommu(d)->arch.priv + * + * As each context (set of page table) is mapped to one Xen domain, + * all associated IPMMU domains use the same context mapped to this Xen domain. + * This makes all master devices being attached to the same Xen domain share + * the same context (P2M table). + */ +struct ipmmu_vmsa_xen_domain { + /* + * Used to protect everything which belongs to this Xen domain: + * device assignment, domain init/destroy, flush ops, etc + */ + spinlock_t lock; + /* One or more Cache IPMMU domains associated with this Xen domain */ + struct list_head cache_domains; + /* Root IPMMU domain associated with this Xen domain */ + struct ipmmu_vmsa_domain *root_domain; +}; + +/* Xen master device's IPMMU information stored in fwspec->iommu_priv */ +struct ipmmu_vmsa_xen_device { + /* Cache IPMMU domain this master device is logically attached to */ + struct ipmmu_vmsa_domain *domain; + /* Cache IPMMU this master device is physically connected to */ + struct ipmmu_vmsa_device *mmu; +}; + +/* Root/Cache IPMMU device's information */ +struct ipmmu_vmsa_device { + struct device *dev; + void __iomem *base; + struct ipmmu_vmsa_device *root; + struct list_head list; + unsigned int num_utlbs; + unsigned int num_ctx; + spinlock_t lock; /* Protects ctx and domains[] */ + DECLARE_BITMAP(ctx, IPMMU_CTX_MAX); + struct ipmmu_vmsa_domain *domains[IPMMU_CTX_MAX]; +}; + +/* + * Root/Cache IPMMU domain's information + * + * Root IPMMU device is assigned to Root IPMMU domain while Cache IPMMU device + * is assigned to Cache IPMMU domain. Master devices are connected to Cache + * IPMMU devices through specific ports called micro-TLBs. + * All Cache IPMMU devices, in turn, are connected to Root IPMMU device + * which manages IPMMU context. + */ +struct ipmmu_vmsa_domain { + /* + * IPMMU device assigned to this IPMMU domain. + * Either Root device which is located at the main memory bus domain or + * Cache device which is located at each hierarchy bus domain. + */ + struct ipmmu_vmsa_device *mmu; + + /* Context used for this IPMMU domain */ + unsigned int context_id; + + /* Xen domain associated with this IPMMU domain */ + struct domain *d; + + /* The fields below are used for Cache IPMMU domain only */ + + /* + * Used to keep track of the master devices which are attached to this + * IPMMU domain (domain users). Master devices behind the same IPMMU device + * are grouped together by putting into the same IPMMU domain. + * Only when the refcount reaches 0 this IPMMU domain can be destroyed. + */ + unsigned int refcount; + /* Used to link this IPMMU domain for the same Xen domain */ + struct list_head list; +}; + +/* Used to keep track of registered IPMMU devices */ +static LIST_HEAD(ipmmu_devices); +static DEFINE_SPINLOCK(ipmmu_devices_lock); + +#define TLB_LOOP_TIMEOUT 100 /* 100us */ + +/* Registers Definition */ +#define IM_CTX_SIZE 0x40 + +#define IMCTR 0x0000 +/* + * These fields are implemented in IPMMU-MM only. So, can be set for + * Root IPMMU only. + */ +#define IMCTR_VA64 (1 << 29) +#define IMCTR_TRE (1 << 17) +#define IMCTR_AFE (1 << 16) +#define IMCTR_RTSEL_MASK (3 << 4) +#define IMCTR_RTSEL_SHIFT 4 +#define IMCTR_TREN (1 << 3) +/* + * These fields are common for all IPMMU devices. So, can be set for + * Cache IPMMUs as well. 
+ */ +#define IMCTR_INTEN (1 << 2) +#define IMCTR_FLUSH (1 << 1) +#define IMCTR_MMUEN (1 << 0) +#define IMCTR_COMMON_MASK (7 << 0) + +#define IMCAAR 0x0004 + +#define IMTTBCR 0x0008 +#define IMTTBCR_EAE (1 << 31) +#define IMTTBCR_PMB (1 << 30) +#define IMTTBCR_SH1_NON_SHAREABLE (0 << 28) +#define IMTTBCR_SH1_OUTER_SHAREABLE (2 << 28) +#define IMTTBCR_SH1_INNER_SHAREABLE (3 << 28) +#define IMTTBCR_SH1_MASK (3 << 28) +#define IMTTBCR_ORGN1_NC (0 << 26) +#define IMTTBCR_ORGN1_WB_WA (1 << 26) +#define IMTTBCR_ORGN1_WT (2 << 26) +#define IMTTBCR_ORGN1_WB (3 << 26) +#define IMTTBCR_ORGN1_MASK (3 << 26) +#define IMTTBCR_IRGN1_NC (0 << 24) +#define IMTTBCR_IRGN1_WB_WA (1 << 24) +#define IMTTBCR_IRGN1_WT (2 << 24) +#define IMTTBCR_IRGN1_WB (3 << 24) +#define IMTTBCR_IRGN1_MASK (3 << 24) +#define IMTTBCR_TSZ1_MASK (0x1f << 16) +#define IMTTBCR_TSZ1_SHIFT 16 +#define IMTTBCR_SH0_NON_SHAREABLE (0 << 12) +#define IMTTBCR_SH0_OUTER_SHAREABLE (2 << 12) +#define IMTTBCR_SH0_INNER_SHAREABLE (3 << 12) +#define IMTTBCR_SH0_MASK (3 << 12) +#define IMTTBCR_ORGN0_NC (0 << 10) +#define IMTTBCR_ORGN0_WB_WA (1 << 10) +#define IMTTBCR_ORGN0_WT (2 << 10) +#define IMTTBCR_ORGN0_WB (3 << 10) +#define IMTTBCR_ORGN0_MASK (3 << 10) +#define IMTTBCR_IRGN0_NC (0 << 8) +#define IMTTBCR_IRGN0_WB_WA (1 << 8) +#define IMTTBCR_IRGN0_WT (2 << 8) +#define IMTTBCR_IRGN0_WB (3 << 8) +#define IMTTBCR_IRGN0_MASK (3 << 8) +#define IMTTBCR_SL0_LVL_2 (0 << 6) +#define IMTTBCR_SL0_LVL_1 (1 << 6) +#define IMTTBCR_TSZ0_MASK (0x1f << 0) +#define IMTTBCR_TSZ0_SHIFT 0 + +#define IMTTLBR0 0x0010 +#define IMTTLBR0_TTBR_MASK (0xfffff << 12) +#define IMTTUBR0 0x0014 +#define IMTTUBR0_TTBR_MASK (0xff << 0) +#define IMTTLBR1 0x0018 +#define IMTTLBR1_TTBR_MASK (0xfffff << 12) +#define IMTTUBR1 0x001c +#define IMTTUBR1_TTBR_MASK (0xff << 0) + +#define IMSTR 0x0020 +#define IMSTR_ERRLVL_MASK (3 << 12) +#define IMSTR_ERRLVL_SHIFT 12 +#define IMSTR_ERRCODE_TLB_FORMAT (1 << 8) +#define IMSTR_ERRCODE_ACCESS_PERM (4 << 8) +#define IMSTR_ERRCODE_SECURE_ACCESS (5 << 8) +#define IMSTR_ERRCODE_MASK (7 << 8) +#define IMSTR_MHIT (1 << 4) +#define IMSTR_ABORT (1 << 2) +#define IMSTR_PF (1 << 1) +#define IMSTR_TF (1 << 0) + +#define IMELAR 0x0030 +#define IMEUAR 0x0034 + +#define IMUCTR(n) ((n) < 32 ? IMUCTR0(n) : IMUCTR32(n)) +#define IMUCTR0(n) (0x0300 + ((n) * 16)) +#define IMUCTR32(n) (0x0600 + (((n) - 32) * 16)) +#define IMUCTR_FIXADDEN (1 << 31) +#define IMUCTR_FIXADD_MASK (0xff << 16) +#define IMUCTR_FIXADD_SHIFT 16 +#define IMUCTR_TTSEL_MMU(n) ((n) << 4) +#define IMUCTR_TTSEL_PMB (8 << 4) +#define IMUCTR_TTSEL_MASK (15 << 4) +#define IMUCTR_FLUSH (1 << 1) +#define IMUCTR_MMUEN (1 << 0) + +#define IMUASID(n) ((n) < 32 ? IMUASID0(n) : IMUASID32(n)) +#define IMUASID0(n) (0x0308 + ((n) * 16)) +#define IMUASID32(n) (0x0608 + (((n) - 32) * 16)) +#define IMUASID_ASID8_MASK (0xff << 8) +#define IMUASID_ASID8_SHIFT 8 +#define IMUASID_ASID0_MASK (0xff << 0) +#define IMUASID_ASID0_SHIFT 0 + +#define IMSAUXCTLR 0x0504 +#define IMSAUXCTLR_S2PTE (1 << 3) + +static struct ipmmu_vmsa_device *to_ipmmu(struct device *dev) +{ + struct iommu_fwspec *fwspec = dev_iommu_fwspec_get(dev); + + return fwspec && fwspec->iommu_priv ? 
+ ((struct ipmmu_vmsa_xen_device *)fwspec->iommu_priv)->mmu : NULL; +} + +static void set_ipmmu(struct device *dev, struct ipmmu_vmsa_device *mmu) +{ + struct iommu_fwspec *fwspec = dev_iommu_fwspec_get(dev); + + ((struct ipmmu_vmsa_xen_device *)fwspec->iommu_priv)->mmu = mmu; +} + +static struct ipmmu_vmsa_domain *to_domain(struct device *dev) +{ + struct iommu_fwspec *fwspec = dev_iommu_fwspec_get(dev); + + return fwspec && fwspec->iommu_priv ? + ((struct ipmmu_vmsa_xen_device *)fwspec->iommu_priv)->domain : NULL; +} + +static void set_domain(struct device *dev, struct ipmmu_vmsa_domain *domain) +{ + struct iommu_fwspec *fwspec = dev_iommu_fwspec_get(dev); + + ((struct ipmmu_vmsa_xen_device *)fwspec->iommu_priv)->domain = domain; +} + +static struct ipmmu_vmsa_device *ipmmu_find_mmu_by_dev(struct device *dev) +{ + struct ipmmu_vmsa_device *mmu = NULL; + bool found = false; + + spin_lock(&ipmmu_devices_lock); + + list_for_each_entry ( mmu, &ipmmu_devices, list ) + { + if ( mmu->dev == dev ) + { + found = true; + break; + } + } + + spin_unlock(&ipmmu_devices_lock); + + return found ? mmu : NULL; +} + +/* Root device handling */ +static bool ipmmu_is_root(struct ipmmu_vmsa_device *mmu) +{ + return mmu->root == mmu; +} + +static struct ipmmu_vmsa_device *ipmmu_find_root(void) +{ + struct ipmmu_vmsa_device *mmu = NULL; + bool found = false; + + spin_lock(&ipmmu_devices_lock); + + list_for_each_entry( mmu, &ipmmu_devices, list ) + { + if ( ipmmu_is_root(mmu) ) + { + found = true; + break; + } + } + + spin_unlock(&ipmmu_devices_lock); + + return found ? mmu : NULL; +} + +/* Read/Write Access */ +static uint32_t ipmmu_read(struct ipmmu_vmsa_device *mmu, uint32_t offset) +{ + return readl(mmu->base + offset); +} + +static void ipmmu_write(struct ipmmu_vmsa_device *mmu, uint32_t offset, + uint32_t data) +{ + writel(data, mmu->base + offset); +} + +static uint32_t ipmmu_ctx_read_root(struct ipmmu_vmsa_domain *domain, + uint32_t reg) +{ + return ipmmu_read(domain->mmu->root, + domain->context_id * IM_CTX_SIZE + reg); +} + +static void ipmmu_ctx_write_root(struct ipmmu_vmsa_domain *domain, + uint32_t reg, uint32_t data) +{ + ipmmu_write(domain->mmu->root, + domain->context_id * IM_CTX_SIZE + reg, data); +} + +static void ipmmu_ctx_write_cache(struct ipmmu_vmsa_domain *domain, + uint32_t reg, uint32_t data) +{ + /* We expect only IMCTR value to be passed as a reg. */ + ASSERT(reg == IMCTR); + + /* Mask fields which are implemented in IPMMU-MM only. */ + if ( !ipmmu_is_root(domain->mmu) ) + ipmmu_write(domain->mmu, domain->context_id * IM_CTX_SIZE + reg, + data & IMCTR_COMMON_MASK); +} + +/* + * Write the context to both Root IPMMU and all Cache IPMMUs assigned + * to this Xen domain. + */ +static void ipmmu_ctx_write_all(struct ipmmu_vmsa_domain *domain, + uint32_t reg, uint32_t data) +{ + struct ipmmu_vmsa_xen_domain *xen_domain = dom_iommu(domain->d)->arch.priv; + struct ipmmu_vmsa_domain *cache_domain; + + list_for_each_entry( cache_domain, &xen_domain->cache_domains, list ) + ipmmu_ctx_write_cache(cache_domain, reg, data); + + ipmmu_ctx_write_root(domain, reg, data); +} + +/* TLB and micro-TLB Management */ + +/* Wait for any pending TLB invalidations to complete. 
*/ +static void ipmmu_tlb_sync(struct ipmmu_vmsa_domain *domain) +{ + unsigned int count = 0; + + while ( ipmmu_ctx_read_root(domain, IMCTR) & IMCTR_FLUSH ) + { + cpu_relax(); + if ( ++count == TLB_LOOP_TIMEOUT ) + { + dev_err_ratelimited(domain->mmu->dev, "TLB sync timed out -- MMU may be deadlocked\n"); + return; + } + udelay(1); + } +} + +static void ipmmu_tlb_invalidate(struct ipmmu_vmsa_domain *domain) +{ + uint32_t data; + + data = ipmmu_ctx_read_root(domain, IMCTR); + data |= IMCTR_FLUSH; + ipmmu_ctx_write_all(domain, IMCTR, data); + + ipmmu_tlb_sync(domain); +} + +/* Enable MMU translation for the micro-TLB. */ +static void ipmmu_utlb_enable(struct ipmmu_vmsa_domain *domain, + unsigned int utlb) +{ + struct ipmmu_vmsa_device *mmu = domain->mmu; + + /* + * TODO: Reference-count the micro-TLB as several bus masters can be + * connected to the same micro-TLB. Prevent the use cases where + * the same micro-TLB could be shared between multiple Xen domains. + */ + ipmmu_write(mmu, IMUASID(utlb), 0); + ipmmu_write(mmu, IMUCTR(utlb), ipmmu_read(mmu, IMUCTR(utlb)) | + IMUCTR_TTSEL_MMU(domain->context_id) | IMUCTR_MMUEN); +} + +/* Disable MMU translation for the micro-TLB. */ +static void ipmmu_utlb_disable(struct ipmmu_vmsa_domain *domain, + unsigned int utlb) +{ + struct ipmmu_vmsa_device *mmu = domain->mmu; + + ipmmu_write(mmu, IMUCTR(utlb), 0); +} + +/* Domain/Context Management */ +static int ipmmu_domain_allocate_context(struct ipmmu_vmsa_device *mmu, + struct ipmmu_vmsa_domain *domain) +{ + unsigned long flags; + int ret; + + spin_lock_irqsave(&mmu->lock, flags); + + ret = find_first_zero_bit(mmu->ctx, mmu->num_ctx); + if ( ret != mmu->num_ctx ) + { + mmu->domains[ret] = domain; + set_bit(ret, mmu->ctx); + } + else + ret = -EBUSY; + + spin_unlock_irqrestore(&mmu->lock, flags); + + return ret; +} + +static void ipmmu_domain_free_context(struct ipmmu_vmsa_device *mmu, + unsigned int context_id) +{ + unsigned long flags; + + spin_lock_irqsave(&mmu->lock, flags); + + clear_bit(context_id, mmu->ctx); + mmu->domains[context_id] = NULL; + + spin_unlock_irqrestore(&mmu->lock, flags); +} + +static int ipmmu_domain_init_context(struct ipmmu_vmsa_domain *domain) +{ + uint64_t ttbr; + uint32_t tsz0; + int ret; + + /* Find an unused context. */ + ret = ipmmu_domain_allocate_context(domain->mmu->root, domain); + if ( ret < 0 ) + return ret; + + domain->context_id = ret; + + /* + * TTBR0 + * Use P2M table for this Xen domain. + */ + ASSERT(domain->d != NULL); + ttbr = page_to_maddr(domain->d->arch.p2m.root); + + dev_info(domain->mmu->root->dev, "%pd: Set IPMMU context %u (pgd 0x%"PRIx64")\n", + domain->d, domain->context_id, ttbr); + + ipmmu_ctx_write_root(domain, IMTTLBR0, ttbr & IMTTLBR0_TTBR_MASK); + ipmmu_ctx_write_root(domain, IMTTUBR0, (ttbr >> 32) & IMTTUBR0_TTBR_MASK); + + /* + * TTBCR + * We use long descriptors and allocate the whole "p2m_ipa_bits" IPA space + * to TTBR0. Use 4KB page granule. Start page table walks at first level. + * Always bypass stage 1 translation. + */ + tsz0 = (64 - p2m_ipa_bits) << IMTTBCR_TSZ0_SHIFT; + ipmmu_ctx_write_root(domain, IMTTBCR, IMTTBCR_EAE | IMTTBCR_PMB | + IMTTBCR_SL0_LVL_1 | tsz0); + + /* + * IMSTR + * Clear all interrupt flags. + */ + ipmmu_ctx_write_root(domain, IMSTR, ipmmu_ctx_read_root(domain, IMSTR)); + + /* + * IMCTR + * Enable the MMU and interrupt generation. The long-descriptor + * translation table format doesn't use TEX remapping. Don't enable AF + * software management as we have no use for it. Use VMSAv8-64 mode. 
+ * Enable the context for Root IPMMU only. Flush the TLB as required + * when modifying the context registers. + */ + ipmmu_ctx_write_root(domain, IMCTR, + IMCTR_VA64 | IMCTR_INTEN | IMCTR_FLUSH | IMCTR_MMUEN); + + return 0; +} + +static void ipmmu_domain_destroy_context(struct ipmmu_vmsa_domain *domain) +{ + if ( !domain->mmu ) + return; + + /* + * Disable the context for Root IPMMU only. Flush the TLB as required + * when modifying the context registers. + */ + ipmmu_ctx_write_root(domain, IMCTR, IMCTR_FLUSH); + ipmmu_tlb_sync(domain); + + ipmmu_domain_free_context(domain->mmu->root, domain->context_id); +} + +/* Fault Handling */ +static void ipmmu_domain_irq(struct ipmmu_vmsa_domain *domain) +{ + const uint32_t err_mask = IMSTR_MHIT | IMSTR_ABORT | IMSTR_PF | IMSTR_TF; + struct ipmmu_vmsa_device *mmu = domain->mmu; + uint32_t status; + uint64_t iova; + + status = ipmmu_ctx_read_root(domain, IMSTR); + if ( !(status & err_mask) ) + return; + + iova = ipmmu_ctx_read_root(domain, IMELAR) | + ((uint64_t)ipmmu_ctx_read_root(domain, IMEUAR) << 32); + + /* + * Clear the error status flags. Unlike traditional interrupt flag + * registers that must be cleared by writing 1, this status register + * seems to require 0. The error address register must be read before, + * otherwise its value will be 0. + */ + ipmmu_ctx_write_root(domain, IMSTR, 0); + + /* Log fatal errors. */ + if ( status & IMSTR_MHIT ) + dev_err_ratelimited(mmu->dev, "%pd: Multiple TLB hits @0x%"PRIx64"\n", + domain->d, iova); + if ( status & IMSTR_ABORT ) + dev_err_ratelimited(mmu->dev, "%pd: Page Table Walk Abort @0x%"PRIx64"\n", + domain->d, iova); + + /* Return if it is neither Permission Fault nor Translation Fault. */ + if ( !(status & (IMSTR_PF | IMSTR_TF)) ) + return; + + dev_err_ratelimited(mmu->dev, "%pd: Unhandled fault: status 0x%08x iova 0x%"PRIx64"\n", + domain->d, status, iova); +} + +static void ipmmu_irq(int irq, void *dev, struct cpu_user_regs *regs) +{ + struct ipmmu_vmsa_device *mmu = dev; + unsigned int i; + unsigned long flags; + + spin_lock_irqsave(&mmu->lock, flags); + + /* + * When interrupt arrives, we don't know the context it is related to. + * So, check interrupts for all active contexts to locate a context + * with status bits set. + */ + for ( i = 0; i < mmu->num_ctx; i++ ) + { + if ( !mmu->domains[i] ) + continue; + ipmmu_domain_irq(mmu->domains[i]); + } + + spin_unlock_irqrestore(&mmu->lock, flags); +} + +/* Master devices management */ +static int ipmmu_attach_device(struct ipmmu_vmsa_domain *domain, + struct device *dev) +{ + struct iommu_fwspec *fwspec = dev_iommu_fwspec_get(dev); + struct ipmmu_vmsa_device *mmu = to_ipmmu(dev); + unsigned int i; + + if ( !mmu ) + { + dev_err(dev, "Cannot attach to IPMMU\n"); + return -ENXIO; + } + + if ( !domain->mmu ) + { + /* The domain hasn't been used yet, initialize it. */ + domain->mmu = mmu; + + /* + * We have already enabled context for Root IPMMU assigned to this + * Xen domain in ipmmu_domain_init_context(). + * Enable the context for Cache IPMMU only. Flush the TLB as required + * when modifying the context registers. + */ + ipmmu_ctx_write_cache(domain, IMCTR, + ipmmu_ctx_read_root(domain, IMCTR) | IMCTR_FLUSH); + + dev_info(dev, "Using IPMMU context %u\n", domain->context_id); + } + else if ( domain->mmu != mmu ) + { + /* + * Something is wrong, we can't attach two master devices using + * different IOMMUs to the same IPMMU domain. 
+ */ + dev_err(dev, "Can't attach IPMMU %s to domain on IPMMU %s\n", + dev_name(mmu->dev), dev_name(domain->mmu->dev)); + return -EINVAL; + } + else + dev_info(dev, "Reusing IPMMU context %u\n", domain->context_id); + + for ( i = 0; i < fwspec->num_ids; ++i ) + ipmmu_utlb_enable(domain, fwspec->ids[i]); + + return 0; +} + +static void ipmmu_detach_device(struct ipmmu_vmsa_domain *domain, + struct device *dev) +{ + struct iommu_fwspec *fwspec = dev_iommu_fwspec_get(dev); + unsigned int i; + + for ( i = 0; i < fwspec->num_ids; ++i ) + ipmmu_utlb_disable(domain, fwspec->ids[i]); +} + +static int ipmmu_init_platform_device(struct device *dev, + const struct dt_phandle_args *args) +{ + struct iommu_fwspec *fwspec = dev_iommu_fwspec_get(dev); + struct ipmmu_vmsa_device *mmu; + + mmu = ipmmu_find_mmu_by_dev(dt_to_dev(args->np)); + if ( !mmu ) + return -ENODEV; + + fwspec->iommu_priv = xzalloc(struct ipmmu_vmsa_xen_device); + if ( !fwspec->iommu_priv ) + return -ENOMEM; + + set_ipmmu(dev, mmu); + + return 0; +} + +static void ipmmu_device_reset(struct ipmmu_vmsa_device *mmu) +{ + unsigned int i; + + /* Disable all contexts. */ + for ( i = 0; i < mmu->num_ctx; ++i ) + ipmmu_write(mmu, i * IM_CTX_SIZE + IMCTR, 0); +} + +/* R-Car Gen3 SoCs product and cut information. */ +#define RCAR_PRODUCT_MASK 0x00007F00 +#define RCAR_PRODUCT_H3 0x00004F00 +#define RCAR_PRODUCT_M3W 0x00005200 +#define RCAR_PRODUCT_M3N 0x00005500 +#define RCAR_CUT_MASK 0x000000FF +#define RCAR_CUT_VER30 0x00000020 + +static __init bool ipmmu_stage2_supported(void) +{ + struct dt_device_node *np; + uint64_t addr, size; + void __iomem *base; + uint32_t product, cut; + bool stage2_supported = false; + + np = dt_find_compatible_node(NULL, NULL, "renesas,prr"); + if ( !np ) + { + printk(XENLOG_ERR "ipmmu: Failed to find PRR node\n"); + return false; + } + + if ( dt_device_get_address(np, 0, &addr, &size) ) + { + printk(XENLOG_ERR "ipmmu: Failed to get PRR MMIO\n"); + return false; + } + + base = ioremap_nocache(addr, size); + if ( !base ) + { + printk(XENLOG_ERR "ipmmu: Failed to ioremap PRR MMIO\n"); + return false; + } + + product = readl(base); + cut = product & RCAR_CUT_MASK; + product &= RCAR_PRODUCT_MASK; + + switch ( product ) + { + case RCAR_PRODUCT_H3: + case RCAR_PRODUCT_M3W: + if ( cut >= RCAR_CUT_VER30 ) + stage2_supported = true; + break; + + case RCAR_PRODUCT_M3N: + stage2_supported = true; + break; + + default: + printk(XENLOG_ERR "ipmmu: Unsupported SoC version\n"); + break; + } + + iounmap(base); + + return stage2_supported; +} + +/* + * This function relies on the fact that Root IPMMU device is being probed + * the first. If not the case, it denies further Cache IPMMU device probes + * (returns the -EAGAIN) until the Root IPMMU device has been registered + * for sure. + */ +static int ipmmu_probe(struct dt_device_node *node) +{ + struct ipmmu_vmsa_device *mmu; + uint64_t addr, size; + int irq, ret; + + mmu = xzalloc(struct ipmmu_vmsa_device); + if ( !mmu ) + { + dev_err(&node->dev, "Cannot allocate device data\n"); + return -ENOMEM; + } + + mmu->dev = &node->dev; + mmu->num_utlbs = IPMMU_UTLB_MAX; + mmu->num_ctx = IPMMU_CTX_MAX; + spin_lock_init(&mmu->lock); + bitmap_zero(mmu->ctx, IPMMU_CTX_MAX); + + /* Map I/O memory and request IRQ. 
*/ + ret = dt_device_get_address(node, 0, &addr, &size); + if ( ret ) + { + dev_err(&node->dev, "Failed to get MMIO\n"); + goto out; + } + + mmu->base = ioremap_nocache(addr, size); + if ( !mmu->base ) + { + dev_err(&node->dev, "Failed to ioremap MMIO (addr 0x%"PRIx64" size 0x%"PRIx64")\n", + addr, size); + ret = -ENOMEM; + goto out; + } + + /* + * Determine if this IPMMU node is a Root device by checking for + * the lack of renesas,ipmmu-main property. + */ + if ( !dt_find_property(node, "renesas,ipmmu-main", NULL) ) + mmu->root = mmu; + else + mmu->root = ipmmu_find_root(); + + /* Wait until the Root device has been registered for sure. */ + if ( !mmu->root ) + { + ret = -EAGAIN; + goto out; + } + + /* Root devices have mandatory IRQs. */ + if ( ipmmu_is_root(mmu) ) + { + if ( !ipmmu_stage2_supported() ) + { + printk(XENLOG_ERR "ipmmu: P2M sharing is not supported in current SoC revision\n"); + ret = -ENODEV; + goto out; + } + + /* + * As 4-level translation table is not supported in IPMMU, we need + * to check IPA size used for P2M table beforehand to be sure it is + * 3-level and the IPMMU will be able to use it. + * + * TODO: First initialize the IOMMU and gather the requirements and + * then initialize the P2M. In the P2M code, take into the account + * the IOMMU requirements and restrict "pa_range" if necessary. + */ + if ( IPMMU_MAX_P2M_IPA_BITS < p2m_ipa_bits ) + { + printk(XENLOG_ERR "ipmmu: P2M IPA size is not supported (P2M=%u IPMMU=%u)!\n", + p2m_ipa_bits, IPMMU_MAX_P2M_IPA_BITS); + ret = -ENODEV; + goto out; + } + + irq = platform_get_irq(node, 0); + if ( irq < 0 ) + { + dev_err(&node->dev, "No IRQ found\n"); + ret = irq; + goto out; + } + + ret = request_irq(irq, 0, ipmmu_irq, dev_name(&node->dev), mmu); + if ( ret < 0 ) + { + dev_err(&node->dev, "Failed to request IRQ %d\n", irq); + goto out; + } + + ipmmu_device_reset(mmu); + + /* + * Use stage 2 translation table format when stage 2 translation + * enabled. + */ + ipmmu_write(mmu, IMSAUXCTLR, + ipmmu_read(mmu, IMSAUXCTLR) | IMSAUXCTLR_S2PTE); + + dev_info(&node->dev, "IPMMU context 0 is reserved\n"); + set_bit(0, mmu->ctx); + } + + spin_lock(&ipmmu_devices_lock); + list_add(&mmu->list, &ipmmu_devices); + spin_unlock(&ipmmu_devices_lock); + + dev_info(&node->dev, "Registered %s IPMMU\n", + ipmmu_is_root(mmu) ? "Root" : "Cache"); + + return 0; + +out: + if ( mmu->base ) + iounmap(mmu->base); + xfree(mmu); + + return ret; +} + +/* Xen IOMMU ops */ +static int __must_check ipmmu_iotlb_flush_all(struct domain *d) +{ + struct ipmmu_vmsa_xen_domain *xen_domain = dom_iommu(d)->arch.priv; + + if ( !xen_domain || !xen_domain->root_domain ) + return 0; + + spin_lock(&xen_domain->lock); + ipmmu_tlb_invalidate(xen_domain->root_domain); + spin_unlock(&xen_domain->lock); + + return 0; +} + +static int __must_check ipmmu_iotlb_flush(struct domain *d, dfn_t dfn, + unsigned int page_count, + unsigned int flush_flags) +{ + ASSERT(flush_flags); + + /* The hardware doesn't support selective TLB flush. */ + return ipmmu_iotlb_flush_all(d); +} + +static struct ipmmu_vmsa_domain *ipmmu_get_cache_domain(struct domain *d, + struct device *dev) +{ + struct ipmmu_vmsa_xen_domain *xen_domain = dom_iommu(d)->arch.priv; + struct ipmmu_vmsa_device *mmu = to_ipmmu(dev); + struct ipmmu_vmsa_domain *domain; + + if ( !mmu ) + return NULL; + + /* + * Loop through all Cache IPMMU domains associated with this Xen domain + * to locate an IPMMU domain this IPMMU device is assigned to. 
+ */ + list_for_each_entry( domain, &xen_domain->cache_domains, list ) + { + if ( domain->mmu == mmu ) + return domain; + } + + return NULL; +} + +static struct ipmmu_vmsa_domain *ipmmu_alloc_cache_domain(struct domain *d) +{ + struct ipmmu_vmsa_xen_domain *xen_domain = dom_iommu(d)->arch.priv; + struct ipmmu_vmsa_domain *domain; + + domain = xzalloc(struct ipmmu_vmsa_domain); + if ( !domain ) + return ERR_PTR(-ENOMEM); + + /* + * We don't assign the Cache IPMMU device here, it will be assigned when + * attaching master device to this domain in ipmmu_attach_device(). + * domain->mmu = NULL; + */ + + domain->d = d; + /* Use the same context mapped to this Xen domain. */ + domain->context_id = xen_domain->root_domain->context_id; + + return domain; +} + +static void ipmmu_free_cache_domain(struct ipmmu_vmsa_domain *domain) +{ + list_del(&domain->list); + /* + * Disable the context for Cache IPMMU only. Flush the TLB as required + * when modifying the context registers. + */ + ipmmu_ctx_write_cache(domain, IMCTR, IMCTR_FLUSH); + xfree(domain); +} + +static struct ipmmu_vmsa_domain *ipmmu_alloc_root_domain(struct domain *d) +{ + struct ipmmu_vmsa_domain *domain; + struct ipmmu_vmsa_device *root; + int ret; + + /* If we are here then Root device must has been registered. */ + root = ipmmu_find_root(); + if ( !root ) + { + printk(XENLOG_ERR "ipmmu: Unable to locate Root IPMMU\n"); + return ERR_PTR(-ENODEV); + } + + domain = xzalloc(struct ipmmu_vmsa_domain); + if ( !domain ) + return ERR_PTR(-ENOMEM); + + domain->mmu = root; + domain->d = d; + + /* Initialize the context to be mapped to this Xen domain. */ + ret = ipmmu_domain_init_context(domain); + if ( ret < 0 ) + { + dev_err(root->dev, "%pd: Unable to initialize IPMMU context\n", d); + xfree(domain); + return ERR_PTR(ret); + } + + return domain; +} + +static void ipmmu_free_root_domain(struct ipmmu_vmsa_domain *domain) +{ + ipmmu_domain_destroy_context(domain); + xfree(domain); +} + +static int ipmmu_assign_device(struct domain *d, u8 devfn, struct device *dev, + uint32_t flag) +{ + struct ipmmu_vmsa_xen_domain *xen_domain = dom_iommu(d)->arch.priv; + struct ipmmu_vmsa_domain *domain; + int ret; + + if ( !xen_domain ) + return -EINVAL; + + if ( !to_ipmmu(dev) ) + return -ENODEV; + + spin_lock(&xen_domain->lock); + + /* + * The IPMMU context for the Xen domain is not allocated beforehand + * (at the Xen domain creation time), but on demand only, when the first + * master device being attached to it. + * Create Root IPMMU domain which context will be mapped to this Xen domain + * if not exits yet. + */ + if ( !xen_domain->root_domain ) + { + domain = ipmmu_alloc_root_domain(d); + if ( IS_ERR(domain) ) + { + ret = PTR_ERR(domain); + goto out; + } + + xen_domain->root_domain = domain; + } + + if ( to_domain(dev) ) + { + dev_err(dev, "Already attached to IPMMU domain\n"); + ret = -EEXIST; + goto out; + } + + /* + * Master devices behind the same Cache IPMMU can be attached to the same + * Cache IPMMU domain. + * Before creating new IPMMU domain check to see if the required one + * already exists for this Xen domain. + */ + domain = ipmmu_get_cache_domain(d, dev); + if ( !domain ) + { + /* Create new IPMMU domain this master device will be attached to. */ + domain = ipmmu_alloc_cache_domain(d); + if ( IS_ERR(domain) ) + { + ret = PTR_ERR(domain); + goto out; + } + + /* Chain new IPMMU domain to the Xen domain. 
*/ + list_add(&domain->list, &xen_domain->cache_domains); + } + + ret = ipmmu_attach_device(domain, dev); + if ( ret ) + { + /* + * Destroy Cache IPMMU domain only if there are no master devices + * attached to it. + */ + if ( !domain->refcount ) + ipmmu_free_cache_domain(domain); + } + else + { + domain->refcount++; + set_domain(dev, domain); + } + +out: + spin_unlock(&xen_domain->lock); + + return ret; +} + +static int ipmmu_deassign_device(struct domain *d, struct device *dev) +{ + struct ipmmu_vmsa_xen_domain *xen_domain = dom_iommu(d)->arch.priv; + struct ipmmu_vmsa_domain *domain = to_domain(dev); + + if ( !domain || domain->d != d ) + { + dev_err(dev, "Not attached to %pd\n", d); + return -ESRCH; + } + + spin_lock(&xen_domain->lock); + + ipmmu_detach_device(domain, dev); + set_domain(dev, NULL); + domain->refcount--; + + /* + * Destroy Cache IPMMU domain only if there are no master devices + * attached to it. + */ + if ( !domain->refcount ) + ipmmu_free_cache_domain(domain); + + spin_unlock(&xen_domain->lock); + + return 0; +} + +static int ipmmu_reassign_device(struct domain *s, struct domain *t, + u8 devfn, struct device *dev) +{ + int ret = 0; + + /* Don't allow remapping on other domain than hwdom */ + if ( t && t != hardware_domain ) + return -EPERM; + + if ( t == s ) + return 0; + + ret = ipmmu_deassign_device(s, dev); + if ( ret ) + return ret; + + if ( t ) + { + /* No flags are defined for ARM. */ + ret = ipmmu_assign_device(t, devfn, dev, 0); + if ( ret ) + return ret; + } + + return 0; +} + +static int ipmmu_dt_xlate(struct device *dev, + const struct dt_phandle_args *spec) +{ + int ret; + + /* + * Perform sanity check of passed DT IOMMU specifier. Each master device + * gets micro-TLB (device ID) assignment via the "iommus" property + * in DT. We expect #iommu-cells to be 1 (Multiple-master IOMMU) and + * this cell for the micro-TLB (device ID). + */ + if ( spec->args_count != 1 || spec->args[0] >= IPMMU_UTLB_MAX ) + return -EINVAL; + + ret = iommu_fwspec_add_ids(dev, spec->args, 1); + if ( ret ) + return ret; + + /* Initialize once - xlate() will call multiple times. */ + if ( to_ipmmu(dev) ) + return 0; + + return ipmmu_init_platform_device(dev, spec); +} + +static int ipmmu_add_device(u8 devfn, struct device *dev) +{ + struct iommu_fwspec *fwspec = dev_iommu_fwspec_get(dev); + + /* Only let through devices that have been verified in xlate(). */ + if ( !to_ipmmu(dev) ) + return -ENODEV; + + if ( dt_device_is_protected(dev_to_dt(dev)) ) + { + dev_err(dev, "Already added to IPMMU\n"); + return -EEXIST; + } + + /* Let Xen know that the master device is protected by an IOMMU. */ + dt_device_set_protected(dev_to_dt(dev)); + + dev_info(dev, "Added master device (IPMMU %s micro-TLBs %u)\n", + dev_name(fwspec->iommu_dev), fwspec->num_ids); + + return 0; +} + +static int ipmmu_iommu_domain_init(struct domain *d) +{ + struct ipmmu_vmsa_xen_domain *xen_domain; + + xen_domain = xzalloc(struct ipmmu_vmsa_xen_domain); + if ( !xen_domain ) + return -ENOMEM; + + spin_lock_init(&xen_domain->lock); + INIT_LIST_HEAD(&xen_domain->cache_domains); + /* + * We don't create Root IPMMU domain here, it will be created on demand + * only, when attaching the first master device to this Xen domain in + * ipmmu_assign_device(). + * xen_domain->root_domain = NULL; + */ + + dom_iommu(d)->arch.priv = xen_domain; + + return 0; +} + +static void __hwdom_init ipmmu_iommu_hwdom_init(struct domain *d) +{ + /* Set to false options not supported on ARM. 
*/ + if ( iommu_hwdom_inclusive ) + printk(XENLOG_WARNING "ipmmu: map-inclusive dom0-iommu option is not supported on ARM\n"); + iommu_hwdom_inclusive = false; + if ( iommu_hwdom_reserved == 1 ) + printk(XENLOG_WARNING "ipmmu: map-reserved dom0-iommu option is not supported on ARM\n"); + iommu_hwdom_reserved = 0; + + arch_iommu_hwdom_init(d); +} + +static void ipmmu_iommu_domain_teardown(struct domain *d) +{ + struct ipmmu_vmsa_xen_domain *xen_domain = dom_iommu(d)->arch.priv; + + if ( !xen_domain ) + return; + + /* + * Destroy Root IPMMU domain which context is mapped to this Xen domain + * if exits. + */ + if ( xen_domain->root_domain ) + ipmmu_free_root_domain(xen_domain->root_domain); + + /* + * We assume that all master devices have already been detached from + * this Xen domain and there must be no associated Cache IPMMU domains + * in use. + */ + ASSERT(list_empty(&xen_domain->cache_domains)); + xfree(xen_domain); + dom_iommu(d)->arch.priv = NULL; +} + +static const struct iommu_ops ipmmu_iommu_ops = +{ + .init = ipmmu_iommu_domain_init, + .hwdom_init = ipmmu_iommu_hwdom_init, + .teardown = ipmmu_iommu_domain_teardown, + .iotlb_flush = ipmmu_iotlb_flush, + .iotlb_flush_all = ipmmu_iotlb_flush_all, + .assign_device = ipmmu_assign_device, + .reassign_device = ipmmu_reassign_device, + .map_page = arm_iommu_map_page, + .unmap_page = arm_iommu_unmap_page, + .dt_xlate = ipmmu_dt_xlate, + .add_device = ipmmu_add_device, +}; + +static const struct dt_device_match ipmmu_dt_match[] __initconst = +{ + DT_MATCH_COMPATIBLE("renesas,ipmmu-r8a7795"), + DT_MATCH_COMPATIBLE("renesas,ipmmu-r8a77965"), + DT_MATCH_COMPATIBLE("renesas,ipmmu-r8a7796"), + { /* sentinel */ }, +}; + +static __init int ipmmu_init(struct dt_device_node *node, const void *data) +{ + int ret; + + /* + * Even if the device can't be initialized, we don't want to give + * the IPMMU device to dom0. + */ + dt_device_set_used_by(node, DOMID_XEN); + + ret = ipmmu_probe(node); + if ( ret ) + { + dev_err(&node->dev, "Failed to init IPMMU (%d)\n", ret); + return ret; + } + + iommu_set_ops(&ipmmu_iommu_ops); + + return 0; +} + +DT_DEVICE_START(ipmmu, "Renesas IPMMU-VMSA", DEVICE_IOMMU) + .dt_match = ipmmu_dt_match, + .init = ipmmu_init, +DT_DEVICE_END + +/* + * Local variables: + * mode: C + * c-file-style: "BSD" + * c-basic-offset: 4 + * indent-tabs-mode: nil + * End: + */