From patchwork Wed Feb 1 12:52:45 2023
X-Patchwork-Submitter: Jean-Philippe Brucker
X-Patchwork-Id: 13124368
From: Jean-Philippe Brucker
To: maz@kernel.org, catalin.marinas@arm.com, will@kernel.org, joro@8bytes.org
Cc: robin.murphy@arm.com, james.morse@arm.com, suzuki.poulose@arm.com,
 oliver.upton@linux.dev, yuzenghui@huawei.com, smostafa@google.com,
 dbrazdil@google.com, ryan.roberts@arm.com,
 linux-arm-kernel@lists.infradead.org, kvmarm@lists.linux.dev,
 iommu@lists.linux.dev, Jean-Philippe Brucker
Subject: [RFC PATCH 01/45] iommu/io-pgtable-arm: Split the page table driver
Date: Wed, 1 Feb 2023 12:52:45 +0000
Message-Id: <20230201125328.2186498-2-jean-philippe@linaro.org>
In-Reply-To: <20230201125328.2186498-1-jean-philippe@linaro.org>
References: <20230201125328.2186498-1-jean-philippe@linaro.org>

To allow the KVM IOMMU driver to populate page tables using the
io-pgtable-arm code, move the shared bits into io-pgtable-arm-common.c.
Here we move the bulk of the common code, and a subsequent patch handles
the bits that require more care. phys_to_virt() and virt_to_phys() do
need special handling here because the hypervisor will have its own
version. It will also implement its own version of
__arm_lpae_alloc_pages(), __arm_lpae_free_pages() and
__arm_lpae_sync_pte(), since the hypervisor needs some assistance for
allocating pages.
Signed-off-by: Jean-Philippe Brucker
---
 drivers/iommu/Makefile                        |   2 +-
 drivers/iommu/io-pgtable-arm.h                |  30 -
 include/linux/io-pgtable-arm.h                | 187 ++++++
 .../iommu/arm/arm-smmu-v3/arm-smmu-v3-sva.c   |   2 +-
 drivers/iommu/io-pgtable-arm-common.c         | 500 ++++++++++++++
 drivers/iommu/io-pgtable-arm.c                | 634 +-----------------
 6 files changed, 696 insertions(+), 659 deletions(-)
 delete mode 100644 drivers/iommu/io-pgtable-arm.h
 create mode 100644 include/linux/io-pgtable-arm.h
 create mode 100644 drivers/iommu/io-pgtable-arm-common.c

diff --git a/drivers/iommu/Makefile b/drivers/iommu/Makefile
index f461d0651385..c616acf534f8 100644
--- a/drivers/iommu/Makefile
+++ b/drivers/iommu/Makefile
@@ -7,7 +7,7 @@ obj-$(CONFIG_IOMMU_DEBUGFS) += iommu-debugfs.o
 obj-$(CONFIG_IOMMU_DMA) += dma-iommu.o
 obj-$(CONFIG_IOMMU_IO_PGTABLE) += io-pgtable.o
 obj-$(CONFIG_IOMMU_IO_PGTABLE_ARMV7S) += io-pgtable-arm-v7s.o
-obj-$(CONFIG_IOMMU_IO_PGTABLE_LPAE) += io-pgtable-arm.o
+obj-$(CONFIG_IOMMU_IO_PGTABLE_LPAE) += io-pgtable-arm.o io-pgtable-arm-common.o
 obj-$(CONFIG_IOMMU_IO_PGTABLE_DART) += io-pgtable-dart.o
 obj-$(CONFIG_IOASID) += ioasid.o
 obj-$(CONFIG_IOMMU_IOVA) += iova.o
diff --git a/drivers/iommu/io-pgtable-arm.h b/drivers/iommu/io-pgtable-arm.h
deleted file mode 100644
index ba7cfdf7afa0..000000000000
--- a/drivers/iommu/io-pgtable-arm.h
+++ /dev/null
@@ -1,30 +0,0 @@
-/* SPDX-License-Identifier: GPL-2.0-only */
-#ifndef IO_PGTABLE_ARM_H_
-#define IO_PGTABLE_ARM_H_
-
-#define ARM_LPAE_TCR_TG0_4K		0
-#define ARM_LPAE_TCR_TG0_64K		1
-#define ARM_LPAE_TCR_TG0_16K		2
-
-#define ARM_LPAE_TCR_TG1_16K		1
-#define ARM_LPAE_TCR_TG1_4K		2
-#define ARM_LPAE_TCR_TG1_64K		3
-
-#define ARM_LPAE_TCR_SH_NS		0
-#define ARM_LPAE_TCR_SH_OS		2
-#define ARM_LPAE_TCR_SH_IS		3
-
-#define ARM_LPAE_TCR_RGN_NC		0
-#define ARM_LPAE_TCR_RGN_WBWA		1
-#define ARM_LPAE_TCR_RGN_WT		2
-#define ARM_LPAE_TCR_RGN_WB		3
-
-#define ARM_LPAE_TCR_PS_32_BIT		0x0ULL
-#define ARM_LPAE_TCR_PS_36_BIT		0x1ULL
-#define ARM_LPAE_TCR_PS_40_BIT		0x2ULL
-#define ARM_LPAE_TCR_PS_42_BIT		0x3ULL
-#define ARM_LPAE_TCR_PS_44_BIT		0x4ULL
-#define ARM_LPAE_TCR_PS_48_BIT		0x5ULL
-#define ARM_LPAE_TCR_PS_52_BIT		0x6ULL
-
-#endif /* IO_PGTABLE_ARM_H_ */
diff --git a/include/linux/io-pgtable-arm.h b/include/linux/io-pgtable-arm.h
new file mode 100644
index 000000000000..594b5030b450
--- /dev/null
+++ b/include/linux/io-pgtable-arm.h
@@ -0,0 +1,187 @@
+/* SPDX-License-Identifier: GPL-2.0-only */
+#ifndef IO_PGTABLE_H_
+#define IO_PGTABLE_H_
+
+#include <linux/io-pgtable.h>
+
+extern bool selftest_running;
+
+typedef u64 arm_lpae_iopte;
+
+struct arm_lpae_io_pgtable {
+	struct io_pgtable	iop;
+
+	int			pgd_bits;
+	int			start_level;
+	int			bits_per_level;
+
+	void			*pgd;
+};
+
+/* Struct accessors */
+#define io_pgtable_to_data(x)					\
+	container_of((x), struct arm_lpae_io_pgtable, iop)
+
+#define io_pgtable_ops_to_data(x)				\
+	io_pgtable_to_data(io_pgtable_ops_to_pgtable(x))
+
+/*
+ * Calculate the right shift amount to get to the portion describing level l
+ * in a virtual address mapped by the pagetable in d.
+ */
+#define ARM_LPAE_LVL_SHIFT(l,d)					\
+	(((ARM_LPAE_MAX_LEVELS - (l)) * (d)->bits_per_level) +	\
+	ilog2(sizeof(arm_lpae_iopte)))
+
+#define ARM_LPAE_GRANULE(d)					\
+	(sizeof(arm_lpae_iopte) << (d)->bits_per_level)
+#define ARM_LPAE_PGD_SIZE(d)					\
+	(sizeof(arm_lpae_iopte) << (d)->pgd_bits)
+
+#define ARM_LPAE_PTES_PER_TABLE(d)				\
+	(ARM_LPAE_GRANULE(d) >> ilog2(sizeof(arm_lpae_iopte)))
+
+/*
+ * Calculate the index at level l used to map virtual address a using the
+ * pagetable in d.
+ */
+#define ARM_LPAE_PGD_IDX(l,d)					\
+	((l) == (d)->start_level ? (d)->pgd_bits - (d)->bits_per_level : 0)
+
+#define ARM_LPAE_LVL_IDX(a,l,d)					\
+	(((u64)(a) >> ARM_LPAE_LVL_SHIFT(l,d)) &		\
+	 ((1 << ((d)->bits_per_level + ARM_LPAE_PGD_IDX(l,d))) - 1))
+
+/* Calculate the block/page mapping size at level l for pagetable in d. */
+#define ARM_LPAE_BLOCK_SIZE(l,d)	(1ULL << ARM_LPAE_LVL_SHIFT(l,d))
+
+/* Page table bits */
+#define ARM_LPAE_PTE_TYPE_SHIFT		0
+#define ARM_LPAE_PTE_TYPE_MASK		0x3
+
+#define ARM_LPAE_PTE_TYPE_BLOCK		1
+#define ARM_LPAE_PTE_TYPE_TABLE		3
+#define ARM_LPAE_PTE_TYPE_PAGE		3
+
+#define ARM_LPAE_PTE_ADDR_MASK		GENMASK_ULL(47,12)
+
+#define ARM_LPAE_PTE_NSTABLE		(((arm_lpae_iopte)1) << 63)
+#define ARM_LPAE_PTE_XN			(((arm_lpae_iopte)3) << 53)
+#define ARM_LPAE_PTE_AF			(((arm_lpae_iopte)1) << 10)
+#define ARM_LPAE_PTE_SH_NS		(((arm_lpae_iopte)0) << 8)
+#define ARM_LPAE_PTE_SH_OS		(((arm_lpae_iopte)2) << 8)
+#define ARM_LPAE_PTE_SH_IS		(((arm_lpae_iopte)3) << 8)
+#define ARM_LPAE_PTE_NS			(((arm_lpae_iopte)1) << 5)
+#define ARM_LPAE_PTE_VALID		(((arm_lpae_iopte)1) << 0)
+
+#define ARM_LPAE_PTE_ATTR_LO_MASK	(((arm_lpae_iopte)0x3ff) << 2)
+/* Ignore the contiguous bit for block splitting */
+#define ARM_LPAE_PTE_ATTR_HI_MASK	(((arm_lpae_iopte)6) << 52)
+#define ARM_LPAE_PTE_ATTR_MASK		(ARM_LPAE_PTE_ATTR_LO_MASK |	\
+					 ARM_LPAE_PTE_ATTR_HI_MASK)
+/* Software bit for solving coherency races */
+#define ARM_LPAE_PTE_SW_SYNC		(((arm_lpae_iopte)1) << 55)
+
+/* Stage-1 PTE */
+#define ARM_LPAE_PTE_AP_UNPRIV		(((arm_lpae_iopte)1) << 6)
+#define ARM_LPAE_PTE_AP_RDONLY		(((arm_lpae_iopte)2) << 6)
+#define ARM_LPAE_PTE_ATTRINDX_SHIFT	2
+#define ARM_LPAE_PTE_nG			(((arm_lpae_iopte)1) << 11)
+
+/* Stage-2 PTE */
+#define ARM_LPAE_PTE_HAP_FAULT		(((arm_lpae_iopte)0) << 6)
+#define ARM_LPAE_PTE_HAP_READ		(((arm_lpae_iopte)1) << 6)
+#define ARM_LPAE_PTE_HAP_WRITE		(((arm_lpae_iopte)2) << 6)
+#define ARM_LPAE_PTE_MEMATTR_OIWB	(((arm_lpae_iopte)0xf) << 2)
+#define ARM_LPAE_PTE_MEMATTR_NC		(((arm_lpae_iopte)0x5) << 2)
+#define ARM_LPAE_PTE_MEMATTR_DEV	(((arm_lpae_iopte)0x1) << 2)
+
+/* Register bits */
+#define ARM_LPAE_VTCR_SL0_MASK		0x3
+
+#define ARM_LPAE_TCR_T0SZ_SHIFT		0
+
+#define ARM_LPAE_TCR_TG0_4K		0
+#define ARM_LPAE_TCR_TG0_64K		1
+#define ARM_LPAE_TCR_TG0_16K		2
+
+#define ARM_LPAE_TCR_TG1_16K		1
+#define ARM_LPAE_TCR_TG1_4K		2
+#define ARM_LPAE_TCR_TG1_64K		3
+
+#define ARM_LPAE_TCR_SH_NS		0
+#define ARM_LPAE_TCR_SH_OS		2
+#define ARM_LPAE_TCR_SH_IS		3
+
+#define ARM_LPAE_TCR_RGN_NC		0
+#define ARM_LPAE_TCR_RGN_WBWA		1
+#define ARM_LPAE_TCR_RGN_WT		2
+#define ARM_LPAE_TCR_RGN_WB		3
+
+#define ARM_LPAE_TCR_PS_32_BIT		0x0ULL
+#define ARM_LPAE_TCR_PS_36_BIT		0x1ULL
+#define ARM_LPAE_TCR_PS_40_BIT		0x2ULL
+#define ARM_LPAE_TCR_PS_42_BIT		0x3ULL
+#define ARM_LPAE_TCR_PS_44_BIT		0x4ULL
+#define ARM_LPAE_TCR_PS_48_BIT		0x5ULL
+#define ARM_LPAE_TCR_PS_52_BIT		0x6ULL
+
+#define ARM_LPAE_VTCR_PS_SHIFT		16
+#define ARM_LPAE_VTCR_PS_MASK		0x7
+
+#define ARM_LPAE_MAIR_ATTR_SHIFT(n)	((n) << 3)
+#define ARM_LPAE_MAIR_ATTR_MASK		0xff
+#define ARM_LPAE_MAIR_ATTR_DEVICE	0x04
+#define ARM_LPAE_MAIR_ATTR_NC		0x44
+#define ARM_LPAE_MAIR_ATTR_INC_OWBRWA	0xf4
+#define ARM_LPAE_MAIR_ATTR_WBRWA	0xff
+#define ARM_LPAE_MAIR_ATTR_IDX_NC	0
+#define ARM_LPAE_MAIR_ATTR_IDX_CACHE	1
+#define ARM_LPAE_MAIR_ATTR_IDX_DEV	2
+#define ARM_LPAE_MAIR_ATTR_IDX_INC_OCACHE	3
+
+#define ARM_MALI_LPAE_TTBR_ADRMODE_TABLE (3u << 0)
+#define ARM_MALI_LPAE_TTBR_READ_INNER	BIT(2)
+#define ARM_MALI_LPAE_TTBR_SHARE_OUTER	BIT(4)
+
+#define ARM_MALI_LPAE_MEMATTR_IMP_DEF	0x88ULL
+#define ARM_MALI_LPAE_MEMATTR_WRITE_ALLOC 0x8DULL
+
+#define ARM_LPAE_MAX_LEVELS		4
+
+#define iopte_type(pte)					\
+	(((pte) >> ARM_LPAE_PTE_TYPE_SHIFT) & ARM_LPAE_PTE_TYPE_MASK)
+
+#define iopte_prot(pte)	((pte) & ARM_LPAE_PTE_ATTR_MASK)
+
+static inline bool iopte_leaf(arm_lpae_iopte pte, int lvl,
+			      enum io_pgtable_fmt fmt)
+{
+	if (lvl == (ARM_LPAE_MAX_LEVELS - 1) && fmt != ARM_MALI_LPAE)
+		return iopte_type(pte) == ARM_LPAE_PTE_TYPE_PAGE;
+
+	return iopte_type(pte) == ARM_LPAE_PTE_TYPE_BLOCK;
+}
+
+#define __arm_lpae_virt_to_phys	__pa
+#define __arm_lpae_phys_to_virt	__va
+
+/* Generic functions */
+int arm_lpae_map_pages(struct io_pgtable_ops *ops, unsigned long iova,
+		       phys_addr_t paddr, size_t pgsize, size_t pgcount,
+		       int iommu_prot, gfp_t gfp, size_t *mapped);
+size_t arm_lpae_unmap_pages(struct io_pgtable_ops *ops, unsigned long iova,
+			    size_t pgsize, size_t pgcount,
+			    struct iommu_iotlb_gather *gather);
+phys_addr_t arm_lpae_iova_to_phys(struct io_pgtable_ops *ops,
+				  unsigned long iova);
+void __arm_lpae_free_pgtable(struct arm_lpae_io_pgtable *data, int lvl,
+			     arm_lpae_iopte *ptep);
+
+/* Host/hyp-specific functions */
+void *__arm_lpae_alloc_pages(size_t size, gfp_t gfp, struct io_pgtable_cfg *cfg);
+void __arm_lpae_free_pages(void *pages, size_t size, struct io_pgtable_cfg *cfg);
+void __arm_lpae_sync_pte(arm_lpae_iopte *ptep, int num_entries,
+			 struct io_pgtable_cfg *cfg);
+
+#endif /* IO_PGTABLE_H_ */
diff --git a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3-sva.c b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3-sva.c
index a5a63b1c947e..df288f29a5c1 100644
--- a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3-sva.c
+++ b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3-sva.c
@@ -3,6 +3,7 @@
  * Implementation of the IOMMU SVA API for the ARM SMMUv3
  */
 
+#include <linux/io-pgtable-arm.h>
 #include
 #include
 #include
@@ -11,7 +12,6 @@
 #include "arm-smmu-v3.h"
 #include "../../iommu-sva.h"
-#include "../../io-pgtable-arm.h"
 
 struct arm_smmu_mmu_notifier {
 	struct mmu_notifier		mn;
diff --git a/drivers/iommu/io-pgtable-arm-common.c b/drivers/iommu/io-pgtable-arm-common.c
new file mode 100644
index 000000000000..74d962712d15
--- /dev/null
+++ b/drivers/iommu/io-pgtable-arm-common.c
@@ -0,0 +1,500 @@
+// SPDX-License-Identifier: GPL-2.0-only
+/*
+ * CPU-agnostic ARM page table allocator.
+ * A copy of this library is embedded in the KVM nVHE image.
+ *
+ * Copyright (C) 2022 Arm Limited
+ *
+ * Author: Will Deacon
+ */
+
+#include
+
+#include
+#include
+
+#define iopte_deref(pte, d) __arm_lpae_phys_to_virt(iopte_to_paddr(pte, d))
+
+static arm_lpae_iopte paddr_to_iopte(phys_addr_t paddr,
+				     struct arm_lpae_io_pgtable *data)
+{
+	arm_lpae_iopte pte = paddr;
+
+	/* Of the bits which overlap, either 51:48 or 15:12 are always RES0 */
+	return (pte | (pte >> (48 - 12))) & ARM_LPAE_PTE_ADDR_MASK;
+}
+
+static phys_addr_t iopte_to_paddr(arm_lpae_iopte pte,
+				  struct arm_lpae_io_pgtable *data)
+{
+	u64 paddr = pte & ARM_LPAE_PTE_ADDR_MASK;
+
+	if (ARM_LPAE_GRANULE(data) < SZ_64K)
+		return paddr;
+
+	/* Rotate the packed high-order bits back to the top */
+	return (paddr | (paddr << (48 - 12))) & (ARM_LPAE_PTE_ADDR_MASK << 4);
+}
+
+static void __arm_lpae_clear_pte(arm_lpae_iopte *ptep, struct io_pgtable_cfg *cfg)
+{
+
+	*ptep = 0;
+
+	if (!cfg->coherent_walk)
+		__arm_lpae_sync_pte(ptep, 1, cfg);
+}
+
+static size_t __arm_lpae_unmap(struct arm_lpae_io_pgtable *data,
+			       struct iommu_iotlb_gather *gather,
+			       unsigned long iova, size_t size, size_t pgcount,
+			       int lvl, arm_lpae_iopte *ptep);
+
+static void __arm_lpae_init_pte(struct arm_lpae_io_pgtable *data,
+				phys_addr_t paddr, arm_lpae_iopte prot,
+				int lvl, int num_entries, arm_lpae_iopte *ptep)
+{
+	arm_lpae_iopte pte = prot;
+	struct io_pgtable_cfg *cfg = &data->iop.cfg;
+	size_t sz = ARM_LPAE_BLOCK_SIZE(lvl, data);
+	int i;
+
+	if (data->iop.fmt != ARM_MALI_LPAE && lvl == ARM_LPAE_MAX_LEVELS - 1)
+		pte |= ARM_LPAE_PTE_TYPE_PAGE;
+	else
+		pte |= ARM_LPAE_PTE_TYPE_BLOCK;
+
+	for (i = 0; i < num_entries; i++)
+		ptep[i] = pte | paddr_to_iopte(paddr + i * sz, data);
+
+	if (!cfg->coherent_walk)
+		__arm_lpae_sync_pte(ptep, num_entries, cfg);
+}
+
+static int arm_lpae_init_pte(struct arm_lpae_io_pgtable *data,
+			     unsigned long iova, phys_addr_t paddr,
+			     arm_lpae_iopte prot, int lvl, int num_entries,
+			     arm_lpae_iopte *ptep)
+{
+	int i;
+
+	for (i = 0; i < num_entries; i++)
+		if (iopte_leaf(ptep[i], lvl, data->iop.fmt)) {
+			/* We require an unmap first */
+			WARN_ON(!selftest_running);
+			return -EEXIST;
+		} else if (iopte_type(ptep[i]) == ARM_LPAE_PTE_TYPE_TABLE) {
+			/*
+			 * We need to unmap and free the old table before
+			 * overwriting it with a block entry.
+			 */
+			arm_lpae_iopte *tblp;
+			size_t sz = ARM_LPAE_BLOCK_SIZE(lvl, data);
+
+			tblp = ptep - ARM_LPAE_LVL_IDX(iova, lvl, data);
+			if (__arm_lpae_unmap(data, NULL, iova + i * sz, sz, 1,
+					     lvl, tblp) != sz) {
+				WARN_ON(1);
+				return -EINVAL;
+			}
+		}
+
+	__arm_lpae_init_pte(data, paddr, prot, lvl, num_entries, ptep);
+	return 0;
+}
+
+static arm_lpae_iopte arm_lpae_install_table(arm_lpae_iopte *table,
+					     arm_lpae_iopte *ptep,
+					     arm_lpae_iopte curr,
+					     struct arm_lpae_io_pgtable *data)
+{
+	arm_lpae_iopte old, new;
+	struct io_pgtable_cfg *cfg = &data->iop.cfg;
+
+	new = paddr_to_iopte(__arm_lpae_virt_to_phys(table), data) |
+		ARM_LPAE_PTE_TYPE_TABLE;
+	if (cfg->quirks & IO_PGTABLE_QUIRK_ARM_NS)
+		new |= ARM_LPAE_PTE_NSTABLE;
+
+	/*
+	 * Ensure the table itself is visible before its PTE can be.
+	 * Whilst we could get away with cmpxchg64_release below, this
+	 * doesn't have any ordering semantics when !CONFIG_SMP.
+	 */
+	dma_wmb();
+
+	old = cmpxchg64_relaxed(ptep, curr, new);
+
+	if (cfg->coherent_walk || (old & ARM_LPAE_PTE_SW_SYNC))
+		return old;
+
+	/* Even if it's not ours, there's no point waiting; just kick it */
+	__arm_lpae_sync_pte(ptep, 1, cfg);
+	if (old == curr)
+		WRITE_ONCE(*ptep, new | ARM_LPAE_PTE_SW_SYNC);
+
+	return old;
+}
+
+int __arm_lpae_map(struct arm_lpae_io_pgtable *data, unsigned long iova,
+		   phys_addr_t paddr, size_t size, size_t pgcount,
+		   arm_lpae_iopte prot, int lvl, arm_lpae_iopte *ptep,
+		   gfp_t gfp, size_t *mapped)
+{
+	arm_lpae_iopte *cptep, pte;
+	size_t block_size = ARM_LPAE_BLOCK_SIZE(lvl, data);
+	size_t tblsz = ARM_LPAE_GRANULE(data);
+	struct io_pgtable_cfg *cfg = &data->iop.cfg;
+	int ret = 0, num_entries, max_entries, map_idx_start;
+
+	/* Find our entry at the current level */
+	map_idx_start = ARM_LPAE_LVL_IDX(iova, lvl, data);
+	ptep += map_idx_start;
+
+	/* If we can install a leaf entry at this level, then do so */
+	if (size == block_size) {
+		max_entries = ARM_LPAE_PTES_PER_TABLE(data) - map_idx_start;
+		num_entries = min_t(int, pgcount, max_entries);
+		ret = arm_lpae_init_pte(data, iova, paddr, prot, lvl, num_entries, ptep);
+		if (!ret)
+			*mapped += num_entries * size;
+
+		return ret;
+	}
+
+	/* We can't allocate tables at the final level */
+	if (WARN_ON(lvl >= ARM_LPAE_MAX_LEVELS - 1))
+		return -EINVAL;
+
+	/* Grab a pointer to the next level */
+	pte = READ_ONCE(*ptep);
+	if (!pte) {
+		cptep = __arm_lpae_alloc_pages(tblsz, gfp, cfg);
+		if (!cptep)
+			return -ENOMEM;
+
+		pte = arm_lpae_install_table(cptep, ptep, 0, data);
+		if (pte)
+			__arm_lpae_free_pages(cptep, tblsz, cfg);
+	} else if (!cfg->coherent_walk && !(pte & ARM_LPAE_PTE_SW_SYNC)) {
+		__arm_lpae_sync_pte(ptep, 1, cfg);
+	}
+
+	if (pte && !iopte_leaf(pte, lvl, data->iop.fmt)) {
+		cptep = iopte_deref(pte, data);
+	} else if (pte) {
+		/* We require an unmap first */
+		WARN_ON(!selftest_running);
+		return -EEXIST;
+	}
+
+	/* Rinse, repeat */
+	return __arm_lpae_map(data, iova, paddr, size, pgcount, prot, lvl + 1,
+			      cptep, gfp, mapped);
+}
+
+static arm_lpae_iopte arm_lpae_prot_to_pte(struct arm_lpae_io_pgtable *data,
+					   int prot)
+{
+	arm_lpae_iopte pte;
+
+	if (data->iop.fmt == ARM_64_LPAE_S1 ||
+	    data->iop.fmt == ARM_32_LPAE_S1) {
+		pte = ARM_LPAE_PTE_nG;
+		if (!(prot & IOMMU_WRITE) && (prot & IOMMU_READ))
+			pte |= ARM_LPAE_PTE_AP_RDONLY;
+		if (!(prot & IOMMU_PRIV))
+			pte |= ARM_LPAE_PTE_AP_UNPRIV;
+	} else {
+		pte = ARM_LPAE_PTE_HAP_FAULT;
+		if (prot & IOMMU_READ)
+			pte |= ARM_LPAE_PTE_HAP_READ;
+		if (prot & IOMMU_WRITE)
+			pte |= ARM_LPAE_PTE_HAP_WRITE;
+	}
+
+	/*
+	 * Note that this logic is structured to accommodate Mali LPAE
+	 * having stage-1-like attributes but stage-2-like permissions.
+	 */
+	if (data->iop.fmt == ARM_64_LPAE_S2 ||
+	    data->iop.fmt == ARM_32_LPAE_S2) {
+		if (prot & IOMMU_MMIO)
+			pte |= ARM_LPAE_PTE_MEMATTR_DEV;
+		else if (prot & IOMMU_CACHE)
+			pte |= ARM_LPAE_PTE_MEMATTR_OIWB;
+		else
+			pte |= ARM_LPAE_PTE_MEMATTR_NC;
+	} else {
+		if (prot & IOMMU_MMIO)
+			pte |= (ARM_LPAE_MAIR_ATTR_IDX_DEV
+				<< ARM_LPAE_PTE_ATTRINDX_SHIFT);
+		else if (prot & IOMMU_CACHE)
+			pte |= (ARM_LPAE_MAIR_ATTR_IDX_CACHE
+				<< ARM_LPAE_PTE_ATTRINDX_SHIFT);
+	}
+
+	/*
+	 * Also Mali has its own notions of shareability wherein its Inner
+	 * domain covers the cores within the GPU, and its Outer domain is
+	 * "outside the GPU" (i.e. either the Inner or System domain in CPU
+	 * terms, depending on coherency).
+	 */
+	if (prot & IOMMU_CACHE && data->iop.fmt != ARM_MALI_LPAE)
+		pte |= ARM_LPAE_PTE_SH_IS;
+	else
+		pte |= ARM_LPAE_PTE_SH_OS;
+
+	if (prot & IOMMU_NOEXEC)
+		pte |= ARM_LPAE_PTE_XN;
+
+	if (data->iop.cfg.quirks & IO_PGTABLE_QUIRK_ARM_NS)
+		pte |= ARM_LPAE_PTE_NS;
+
+	if (data->iop.fmt != ARM_MALI_LPAE)
+		pte |= ARM_LPAE_PTE_AF;
+
+	return pte;
+}
+
+int arm_lpae_map_pages(struct io_pgtable_ops *ops, unsigned long iova,
+		       phys_addr_t paddr, size_t pgsize, size_t pgcount,
+		       int iommu_prot, gfp_t gfp, size_t *mapped)
+{
+	struct arm_lpae_io_pgtable *data = io_pgtable_ops_to_data(ops);
+	struct io_pgtable_cfg *cfg = &data->iop.cfg;
+	arm_lpae_iopte *ptep = data->pgd;
+	int ret, lvl = data->start_level;
+	arm_lpae_iopte prot;
+	long iaext = (s64)iova >> cfg->ias;
+
+	if (WARN_ON(!pgsize || (pgsize & cfg->pgsize_bitmap) != pgsize))
+		return -EINVAL;
+
+	if (cfg->quirks & IO_PGTABLE_QUIRK_ARM_TTBR1)
+		iaext = ~iaext;
+	if (WARN_ON(iaext || paddr >> cfg->oas))
+		return -ERANGE;
+
+	/* If no access, then nothing to do */
+	if (!(iommu_prot & (IOMMU_READ | IOMMU_WRITE)))
+		return 0;
+
+	prot = arm_lpae_prot_to_pte(data, iommu_prot);
+	ret = __arm_lpae_map(data, iova, paddr, pgsize, pgcount, prot, lvl,
+			     ptep, gfp, mapped);
+	/*
+	 * Synchronise all PTE updates for the new mapping before there's
+	 * a chance for anything to kick off a table walk for the new iova.
+	 */
+	wmb();
+
+	return ret;
+}
+
+void __arm_lpae_free_pgtable(struct arm_lpae_io_pgtable *data, int lvl,
+			     arm_lpae_iopte *ptep)
+{
+	arm_lpae_iopte *start, *end;
+	unsigned long table_size;
+
+	if (lvl == data->start_level)
+		table_size = ARM_LPAE_PGD_SIZE(data);
+	else
+		table_size = ARM_LPAE_GRANULE(data);
+
+	start = ptep;
+
+	/* Only leaf entries at the last level */
+	if (lvl == ARM_LPAE_MAX_LEVELS - 1)
+		end = ptep;
+	else
+		end = (void *)ptep + table_size;
+
+	while (ptep != end) {
+		arm_lpae_iopte pte = *ptep++;
+
+		if (!pte || iopte_leaf(pte, lvl, data->iop.fmt))
+			continue;
+
+		__arm_lpae_free_pgtable(data, lvl + 1, iopte_deref(pte, data));
+	}
+
+	__arm_lpae_free_pages(start, table_size, &data->iop.cfg);
+}
+
+static size_t arm_lpae_split_blk_unmap(struct arm_lpae_io_pgtable *data,
+				       struct iommu_iotlb_gather *gather,
+				       unsigned long iova, size_t size,
+				       arm_lpae_iopte blk_pte, int lvl,
+				       arm_lpae_iopte *ptep, size_t pgcount)
+{
+	struct io_pgtable_cfg *cfg = &data->iop.cfg;
+	arm_lpae_iopte pte, *tablep;
+	phys_addr_t blk_paddr;
+	size_t tablesz = ARM_LPAE_GRANULE(data);
+	size_t split_sz = ARM_LPAE_BLOCK_SIZE(lvl, data);
+	int ptes_per_table = ARM_LPAE_PTES_PER_TABLE(data);
+	int i, unmap_idx_start = -1, num_entries = 0, max_entries;
+
+	if (WARN_ON(lvl == ARM_LPAE_MAX_LEVELS))
+		return 0;
+
+	tablep = __arm_lpae_alloc_pages(tablesz, GFP_ATOMIC, cfg);
+	if (!tablep)
+		return 0; /* Bytes unmapped */
+
+	if (size == split_sz) {
+		unmap_idx_start = ARM_LPAE_LVL_IDX(iova, lvl, data);
+		max_entries = ptes_per_table - unmap_idx_start;
+		num_entries = min_t(int, pgcount, max_entries);
+	}
+
+	blk_paddr = iopte_to_paddr(blk_pte, data);
+	pte = iopte_prot(blk_pte);
+
+	for (i = 0; i < ptes_per_table; i++, blk_paddr += split_sz) {
+		/* Unmap! */
+		if (i >= unmap_idx_start && i < (unmap_idx_start + num_entries))
+			continue;
+
+		__arm_lpae_init_pte(data, blk_paddr, pte, lvl, 1, &tablep[i]);
+	}
+
+	pte = arm_lpae_install_table(tablep, ptep, blk_pte, data);
+	if (pte != blk_pte) {
+		__arm_lpae_free_pages(tablep, tablesz, cfg);
+		/*
+		 * We may race against someone unmapping another part of this
+		 * block, but anything else is invalid. We can't misinterpret
+		 * a page entry here since we're never at the last level.
+		 */
+		if (iopte_type(pte) != ARM_LPAE_PTE_TYPE_TABLE)
+			return 0;
+
+		tablep = iopte_deref(pte, data);
+	} else if (unmap_idx_start >= 0) {
+		for (i = 0; i < num_entries; i++)
+			io_pgtable_tlb_add_page(&data->iop, gather, iova + i * size, size);
+
+		return num_entries * size;
+	}
+
+	return __arm_lpae_unmap(data, gather, iova, size, pgcount, lvl, tablep);
+}
+
+static size_t __arm_lpae_unmap(struct arm_lpae_io_pgtable *data,
+			       struct iommu_iotlb_gather *gather,
+			       unsigned long iova, size_t size, size_t pgcount,
+			       int lvl, arm_lpae_iopte *ptep)
+{
+	arm_lpae_iopte pte;
+	struct io_pgtable *iop = &data->iop;
+	int i = 0, num_entries, max_entries, unmap_idx_start;
+
+	/* Something went horribly wrong and we ran out of page table */
+	if (WARN_ON(lvl == ARM_LPAE_MAX_LEVELS))
+		return 0;
+
+	unmap_idx_start = ARM_LPAE_LVL_IDX(iova, lvl, data);
+	ptep += unmap_idx_start;
+	pte = READ_ONCE(*ptep);
+	if (WARN_ON(!pte))
+		return 0;
+
+	/* If the size matches this level, we're in the right place */
+	if (size == ARM_LPAE_BLOCK_SIZE(lvl, data)) {
+		max_entries = ARM_LPAE_PTES_PER_TABLE(data) - unmap_idx_start;
+		num_entries = min_t(int, pgcount, max_entries);
+
+		while (i < num_entries) {
+			pte = READ_ONCE(*ptep);
+			if (WARN_ON(!pte))
+				break;
+
+			__arm_lpae_clear_pte(ptep, &iop->cfg);
+
+			if (!iopte_leaf(pte, lvl, iop->fmt)) {
+				/* Also flush any partial walks */
+				io_pgtable_tlb_flush_walk(iop, iova + i * size, size,
+							  ARM_LPAE_GRANULE(data));
+				__arm_lpae_free_pgtable(data, lvl + 1, iopte_deref(pte, data));
+			} else if (!iommu_iotlb_gather_queued(gather)) {
+				io_pgtable_tlb_add_page(iop, gather, iova + i * size, size);
+			}
+
+			ptep++;
+			i++;
+		}
+
+		return i * size;
+	} else if (iopte_leaf(pte, lvl, iop->fmt)) {
+		/*
+		 * Insert a table at the next level to map the old region,
+		 * minus the part we want to unmap
+		 */
+		return arm_lpae_split_blk_unmap(data, gather, iova, size, pte,
+						lvl + 1, ptep, pgcount);
+	}
+
+	/* Keep on walkin' */
+	ptep = iopte_deref(pte, data);
+	return __arm_lpae_unmap(data, gather, iova, size, pgcount, lvl + 1, ptep);
+}
+
+size_t arm_lpae_unmap_pages(struct io_pgtable_ops *ops, unsigned long iova,
+			    size_t pgsize, size_t pgcount,
+			    struct iommu_iotlb_gather *gather)
+{
+	struct arm_lpae_io_pgtable *data = io_pgtable_ops_to_data(ops);
+	struct io_pgtable_cfg *cfg = &data->iop.cfg;
+	arm_lpae_iopte *ptep = data->pgd;
+	long iaext = (s64)iova >> cfg->ias;
+
+	if (WARN_ON(!pgsize || (pgsize & cfg->pgsize_bitmap) != pgsize || !pgcount))
+		return 0;
+
+	if (cfg->quirks & IO_PGTABLE_QUIRK_ARM_TTBR1)
+		iaext = ~iaext;
+	if (WARN_ON(iaext))
+		return 0;
+
+	return __arm_lpae_unmap(data, gather, iova, pgsize, pgcount,
+				data->start_level, ptep);
+}
+
+phys_addr_t arm_lpae_iova_to_phys(struct io_pgtable_ops *ops,
+				  unsigned long iova)
+{
+	struct arm_lpae_io_pgtable *data = io_pgtable_ops_to_data(ops);
+	arm_lpae_iopte pte, *ptep = data->pgd;
+	int lvl = data->start_level;
+
+	do {
+		/* Valid IOPTE pointer? */
+		if (!ptep)
+			return 0;
+
+		/* Grab the IOPTE we're interested in */
+		ptep += ARM_LPAE_LVL_IDX(iova, lvl, data);
+		pte = READ_ONCE(*ptep);
+
+		/* Valid entry? */
+		if (!pte)
+			return 0;
+
+		/* Leaf entry? */
+		if (iopte_leaf(pte, lvl, data->iop.fmt))
+			goto found_translation;
+
+		/* Take it to the next level */
+		ptep = iopte_deref(pte, data);
+	} while (++lvl < ARM_LPAE_MAX_LEVELS);
+
+	/* Ran out of page tables to walk */
+	return 0;
+
+found_translation:
+	iova &= (ARM_LPAE_BLOCK_SIZE(lvl, data) - 1);
+	return iopte_to_paddr(pte, data) | iova;
+}
diff --git a/drivers/iommu/io-pgtable-arm.c b/drivers/iommu/io-pgtable-arm.c
index 72dcdd468cf3..db42aed6ad7b 100644
--- a/drivers/iommu/io-pgtable-arm.c
+++ b/drivers/iommu/io-pgtable-arm.c
@@ -1,6 +1,7 @@
 // SPDX-License-Identifier: GPL-2.0-only
 /*
  * CPU-agnostic ARM page table allocator.
+ * Host-specific functions. The rest is in io-pgtable-arm-common.c.
  *
  * Copyright (C) 2014 ARM Limited
  *
@@ -11,7 +12,7 @@
 #include
 #include
-#include
+#include
 #include
 #include
 #include
@@ -20,175 +21,17 @@
 #include
 
-#include "io-pgtable-arm.h"
-
 #define ARM_LPAE_MAX_ADDR_BITS		52
 #define ARM_LPAE_S2_MAX_CONCAT_PAGES	16
-#define ARM_LPAE_MAX_LEVELS		4
-
-/* Struct accessors */
-#define io_pgtable_to_data(x)					\
-	container_of((x), struct arm_lpae_io_pgtable, iop)
-
-#define io_pgtable_ops_to_data(x)				\
-	io_pgtable_to_data(io_pgtable_ops_to_pgtable(x))
-
-/*
- * Calculate the right shift amount to get to the portion describing level l
- * in a virtual address mapped by the pagetable in d.
- */
-#define ARM_LPAE_LVL_SHIFT(l,d)					\
-	(((ARM_LPAE_MAX_LEVELS - (l)) * (d)->bits_per_level) +	\
-	ilog2(sizeof(arm_lpae_iopte)))
-
-#define ARM_LPAE_GRANULE(d)					\
-	(sizeof(arm_lpae_iopte) << (d)->bits_per_level)
-#define ARM_LPAE_PGD_SIZE(d)					\
-	(sizeof(arm_lpae_iopte) << (d)->pgd_bits)
-
-#define ARM_LPAE_PTES_PER_TABLE(d)				\
-	(ARM_LPAE_GRANULE(d) >> ilog2(sizeof(arm_lpae_iopte)))
-
-/*
- * Calculate the index at level l used to map virtual address a using the
- * pagetable in d.
- */
-#define ARM_LPAE_PGD_IDX(l,d)					\
-	((l) == (d)->start_level ? (d)->pgd_bits - (d)->bits_per_level : 0)
-
-#define ARM_LPAE_LVL_IDX(a,l,d)					\
-	(((u64)(a) >> ARM_LPAE_LVL_SHIFT(l,d)) &		\
-	 ((1 << ((d)->bits_per_level + ARM_LPAE_PGD_IDX(l,d))) - 1))
-
-/* Calculate the block/page mapping size at level l for pagetable in d. */
-#define ARM_LPAE_BLOCK_SIZE(l,d)	(1ULL << ARM_LPAE_LVL_SHIFT(l,d))
-
-/* Page table bits */
-#define ARM_LPAE_PTE_TYPE_SHIFT		0
-#define ARM_LPAE_PTE_TYPE_MASK		0x3
-
-#define ARM_LPAE_PTE_TYPE_BLOCK		1
-#define ARM_LPAE_PTE_TYPE_TABLE		3
-#define ARM_LPAE_PTE_TYPE_PAGE		3
-
-#define ARM_LPAE_PTE_ADDR_MASK		GENMASK_ULL(47,12)
-
-#define ARM_LPAE_PTE_NSTABLE		(((arm_lpae_iopte)1) << 63)
-#define ARM_LPAE_PTE_XN			(((arm_lpae_iopte)3) << 53)
-#define ARM_LPAE_PTE_AF			(((arm_lpae_iopte)1) << 10)
-#define ARM_LPAE_PTE_SH_NS		(((arm_lpae_iopte)0) << 8)
-#define ARM_LPAE_PTE_SH_OS		(((arm_lpae_iopte)2) << 8)
-#define ARM_LPAE_PTE_SH_IS		(((arm_lpae_iopte)3) << 8)
-#define ARM_LPAE_PTE_NS			(((arm_lpae_iopte)1) << 5)
-#define ARM_LPAE_PTE_VALID		(((arm_lpae_iopte)1) << 0)
-
-#define ARM_LPAE_PTE_ATTR_LO_MASK	(((arm_lpae_iopte)0x3ff) << 2)
-/* Ignore the contiguous bit for block splitting */
-#define ARM_LPAE_PTE_ATTR_HI_MASK	(((arm_lpae_iopte)6) << 52)
-#define ARM_LPAE_PTE_ATTR_MASK		(ARM_LPAE_PTE_ATTR_LO_MASK |	\
-					 ARM_LPAE_PTE_ATTR_HI_MASK)
-/* Software bit for solving coherency races */
-#define ARM_LPAE_PTE_SW_SYNC		(((arm_lpae_iopte)1) << 55)
-
-/* Stage-1 PTE */
-#define ARM_LPAE_PTE_AP_UNPRIV		(((arm_lpae_iopte)1) << 6)
-#define ARM_LPAE_PTE_AP_RDONLY		(((arm_lpae_iopte)2) << 6)
-#define ARM_LPAE_PTE_ATTRINDX_SHIFT	2
-#define ARM_LPAE_PTE_nG			(((arm_lpae_iopte)1) << 11)
-
-/* Stage-2 PTE */
-#define ARM_LPAE_PTE_HAP_FAULT		(((arm_lpae_iopte)0) << 6)
-#define ARM_LPAE_PTE_HAP_READ		(((arm_lpae_iopte)1) << 6)
-#define ARM_LPAE_PTE_HAP_WRITE		(((arm_lpae_iopte)2) << 6)
-#define ARM_LPAE_PTE_MEMATTR_OIWB	(((arm_lpae_iopte)0xf) << 2)
-#define ARM_LPAE_PTE_MEMATTR_NC		(((arm_lpae_iopte)0x5) << 2)
-#define ARM_LPAE_PTE_MEMATTR_DEV	(((arm_lpae_iopte)0x1) << 2)
-
-/* Register bits */
-#define ARM_LPAE_VTCR_SL0_MASK		0x3
-
-#define ARM_LPAE_TCR_T0SZ_SHIFT		0
-
-#define ARM_LPAE_VTCR_PS_SHIFT		16
-#define ARM_LPAE_VTCR_PS_MASK		0x7
-
-#define ARM_LPAE_MAIR_ATTR_SHIFT(n)	((n) << 3)
-#define ARM_LPAE_MAIR_ATTR_MASK		0xff
-#define ARM_LPAE_MAIR_ATTR_DEVICE	0x04
-#define ARM_LPAE_MAIR_ATTR_NC		0x44
-#define ARM_LPAE_MAIR_ATTR_INC_OWBRWA	0xf4
-#define ARM_LPAE_MAIR_ATTR_WBRWA	0xff
-#define ARM_LPAE_MAIR_ATTR_IDX_NC	0
-#define ARM_LPAE_MAIR_ATTR_IDX_CACHE	1
-#define ARM_LPAE_MAIR_ATTR_IDX_DEV	2
-#define ARM_LPAE_MAIR_ATTR_IDX_INC_OCACHE	3
-
-#define ARM_MALI_LPAE_TTBR_ADRMODE_TABLE (3u << 0)
-#define ARM_MALI_LPAE_TTBR_READ_INNER	BIT(2)
-#define ARM_MALI_LPAE_TTBR_SHARE_OUTER	BIT(4)
-
-#define ARM_MALI_LPAE_MEMATTR_IMP_DEF	0x88ULL
-#define ARM_MALI_LPAE_MEMATTR_WRITE_ALLOC 0x8DULL
-
-/* IOPTE accessors */
-#define iopte_deref(pte,d) __va(iopte_to_paddr(pte, d))
-
-#define iopte_type(pte)					\
-	(((pte) >> ARM_LPAE_PTE_TYPE_SHIFT) & ARM_LPAE_PTE_TYPE_MASK)
-
-#define iopte_prot(pte)	((pte) & ARM_LPAE_PTE_ATTR_MASK)
-
-struct arm_lpae_io_pgtable {
-	struct io_pgtable	iop;
-
-	int			pgd_bits;
-	int			start_level;
-	int			bits_per_level;
-
-	void			*pgd;
-};
-
-typedef u64 arm_lpae_iopte;
-
-static inline bool iopte_leaf(arm_lpae_iopte pte, int lvl,
-			      enum io_pgtable_fmt fmt)
-{
-	if (lvl == (ARM_LPAE_MAX_LEVELS - 1) && fmt != ARM_MALI_LPAE)
-		return iopte_type(pte) == ARM_LPAE_PTE_TYPE_PAGE;
-
-	return iopte_type(pte) == ARM_LPAE_PTE_TYPE_BLOCK;
-}
-
-static arm_lpae_iopte paddr_to_iopte(phys_addr_t paddr,
-				     struct arm_lpae_io_pgtable *data)
-{
-	arm_lpae_iopte pte = paddr;
-
-	/* Of the bits which overlap, either 51:48 or 15:12 are always RES0 */
-	return (pte | (pte >> (48 - 12))) & ARM_LPAE_PTE_ADDR_MASK;
-}
-
-static phys_addr_t iopte_to_paddr(arm_lpae_iopte pte,
-				  struct arm_lpae_io_pgtable *data)
-{
-	u64 paddr = pte & ARM_LPAE_PTE_ADDR_MASK;
-
-	if (ARM_LPAE_GRANULE(data) < SZ_64K)
-		return paddr;
-
-	/* Rotate the packed high-order bits back to the top */
-	return (paddr | (paddr << (48 - 12))) & (ARM_LPAE_PTE_ADDR_MASK << 4);
-}
-
-static bool selftest_running = false;
+bool selftest_running = false;
 
 static dma_addr_t __arm_lpae_dma_addr(void *pages)
 {
 	return (dma_addr_t)virt_to_phys(pages);
 }
 
-static void *__arm_lpae_alloc_pages(size_t size, gfp_t gfp,
-				    struct io_pgtable_cfg *cfg)
+void *__arm_lpae_alloc_pages(size_t size, gfp_t gfp, struct io_pgtable_cfg *cfg)
 {
 	struct device *dev = cfg->iommu_dev;
 	int order = get_order(size);
@@ -225,8 +68,7 @@ static void *__arm_lpae_alloc_pages(size_t size, gfp_t gfp,
 	return NULL;
 }
 
-static void __arm_lpae_free_pages(void *pages, size_t size,
-				  struct io_pgtable_cfg *cfg)
+void __arm_lpae_free_pages(void *pages, size_t size, struct io_pgtable_cfg *cfg)
 {
 	if (!cfg->coherent_walk)
 		dma_unmap_single(cfg->iommu_dev, __arm_lpae_dma_addr(pages),
@@ -234,299 +76,13 @@ static void __arm_lpae_free_pages(void *pages, size_t size,
 	free_pages((unsigned long)pages, get_order(size));
 }
 
-static void __arm_lpae_sync_pte(arm_lpae_iopte *ptep, int num_entries,
-				struct io_pgtable_cfg *cfg)
+void __arm_lpae_sync_pte(arm_lpae_iopte *ptep, int num_entries,
+			 struct io_pgtable_cfg *cfg)
 {
 	dma_sync_single_for_device(cfg->iommu_dev, __arm_lpae_dma_addr(ptep),
 				   sizeof(*ptep) * num_entries, DMA_TO_DEVICE);
 }
 
-static void __arm_lpae_clear_pte(arm_lpae_iopte *ptep, struct io_pgtable_cfg *cfg)
-{
-
-	*ptep = 0;
-
-	if (!cfg->coherent_walk)
-		__arm_lpae_sync_pte(ptep, 1, cfg);
-}
-
-static size_t __arm_lpae_unmap(struct arm_lpae_io_pgtable *data,
-			       struct iommu_iotlb_gather *gather,
-			       unsigned long iova, size_t size, size_t pgcount,
-			       int lvl, arm_lpae_iopte *ptep);
-
-static void __arm_lpae_init_pte(struct arm_lpae_io_pgtable *data,
-				phys_addr_t paddr, arm_lpae_iopte prot,
-				int lvl, int num_entries, arm_lpae_iopte *ptep)
-{
-	arm_lpae_iopte pte = prot;
-	struct io_pgtable_cfg *cfg = &data->iop.cfg;
-	size_t sz = ARM_LPAE_BLOCK_SIZE(lvl, data);
-	int i;
-
-	if
(data->iop.fmt != ARM_MALI_LPAE && lvl == ARM_LPAE_MAX_LEVELS - 1) - pte |= ARM_LPAE_PTE_TYPE_PAGE; - else - pte |= ARM_LPAE_PTE_TYPE_BLOCK; - - for (i = 0; i < num_entries; i++) - ptep[i] = pte | paddr_to_iopte(paddr + i * sz, data); - - if (!cfg->coherent_walk) - __arm_lpae_sync_pte(ptep, num_entries, cfg); -} - -static int arm_lpae_init_pte(struct arm_lpae_io_pgtable *data, - unsigned long iova, phys_addr_t paddr, - arm_lpae_iopte prot, int lvl, int num_entries, - arm_lpae_iopte *ptep) -{ - int i; - - for (i = 0; i < num_entries; i++) - if (iopte_leaf(ptep[i], lvl, data->iop.fmt)) { - /* We require an unmap first */ - WARN_ON(!selftest_running); - return -EEXIST; - } else if (iopte_type(ptep[i]) == ARM_LPAE_PTE_TYPE_TABLE) { - /* - * We need to unmap and free the old table before - * overwriting it with a block entry. - */ - arm_lpae_iopte *tblp; - size_t sz = ARM_LPAE_BLOCK_SIZE(lvl, data); - - tblp = ptep - ARM_LPAE_LVL_IDX(iova, lvl, data); - if (__arm_lpae_unmap(data, NULL, iova + i * sz, sz, 1, - lvl, tblp) != sz) { - WARN_ON(1); - return -EINVAL; - } - } - - __arm_lpae_init_pte(data, paddr, prot, lvl, num_entries, ptep); - return 0; -} - -static arm_lpae_iopte arm_lpae_install_table(arm_lpae_iopte *table, - arm_lpae_iopte *ptep, - arm_lpae_iopte curr, - struct arm_lpae_io_pgtable *data) -{ - arm_lpae_iopte old, new; - struct io_pgtable_cfg *cfg = &data->iop.cfg; - - new = paddr_to_iopte(__pa(table), data) | ARM_LPAE_PTE_TYPE_TABLE; - if (cfg->quirks & IO_PGTABLE_QUIRK_ARM_NS) - new |= ARM_LPAE_PTE_NSTABLE; - - /* - * Ensure the table itself is visible before its PTE can be. - * Whilst we could get away with cmpxchg64_release below, this - * doesn't have any ordering semantics when !CONFIG_SMP. 
- */ - dma_wmb(); - - old = cmpxchg64_relaxed(ptep, curr, new); - - if (cfg->coherent_walk || (old & ARM_LPAE_PTE_SW_SYNC)) - return old; - - /* Even if it's not ours, there's no point waiting; just kick it */ - __arm_lpae_sync_pte(ptep, 1, cfg); - if (old == curr) - WRITE_ONCE(*ptep, new | ARM_LPAE_PTE_SW_SYNC); - - return old; -} - -static int __arm_lpae_map(struct arm_lpae_io_pgtable *data, unsigned long iova, - phys_addr_t paddr, size_t size, size_t pgcount, - arm_lpae_iopte prot, int lvl, arm_lpae_iopte *ptep, - gfp_t gfp, size_t *mapped) -{ - arm_lpae_iopte *cptep, pte; - size_t block_size = ARM_LPAE_BLOCK_SIZE(lvl, data); - size_t tblsz = ARM_LPAE_GRANULE(data); - struct io_pgtable_cfg *cfg = &data->iop.cfg; - int ret = 0, num_entries, max_entries, map_idx_start; - - /* Find our entry at the current level */ - map_idx_start = ARM_LPAE_LVL_IDX(iova, lvl, data); - ptep += map_idx_start; - - /* If we can install a leaf entry at this level, then do so */ - if (size == block_size) { - max_entries = ARM_LPAE_PTES_PER_TABLE(data) - map_idx_start; - num_entries = min_t(int, pgcount, max_entries); - ret = arm_lpae_init_pte(data, iova, paddr, prot, lvl, num_entries, ptep); - if (!ret) - *mapped += num_entries * size; - - return ret; - } - - /* We can't allocate tables at the final level */ - if (WARN_ON(lvl >= ARM_LPAE_MAX_LEVELS - 1)) - return -EINVAL; - - /* Grab a pointer to the next level */ - pte = READ_ONCE(*ptep); - if (!pte) { - cptep = __arm_lpae_alloc_pages(tblsz, gfp, cfg); - if (!cptep) - return -ENOMEM; - - pte = arm_lpae_install_table(cptep, ptep, 0, data); - if (pte) - __arm_lpae_free_pages(cptep, tblsz, cfg); - } else if (!cfg->coherent_walk && !(pte & ARM_LPAE_PTE_SW_SYNC)) { - __arm_lpae_sync_pte(ptep, 1, cfg); - } - - if (pte && !iopte_leaf(pte, lvl, data->iop.fmt)) { - cptep = iopte_deref(pte, data); - } else if (pte) { - /* We require an unmap first */ - WARN_ON(!selftest_running); - return -EEXIST; - } - - /* Rinse, repeat */ - return 
__arm_lpae_map(data, iova, paddr, size, pgcount, prot, lvl + 1, - cptep, gfp, mapped); -} - -static arm_lpae_iopte arm_lpae_prot_to_pte(struct arm_lpae_io_pgtable *data, - int prot) -{ - arm_lpae_iopte pte; - - if (data->iop.fmt == ARM_64_LPAE_S1 || - data->iop.fmt == ARM_32_LPAE_S1) { - pte = ARM_LPAE_PTE_nG; - if (!(prot & IOMMU_WRITE) && (prot & IOMMU_READ)) - pte |= ARM_LPAE_PTE_AP_RDONLY; - if (!(prot & IOMMU_PRIV)) - pte |= ARM_LPAE_PTE_AP_UNPRIV; - } else { - pte = ARM_LPAE_PTE_HAP_FAULT; - if (prot & IOMMU_READ) - pte |= ARM_LPAE_PTE_HAP_READ; - if (prot & IOMMU_WRITE) - pte |= ARM_LPAE_PTE_HAP_WRITE; - } - - /* - * Note that this logic is structured to accommodate Mali LPAE - * having stage-1-like attributes but stage-2-like permissions. - */ - if (data->iop.fmt == ARM_64_LPAE_S2 || - data->iop.fmt == ARM_32_LPAE_S2) { - if (prot & IOMMU_MMIO) - pte |= ARM_LPAE_PTE_MEMATTR_DEV; - else if (prot & IOMMU_CACHE) - pte |= ARM_LPAE_PTE_MEMATTR_OIWB; - else - pte |= ARM_LPAE_PTE_MEMATTR_NC; - } else { - if (prot & IOMMU_MMIO) - pte |= (ARM_LPAE_MAIR_ATTR_IDX_DEV - << ARM_LPAE_PTE_ATTRINDX_SHIFT); - else if (prot & IOMMU_CACHE) - pte |= (ARM_LPAE_MAIR_ATTR_IDX_CACHE - << ARM_LPAE_PTE_ATTRINDX_SHIFT); - } - - /* - * Also Mali has its own notions of shareability wherein its Inner - * domain covers the cores within the GPU, and its Outer domain is - * "outside the GPU" (i.e. either the Inner or System domain in CPU - * terms, depending on coherency). 
- */ - if (prot & IOMMU_CACHE && data->iop.fmt != ARM_MALI_LPAE) - pte |= ARM_LPAE_PTE_SH_IS; - else - pte |= ARM_LPAE_PTE_SH_OS; - - if (prot & IOMMU_NOEXEC) - pte |= ARM_LPAE_PTE_XN; - - if (data->iop.cfg.quirks & IO_PGTABLE_QUIRK_ARM_NS) - pte |= ARM_LPAE_PTE_NS; - - if (data->iop.fmt != ARM_MALI_LPAE) - pte |= ARM_LPAE_PTE_AF; - - return pte; -} - -static int arm_lpae_map_pages(struct io_pgtable_ops *ops, unsigned long iova, - phys_addr_t paddr, size_t pgsize, size_t pgcount, - int iommu_prot, gfp_t gfp, size_t *mapped) -{ - struct arm_lpae_io_pgtable *data = io_pgtable_ops_to_data(ops); - struct io_pgtable_cfg *cfg = &data->iop.cfg; - arm_lpae_iopte *ptep = data->pgd; - int ret, lvl = data->start_level; - arm_lpae_iopte prot; - long iaext = (s64)iova >> cfg->ias; - - if (WARN_ON(!pgsize || (pgsize & cfg->pgsize_bitmap) != pgsize)) - return -EINVAL; - - if (cfg->quirks & IO_PGTABLE_QUIRK_ARM_TTBR1) - iaext = ~iaext; - if (WARN_ON(iaext || paddr >> cfg->oas)) - return -ERANGE; - - /* If no access, then nothing to do */ - if (!(iommu_prot & (IOMMU_READ | IOMMU_WRITE))) - return 0; - - prot = arm_lpae_prot_to_pte(data, iommu_prot); - ret = __arm_lpae_map(data, iova, paddr, pgsize, pgcount, prot, lvl, - ptep, gfp, mapped); - /* - * Synchronise all PTE updates for the new mapping before there's - * a chance for anything to kick off a table walk for the new iova. 
- */ - wmb(); - - return ret; -} - -static void __arm_lpae_free_pgtable(struct arm_lpae_io_pgtable *data, int lvl, - arm_lpae_iopte *ptep) -{ - arm_lpae_iopte *start, *end; - unsigned long table_size; - - if (lvl == data->start_level) - table_size = ARM_LPAE_PGD_SIZE(data); - else - table_size = ARM_LPAE_GRANULE(data); - - start = ptep; - - /* Only leaf entries at the last level */ - if (lvl == ARM_LPAE_MAX_LEVELS - 1) - end = ptep; - else - end = (void *)ptep + table_size; - - while (ptep != end) { - arm_lpae_iopte pte = *ptep++; - - if (!pte || iopte_leaf(pte, lvl, data->iop.fmt)) - continue; - - __arm_lpae_free_pgtable(data, lvl + 1, iopte_deref(pte, data)); - } - - __arm_lpae_free_pages(start, table_size, &data->iop.cfg); -} - static void arm_lpae_free_pgtable(struct io_pgtable *iop) { struct arm_lpae_io_pgtable *data = io_pgtable_to_data(iop); @@ -535,182 +91,6 @@ static void arm_lpae_free_pgtable(struct io_pgtable *iop) kfree(data); } -static size_t arm_lpae_split_blk_unmap(struct arm_lpae_io_pgtable *data, - struct iommu_iotlb_gather *gather, - unsigned long iova, size_t size, - arm_lpae_iopte blk_pte, int lvl, - arm_lpae_iopte *ptep, size_t pgcount) -{ - struct io_pgtable_cfg *cfg = &data->iop.cfg; - arm_lpae_iopte pte, *tablep; - phys_addr_t blk_paddr; - size_t tablesz = ARM_LPAE_GRANULE(data); - size_t split_sz = ARM_LPAE_BLOCK_SIZE(lvl, data); - int ptes_per_table = ARM_LPAE_PTES_PER_TABLE(data); - int i, unmap_idx_start = -1, num_entries = 0, max_entries; - - if (WARN_ON(lvl == ARM_LPAE_MAX_LEVELS)) - return 0; - - tablep = __arm_lpae_alloc_pages(tablesz, GFP_ATOMIC, cfg); - if (!tablep) - return 0; /* Bytes unmapped */ - - if (size == split_sz) { - unmap_idx_start = ARM_LPAE_LVL_IDX(iova, lvl, data); - max_entries = ptes_per_table - unmap_idx_start; - num_entries = min_t(int, pgcount, max_entries); - } - - blk_paddr = iopte_to_paddr(blk_pte, data); - pte = iopte_prot(blk_pte); - - for (i = 0; i < ptes_per_table; i++, blk_paddr += split_sz) { - /* 
Unmap! */ - if (i >= unmap_idx_start && i < (unmap_idx_start + num_entries)) - continue; - - __arm_lpae_init_pte(data, blk_paddr, pte, lvl, 1, &tablep[i]); - } - - pte = arm_lpae_install_table(tablep, ptep, blk_pte, data); - if (pte != blk_pte) { - __arm_lpae_free_pages(tablep, tablesz, cfg); - /* - * We may race against someone unmapping another part of this - * block, but anything else is invalid. We can't misinterpret - * a page entry here since we're never at the last level. - */ - if (iopte_type(pte) != ARM_LPAE_PTE_TYPE_TABLE) - return 0; - - tablep = iopte_deref(pte, data); - } else if (unmap_idx_start >= 0) { - for (i = 0; i < num_entries; i++) - io_pgtable_tlb_add_page(&data->iop, gather, iova + i * size, size); - - return num_entries * size; - } - - return __arm_lpae_unmap(data, gather, iova, size, pgcount, lvl, tablep); -} - -static size_t __arm_lpae_unmap(struct arm_lpae_io_pgtable *data, - struct iommu_iotlb_gather *gather, - unsigned long iova, size_t size, size_t pgcount, - int lvl, arm_lpae_iopte *ptep) -{ - arm_lpae_iopte pte; - struct io_pgtable *iop = &data->iop; - int i = 0, num_entries, max_entries, unmap_idx_start; - - /* Something went horribly wrong and we ran out of page table */ - if (WARN_ON(lvl == ARM_LPAE_MAX_LEVELS)) - return 0; - - unmap_idx_start = ARM_LPAE_LVL_IDX(iova, lvl, data); - ptep += unmap_idx_start; - pte = READ_ONCE(*ptep); - if (WARN_ON(!pte)) - return 0; - - /* If the size matches this level, we're in the right place */ - if (size == ARM_LPAE_BLOCK_SIZE(lvl, data)) { - max_entries = ARM_LPAE_PTES_PER_TABLE(data) - unmap_idx_start; - num_entries = min_t(int, pgcount, max_entries); - - while (i < num_entries) { - pte = READ_ONCE(*ptep); - if (WARN_ON(!pte)) - break; - - __arm_lpae_clear_pte(ptep, &iop->cfg); - - if (!iopte_leaf(pte, lvl, iop->fmt)) { - /* Also flush any partial walks */ - io_pgtable_tlb_flush_walk(iop, iova + i * size, size, - ARM_LPAE_GRANULE(data)); - __arm_lpae_free_pgtable(data, lvl + 1, 
iopte_deref(pte, data)); - } else if (!iommu_iotlb_gather_queued(gather)) { - io_pgtable_tlb_add_page(iop, gather, iova + i * size, size); - } - - ptep++; - i++; - } - - return i * size; - } else if (iopte_leaf(pte, lvl, iop->fmt)) { - /* - * Insert a table at the next level to map the old region, - * minus the part we want to unmap - */ - return arm_lpae_split_blk_unmap(data, gather, iova, size, pte, - lvl + 1, ptep, pgcount); - } - - /* Keep on walkin' */ - ptep = iopte_deref(pte, data); - return __arm_lpae_unmap(data, gather, iova, size, pgcount, lvl + 1, ptep); -} - -static size_t arm_lpae_unmap_pages(struct io_pgtable_ops *ops, unsigned long iova, - size_t pgsize, size_t pgcount, - struct iommu_iotlb_gather *gather) -{ - struct arm_lpae_io_pgtable *data = io_pgtable_ops_to_data(ops); - struct io_pgtable_cfg *cfg = &data->iop.cfg; - arm_lpae_iopte *ptep = data->pgd; - long iaext = (s64)iova >> cfg->ias; - - if (WARN_ON(!pgsize || (pgsize & cfg->pgsize_bitmap) != pgsize || !pgcount)) - return 0; - - if (cfg->quirks & IO_PGTABLE_QUIRK_ARM_TTBR1) - iaext = ~iaext; - if (WARN_ON(iaext)) - return 0; - - return __arm_lpae_unmap(data, gather, iova, pgsize, pgcount, - data->start_level, ptep); -} - -static phys_addr_t arm_lpae_iova_to_phys(struct io_pgtable_ops *ops, - unsigned long iova) -{ - struct arm_lpae_io_pgtable *data = io_pgtable_ops_to_data(ops); - arm_lpae_iopte pte, *ptep = data->pgd; - int lvl = data->start_level; - - do { - /* Valid IOPTE pointer? */ - if (!ptep) - return 0; - - /* Grab the IOPTE we're interested in */ - ptep += ARM_LPAE_LVL_IDX(iova, lvl, data); - pte = READ_ONCE(*ptep); - - /* Valid entry? */ - if (!pte) - return 0; - - /* Leaf entry? 
*/ - if (iopte_leaf(pte, lvl, data->iop.fmt)) - goto found_translation; - - /* Take it to the next level */ - ptep = iopte_deref(pte, data); - } while (++lvl < ARM_LPAE_MAX_LEVELS); - - /* Ran out of page tables to walk */ - return 0; - -found_translation: - iova &= (ARM_LPAE_BLOCK_SIZE(lvl, data) - 1); - return iopte_to_paddr(pte, data) | iova; -} - static void arm_lpae_restrict_pgsizes(struct io_pgtable_cfg *cfg) { unsigned long granule, page_sizes;

From patchwork Wed Feb 1 12:52:46 2023
X-Patchwork-Submitter: Jean-Philippe Brucker
X-Patchwork-Id: 13124230
From: Jean-Philippe Brucker
To: maz@kernel.org, catalin.marinas@arm.com, will@kernel.org, joro@8bytes.org
Cc: robin.murphy@arm.com, james.morse@arm.com, suzuki.poulose@arm.com, oliver.upton@linux.dev, yuzenghui@huawei.com, smostafa@google.com, dbrazdil@google.com, ryan.roberts@arm.com, linux-arm-kernel@lists.infradead.org, kvmarm@lists.linux.dev, iommu@lists.linux.dev, Jean-Philippe Brucker
Subject: [RFC PATCH 02/45] iommu/io-pgtable-arm: Split initialization
Date: Wed, 1 Feb 2023 12:52:46 +0000
Message-Id: <20230201125328.2186498-3-jean-philippe@linaro.org>
In-Reply-To: <20230201125328.2186498-1-jean-philippe@linaro.org>
References: <20230201125328.2186498-1-jean-philippe@linaro.org>

Extract the configuration part from io-pgtable-arm.c, move it to io-pgtable-arm-common.c.
Signed-off-by: Jean-Philippe Brucker --- include/linux/io-pgtable-arm.h | 15 +- drivers/iommu/io-pgtable-arm-common.c | 255 ++++++++++++++++++++++++++ drivers/iommu/io-pgtable-arm.c | 245 +------------------------ 3 files changed, 270 insertions(+), 245 deletions(-) diff --git a/include/linux/io-pgtable-arm.h b/include/linux/io-pgtable-arm.h index 594b5030b450..42202bc0ffa2 100644 --- a/include/linux/io-pgtable-arm.h +++ b/include/linux/io-pgtable-arm.h @@ -167,17 +167,16 @@ static inline bool iopte_leaf(arm_lpae_iopte pte, int lvl, #define __arm_lpae_phys_to_virt __va /* Generic functions */ -int arm_lpae_map_pages(struct io_pgtable_ops *ops, unsigned long iova, - phys_addr_t paddr, size_t pgsize, size_t pgcount, - int iommu_prot, gfp_t gfp, size_t *mapped); -size_t arm_lpae_unmap_pages(struct io_pgtable_ops *ops, unsigned long iova, - size_t pgsize, size_t pgcount, - struct iommu_iotlb_gather *gather); -phys_addr_t arm_lpae_iova_to_phys(struct io_pgtable_ops *ops, - unsigned long iova); void __arm_lpae_free_pgtable(struct arm_lpae_io_pgtable *data, int lvl, arm_lpae_iopte *ptep); +int arm_lpae_init_pgtable(struct io_pgtable_cfg *cfg, + struct arm_lpae_io_pgtable *data); +int arm_lpae_init_pgtable_s1(struct io_pgtable_cfg *cfg, + struct arm_lpae_io_pgtable *data); +int arm_lpae_init_pgtable_s2(struct io_pgtable_cfg *cfg, + struct arm_lpae_io_pgtable *data); + /* Host/hyp-specific functions */ void *__arm_lpae_alloc_pages(size_t size, gfp_t gfp, struct io_pgtable_cfg *cfg); void __arm_lpae_free_pages(void *pages, size_t size, struct io_pgtable_cfg *cfg); diff --git a/drivers/iommu/io-pgtable-arm-common.c b/drivers/iommu/io-pgtable-arm-common.c index 74d962712d15..7340b5096499 100644 --- a/drivers/iommu/io-pgtable-arm-common.c +++ b/drivers/iommu/io-pgtable-arm-common.c @@ -15,6 +15,9 @@ #define iopte_deref(pte, d) __arm_lpae_phys_to_virt(iopte_to_paddr(pte, d)) +#define ARM_LPAE_MAX_ADDR_BITS 52 +#define ARM_LPAE_S2_MAX_CONCAT_PAGES 16 + static arm_lpae_iopte 
paddr_to_iopte(phys_addr_t paddr, struct arm_lpae_io_pgtable *data) { @@ -498,3 +501,255 @@ phys_addr_t arm_lpae_iova_to_phys(struct io_pgtable_ops *ops, iova &= (ARM_LPAE_BLOCK_SIZE(lvl, data) - 1); return iopte_to_paddr(pte, data) | iova; } + +static void arm_lpae_restrict_pgsizes(struct io_pgtable_cfg *cfg) +{ + unsigned long granule, page_sizes; + unsigned int max_addr_bits = 48; + + /* + * We need to restrict the supported page sizes to match the + * translation regime for a particular granule. Aim to match + * the CPU page size if possible, otherwise prefer smaller sizes. + * While we're at it, restrict the block sizes to match the + * chosen granule. + */ + if (cfg->pgsize_bitmap & PAGE_SIZE) + granule = PAGE_SIZE; + else if (cfg->pgsize_bitmap & ~PAGE_MASK) + granule = 1UL << __fls(cfg->pgsize_bitmap & ~PAGE_MASK); + else if (cfg->pgsize_bitmap & PAGE_MASK) + granule = 1UL << __ffs(cfg->pgsize_bitmap & PAGE_MASK); + else + granule = 0; + + switch (granule) { + case SZ_4K: + page_sizes = (SZ_4K | SZ_2M | SZ_1G); + break; + case SZ_16K: + page_sizes = (SZ_16K | SZ_32M); + break; + case SZ_64K: + max_addr_bits = 52; + page_sizes = (SZ_64K | SZ_512M); + if (cfg->oas > 48) + page_sizes |= 1ULL << 42; /* 4TB */ + break; + default: + page_sizes = 0; + } + + cfg->pgsize_bitmap &= page_sizes; + cfg->ias = min(cfg->ias, max_addr_bits); + cfg->oas = min(cfg->oas, max_addr_bits); +} + +int arm_lpae_init_pgtable(struct io_pgtable_cfg *cfg, + struct arm_lpae_io_pgtable *data) +{ + int levels, va_bits, pg_shift; + + arm_lpae_restrict_pgsizes(cfg); + + if (!(cfg->pgsize_bitmap & (SZ_4K | SZ_16K | SZ_64K))) + return -EINVAL; + + if (cfg->ias > ARM_LPAE_MAX_ADDR_BITS) + return -E2BIG; + + if (cfg->oas > ARM_LPAE_MAX_ADDR_BITS) + return -E2BIG; + + pg_shift = __ffs(cfg->pgsize_bitmap); + data->bits_per_level = pg_shift - ilog2(sizeof(arm_lpae_iopte)); + + va_bits = cfg->ias - pg_shift; + levels = DIV_ROUND_UP(va_bits, data->bits_per_level); + data->start_level = 
ARM_LPAE_MAX_LEVELS - levels; + + /* Calculate the actual size of our pgd (without concatenation) */ + data->pgd_bits = va_bits - (data->bits_per_level * (levels - 1)); + + data->iop.ops = (struct io_pgtable_ops) { + .map_pages = arm_lpae_map_pages, + .unmap_pages = arm_lpae_unmap_pages, + .iova_to_phys = arm_lpae_iova_to_phys, + }; + + return 0; +} + +int arm_lpae_init_pgtable_s1(struct io_pgtable_cfg *cfg, + struct arm_lpae_io_pgtable *data) +{ + u64 reg; + int ret; + typeof(&cfg->arm_lpae_s1_cfg.tcr) tcr = &cfg->arm_lpae_s1_cfg.tcr; + bool tg1; + + if (cfg->quirks & ~(IO_PGTABLE_QUIRK_ARM_NS | + IO_PGTABLE_QUIRK_ARM_TTBR1 | + IO_PGTABLE_QUIRK_ARM_OUTER_WBWA)) + return -EINVAL; + + ret = arm_lpae_init_pgtable(cfg, data); + if (ret) + return ret; + + /* TCR */ + if (cfg->coherent_walk) { + tcr->sh = ARM_LPAE_TCR_SH_IS; + tcr->irgn = ARM_LPAE_TCR_RGN_WBWA; + tcr->orgn = ARM_LPAE_TCR_RGN_WBWA; + if (cfg->quirks & IO_PGTABLE_QUIRK_ARM_OUTER_WBWA) + return -EINVAL; + } else { + tcr->sh = ARM_LPAE_TCR_SH_OS; + tcr->irgn = ARM_LPAE_TCR_RGN_NC; + if (!(cfg->quirks & IO_PGTABLE_QUIRK_ARM_OUTER_WBWA)) + tcr->orgn = ARM_LPAE_TCR_RGN_NC; + else + tcr->orgn = ARM_LPAE_TCR_RGN_WBWA; + } + + tg1 = cfg->quirks & IO_PGTABLE_QUIRK_ARM_TTBR1; + switch (ARM_LPAE_GRANULE(data)) { + case SZ_4K: + tcr->tg = tg1 ? ARM_LPAE_TCR_TG1_4K : ARM_LPAE_TCR_TG0_4K; + break; + case SZ_16K: + tcr->tg = tg1 ? ARM_LPAE_TCR_TG1_16K : ARM_LPAE_TCR_TG0_16K; + break; + case SZ_64K: + tcr->tg = tg1 ? 
ARM_LPAE_TCR_TG1_64K : ARM_LPAE_TCR_TG0_64K; + break; + } + + switch (cfg->oas) { + case 32: + tcr->ips = ARM_LPAE_TCR_PS_32_BIT; + break; + case 36: + tcr->ips = ARM_LPAE_TCR_PS_36_BIT; + break; + case 40: + tcr->ips = ARM_LPAE_TCR_PS_40_BIT; + break; + case 42: + tcr->ips = ARM_LPAE_TCR_PS_42_BIT; + break; + case 44: + tcr->ips = ARM_LPAE_TCR_PS_44_BIT; + break; + case 48: + tcr->ips = ARM_LPAE_TCR_PS_48_BIT; + break; + case 52: + tcr->ips = ARM_LPAE_TCR_PS_52_BIT; + break; + default: + return -EINVAL; + } + + tcr->tsz = 64ULL - cfg->ias; + + /* MAIRs */ + reg = (ARM_LPAE_MAIR_ATTR_NC + << ARM_LPAE_MAIR_ATTR_SHIFT(ARM_LPAE_MAIR_ATTR_IDX_NC)) | + (ARM_LPAE_MAIR_ATTR_WBRWA + << ARM_LPAE_MAIR_ATTR_SHIFT(ARM_LPAE_MAIR_ATTR_IDX_CACHE)) | + (ARM_LPAE_MAIR_ATTR_DEVICE + << ARM_LPAE_MAIR_ATTR_SHIFT(ARM_LPAE_MAIR_ATTR_IDX_DEV)) | + (ARM_LPAE_MAIR_ATTR_INC_OWBRWA + << ARM_LPAE_MAIR_ATTR_SHIFT(ARM_LPAE_MAIR_ATTR_IDX_INC_OCACHE)); + + cfg->arm_lpae_s1_cfg.mair = reg; + return 0; +} + +int arm_lpae_init_pgtable_s2(struct io_pgtable_cfg *cfg, + struct arm_lpae_io_pgtable *data) +{ + u64 sl; + int ret; + typeof(&cfg->arm_lpae_s2_cfg.vtcr) vtcr = &cfg->arm_lpae_s2_cfg.vtcr; + + /* The NS quirk doesn't apply at stage 2 */ + if (cfg->quirks) + return -EINVAL; + + ret = arm_lpae_init_pgtable(cfg, data); + if (ret) + return ret; + + /* + * Concatenate PGDs at level 1 if possible in order to reduce + * the depth of the stage-2 walk. 
+ */ + if (data->start_level == 0) { + unsigned long pgd_pages; + + pgd_pages = ARM_LPAE_PGD_SIZE(data) / sizeof(arm_lpae_iopte); + if (pgd_pages <= ARM_LPAE_S2_MAX_CONCAT_PAGES) { + data->pgd_bits += data->bits_per_level; + data->start_level++; + } + } + + /* VTCR */ + if (cfg->coherent_walk) { + vtcr->sh = ARM_LPAE_TCR_SH_IS; + vtcr->irgn = ARM_LPAE_TCR_RGN_WBWA; + vtcr->orgn = ARM_LPAE_TCR_RGN_WBWA; + } else { + vtcr->sh = ARM_LPAE_TCR_SH_OS; + vtcr->irgn = ARM_LPAE_TCR_RGN_NC; + vtcr->orgn = ARM_LPAE_TCR_RGN_NC; + } + + sl = data->start_level; + + switch (ARM_LPAE_GRANULE(data)) { + case SZ_4K: + vtcr->tg = ARM_LPAE_TCR_TG0_4K; + sl++; /* SL0 format is different for 4K granule size */ + break; + case SZ_16K: + vtcr->tg = ARM_LPAE_TCR_TG0_16K; + break; + case SZ_64K: + vtcr->tg = ARM_LPAE_TCR_TG0_64K; + break; + } + + switch (cfg->oas) { + case 32: + vtcr->ps = ARM_LPAE_TCR_PS_32_BIT; + break; + case 36: + vtcr->ps = ARM_LPAE_TCR_PS_36_BIT; + break; + case 40: + vtcr->ps = ARM_LPAE_TCR_PS_40_BIT; + break; + case 42: + vtcr->ps = ARM_LPAE_TCR_PS_42_BIT; + break; + case 44: + vtcr->ps = ARM_LPAE_TCR_PS_44_BIT; + break; + case 48: + vtcr->ps = ARM_LPAE_TCR_PS_48_BIT; + break; + case 52: + vtcr->ps = ARM_LPAE_TCR_PS_52_BIT; + break; + default: + return -EINVAL; + } + + vtcr->tsz = 64ULL - cfg->ias; + vtcr->sl = ~sl & ARM_LPAE_VTCR_SL0_MASK; + return 0; +} diff --git a/drivers/iommu/io-pgtable-arm.c b/drivers/iommu/io-pgtable-arm.c index db42aed6ad7b..b2b188bb86b3 100644 --- a/drivers/iommu/io-pgtable-arm.c +++ b/drivers/iommu/io-pgtable-arm.c @@ -21,9 +21,6 @@ #include -#define ARM_LPAE_MAX_ADDR_BITS 52 -#define ARM_LPAE_S2_MAX_CONCAT_PAGES 16 - bool selftest_running = false; static dma_addr_t __arm_lpae_dma_addr(void *pages) @@ -91,174 +88,17 @@ static void arm_lpae_free_pgtable(struct io_pgtable *iop) kfree(data); } -static void arm_lpae_restrict_pgsizes(struct io_pgtable_cfg *cfg) -{ - unsigned long granule, page_sizes; - unsigned int max_addr_bits = 48; - - /* - 
 * We need to restrict the supported page sizes to match the
- * translation regime for a particular granule. Aim to match
- * the CPU page size if possible, otherwise prefer smaller sizes.
- * While we're at it, restrict the block sizes to match the
- * chosen granule.
- */
-	if (cfg->pgsize_bitmap & PAGE_SIZE)
-		granule = PAGE_SIZE;
-	else if (cfg->pgsize_bitmap & ~PAGE_MASK)
-		granule = 1UL << __fls(cfg->pgsize_bitmap & ~PAGE_MASK);
-	else if (cfg->pgsize_bitmap & PAGE_MASK)
-		granule = 1UL << __ffs(cfg->pgsize_bitmap & PAGE_MASK);
-	else
-		granule = 0;
-
-	switch (granule) {
-	case SZ_4K:
-		page_sizes = (SZ_4K | SZ_2M | SZ_1G);
-		break;
-	case SZ_16K:
-		page_sizes = (SZ_16K | SZ_32M);
-		break;
-	case SZ_64K:
-		max_addr_bits = 52;
-		page_sizes = (SZ_64K | SZ_512M);
-		if (cfg->oas > 48)
-			page_sizes |= 1ULL << 42; /* 4TB */
-		break;
-	default:
-		page_sizes = 0;
-	}
-
-	cfg->pgsize_bitmap &= page_sizes;
-	cfg->ias = min(cfg->ias, max_addr_bits);
-	cfg->oas = min(cfg->oas, max_addr_bits);
-}
-
-static struct arm_lpae_io_pgtable *
-arm_lpae_alloc_pgtable(struct io_pgtable_cfg *cfg)
-{
-	struct arm_lpae_io_pgtable *data;
-	int levels, va_bits, pg_shift;
-
-	arm_lpae_restrict_pgsizes(cfg);
-
-	if (!(cfg->pgsize_bitmap & (SZ_4K | SZ_16K | SZ_64K)))
-		return NULL;
-
-	if (cfg->ias > ARM_LPAE_MAX_ADDR_BITS)
-		return NULL;
-
-	if (cfg->oas > ARM_LPAE_MAX_ADDR_BITS)
-		return NULL;
-
-	data = kmalloc(sizeof(*data), GFP_KERNEL);
-	if (!data)
-		return NULL;
-
-	pg_shift = __ffs(cfg->pgsize_bitmap);
-	data->bits_per_level = pg_shift - ilog2(sizeof(arm_lpae_iopte));
-
-	va_bits = cfg->ias - pg_shift;
-	levels = DIV_ROUND_UP(va_bits, data->bits_per_level);
-	data->start_level = ARM_LPAE_MAX_LEVELS - levels;
-
-	/* Calculate the actual size of our pgd (without concatenation) */
-	data->pgd_bits = va_bits - (data->bits_per_level * (levels - 1));
-
-	data->iop.ops = (struct io_pgtable_ops) {
-		.map_pages	= arm_lpae_map_pages,
-		.unmap_pages	= arm_lpae_unmap_pages,
-		.iova_to_phys	= arm_lpae_iova_to_phys,
-	};
-
-	return data;
-}
-
 static struct io_pgtable *
 arm_64_lpae_alloc_pgtable_s1(struct io_pgtable_cfg *cfg, void *cookie)
 {
-	u64 reg;
 	struct arm_lpae_io_pgtable *data;
-	typeof(&cfg->arm_lpae_s1_cfg.tcr) tcr = &cfg->arm_lpae_s1_cfg.tcr;
-	bool tg1;
-
-	if (cfg->quirks & ~(IO_PGTABLE_QUIRK_ARM_NS |
-			    IO_PGTABLE_QUIRK_ARM_TTBR1 |
-			    IO_PGTABLE_QUIRK_ARM_OUTER_WBWA))
-		return NULL;
 
-	data = arm_lpae_alloc_pgtable(cfg);
+	data = kzalloc(sizeof(*data), GFP_KERNEL);
 	if (!data)
 		return NULL;
 
-	/* TCR */
-	if (cfg->coherent_walk) {
-		tcr->sh = ARM_LPAE_TCR_SH_IS;
-		tcr->irgn = ARM_LPAE_TCR_RGN_WBWA;
-		tcr->orgn = ARM_LPAE_TCR_RGN_WBWA;
-		if (cfg->quirks & IO_PGTABLE_QUIRK_ARM_OUTER_WBWA)
-			goto out_free_data;
-	} else {
-		tcr->sh = ARM_LPAE_TCR_SH_OS;
-		tcr->irgn = ARM_LPAE_TCR_RGN_NC;
-		if (!(cfg->quirks & IO_PGTABLE_QUIRK_ARM_OUTER_WBWA))
-			tcr->orgn = ARM_LPAE_TCR_RGN_NC;
-		else
-			tcr->orgn = ARM_LPAE_TCR_RGN_WBWA;
-	}
-
-	tg1 = cfg->quirks & IO_PGTABLE_QUIRK_ARM_TTBR1;
-	switch (ARM_LPAE_GRANULE(data)) {
-	case SZ_4K:
-		tcr->tg = tg1 ? ARM_LPAE_TCR_TG1_4K : ARM_LPAE_TCR_TG0_4K;
-		break;
-	case SZ_16K:
-		tcr->tg = tg1 ? ARM_LPAE_TCR_TG1_16K : ARM_LPAE_TCR_TG0_16K;
-		break;
-	case SZ_64K:
-		tcr->tg = tg1 ? ARM_LPAE_TCR_TG1_64K : ARM_LPAE_TCR_TG0_64K;
-		break;
-	}
-
-	switch (cfg->oas) {
-	case 32:
-		tcr->ips = ARM_LPAE_TCR_PS_32_BIT;
-		break;
-	case 36:
-		tcr->ips = ARM_LPAE_TCR_PS_36_BIT;
-		break;
-	case 40:
-		tcr->ips = ARM_LPAE_TCR_PS_40_BIT;
-		break;
-	case 42:
-		tcr->ips = ARM_LPAE_TCR_PS_42_BIT;
-		break;
-	case 44:
-		tcr->ips = ARM_LPAE_TCR_PS_44_BIT;
-		break;
-	case 48:
-		tcr->ips = ARM_LPAE_TCR_PS_48_BIT;
-		break;
-	case 52:
-		tcr->ips = ARM_LPAE_TCR_PS_52_BIT;
-		break;
-	default:
+	if (arm_lpae_init_pgtable_s1(cfg, data))
 		goto out_free_data;
-	}
-
-	tcr->tsz = 64ULL - cfg->ias;
-
-	/* MAIRs */
-	reg = (ARM_LPAE_MAIR_ATTR_NC
-	       << ARM_LPAE_MAIR_ATTR_SHIFT(ARM_LPAE_MAIR_ATTR_IDX_NC)) |
-	      (ARM_LPAE_MAIR_ATTR_WBRWA
-	       << ARM_LPAE_MAIR_ATTR_SHIFT(ARM_LPAE_MAIR_ATTR_IDX_CACHE)) |
-	      (ARM_LPAE_MAIR_ATTR_DEVICE
-	       << ARM_LPAE_MAIR_ATTR_SHIFT(ARM_LPAE_MAIR_ATTR_IDX_DEV)) |
-	      (ARM_LPAE_MAIR_ATTR_INC_OWBRWA
-	       << ARM_LPAE_MAIR_ATTR_SHIFT(ARM_LPAE_MAIR_ATTR_IDX_INC_OCACHE));
-
-	cfg->arm_lpae_s1_cfg.mair = reg;
 
 	/* Looking good; allocate a pgd */
 	data->pgd = __arm_lpae_alloc_pages(ARM_LPAE_PGD_SIZE(data),
@@ -281,86 +121,14 @@ arm_64_lpae_alloc_pgtable_s1(struct io_pgtable_cfg *cfg, void *cookie)
 static struct io_pgtable *
 arm_64_lpae_alloc_pgtable_s2(struct io_pgtable_cfg *cfg, void *cookie)
 {
-	u64 sl;
 	struct arm_lpae_io_pgtable *data;
-	typeof(&cfg->arm_lpae_s2_cfg.vtcr) vtcr = &cfg->arm_lpae_s2_cfg.vtcr;
-
-	/* The NS quirk doesn't apply at stage 2 */
-	if (cfg->quirks)
-		return NULL;
 
-	data = arm_lpae_alloc_pgtable(cfg);
+	data = kzalloc(sizeof(*data), GFP_KERNEL);
 	if (!data)
 		return NULL;
 
-	/*
-	 * Concatenate PGDs at level 1 if possible in order to reduce
-	 * the depth of the stage-2 walk.
-	 */
-	if (data->start_level == 0) {
-		unsigned long pgd_pages;
-
-		pgd_pages = ARM_LPAE_PGD_SIZE(data) / sizeof(arm_lpae_iopte);
-		if (pgd_pages <= ARM_LPAE_S2_MAX_CONCAT_PAGES) {
-			data->pgd_bits += data->bits_per_level;
-			data->start_level++;
-		}
-	}
-
-	/* VTCR */
-	if (cfg->coherent_walk) {
-		vtcr->sh = ARM_LPAE_TCR_SH_IS;
-		vtcr->irgn = ARM_LPAE_TCR_RGN_WBWA;
-		vtcr->orgn = ARM_LPAE_TCR_RGN_WBWA;
-	} else {
-		vtcr->sh = ARM_LPAE_TCR_SH_OS;
-		vtcr->irgn = ARM_LPAE_TCR_RGN_NC;
-		vtcr->orgn = ARM_LPAE_TCR_RGN_NC;
-	}
-
-	sl = data->start_level;
-
-	switch (ARM_LPAE_GRANULE(data)) {
-	case SZ_4K:
-		vtcr->tg = ARM_LPAE_TCR_TG0_4K;
-		sl++; /* SL0 format is different for 4K granule size */
-		break;
-	case SZ_16K:
-		vtcr->tg = ARM_LPAE_TCR_TG0_16K;
-		break;
-	case SZ_64K:
-		vtcr->tg = ARM_LPAE_TCR_TG0_64K;
-		break;
-	}
-
-	switch (cfg->oas) {
-	case 32:
-		vtcr->ps = ARM_LPAE_TCR_PS_32_BIT;
-		break;
-	case 36:
-		vtcr->ps = ARM_LPAE_TCR_PS_36_BIT;
-		break;
-	case 40:
-		vtcr->ps = ARM_LPAE_TCR_PS_40_BIT;
-		break;
-	case 42:
-		vtcr->ps = ARM_LPAE_TCR_PS_42_BIT;
-		break;
-	case 44:
-		vtcr->ps = ARM_LPAE_TCR_PS_44_BIT;
-		break;
-	case 48:
-		vtcr->ps = ARM_LPAE_TCR_PS_48_BIT;
-		break;
-	case 52:
-		vtcr->ps = ARM_LPAE_TCR_PS_52_BIT;
-		break;
-	default:
+	if (arm_lpae_init_pgtable_s2(cfg, data))
 		goto out_free_data;
-	}
-
-	vtcr->tsz = 64ULL - cfg->ias;
-	vtcr->sl = ~sl & ARM_LPAE_VTCR_SL0_MASK;
 
 	/* Allocate pgd pages */
 	data->pgd = __arm_lpae_alloc_pages(ARM_LPAE_PGD_SIZE(data),
@@ -414,10 +182,13 @@ arm_mali_lpae_alloc_pgtable(struct io_pgtable_cfg *cfg, void *cookie)
 
 	cfg->pgsize_bitmap &= (SZ_4K | SZ_2M | SZ_1G);
 
-	data = arm_lpae_alloc_pgtable(cfg);
+	data = kzalloc(sizeof(*data), GFP_KERNEL);
 	if (!data)
 		return NULL;
 
+	if (arm_lpae_init_pgtable(cfg, data))
+		return NULL;
+
 	/* Mali seems to need a full 4-level table regardless of IAS */
 	if (data->start_level > 0) {
 		data->start_level = 0;

From patchwork Wed Feb 1 12:52:47 2023
From: Jean-Philippe Brucker
To: maz@kernel.org, catalin.marinas@arm.com, will@kernel.org, joro@8bytes.org
Cc: robin.murphy@arm.com, james.morse@arm.com, suzuki.poulose@arm.com,
    oliver.upton@linux.dev, yuzenghui@huawei.com, smostafa@google.com,
    dbrazdil@google.com, ryan.roberts@arm.com,
    linux-arm-kernel@lists.infradead.org, kvmarm@lists.linux.dev,
    iommu@lists.linux.dev
Subject: [RFC PATCH 03/45] iommu/io-pgtable: Move fmt into io_pgtable_cfg
Date: Wed, 1 Feb 2023 12:52:47 +0000
Message-Id: <20230201125328.2186498-4-jean-philippe@linaro.org>
In-Reply-To: <20230201125328.2186498-1-jean-philippe@linaro.org>
References: <20230201125328.2186498-1-jean-philippe@linaro.org>

When passing the I/O page table configuration around and adding new
operations, it will be slightly more convenient to have fmt be part of
the config structure rather than a separate parameter.
Signed-off-by: Jean-Philippe Brucker
---
 include/linux/io-pgtable.h                  |  8 +++----
 drivers/gpu/drm/msm/msm_iommu.c             |  3 +--
 drivers/gpu/drm/panfrost/panfrost_mmu.c     |  4 ++--
 drivers/iommu/amd/iommu.c                   |  3 ++-
 drivers/iommu/apple-dart.c                  |  4 ++--
 drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c |  3 ++-
 drivers/iommu/arm/arm-smmu/arm-smmu.c       |  3 ++-
 drivers/iommu/arm/arm-smmu/qcom_iommu.c     |  3 ++-
 drivers/iommu/io-pgtable-arm-common.c       | 26 ++++++++++-----------
 drivers/iommu/io-pgtable-arm-v7s.c          |  3 ++-
 drivers/iommu/io-pgtable-arm.c              |  3 ++-
 drivers/iommu/io-pgtable-dart.c             |  8 +++----
 drivers/iommu/io-pgtable.c                  | 10 ++++----
 drivers/iommu/ipmmu-vmsa.c                  |  4 ++--
 drivers/iommu/msm_iommu.c                   |  3 ++-
 drivers/iommu/mtk_iommu.c                   |  3 ++-
 16 files changed, 47 insertions(+), 44 deletions(-)

diff --git a/include/linux/io-pgtable.h b/include/linux/io-pgtable.h
index 1b7a44b35616..1b0c26241a78 100644
--- a/include/linux/io-pgtable.h
+++ b/include/linux/io-pgtable.h
@@ -49,6 +49,7 @@ struct iommu_flush_ops {
 /**
  * struct io_pgtable_cfg - Configuration data for a set of page tables.
  *
+ * @fmt:           Format used for these page tables
  * @quirks:        A bitmap of hardware quirks that require some special
  *                 action by the low-level page table allocator.
  * @pgsize_bitmap: A bitmap of page sizes supported by this set of page
@@ -62,6 +63,7 @@ struct iommu_flush_ops {
  *                 page table walker.
  */
 struct io_pgtable_cfg {
+	enum io_pgtable_fmt fmt;
 	/*
 	 * IO_PGTABLE_QUIRK_ARM_NS: (ARM formats) Set NS and NSTABLE bits in
 	 * stage 1 PTEs, for hardware which insists on validating them
@@ -171,15 +173,13 @@ struct io_pgtable_ops {
 /**
  * alloc_io_pgtable_ops() - Allocate a page table allocator for use by an IOMMU.
  *
- * @fmt:    The page table format.
 * @cfg:    The page table configuration. This will be modified to represent
 *          the configuration actually provided by the allocator (e.g. the
 *          pgsize_bitmap may be restricted).
 * @cookie: An opaque token provided by the IOMMU driver and passed back to
 *          the callback routines in cfg->tlb.
 */
-struct io_pgtable_ops *alloc_io_pgtable_ops(enum io_pgtable_fmt fmt,
-					    struct io_pgtable_cfg *cfg,
+struct io_pgtable_ops *alloc_io_pgtable_ops(struct io_pgtable_cfg *cfg,
 					    void *cookie);

 /**
@@ -199,14 +199,12 @@ void free_io_pgtable_ops(struct io_pgtable_ops *ops);
 /**
  * struct io_pgtable - Internal structure describing a set of page tables.
  *
- * @fmt:    The page table format.
 * @cookie: An opaque token provided by the IOMMU driver and passed back to
 *          any callback routines.
 * @cfg:    A copy of the page table configuration.
 * @ops:    The page table operations in use for this set of page tables.
 */
 struct io_pgtable {
-	enum io_pgtable_fmt fmt;
 	void *cookie;
 	struct io_pgtable_cfg cfg;
 	struct io_pgtable_ops ops;
diff --git a/drivers/gpu/drm/msm/msm_iommu.c b/drivers/gpu/drm/msm/msm_iommu.c
index c2507582ecf3..e9c6f281e3dd 100644
--- a/drivers/gpu/drm/msm/msm_iommu.c
+++ b/drivers/gpu/drm/msm/msm_iommu.c
@@ -258,8 +258,7 @@ struct msm_mmu *msm_iommu_pagetable_create(struct msm_mmu *parent)
 	ttbr0_cfg.quirks &= ~IO_PGTABLE_QUIRK_ARM_TTBR1;
 	ttbr0_cfg.tlb = &null_tlb_ops;

-	pagetable->pgtbl_ops = alloc_io_pgtable_ops(ARM_64_LPAE_S1,
-		&ttbr0_cfg, iommu->domain);
+	pagetable->pgtbl_ops = alloc_io_pgtable_ops(&ttbr0_cfg, iommu->domain);

 	if (!pagetable->pgtbl_ops) {
 		kfree(pagetable);
diff --git a/drivers/gpu/drm/panfrost/panfrost_mmu.c b/drivers/gpu/drm/panfrost/panfrost_mmu.c
index 4e83a1891f3e..31bdb5d46244 100644
--- a/drivers/gpu/drm/panfrost/panfrost_mmu.c
+++ b/drivers/gpu/drm/panfrost/panfrost_mmu.c
@@ -622,6 +622,7 @@ struct panfrost_mmu *panfrost_mmu_ctx_create(struct panfrost_device *pfdev)
 	mmu->as = -1;

 	mmu->pgtbl_cfg = (struct io_pgtable_cfg) {
+		.fmt		= ARM_MALI_LPAE,
 		.pgsize_bitmap	= SZ_4K | SZ_2M,
 		.ias		= FIELD_GET(0xff, pfdev->features.mmu_features),
 		.oas		= FIELD_GET(0xff00, pfdev->features.mmu_features),
@@ -630,8 +631,7 @@ struct panfrost_mmu *panfrost_mmu_ctx_create(struct panfrost_device *pfdev)
 		.iommu_dev	= pfdev->dev,
 	};

-	mmu->pgtbl_ops = alloc_io_pgtable_ops(ARM_MALI_LPAE, &mmu->pgtbl_cfg,
-					      mmu);
+	mmu->pgtbl_ops = alloc_io_pgtable_ops(&mmu->pgtbl_cfg, mmu);
 	if (!mmu->pgtbl_ops) {
 		kfree(mmu);
 		return ERR_PTR(-EINVAL);
diff --git a/drivers/iommu/amd/iommu.c b/drivers/iommu/amd/iommu.c
index cbeaab55c0db..7efb6b467041 100644
--- a/drivers/iommu/amd/iommu.c
+++ b/drivers/iommu/amd/iommu.c
@@ -2072,7 +2072,8 @@ static struct protection_domain *protection_domain_alloc(unsigned int type)
 	if (ret)
 		goto out_err;

-	pgtbl_ops = alloc_io_pgtable_ops(pgtable, &domain->iop.pgtbl_cfg, domain);
+	domain->iop.pgtbl_cfg.fmt = pgtable;
+	pgtbl_ops = alloc_io_pgtable_ops(&domain->iop.pgtbl_cfg, domain);
 	if (!pgtbl_ops) {
 		domain_id_free(domain->id);
 		goto out_err;
diff --git a/drivers/iommu/apple-dart.c b/drivers/iommu/apple-dart.c
index 4f4a323be0d0..571f948add7c 100644
--- a/drivers/iommu/apple-dart.c
+++ b/drivers/iommu/apple-dart.c
@@ -427,6 +427,7 @@ static int apple_dart_finalize_domain(struct iommu_domain *domain,
 	}

 	pgtbl_cfg = (struct io_pgtable_cfg){
+		.fmt = dart->hw->fmt,
 		.pgsize_bitmap = dart->pgsize,
 		.ias = 32,
 		.oas = dart->hw->oas,
@@ -434,8 +435,7 @@ static int apple_dart_finalize_domain(struct iommu_domain *domain,
 		.iommu_dev = dart->dev,
 	};

-	dart_domain->pgtbl_ops =
-		alloc_io_pgtable_ops(dart->hw->fmt, &pgtbl_cfg, domain);
+	dart_domain->pgtbl_ops = alloc_io_pgtable_ops(&pgtbl_cfg, domain);
 	if (!dart_domain->pgtbl_ops) {
 		ret = -ENOMEM;
 		goto done;
diff --git a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c
index ab160198edd6..c033b23ca4b2 100644
--- a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c
+++ b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c
@@ -2209,6 +2209,7 @@ static int arm_smmu_domain_finalise(struct iommu_domain *domain,
 	}

 	pgtbl_cfg = (struct io_pgtable_cfg) {
+		.fmt		= fmt,
 		.pgsize_bitmap	= smmu->pgsize_bitmap,
 		.ias		= ias,
 		.oas		= oas,
@@ -2217,7 +2218,7 @@ static int arm_smmu_domain_finalise(struct iommu_domain *domain,
 		.iommu_dev	= smmu->dev,
 	};

-	pgtbl_ops = alloc_io_pgtable_ops(fmt, &pgtbl_cfg, smmu_domain);
+	pgtbl_ops = alloc_io_pgtable_ops(&pgtbl_cfg, smmu_domain);
 	if (!pgtbl_ops)
 		return -ENOMEM;
diff --git a/drivers/iommu/arm/arm-smmu/arm-smmu.c b/drivers/iommu/arm/arm-smmu/arm-smmu.c
index 719fbca1fe52..f230d2ce977a 100644
--- a/drivers/iommu/arm/arm-smmu/arm-smmu.c
+++ b/drivers/iommu/arm/arm-smmu/arm-smmu.c
@@ -747,6 +747,7 @@ static int arm_smmu_init_domain_context(struct iommu_domain *domain,
 		cfg->asid = cfg->cbndx;

 	pgtbl_cfg = (struct io_pgtable_cfg) {
+		.fmt		= fmt,
 		.pgsize_bitmap	= smmu->pgsize_bitmap,
 		.ias		= ias,
 		.oas		= oas,
@@ -764,7 +765,7 @@ static int arm_smmu_init_domain_context(struct iommu_domain *domain,
 	if (smmu_domain->pgtbl_quirks)
 		pgtbl_cfg.quirks |= smmu_domain->pgtbl_quirks;

-	pgtbl_ops = alloc_io_pgtable_ops(fmt, &pgtbl_cfg, smmu_domain);
+	pgtbl_ops = alloc_io_pgtable_ops(&pgtbl_cfg, smmu_domain);
 	if (!pgtbl_ops) {
 		ret = -ENOMEM;
 		goto out_clear_smmu;
diff --git a/drivers/iommu/arm/arm-smmu/qcom_iommu.c b/drivers/iommu/arm/arm-smmu/qcom_iommu.c
index 270c3d9128ba..65eb8bdcbe50 100644
--- a/drivers/iommu/arm/arm-smmu/qcom_iommu.c
+++ b/drivers/iommu/arm/arm-smmu/qcom_iommu.c
@@ -239,6 +239,7 @@ static int qcom_iommu_init_domain(struct iommu_domain *domain,
 		goto out_unlock;

 	pgtbl_cfg = (struct io_pgtable_cfg) {
+		.fmt		= ARM_32_LPAE_S1,
 		.pgsize_bitmap	= qcom_iommu_ops.pgsize_bitmap,
 		.ias		= 32,
 		.oas		= 40,
@@ -249,7 +250,7 @@ static int qcom_iommu_init_domain(struct iommu_domain *domain,
 	qcom_domain->iommu = qcom_iommu;
 	qcom_domain->fwspec = fwspec;

-	pgtbl_ops = alloc_io_pgtable_ops(ARM_32_LPAE_S1, &pgtbl_cfg, qcom_domain);
+	pgtbl_ops = alloc_io_pgtable_ops(&pgtbl_cfg, qcom_domain);
 	if (!pgtbl_ops) {
 		dev_err(qcom_iommu->dev, "failed to allocate pagetable ops\n");
 		ret = -ENOMEM;
diff --git a/drivers/iommu/io-pgtable-arm-common.c b/drivers/iommu/io-pgtable-arm-common.c
index 7340b5096499..4b3a9ce806ea 100644
--- a/drivers/iommu/io-pgtable-arm-common.c
+++ b/drivers/iommu/io-pgtable-arm-common.c
@@ -62,7 +62,7 @@ static void __arm_lpae_init_pte(struct arm_lpae_io_pgtable *data,
 	size_t sz = ARM_LPAE_BLOCK_SIZE(lvl, data);
 	int i;

-	if (data->iop.fmt != ARM_MALI_LPAE && lvl == ARM_LPAE_MAX_LEVELS - 1)
+	if (data->iop.cfg.fmt != ARM_MALI_LPAE && lvl == ARM_LPAE_MAX_LEVELS - 1)
 		pte |= ARM_LPAE_PTE_TYPE_PAGE;
 	else
 		pte |= ARM_LPAE_PTE_TYPE_BLOCK;
@@ -82,7 +82,7 @@ static int arm_lpae_init_pte(struct arm_lpae_io_pgtable *data,
 	int i;

 	for (i = 0; i < num_entries; i++)
-		if (iopte_leaf(ptep[i], lvl, data->iop.fmt)) {
+		if (iopte_leaf(ptep[i], lvl, data->iop.cfg.fmt)) {
 			/* We require an unmap first */
 			WARN_ON(!selftest_running);
 			return -EEXIST;
@@ -183,7 +183,7 @@ int __arm_lpae_map(struct arm_lpae_io_pgtable *data, unsigned long iova,
 		__arm_lpae_sync_pte(ptep, 1, cfg);
 	}

-	if (pte && !iopte_leaf(pte, lvl, data->iop.fmt)) {
+	if (pte && !iopte_leaf(pte, lvl, data->iop.cfg.fmt)) {
 		cptep = iopte_deref(pte, data);
 	} else if (pte) {
 		/* We require an unmap first */
@@ -201,8 +201,8 @@ static arm_lpae_iopte arm_lpae_prot_to_pte(struct arm_lpae_io_pgtable *data,
 {
 	arm_lpae_iopte pte;

-	if (data->iop.fmt == ARM_64_LPAE_S1 ||
-	    data->iop.fmt == ARM_32_LPAE_S1) {
+	if (data->iop.cfg.fmt == ARM_64_LPAE_S1 ||
+	    data->iop.cfg.fmt == ARM_32_LPAE_S1) {
 		pte = ARM_LPAE_PTE_nG;
 		if (!(prot & IOMMU_WRITE) && (prot & IOMMU_READ))
 			pte |= ARM_LPAE_PTE_AP_RDONLY;
@@ -220,8 +220,8 @@ static arm_lpae_iopte arm_lpae_prot_to_pte(struct arm_lpae_io_pgtable *data,
 	 * Note that this logic is structured to accommodate Mali LPAE
 	 * having stage-1-like attributes but stage-2-like permissions.
 	 */
-	if (data->iop.fmt == ARM_64_LPAE_S2 ||
-	    data->iop.fmt == ARM_32_LPAE_S2) {
+	if (data->iop.cfg.fmt == ARM_64_LPAE_S2 ||
+	    data->iop.cfg.fmt == ARM_32_LPAE_S2) {
 		if (prot & IOMMU_MMIO)
 			pte |= ARM_LPAE_PTE_MEMATTR_DEV;
 		else if (prot & IOMMU_CACHE)
@@ -243,7 +243,7 @@ static arm_lpae_iopte arm_lpae_prot_to_pte(struct arm_lpae_io_pgtable *data,
 	 * "outside the GPU" (i.e. either the Inner or System domain in CPU
 	 * terms, depending on coherency).
 	 */
-	if (prot & IOMMU_CACHE && data->iop.fmt != ARM_MALI_LPAE)
+	if (prot & IOMMU_CACHE && data->iop.cfg.fmt != ARM_MALI_LPAE)
 		pte |= ARM_LPAE_PTE_SH_IS;
 	else
 		pte |= ARM_LPAE_PTE_SH_OS;
@@ -254,7 +254,7 @@ static arm_lpae_iopte arm_lpae_prot_to_pte(struct arm_lpae_io_pgtable *data,
 	if (data->iop.cfg.quirks & IO_PGTABLE_QUIRK_ARM_NS)
 		pte |= ARM_LPAE_PTE_NS;

-	if (data->iop.fmt != ARM_MALI_LPAE)
+	if (data->iop.cfg.fmt != ARM_MALI_LPAE)
 		pte |= ARM_LPAE_PTE_AF;

 	return pte;
@@ -317,7 +317,7 @@ void __arm_lpae_free_pgtable(struct arm_lpae_io_pgtable *data, int lvl,
 	while (ptep != end) {
 		arm_lpae_iopte pte = *ptep++;

-		if (!pte || iopte_leaf(pte, lvl, data->iop.fmt))
+		if (!pte || iopte_leaf(pte, lvl, data->iop.cfg.fmt))
 			continue;

 		__arm_lpae_free_pgtable(data, lvl + 1, iopte_deref(pte, data));
@@ -417,7 +417,7 @@ static size_t __arm_lpae_unmap(struct arm_lpae_io_pgtable *data,

 			__arm_lpae_clear_pte(ptep, &iop->cfg);

-			if (!iopte_leaf(pte, lvl, iop->fmt)) {
+			if (!iopte_leaf(pte, lvl, iop->cfg.fmt)) {
 				/* Also flush any partial walks */
 				io_pgtable_tlb_flush_walk(iop, iova + i * size, size,
 							  ARM_LPAE_GRANULE(data));
@@ -431,7 +431,7 @@ static size_t __arm_lpae_unmap(struct arm_lpae_io_pgtable *data,
 		}

 		return i * size;
-	} else if (iopte_leaf(pte, lvl, iop->fmt)) {
+	} else if (iopte_leaf(pte, lvl, iop->cfg.fmt)) {
 		/*
 		 * Insert a table at the next level to map the old region,
 		 * minus the part we want to unmap
@@ -487,7 +487,7 @@ phys_addr_t arm_lpae_iova_to_phys(struct io_pgtable_ops *ops,
 			return 0;

 		/* Leaf entry? */
-		if (iopte_leaf(pte, lvl, data->iop.fmt))
+		if (iopte_leaf(pte, lvl, data->iop.cfg.fmt))
 			goto found_translation;

 		/* Take it to the next level */
diff --git a/drivers/iommu/io-pgtable-arm-v7s.c b/drivers/iommu/io-pgtable-arm-v7s.c
index 75f244a3e12d..278b4299d757 100644
--- a/drivers/iommu/io-pgtable-arm-v7s.c
+++ b/drivers/iommu/io-pgtable-arm-v7s.c
@@ -930,6 +930,7 @@ static int __init arm_v7s_do_selftests(void)
 {
 	struct io_pgtable_ops *ops;
 	struct io_pgtable_cfg cfg = {
+		.fmt = ARM_V7S,
 		.tlb = &dummy_tlb_ops,
 		.oas = 32,
 		.ias = 32,
@@ -945,7 +946,7 @@ static int __init arm_v7s_do_selftests(void)

 	cfg_cookie = &cfg;

-	ops = alloc_io_pgtable_ops(ARM_V7S, &cfg, &cfg);
+	ops = alloc_io_pgtable_ops(&cfg, &cfg);
 	if (!ops) {
 		pr_err("selftest: failed to allocate io pgtable ops\n");
 		return -EINVAL;
diff --git a/drivers/iommu/io-pgtable-arm.c b/drivers/iommu/io-pgtable-arm.c
index b2b188bb86b3..b76b903400de 100644
--- a/drivers/iommu/io-pgtable-arm.c
+++ b/drivers/iommu/io-pgtable-arm.c
@@ -319,7 +319,8 @@ static int __init arm_lpae_run_tests(struct io_pgtable_cfg *cfg)
 	for (i = 0; i < ARRAY_SIZE(fmts); ++i) {
 		cfg_cookie = cfg;
-		ops = alloc_io_pgtable_ops(fmts[i], cfg, cfg);
+		cfg->fmt = fmts[i];
+		ops = alloc_io_pgtable_ops(cfg, cfg);
 		if (!ops) {
 			pr_err("selftest: failed to allocate io pgtable ops\n");
 			return -ENOMEM;
diff --git a/drivers/iommu/io-pgtable-dart.c b/drivers/iommu/io-pgtable-dart.c
index 74b1ef2b96be..f981b25d8c98 100644
--- a/drivers/iommu/io-pgtable-dart.c
+++ b/drivers/iommu/io-pgtable-dart.c
@@ -81,7 +81,7 @@ static dart_iopte paddr_to_iopte(phys_addr_t paddr,
 {
 	dart_iopte pte;

-	if (data->iop.fmt == APPLE_DART)
+	if (data->iop.cfg.fmt == APPLE_DART)
 		return paddr & APPLE_DART1_PADDR_MASK;

 	/* format is APPLE_DART2 */
@@ -96,7 +96,7 @@ static phys_addr_t iopte_to_paddr(dart_iopte pte,
 {
 	u64 paddr;

-	if (data->iop.fmt == APPLE_DART)
+	if (data->iop.cfg.fmt == APPLE_DART)
 		return pte & APPLE_DART1_PADDR_MASK;

 	/* format is APPLE_DART2 */
@@ -215,13 +215,13 @@ static dart_iopte dart_prot_to_pte(struct dart_io_pgtable *data,
 {
 	dart_iopte pte = 0;

-	if (data->iop.fmt == APPLE_DART) {
+	if (data->iop.cfg.fmt == APPLE_DART) {
 		if (!(prot & IOMMU_WRITE))
 			pte |= APPLE_DART1_PTE_PROT_NO_WRITE;
 		if (!(prot & IOMMU_READ))
 			pte |= APPLE_DART1_PTE_PROT_NO_READ;
 	}
-	if (data->iop.fmt == APPLE_DART2) {
+	if (data->iop.cfg.fmt == APPLE_DART2) {
 		if (!(prot & IOMMU_WRITE))
 			pte |= APPLE_DART2_PTE_PROT_NO_WRITE;
 		if (!(prot & IOMMU_READ))
diff --git a/drivers/iommu/io-pgtable.c b/drivers/iommu/io-pgtable.c
index b843fcd365d2..79e459f95012 100644
--- a/drivers/iommu/io-pgtable.c
+++ b/drivers/iommu/io-pgtable.c
@@ -34,17 +34,16 @@ io_pgtable_init_table[IO_PGTABLE_NUM_FMTS] = {
 #endif
 };

-struct io_pgtable_ops *alloc_io_pgtable_ops(enum io_pgtable_fmt fmt,
-					    struct io_pgtable_cfg *cfg,
+struct io_pgtable_ops *alloc_io_pgtable_ops(struct io_pgtable_cfg *cfg,
 					    void *cookie)
 {
 	struct io_pgtable *iop;
 	const struct io_pgtable_init_fns *fns;

-	if (fmt >= IO_PGTABLE_NUM_FMTS)
+	if (cfg->fmt >= IO_PGTABLE_NUM_FMTS)
 		return NULL;

-	fns = io_pgtable_init_table[fmt];
+	fns = io_pgtable_init_table[cfg->fmt];
 	if (!fns)
 		return NULL;

@@ -52,7 +51,6 @@ struct io_pgtable_ops *alloc_io_pgtable_ops(enum io_pgtable_fmt fmt,
 	if (!iop)
 		return NULL;

-	iop->fmt	= fmt;
 	iop->cookie	= cookie;
 	iop->cfg	= *cfg;

@@ -73,6 +71,6 @@ void free_io_pgtable_ops(struct io_pgtable_ops *ops)
 	iop = io_pgtable_ops_to_pgtable(ops);
 	io_pgtable_tlb_flush_all(iop);
-	io_pgtable_init_table[iop->fmt]->free(iop);
+	io_pgtable_init_table[iop->cfg.fmt]->free(iop);
 }
 EXPORT_SYMBOL_GPL(free_io_pgtable_ops);
diff --git a/drivers/iommu/ipmmu-vmsa.c b/drivers/iommu/ipmmu-vmsa.c
index a003bd5fc65c..4a1927489635 100644
--- a/drivers/iommu/ipmmu-vmsa.c
+++ b/drivers/iommu/ipmmu-vmsa.c
@@ -447,6 +447,7 @@ static int ipmmu_domain_init_context(struct ipmmu_vmsa_domain *domain)
 	 */
 	domain->cfg.coherent_walk = false;
 	domain->cfg.iommu_dev = domain->mmu->root->dev;
+	domain->cfg.fmt = ARM_32_LPAE_S1;

 	/*
 	 * Find an unused context.
@@ -457,8 +458,7 @@ static int ipmmu_domain_init_context(struct ipmmu_vmsa_domain *domain)

 	domain->context_id = ret;

-	domain->iop = alloc_io_pgtable_ops(ARM_32_LPAE_S1, &domain->cfg,
-					   domain);
+	domain->iop = alloc_io_pgtable_ops(&domain->cfg, domain);
 	if (!domain->iop) {
 		ipmmu_domain_free_context(domain->mmu->root,
 					  domain->context_id);
diff --git a/drivers/iommu/msm_iommu.c b/drivers/iommu/msm_iommu.c
index c60624910872..2c05a84ec1bf 100644
--- a/drivers/iommu/msm_iommu.c
+++ b/drivers/iommu/msm_iommu.c
@@ -342,6 +342,7 @@ static int msm_iommu_domain_config(struct msm_priv *priv)
 	spin_lock_init(&priv->pgtlock);

 	priv->cfg = (struct io_pgtable_cfg) {
+		.fmt = ARM_V7S,
 		.pgsize_bitmap = msm_iommu_ops.pgsize_bitmap,
 		.ias = 32,
 		.oas = 32,
@@ -349,7 +350,7 @@ static int msm_iommu_domain_config(struct msm_priv *priv)
 		.iommu_dev = priv->dev,
 	};

-	priv->iop = alloc_io_pgtable_ops(ARM_V7S, &priv->cfg, priv);
+	priv->iop = alloc_io_pgtable_ops(&priv->cfg, priv);
 	if (!priv->iop) {
 		dev_err(priv->dev, "Failed to allocate pgtable\n");
 		return -EINVAL;
diff --git a/drivers/iommu/mtk_iommu.c b/drivers/iommu/mtk_iommu.c
index 2badd6acfb23..0d754d94ae52 100644
--- a/drivers/iommu/mtk_iommu.c
+++ b/drivers/iommu/mtk_iommu.c
@@ -598,6 +598,7 @@ static int mtk_iommu_domain_finalise(struct mtk_iommu_domain *dom,
 	}

 	dom->cfg = (struct io_pgtable_cfg) {
+		.fmt = ARM_V7S,
 		.quirks = IO_PGTABLE_QUIRK_ARM_NS |
 			IO_PGTABLE_QUIRK_NO_PERMS |
 			IO_PGTABLE_QUIRK_ARM_MTK_EXT,
@@ -614,7 +615,7 @@ static int mtk_iommu_domain_finalise(struct mtk_iommu_domain *dom,
 	else
 		dom->cfg.oas = 35;

-	dom->iop = alloc_io_pgtable_ops(ARM_V7S, &dom->cfg, data);
+	dom->iop = alloc_io_pgtable_ops(&dom->cfg, data);
 	if (!dom->iop) {
 		dev_err(data->dev, "Failed to alloc io pgtable\n");
 		return -ENOMEM;

From patchwork Wed Feb 1 12:52:48 2023
From: Jean-Philippe Brucker
To: maz@kernel.org, catalin.marinas@arm.com, will@kernel.org, joro@8bytes.org
Cc: robin.murphy@arm.com, james.morse@arm.com, suzuki.poulose@arm.com,
    oliver.upton@linux.dev, yuzenghui@huawei.com, smostafa@google.com,
    dbrazdil@google.com, ryan.roberts@arm.com,
    linux-arm-kernel@lists.infradead.org, kvmarm@lists.linux.dev,
    iommu@lists.linux.dev
Subject: [RFC PATCH 04/45] iommu/io-pgtable: Add configure() operation
Date: Wed, 1 Feb 2023 12:52:48 +0000
Message-Id: <20230201125328.2186498-5-jean-philippe@linaro.org>
In-Reply-To: <20230201125328.2186498-1-jean-philippe@linaro.org>
References: <20230201125328.2186498-1-jean-philippe@linaro.org>

Allow IOMMU drivers to create the io-pgtable configuration without
allocating any tables. This will be used by the SMMUv3-KVM driver to
initialize a config and pass it to KVM.
Signed-off-by: Jean-Philippe Brucker
---
 include/linux/io-pgtable.h     | 14 +++++++++++
 drivers/iommu/io-pgtable-arm.c | 46 ++++++++++++++++++++++++++--------
 drivers/iommu/io-pgtable.c     | 15 +++++++++++
 3 files changed, 65 insertions(+), 10 deletions(-)

diff --git a/include/linux/io-pgtable.h b/include/linux/io-pgtable.h
index 1b0c26241a78..ee6484d7a5e0 100644
--- a/include/linux/io-pgtable.h
+++ b/include/linux/io-pgtable.h
@@ -191,6 +191,18 @@ struct io_pgtable_ops *alloc_io_pgtable_ops(struct io_pgtable_cfg *cfg,
  */
 void free_io_pgtable_ops(struct io_pgtable_ops *ops);
 
+/**
+ * io_pgtable_configure - Create page table config
+ *
+ * @cfg: The page table configuration.
+ * @pgd_size: On success, size of the top-level table in bytes.
+ *
+ * Initialize @cfg in the same way as alloc_io_pgtable_ops(), without
+ * allocating anything.
+ *
+ * Not all io_pgtable drivers implement this operation.
+ */
+int io_pgtable_configure(struct io_pgtable_cfg *cfg, size_t *pgd_size);
+
 /*
  * Internal structures for page table allocator implementations.
@@ -241,10 +253,12 @@ io_pgtable_tlb_add_page(struct io_pgtable *iop,
  *
  * @alloc: Allocate a set of page tables described by cfg.
  * @free:  Free the page tables associated with iop.
+ * @configure: Create the configuration without allocating anything. Optional.
  */
 struct io_pgtable_init_fns {
 	struct io_pgtable *(*alloc)(struct io_pgtable_cfg *cfg, void *cookie);
 	void (*free)(struct io_pgtable *iop);
+	int (*configure)(struct io_pgtable_cfg *cfg, size_t *pgd_size);
 };
 
 extern struct io_pgtable_init_fns io_pgtable_arm_32_lpae_s1_init_fns;
diff --git a/drivers/iommu/io-pgtable-arm.c b/drivers/iommu/io-pgtable-arm.c
index b76b903400de..c412500efadf 100644
--- a/drivers/iommu/io-pgtable-arm.c
+++ b/drivers/iommu/io-pgtable-arm.c
@@ -118,6 +118,18 @@ arm_64_lpae_alloc_pgtable_s1(struct io_pgtable_cfg *cfg, void *cookie)
 	return NULL;
 }
 
+static int arm_64_lpae_configure_s1(struct io_pgtable_cfg *cfg, size_t *pgd_size)
+{
+	int ret;
+	struct arm_lpae_io_pgtable data = {};
+
+	ret = arm_lpae_init_pgtable_s1(cfg, &data);
+	if (ret)
+		return ret;
+	*pgd_size = sizeof(arm_lpae_iopte) << data.pgd_bits;
+	return 0;
+}
+
 static struct io_pgtable *
 arm_64_lpae_alloc_pgtable_s2(struct io_pgtable_cfg *cfg, void *cookie)
 {
@@ -148,6 +160,18 @@ arm_64_lpae_alloc_pgtable_s2(struct io_pgtable_cfg *cfg, void *cookie)
 	return NULL;
 }
 
+static int arm_64_lpae_configure_s2(struct io_pgtable_cfg *cfg, size_t *pgd_size)
+{
+	int ret;
+	struct arm_lpae_io_pgtable data = {};
+
+	ret = arm_lpae_init_pgtable_s2(cfg, &data);
+	if (ret)
+		return ret;
+	*pgd_size = sizeof(arm_lpae_iopte) << data.pgd_bits;
+	return 0;
+}
+
 static struct io_pgtable *
 arm_32_lpae_alloc_pgtable_s1(struct io_pgtable_cfg *cfg, void *cookie)
 {
@@ -231,28 +255,30 @@ arm_mali_lpae_alloc_pgtable(struct io_pgtable_cfg *cfg, void *cookie)
 }
 
 struct io_pgtable_init_fns io_pgtable_arm_64_lpae_s1_init_fns = {
-	.alloc	= arm_64_lpae_alloc_pgtable_s1,
-	.free	= arm_lpae_free_pgtable,
+	.alloc		= arm_64_lpae_alloc_pgtable_s1,
+	.free		= arm_lpae_free_pgtable,
+	.configure	= arm_64_lpae_configure_s1,
 };
 
 struct io_pgtable_init_fns io_pgtable_arm_64_lpae_s2_init_fns = {
-	.alloc	= arm_64_lpae_alloc_pgtable_s2,
-	.free	= arm_lpae_free_pgtable,
+	.alloc		= arm_64_lpae_alloc_pgtable_s2,
+	.free		= arm_lpae_free_pgtable,
+	.configure	= arm_64_lpae_configure_s2,
 };
 
 struct io_pgtable_init_fns io_pgtable_arm_32_lpae_s1_init_fns = {
-	.alloc	= arm_32_lpae_alloc_pgtable_s1,
-	.free	= arm_lpae_free_pgtable,
+	.alloc		= arm_32_lpae_alloc_pgtable_s1,
+	.free		= arm_lpae_free_pgtable,
 };
 
 struct io_pgtable_init_fns io_pgtable_arm_32_lpae_s2_init_fns = {
-	.alloc	= arm_32_lpae_alloc_pgtable_s2,
-	.free	= arm_lpae_free_pgtable,
+	.alloc		= arm_32_lpae_alloc_pgtable_s2,
+	.free		= arm_lpae_free_pgtable,
 };
 
 struct io_pgtable_init_fns io_pgtable_arm_mali_lpae_init_fns = {
-	.alloc	= arm_mali_lpae_alloc_pgtable,
-	.free	= arm_lpae_free_pgtable,
+	.alloc		= arm_mali_lpae_alloc_pgtable,
+	.free		= arm_lpae_free_pgtable,
 };
 
 #ifdef CONFIG_IOMMU_IO_PGTABLE_LPAE_SELFTEST
diff --git a/drivers/iommu/io-pgtable.c b/drivers/iommu/io-pgtable.c
index 79e459f95012..2aba691db1da 100644
--- a/drivers/iommu/io-pgtable.c
+++ b/drivers/iommu/io-pgtable.c
@@ -74,3 +74,18 @@ void free_io_pgtable_ops(struct io_pgtable_ops *ops)
 	io_pgtable_init_table[iop->cfg.fmt]->free(iop);
 }
 EXPORT_SYMBOL_GPL(free_io_pgtable_ops);
+
+int io_pgtable_configure(struct io_pgtable_cfg *cfg, size_t *pgd_size)
+{
+	const struct io_pgtable_init_fns *fns;
+
+	if (cfg->fmt >= IO_PGTABLE_NUM_FMTS)
+		return -EINVAL;
+
+	fns = io_pgtable_init_table[cfg->fmt];
+	if (!fns || !fns->configure)
+		return -EOPNOTSUPP;
+
+	return fns->configure(cfg, pgd_size);
+}
+EXPORT_SYMBOL_GPL(io_pgtable_configure);

From patchwork Wed Feb 1 12:52:49 2023
X-Patchwork-Submitter: Jean-Philippe Brucker
X-Patchwork-Id: 13124370
From: Jean-Philippe Brucker
To: maz@kernel.org, catalin.marinas@arm.com, will@kernel.org, joro@8bytes.org
Cc: robin.murphy@arm.com, james.morse@arm.com, suzuki.poulose@arm.com,
    oliver.upton@linux.dev, yuzenghui@huawei.com, smostafa@google.com,
    dbrazdil@google.com, ryan.roberts@arm.com,
    linux-arm-kernel@lists.infradead.org, kvmarm@lists.linux.dev,
    iommu@lists.linux.dev, Abhinav Kumar, Alyssa Rosenzweig, Andy Gross,
    Bjorn Andersson, Daniel Vetter, David Airlie, Dmitry Baryshkov,
    Hector Martin, Konrad Dybcio, Matthias Brugger, Rob Clark, Rob Herring,
    Sean Paul, Steven Price, Suravee Suthikulpanit, Sven Peter,
    Tomeu Vizoso, Yong Wu
Subject: [RFC PATCH 05/45] iommu/io-pgtable: Split io_pgtable structure
Date: Wed, 1 Feb 2023 12:52:49 +0000
Message-Id: <20230201125328.2186498-6-jean-philippe@linaro.org>
In-Reply-To: <20230201125328.2186498-1-jean-philippe@linaro.org>
References: <20230201125328.2186498-1-jean-philippe@linaro.org>

The io_pgtable structure contains all information needed for io-pgtable
ops map() and unmap(), including a static configuration, driver-facing
ops, TLB callbacks and the PGD pointer. Most of these are common to all
sets of page tables for a given configuration, and really only need one
instance.

Split the structure in two:

* io_pgtable_params contains information that is common to all sets of
  page tables for a given io_pgtable_cfg.
* io_pgtable contains information that is different for each set of page
  tables, namely the PGD and the IOMMU driver cookie passed to TLB
  callbacks.

Keep essentially the same interface for IOMMU drivers, but move it
behind a set of helpers.

The goal is to optimize for space, in order to allocate less memory in
the KVM SMMU driver. While storing 64k io-pgtables with identical
configuration would previously require 10MB, it is now 512kB because the
driver only needs to store the pgd for each domain.

Note that the io_pgtable_cfg still contains the TTBRs, which are
specific to a set of page tables. Most of them can be removed, since
IOMMU drivers can trivially obtain them with virt_to_phys(iop->pgd).
Some architectures do have static configuration bits in the TTBR that
need to be kept.

Unfortunately the split does add an additional dereference which
degrades performance slightly. Running a single-threaded dma-map
benchmark on a server with SMMUv3, I measured a regression of 7-9ns for
map() and 32-78ns for unmap(), which is a slowdown of about 4% and 8%
respectively.
Cc: Abhinav Kumar
Cc: Alyssa Rosenzweig
Cc: Andy Gross
Cc: Bjorn Andersson
Cc: Daniel Vetter
Cc: David Airlie
Cc: Dmitry Baryshkov
Cc: Hector Martin
Cc: Konrad Dybcio
Cc: Matthias Brugger
Cc: Rob Clark
Cc: Rob Herring
Cc: Sean Paul
Cc: Steven Price
Cc: Suravee Suthikulpanit
Cc: Sven Peter
Cc: Tomeu Vizoso
Cc: Yong Wu
Signed-off-by: Jean-Philippe Brucker
---
 drivers/gpu/drm/panfrost/panfrost_device.h  |   2 +-
 drivers/iommu/amd/amd_iommu_types.h         |  17 +-
 drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.h |   3 +-
 drivers/iommu/arm/arm-smmu/arm-smmu.h       |   2 +-
 include/linux/io-pgtable-arm.h              |  12 +-
 include/linux/io-pgtable.h                  |  94 +++++++---
 drivers/gpu/drm/msm/msm_iommu.c             |  21 ++-
 drivers/gpu/drm/panfrost/panfrost_mmu.c     |  20 +--
 drivers/iommu/amd/io_pgtable.c              |  26 +--
 drivers/iommu/amd/io_pgtable_v2.c           |  43 ++---
 drivers/iommu/amd/iommu.c                   |  28 ++-
 drivers/iommu/apple-dart.c                  |  36 ++--
 drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c |  34 ++--
 drivers/iommu/arm/arm-smmu/arm-smmu-qcom.c  |   7 +-
 drivers/iommu/arm/arm-smmu/arm-smmu.c       |  40 ++---
 drivers/iommu/arm/arm-smmu/qcom_iommu.c     |  40 ++---
 drivers/iommu/io-pgtable-arm-common.c       |  80 +++++----
 drivers/iommu/io-pgtable-arm-v7s.c          | 189 ++++++++++----------
 drivers/iommu/io-pgtable-arm.c              | 158 ++++++++--------
 drivers/iommu/io-pgtable-dart.c             |  97 +++++-----
 drivers/iommu/io-pgtable.c                  |  36 ++--
 drivers/iommu/ipmmu-vmsa.c                  |  18 +-
 drivers/iommu/msm_iommu.c                   |  17 +-
 drivers/iommu/mtk_iommu.c                   |  13 +-
 24 files changed, 519 insertions(+), 514 deletions(-)

diff --git a/drivers/gpu/drm/panfrost/panfrost_device.h b/drivers/gpu/drm/panfrost/panfrost_device.h
index 8b25278f34c8..8a610c4b8f03 100644
--- a/drivers/gpu/drm/panfrost/panfrost_device.h
+++ b/drivers/gpu/drm/panfrost/panfrost_device.h
@@ -126,7 +126,7 @@ struct panfrost_mmu {
 	struct panfrost_device *pfdev;
 	struct kref refcount;
 	struct io_pgtable_cfg pgtbl_cfg;
-	struct io_pgtable_ops *pgtbl_ops;
+	struct io_pgtable pgtbl;
 	struct drm_mm mm;
 	spinlock_t mm_lock;
 	int as;
diff --git a/drivers/iommu/amd/amd_iommu_types.h b/drivers/iommu/amd/amd_iommu_types.h
index 3d684190b4d5..5920a556f7ec 100644
--- a/drivers/iommu/amd/amd_iommu_types.h
+++ b/drivers/iommu/amd/amd_iommu_types.h
@@ -516,10 +516,10 @@ struct amd_irte_ops;
 #define AMD_IOMMU_FLAG_TRANS_PRE_ENABLED	(1 << 0)
 
 #define io_pgtable_to_data(x) \
-	container_of((x), struct amd_io_pgtable, iop)
+	container_of((x), struct amd_io_pgtable, iop_params)
 
 #define io_pgtable_ops_to_data(x) \
-	io_pgtable_to_data(io_pgtable_ops_to_pgtable(x))
+	io_pgtable_to_data(io_pgtable_ops_to_params(x))
 
 #define io_pgtable_ops_to_domain(x) \
 	container_of(io_pgtable_ops_to_data(x), \
@@ -529,12 +529,13 @@ struct amd_irte_ops;
 	container_of((x), struct amd_io_pgtable, pgtbl_cfg)
 
 struct amd_io_pgtable {
-	struct io_pgtable_cfg	pgtbl_cfg;
-	struct io_pgtable	iop;
-	int			mode;
-	u64			*root;
-	atomic64_t		pt_root;	/* pgtable root and pgtable mode */
-	u64			*pgd;		/* v2 pgtable pgd pointer */
+	struct io_pgtable_cfg		pgtbl_cfg;
+	struct io_pgtable		iop;
+	struct io_pgtable_params	iop_params;
+	int				mode;
+	u64				*root;
+	atomic64_t			pt_root;	/* pgtable root and pgtable mode */
+	u64				*pgd;		/* v2 pgtable pgd pointer */
 };
 
 /*
diff --git a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.h b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.h
index 8d772ea8a583..cec3c8103404 100644
--- a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.h
+++ b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.h
@@ -10,6 +10,7 @@
 
 #include
 #include
+#include
 #include
 #include
 #include
@@ -710,7 +711,7 @@ struct arm_smmu_domain {
 	struct arm_smmu_device		*smmu;
 	struct mutex			init_mutex; /* Protects smmu pointer */
 
-	struct io_pgtable_ops		*pgtbl_ops;
+	struct io_pgtable		pgtbl;
 	bool				stall_enabled;
 	atomic_t			nr_ats_masters;
diff --git a/drivers/iommu/arm/arm-smmu/arm-smmu.h b/drivers/iommu/arm/arm-smmu/arm-smmu.h
index 703fd5817ec1..249825fc71ac 100644
--- a/drivers/iommu/arm/arm-smmu/arm-smmu.h
+++ b/drivers/iommu/arm/arm-smmu/arm-smmu.h
@@ -366,7 +366,7 @@ enum arm_smmu_domain_stage {
 struct arm_smmu_domain {
 	struct arm_smmu_device		*smmu;
-	struct io_pgtable_ops		*pgtbl_ops;
+	struct io_pgtable		pgtbl;
 	unsigned long			pgtbl_quirks;
 	const struct iommu_flush_ops	*flush_ops;
 	struct arm_smmu_cfg		cfg;
diff --git a/include/linux/io-pgtable-arm.h b/include/linux/io-pgtable-arm.h
index 42202bc0ffa2..5199bd9851b6 100644
--- a/include/linux/io-pgtable-arm.h
+++ b/include/linux/io-pgtable-arm.h
@@ -9,13 +9,11 @@ extern bool selftest_running;
 typedef u64 arm_lpae_iopte;
 
 struct arm_lpae_io_pgtable {
-	struct io_pgtable	iop;
+	struct io_pgtable_params iop;
 
-	int			pgd_bits;
-	int			start_level;
-	int			bits_per_level;
-
-	void			*pgd;
+	int			pgd_bits;
+	int			start_level;
+	int			bits_per_level;
 };
 
 /* Struct accessors */
@@ -23,7 +21,7 @@ struct arm_lpae_io_pgtable {
 	container_of((x), struct arm_lpae_io_pgtable, iop)
 
 #define io_pgtable_ops_to_data(x) \
-	io_pgtable_to_data(io_pgtable_ops_to_pgtable(x))
+	io_pgtable_to_data(io_pgtable_ops_to_params(x))
 
 /*
  * Calculate the right shift amount to get to the portion describing level l
diff --git a/include/linux/io-pgtable.h b/include/linux/io-pgtable.h
index ee6484d7a5e0..cce5ddbf71c7 100644
--- a/include/linux/io-pgtable.h
+++ b/include/linux/io-pgtable.h
@@ -149,6 +149,20 @@ struct io_pgtable_cfg {
 	};
 };
 
+/**
+ * struct io_pgtable - Structure describing a set of page tables.
+ *
+ * @ops: The page table operations in use for this set of page tables.
+ * @cookie: An opaque token provided by the IOMMU driver and passed back to
+ *          any callback routines.
+ * @pgd: Virtual address of the page directory.
+ */
+struct io_pgtable {
+	struct io_pgtable_ops	*ops;
+	void			*cookie;
+	void			*pgd;
+};
+
 /**
  * struct io_pgtable_ops - Page table manipulation API for IOMMU drivers.
  *
@@ -160,36 +174,64 @@ struct io_pgtable_cfg {
  * the same names.
  */
 struct io_pgtable_ops {
-	int (*map_pages)(struct io_pgtable_ops *ops, unsigned long iova,
+	int (*map_pages)(struct io_pgtable *iop, unsigned long iova,
 			 phys_addr_t paddr, size_t pgsize, size_t pgcount,
 			 int prot, gfp_t gfp, size_t *mapped);
-	size_t (*unmap_pages)(struct io_pgtable_ops *ops, unsigned long iova,
+	size_t (*unmap_pages)(struct io_pgtable *iop, unsigned long iova,
 			      size_t pgsize, size_t pgcount,
 			      struct iommu_iotlb_gather *gather);
-	phys_addr_t (*iova_to_phys)(struct io_pgtable_ops *ops,
-				    unsigned long iova);
+	phys_addr_t (*iova_to_phys)(struct io_pgtable *iop, unsigned long iova);
 };
 
+static inline int
+iopt_map_pages(struct io_pgtable *iop, unsigned long iova, phys_addr_t paddr,
+	       size_t pgsize, size_t pgcount, int prot, gfp_t gfp,
+	       size_t *mapped)
+{
+	if (!iop->ops || !iop->ops->map_pages)
+		return -EINVAL;
+	return iop->ops->map_pages(iop, iova, paddr, pgsize, pgcount, prot, gfp,
+				   mapped);
+}
+
+static inline size_t
+iopt_unmap_pages(struct io_pgtable *iop, unsigned long iova, size_t pgsize,
+		 size_t pgcount, struct iommu_iotlb_gather *gather)
+{
+	if (!iop->ops || !iop->ops->map_pages)
+		return 0;
+	return iop->ops->unmap_pages(iop, iova, pgsize, pgcount, gather);
+}
+
+static inline phys_addr_t
+iopt_iova_to_phys(struct io_pgtable *iop, unsigned long iova)
+{
+	if (!iop->ops || !iop->ops->iova_to_phys)
+		return 0;
+	return iop->ops->iova_to_phys(iop, iova);
+}
+
 /**
  * alloc_io_pgtable_ops() - Allocate a page table allocator for use by an IOMMU.
  *
+ * @iop:    The page table object, filled with the allocated ops on success
  * @cfg:    The page table configuration. This will be modified to represent
  *          the configuration actually provided by the allocator (e.g. the
  *          pgsize_bitmap may be restricted).
  * @cookie: An opaque token provided by the IOMMU driver and passed back to
  *          the callback routines in cfg->tlb.
  */
-struct io_pgtable_ops *alloc_io_pgtable_ops(struct io_pgtable_cfg *cfg,
-					    void *cookie);
+int alloc_io_pgtable_ops(struct io_pgtable *iop, struct io_pgtable_cfg *cfg,
+			 void *cookie);
 
 /**
- * free_io_pgtable_ops() - Free an io_pgtable_ops structure. The caller
+ * free_io_pgtable_ops() - Free the page table. The caller
  *                         *must* ensure that the page table is no longer
  *                         live, but the TLB can be dirty.
  *
- * @ops: The ops returned from alloc_io_pgtable_ops.
+ * @iop: The iop object passed to alloc_io_pgtable_ops
  */
-void free_io_pgtable_ops(struct io_pgtable_ops *ops);
+void free_io_pgtable_ops(struct io_pgtable *iop);
 
 /**
  * io_pgtable_configure - Create page table config
@@ -209,42 +251,41 @@ int io_pgtable_configure(struct io_pgtable_cfg *cfg, size_t *pgd_size);
  */
 
 /**
- * struct io_pgtable - Internal structure describing a set of page tables.
+ * struct io_pgtable_params - Internal structure describing parameters for a
+ *                            given page table configuration
  *
- * @cookie: An opaque token provided by the IOMMU driver and passed back to
- *          any callback routines.
  * @cfg:    A copy of the page table configuration.
  * @ops:    The page table operations in use for this set of page tables.
  */
-struct io_pgtable {
-	void			*cookie;
+struct io_pgtable_params {
 	struct io_pgtable_cfg	cfg;
 	struct io_pgtable_ops	ops;
 };
 
-#define io_pgtable_ops_to_pgtable(x) container_of((x), struct io_pgtable, ops)
+#define io_pgtable_ops_to_params(x) \
+	container_of((x), struct io_pgtable_params, ops)
 
-static inline void io_pgtable_tlb_flush_all(struct io_pgtable *iop)
+static inline void io_pgtable_tlb_flush_all(struct io_pgtable_cfg *cfg,
+					    struct io_pgtable *iop)
 {
-	if (iop->cfg.tlb && iop->cfg.tlb->tlb_flush_all)
-		iop->cfg.tlb->tlb_flush_all(iop->cookie);
+	if (cfg->tlb && cfg->tlb->tlb_flush_all)
+		cfg->tlb->tlb_flush_all(iop->cookie);
 }
 
 static inline void
-io_pgtable_tlb_flush_walk(struct io_pgtable *iop, unsigned long iova,
-			  size_t size, size_t granule)
+io_pgtable_tlb_flush_walk(struct io_pgtable_cfg *cfg, struct io_pgtable *iop,
+			  unsigned long iova, size_t size, size_t granule)
 {
-	if (iop->cfg.tlb && iop->cfg.tlb->tlb_flush_walk)
-		iop->cfg.tlb->tlb_flush_walk(iova, size, granule, iop->cookie);
+	if (cfg->tlb && cfg->tlb->tlb_flush_walk)
+		cfg->tlb->tlb_flush_walk(iova, size, granule, iop->cookie);
 }
 
 static inline void
-io_pgtable_tlb_add_page(struct io_pgtable *iop,
+io_pgtable_tlb_add_page(struct io_pgtable_cfg *cfg, struct io_pgtable *iop,
 			struct iommu_iotlb_gather *gather, unsigned long iova,
 			size_t granule)
 {
-	if (iop->cfg.tlb && iop->cfg.tlb->tlb_add_page)
-		iop->cfg.tlb->tlb_add_page(gather, iova, granule, iop->cookie);
+	if (cfg->tlb && cfg->tlb->tlb_add_page)
+		cfg->tlb->tlb_add_page(gather, iova, granule, iop->cookie);
 }
 
 /**
@@ -256,7 +297,8 @@ io_pgtable_tlb_add_page(struct io_pgtable *iop,
  * @configure: Create the configuration without allocating anything. Optional.
  */
 struct io_pgtable_init_fns {
-	struct io_pgtable *(*alloc)(struct io_pgtable_cfg *cfg, void *cookie);
+	int (*alloc)(struct io_pgtable *iop, struct io_pgtable_cfg *cfg,
+		     void *cookie);
 	void (*free)(struct io_pgtable *iop);
 	int (*configure)(struct io_pgtable_cfg *cfg, size_t *pgd_size);
 };
diff --git a/drivers/gpu/drm/msm/msm_iommu.c b/drivers/gpu/drm/msm/msm_iommu.c
index e9c6f281e3dd..e372ca6cd79c 100644
--- a/drivers/gpu/drm/msm/msm_iommu.c
+++ b/drivers/gpu/drm/msm/msm_iommu.c
@@ -20,7 +20,7 @@ struct msm_iommu {
 struct msm_iommu_pagetable {
 	struct msm_mmu base;
 	struct msm_mmu *parent;
-	struct io_pgtable_ops *pgtbl_ops;
+	struct io_pgtable pgtbl;
 	unsigned long pgsize_bitmap;	/* Bitmap of page sizes in use */
 	phys_addr_t ttbr;
 	u32 asid;
@@ -90,14 +90,14 @@ static int msm_iommu_pagetable_unmap(struct msm_mmu *mmu, u64 iova,
 		size_t size)
 {
 	struct msm_iommu_pagetable *pagetable = to_pagetable(mmu);
-	struct io_pgtable_ops *ops = pagetable->pgtbl_ops;
 
 	while (size) {
 		size_t unmapped, pgsize, count;
 
 		pgsize = calc_pgsize(pagetable, iova, iova, size, &count);
 
-		unmapped = ops->unmap_pages(ops, iova, pgsize, count, NULL);
+		unmapped = iopt_unmap_pages(&pagetable->pgtbl, iova, pgsize,
+					    count, NULL);
 		if (!unmapped)
 			break;
 
@@ -114,7 +114,7 @@ static int msm_iommu_pagetable_map(struct msm_mmu *mmu, u64 iova,
 		struct sg_table *sgt, size_t len, int prot)
 {
 	struct msm_iommu_pagetable *pagetable = to_pagetable(mmu);
-	struct io_pgtable_ops *ops = pagetable->pgtbl_ops;
+	struct io_pgtable *iop = &pagetable->pgtbl;
 	struct scatterlist *sg;
 	u64 addr = iova;
 	unsigned int i;
@@ -129,7 +129,7 @@ static int msm_iommu_pagetable_map(struct msm_mmu *mmu, u64 iova,
 
 			pgsize = calc_pgsize(pagetable, addr, phys, size, &count);
 
-			ret = ops->map_pages(ops, addr, phys, pgsize, count,
+			ret = iopt_map_pages(iop, addr, phys, pgsize, count,
 					     prot, GFP_KERNEL, &mapped);
 
 		/* map_pages could fail after mapping some of the pages,
@@ -163,7 +163,7 @@ static void msm_iommu_pagetable_destroy(struct msm_mmu *mmu)
 	if (atomic_dec_return(&iommu->pagetables) == 0)
 		adreno_smmu->set_ttbr0_cfg(adreno_smmu->cookie, NULL);
 
-	free_io_pgtable_ops(pagetable->pgtbl_ops);
+	free_io_pgtable_ops(&pagetable->pgtbl);
 	kfree(pagetable);
 }
 
@@ -258,11 +258,10 @@ struct msm_mmu *msm_iommu_pagetable_create(struct msm_mmu *parent)
 	ttbr0_cfg.quirks &= ~IO_PGTABLE_QUIRK_ARM_TTBR1;
 	ttbr0_cfg.tlb = &null_tlb_ops;
 
-	pagetable->pgtbl_ops = alloc_io_pgtable_ops(&ttbr0_cfg, iommu->domain);
-
-	if (!pagetable->pgtbl_ops) {
+	ret = alloc_io_pgtable_ops(&pagetable->pgtbl, &ttbr0_cfg, iommu->domain);
+	if (ret) {
 		kfree(pagetable);
-		return ERR_PTR(-ENOMEM);
+		return ERR_PTR(ret);
 	}
 
 	/*
@@ -275,7 +274,7 @@ struct msm_mmu *msm_iommu_pagetable_create(struct msm_mmu *parent)
 
 	ret = adreno_smmu->set_ttbr0_cfg(adreno_smmu->cookie, &ttbr0_cfg);
 	if (ret) {
-		free_io_pgtable_ops(pagetable->pgtbl_ops);
+		free_io_pgtable_ops(&pagetable->pgtbl);
 		kfree(pagetable);
 		return ERR_PTR(ret);
 	}
diff --git a/drivers/gpu/drm/panfrost/panfrost_mmu.c b/drivers/gpu/drm/panfrost/panfrost_mmu.c
index 31bdb5d46244..118b49ab120f 100644
--- a/drivers/gpu/drm/panfrost/panfrost_mmu.c
+++ b/drivers/gpu/drm/panfrost/panfrost_mmu.c
@@ -290,7 +290,6 @@ static int mmu_map_sg(struct panfrost_device *pfdev, struct panfrost_mmu *mmu,
 {
 	unsigned int count;
 	struct scatterlist *sgl;
-	struct io_pgtable_ops *ops = mmu->pgtbl_ops;
 	u64 start_iova = iova;
 
 	for_each_sgtable_dma_sg(sgt, sgl, count) {
@@ -303,8 +302,8 @@ static int mmu_map_sg(struct panfrost_device *pfdev, struct panfrost_mmu *mmu,
 			size_t pgcount, mapped = 0;
 			size_t pgsize = get_pgsize(iova | paddr, len, &pgcount);
 
-			ops->map_pages(ops, iova, paddr, pgsize, pgcount, prot,
-				       GFP_KERNEL, &mapped);
+			iopt_map_pages(&mmu->pgtbl, iova, paddr, pgsize,
+				       pgcount, prot, GFP_KERNEL, &mapped);
 			/* Don't get stuck if things have gone wrong */
 			mapped = max(mapped, pgsize);
 			iova += mapped;
@@ -349,7 +348,7 @@ void panfrost_mmu_unmap(struct panfrost_gem_mapping *mapping)
 	struct panfrost_gem_object *bo = mapping->obj;
 	struct drm_gem_object *obj = &bo->base.base;
 	struct panfrost_device *pfdev = to_panfrost_device(obj->dev);
-	struct io_pgtable_ops *ops = mapping->mmu->pgtbl_ops;
+	struct io_pgtable *iop = &mapping->mmu->pgtbl;
 	u64 iova = mapping->mmnode.start << PAGE_SHIFT;
 	size_t len = mapping->mmnode.size << PAGE_SHIFT;
 	size_t unmapped_len = 0;
@@ -366,8 +365,8 @@ void panfrost_mmu_unmap(struct panfrost_gem_mapping *mapping)
 		if (bo->is_heap)
 			pgcount = 1;
-		if (!bo->is_heap || ops->iova_to_phys(ops, iova)) {
-			unmapped_page = ops->unmap_pages(ops, iova, pgsize, pgcount, NULL);
+		if (!bo->is_heap || iopt_iova_to_phys(iop, iova)) {
+			unmapped_page = iopt_unmap_pages(iop, iova, pgsize, pgcount, NULL);
 			WARN_ON(unmapped_page != pgsize * pgcount);
 		}
 		iova += pgsize * pgcount;
@@ -560,7 +559,7 @@ static void panfrost_mmu_release_ctx(struct kref *kref)
 	}
 	spin_unlock(&pfdev->as_lock);
 
-	free_io_pgtable_ops(mmu->pgtbl_ops);
+	free_io_pgtable_ops(&mmu->pgtbl);
 	drm_mm_takedown(&mmu->mm);
 	kfree(mmu);
 }
@@ -605,6 +604,7 @@ static void panfrost_drm_mm_color_adjust(const struct drm_mm_node *node,
 
 struct panfrost_mmu *panfrost_mmu_ctx_create(struct panfrost_device *pfdev)
 {
+	int ret;
 	struct panfrost_mmu *mmu;
 
 	mmu = kzalloc(sizeof(*mmu), GFP_KERNEL);
@@ -631,10 +631,10 @@ struct panfrost_mmu *panfrost_mmu_ctx_create(struct panfrost_device *pfdev)
 		.iommu_dev	= pfdev->dev,
 	};
 
-	mmu->pgtbl_ops = alloc_io_pgtable_ops(&mmu->pgtbl_cfg, mmu);
-	if (!mmu->pgtbl_ops) {
+	ret = alloc_io_pgtable_ops(&mmu->pgtbl, &mmu->pgtbl_cfg, mmu);
+	if (ret) {
 		kfree(mmu);
-		return ERR_PTR(-EINVAL);
+		return ERR_PTR(ret);
 	}
 
 	kref_init(&mmu->refcount);
diff --git a/drivers/iommu/amd/io_pgtable.c b/drivers/iommu/amd/io_pgtable.c
index ace0e9b8b913..f9ea551404ba 100644
--- a/drivers/iommu/amd/io_pgtable.c
+++ b/drivers/iommu/amd/io_pgtable.c
@@ -360,11 +360,11 @@ static void free_clear_pte(u64 *pte, u64 pteval, struct list_head *freelist)
  * supporting all features of AMD IOMMU page tables like level skipping
  * and full 64 bit address spaces.
  */
-static int iommu_v1_map_pages(struct io_pgtable_ops *ops, unsigned long iova,
+static int iommu_v1_map_pages(struct io_pgtable *iop, unsigned long iova,
 			      phys_addr_t paddr, size_t pgsize, size_t pgcount,
 			      int prot, gfp_t gfp, size_t *mapped)
 {
-	struct protection_domain *dom = io_pgtable_ops_to_domain(ops);
+	struct protection_domain *dom = io_pgtable_ops_to_domain(iop->ops);
 	LIST_HEAD(freelist);
 	bool updated = false;
 	u64 __pte, *pte;
@@ -435,12 +435,12 @@ static int iommu_v1_map_pages(struct io_pgtable_ops *ops, unsigned long iova,
 	return ret;
 }
 
-static unsigned long iommu_v1_unmap_pages(struct io_pgtable_ops *ops,
+static unsigned long iommu_v1_unmap_pages(struct io_pgtable *iop,
 					  unsigned long iova,
 					  size_t pgsize, size_t pgcount,
 					  struct iommu_iotlb_gather *gather)
 {
-	struct amd_io_pgtable *pgtable = io_pgtable_ops_to_data(ops);
+	struct amd_io_pgtable *pgtable = io_pgtable_ops_to_data(iop->ops);
 	unsigned long long unmapped;
 	unsigned long unmap_size;
 	u64 *pte;
@@ -469,9 +469,9 @@ static unsigned long iommu_v1_unmap_pages(struct io_pgtable_ops *ops,
 	return unmapped;
 }
 
-static phys_addr_t iommu_v1_iova_to_phys(struct io_pgtable_ops *ops, unsigned long iova)
+static phys_addr_t iommu_v1_iova_to_phys(struct io_pgtable *iop, unsigned long iova)
 {
-	struct amd_io_pgtable *pgtable = io_pgtable_ops_to_data(ops);
+	struct amd_io_pgtable *pgtable = io_pgtable_ops_to_data(iop->ops);
 	unsigned long offset_mask, pte_pgsize;
 	u64 *pte, __pte;
 
@@ -491,7 +491,7 @@ static phys_addr_t iommu_v1_iova_to_phys(struct io_pgtable_ops *ops, unsigned lo
  */
 static void v1_free_pgtable(struct io_pgtable *iop)
 {
-	struct amd_io_pgtable *pgtable = container_of(iop, struct amd_io_pgtable, iop);
+	struct amd_io_pgtable *pgtable = io_pgtable_ops_to_data(iop->ops);
 	struct protection_domain *dom;
 	LIST_HEAD(freelist);
 
@@ -515,7 +515,8 @@ static void v1_free_pgtable(struct io_pgtable *iop)
 	put_pages_list(&freelist);
 }
 
-static struct io_pgtable *v1_alloc_pgtable(struct io_pgtable_cfg *cfg, void *cookie)
+int v1_alloc_pgtable(struct io_pgtable *iop, struct io_pgtable_cfg *cfg,
+		     void *cookie)
 {
 	struct amd_io_pgtable *pgtable = io_pgtable_cfg_to_data(cfg);
 
@@ -524,11 +525,12 @@ static struct io_pgtable *v1_alloc_pgtable(struct io_pgtable_cfg *cfg, void *coo
 	cfg->oas            = IOMMU_OUT_ADDR_BIT_SIZE,
 	cfg->tlb            = &v1_flush_ops;
 
-	pgtable->iop.ops.map_pages    = iommu_v1_map_pages;
-	pgtable->iop.ops.unmap_pages  = iommu_v1_unmap_pages;
-	pgtable->iop.ops.iova_to_phys = iommu_v1_iova_to_phys;
+	pgtable->iop_params.ops.map_pages    = iommu_v1_map_pages;
+	pgtable->iop_params.ops.unmap_pages  = iommu_v1_unmap_pages;
+	pgtable->iop_params.ops.iova_to_phys = iommu_v1_iova_to_phys;
+	iop->ops = &pgtable->iop_params.ops;
 
-	return &pgtable->iop;
+	return 0;
 }
 
 struct io_pgtable_init_fns io_pgtable_amd_iommu_v1_init_fns = {
diff --git a/drivers/iommu/amd/io_pgtable_v2.c b/drivers/iommu/amd/io_pgtable_v2.c
index 8638ddf6fb3b..52acb8f11a27 100644
--- a/drivers/iommu/amd/io_pgtable_v2.c
+++ b/drivers/iommu/amd/io_pgtable_v2.c
@@ -239,12 +239,12 @@ static u64 *fetch_pte(struct amd_io_pgtable *pgtable,
 	return pte;
 }
 
-static int iommu_v2_map_pages(struct io_pgtable_ops *ops, unsigned long iova,
+static int iommu_v2_map_pages(struct io_pgtable *iop, unsigned long iova,
 			      phys_addr_t paddr, size_t pgsize, size_t pgcount,
 			      int prot, gfp_t gfp, size_t *mapped)
 {
-	struct protection_domain *pdom = io_pgtable_ops_to_domain(ops);
-	struct io_pgtable_cfg *cfg = &pdom->iop.iop.cfg;
+	struct protection_domain *pdom = io_pgtable_ops_to_domain(iop->ops);
+	struct io_pgtable_cfg *cfg = &pdom->iop.iop_params.cfg;
 	u64 *pte;
 	unsigned long map_size;
 	unsigned long mapped_size = 0;
@@ -290,13 +290,13 @@ static int iommu_v2_map_pages(struct io_pgtable_ops *ops, unsigned long iova,
 	return ret;
 }
 
-static unsigned long iommu_v2_unmap_pages(struct io_pgtable_ops *ops,
+static unsigned long iommu_v2_unmap_pages(struct io_pgtable *iop,
 					  unsigned long iova,
 					  size_t pgsize, size_t pgcount,
 					  struct iommu_iotlb_gather *gather)
 {
-	struct amd_io_pgtable *pgtable = io_pgtable_ops_to_data(ops);
-	struct io_pgtable_cfg *cfg = &pgtable->iop.cfg;
+	struct amd_io_pgtable *pgtable = io_pgtable_ops_to_data(iop->ops);
+	struct io_pgtable_cfg *cfg = &pgtable->iop_params.cfg;
 	unsigned long unmap_size;
 	unsigned long unmapped = 0;
 	size_t size = pgcount << __ffs(pgsize);
@@ -319,9 +319,9 @@ static unsigned long iommu_v2_unmap_pages(struct io_pgtable_ops *ops,
 	return unmapped;
 }
 
-static phys_addr_t iommu_v2_iova_to_phys(struct io_pgtable_ops *ops, unsigned long iova)
+static phys_addr_t iommu_v2_iova_to_phys(struct io_pgtable *iop, unsigned long iova)
 {
-	struct amd_io_pgtable *pgtable = io_pgtable_ops_to_data(ops);
+	struct amd_io_pgtable *pgtable = io_pgtable_ops_to_data(iop->ops);
 	unsigned long offset_mask, pte_pgsize;
 	u64 *pte, __pte;
 
@@ -362,7 +362,7 @@ static const struct iommu_flush_ops v2_flush_ops = {
 static void v2_free_pgtable(struct io_pgtable *iop)
 {
 	struct protection_domain *pdom;
-	struct amd_io_pgtable *pgtable = container_of(iop, struct amd_io_pgtable, iop);
+	struct amd_io_pgtable *pgtable = io_pgtable_ops_to_data(iop->ops);
 
 	pdom = container_of(pgtable, struct protection_domain, iop);
 	if (!(pdom->flags & PD_IOMMUV2_MASK))
@@ -375,38 +375,39 @@ static void v2_free_pgtable(struct io_pgtable *iop)
 	amd_iommu_domain_update(pdom);
 
 	/* Free page table */
-	free_pgtable(pgtable->pgd, get_pgtable_level());
+	free_pgtable(iop->pgd, get_pgtable_level());
 }
 
-static struct io_pgtable *v2_alloc_pgtable(struct io_pgtable_cfg *cfg, void *cookie)
+int v2_alloc_pgtable(struct io_pgtable *iop, struct io_pgtable_cfg *cfg, void *cookie)
 {
 	struct amd_io_pgtable *pgtable = io_pgtable_cfg_to_data(cfg);
 	struct protection_domain *pdom = (struct protection_domain *)cookie;
 	int ret;
 
-	pgtable->pgd = alloc_pgtable_page();
-	if (!pgtable->pgd)
-		return NULL;
+	iop->pgd = alloc_pgtable_page();
+	if (!iop->pgd)
+		return -ENOMEM;
 
-	ret = amd_iommu_domain_set_gcr3(&pdom->domain, 0, iommu_virt_to_phys(pgtable->pgd));
+	ret = amd_iommu_domain_set_gcr3(&pdom->domain, 0, iommu_virt_to_phys(iop->pgd));
 	if (ret)
 		goto err_free_pgd;
 
-	pgtable->iop.ops.map_pages    = iommu_v2_map_pages;
-	pgtable->iop.ops.unmap_pages  = iommu_v2_unmap_pages;
-	pgtable->iop.ops.iova_to_phys = iommu_v2_iova_to_phys;
+	pgtable->iop_params.ops.map_pages    = iommu_v2_map_pages;
+	pgtable->iop_params.ops.unmap_pages  = iommu_v2_unmap_pages;
+	pgtable->iop_params.ops.iova_to_phys = iommu_v2_iova_to_phys;
+	iop->ops = &pgtable->iop_params.ops;
 
 	cfg->pgsize_bitmap = AMD_IOMMU_PGSIZES_V2,
 	cfg->ias           = IOMMU_IN_ADDR_BIT_SIZE,
 	cfg->oas           = IOMMU_OUT_ADDR_BIT_SIZE,
 	cfg->tlb           = &v2_flush_ops;
 
-	return &pgtable->iop;
+	return 0;
 
 err_free_pgd:
-	free_pgtable_page(pgtable->pgd);
+	free_pgtable_page(iop->pgd);
 
-	return NULL;
+	return ret;
 }
 
 struct io_pgtable_init_fns io_pgtable_amd_iommu_v2_init_fns = {
diff --git a/drivers/iommu/amd/iommu.c b/drivers/iommu/amd/iommu.c
index 7efb6b467041..51f9cecdcb6b 100644
--- a/drivers/iommu/amd/iommu.c
+++ b/drivers/iommu/amd/iommu.c
@@ -1984,7 +1984,7 @@ static void protection_domain_free(struct protection_domain *domain)
 		return;
 
 	if (domain->iop.pgtbl_cfg.tlb)
-		free_io_pgtable_ops(&domain->iop.iop.ops);
+		free_io_pgtable_ops(&domain->iop.iop);
 
 	if (domain->id)
 		domain_id_free(domain->id);
@@ -2037,7 +2037,6 @@ static int protection_domain_init_v2(struct protection_domain *domain)
 
 static struct protection_domain *protection_domain_alloc(unsigned int type)
 {
-	struct io_pgtable_ops *pgtbl_ops;
 	struct protection_domain *domain;
 	int pgtable = amd_iommu_pgtable;
 	int mode = DEFAULT_PGTABLE_LEVEL;
@@ -2073,8 +2072,9 @@ static struct protection_domain *protection_domain_alloc(unsigned int type)
 		goto out_err;
 
 	domain->iop.pgtbl_cfg.fmt = pgtable;
-	pgtbl_ops = alloc_io_pgtable_ops(&domain->iop.pgtbl_cfg, domain);
-	if (!pgtbl_ops) {
+	ret = alloc_io_pgtable_ops(&domain->iop.iop, &domain->iop.pgtbl_cfg,
+				   domain);
+	if (ret) {
 		domain_id_free(domain->id);
 		goto out_err;
 	}
@@ -2185,7 +2185,7 @@ static void amd_iommu_iotlb_sync_map(struct iommu_domain *dom,
				     unsigned long iova, size_t size)
 {
 	struct protection_domain *domain = to_pdomain(dom);
-	struct io_pgtable_ops *ops = &domain->iop.iop.ops;
+	struct io_pgtable_ops *ops = domain->iop.iop.ops;

 	if (ops->map_pages)
 		domain_flush_np_cache(domain, iova, size);
@@ -2196,9 +2196,7 @@ static int amd_iommu_map_pages(struct iommu_domain *dom, unsigned long iova,
			       int iommu_prot, gfp_t gfp, size_t *mapped)
 {
 	struct protection_domain *domain = to_pdomain(dom);
-	struct io_pgtable_ops *ops = &domain->iop.iop.ops;
 	int prot = 0;
-	int ret = -EINVAL;

 	if ((amd_iommu_pgtable == AMD_IOMMU_V1) &&
 	    (domain->iop.mode == PAGE_MODE_NONE))
@@ -2209,12 +2207,8 @@ static int amd_iommu_map_pages(struct iommu_domain *dom, unsigned long iova,
 	if (iommu_prot & IOMMU_WRITE)
 		prot |= IOMMU_PROT_IW;

-	if (ops->map_pages) {
-		ret = ops->map_pages(ops, iova, paddr, pgsize,
-				     pgcount, prot, gfp, mapped);
-	}
-
-	return ret;
+	return iopt_map_pages(&domain->iop.iop, iova, paddr, pgsize, pgcount,
+			      prot, gfp, mapped);
 }

 static void amd_iommu_iotlb_gather_add_page(struct iommu_domain *domain,
@@ -2243,14 +2237,13 @@ static size_t amd_iommu_unmap_pages(struct iommu_domain *dom, unsigned long iova
				    struct iommu_iotlb_gather *gather)
 {
 	struct protection_domain *domain = to_pdomain(dom);
-	struct io_pgtable_ops *ops = &domain->iop.iop.ops;
 	size_t r;

 	if ((amd_iommu_pgtable == AMD_IOMMU_V1) &&
 	    (domain->iop.mode == PAGE_MODE_NONE))
 		return 0;

-	r = (ops->unmap_pages) ? ops->unmap_pages(ops, iova, pgsize, pgcount, NULL) : 0;
+	r = iopt_unmap_pages(&domain->iop.iop, iova, pgsize, pgcount, NULL);

 	if (r)
 		amd_iommu_iotlb_gather_add_page(dom, gather, iova, r);
@@ -2262,9 +2255,8 @@ static phys_addr_t amd_iommu_iova_to_phys(struct iommu_domain *dom,
					  dma_addr_t iova)
 {
 	struct protection_domain *domain = to_pdomain(dom);
-	struct io_pgtable_ops *ops = &domain->iop.iop.ops;

-	return ops->iova_to_phys(ops, iova);
+	return iopt_iova_to_phys(&domain->iop.iop, iova);
 }

 static bool amd_iommu_capable(struct device *dev, enum iommu_cap cap)
@@ -2460,7 +2452,7 @@ void amd_iommu_domain_direct_map(struct iommu_domain *dom)
 	spin_lock_irqsave(&domain->lock, flags);

 	if (domain->iop.pgtbl_cfg.tlb)
-		free_io_pgtable_ops(&domain->iop.iop.ops);
+		free_io_pgtable_ops(&domain->iop.iop);

 	spin_unlock_irqrestore(&domain->lock, flags);
 }
diff --git a/drivers/iommu/apple-dart.c b/drivers/iommu/apple-dart.c
index 571f948add7c..b806019f925b 100644
--- a/drivers/iommu/apple-dart.c
+++ b/drivers/iommu/apple-dart.c
@@ -150,14 +150,14 @@ struct apple_dart_atomic_stream_map {

 /*
  * This structure is attached to each iommu domain handled by a DART.
  *
- * @pgtbl_ops: pagetable ops allocated by io-pgtable
+ * @pgtbl: pagetable allocated by io-pgtable
  * @finalized: true if the domain has been completely initialized
  * @init_lock: protects domain initialization
  * @stream_maps: streams attached to this domain (valid for DMA/UNMANAGED only)
  * @domain: core iommu domain pointer
  */
 struct apple_dart_domain {
-	struct io_pgtable_ops *pgtbl_ops;
+	struct io_pgtable pgtbl;

 	bool finalized;
 	struct mutex init_lock;
@@ -354,12 +354,8 @@ static phys_addr_t apple_dart_iova_to_phys(struct iommu_domain *domain,
					   dma_addr_t iova)
 {
 	struct apple_dart_domain *dart_domain = to_dart_domain(domain);
-	struct io_pgtable_ops *ops = dart_domain->pgtbl_ops;

-	if (!ops)
-		return 0;
-
-	return ops->iova_to_phys(ops, iova);
+	return iopt_iova_to_phys(&dart_domain->pgtbl, iova);
 }

 static int apple_dart_map_pages(struct iommu_domain *domain, unsigned long iova,
@@ -368,13 +364,9 @@ static int apple_dart_map_pages(struct iommu_domain *domain, unsigned long iova,
				size_t *mapped)
 {
 	struct apple_dart_domain *dart_domain = to_dart_domain(domain);
-	struct io_pgtable_ops *ops = dart_domain->pgtbl_ops;
-
-	if (!ops)
-		return -ENODEV;

-	return ops->map_pages(ops, iova, paddr, pgsize, pgcount, prot, gfp,
-			      mapped);
+	return iopt_map_pages(&dart_domain->pgtbl, iova, paddr, pgsize, pgcount,
+			      prot, gfp, mapped);
 }

 static size_t apple_dart_unmap_pages(struct iommu_domain *domain,
@@ -383,9 +375,9 @@ static size_t apple_dart_unmap_pages(struct iommu_domain *domain,
				     struct iommu_iotlb_gather *gather)
 {
 	struct apple_dart_domain *dart_domain = to_dart_domain(domain);
-	struct io_pgtable_ops *ops = dart_domain->pgtbl_ops;

-	return ops->unmap_pages(ops, iova, pgsize, pgcount, gather);
+	return iopt_unmap_pages(&dart_domain->pgtbl, iova, pgsize, pgcount,
+				gather);
 }

 static void
 apple_dart_setup_translation(struct apple_dart_domain *domain,
@@ -394,7 +386,7 @@ apple_dart_setup_translation(struct apple_dart_domain *domain,
 {
 	int i;
 	struct io_pgtable_cfg *pgtbl_cfg =
-		&io_pgtable_ops_to_pgtable(domain->pgtbl_ops)->cfg;
+		&io_pgtable_ops_to_params(domain->pgtbl.ops)->cfg;

 	for (i = 0; i < pgtbl_cfg->apple_dart_cfg.n_ttbrs; ++i)
 		apple_dart_hw_set_ttbr(stream_map, i,
@@ -435,11 +427,9 @@ static int apple_dart_finalize_domain(struct iommu_domain *domain,
 		.iommu_dev = dart->dev,
 	};

-	dart_domain->pgtbl_ops = alloc_io_pgtable_ops(&pgtbl_cfg, domain);
-	if (!dart_domain->pgtbl_ops) {
-		ret = -ENOMEM;
+	ret = alloc_io_pgtable_ops(&dart_domain->pgtbl, &pgtbl_cfg, domain);
+	if (ret)
 		goto done;
-	}

 	domain->pgsize_bitmap = pgtbl_cfg.pgsize_bitmap;
 	domain->geometry.aperture_start = 0;
@@ -590,7 +580,7 @@ static struct iommu_domain *apple_dart_domain_alloc(unsigned int type)

 	mutex_init(&dart_domain->init_lock);

-	/* no need to allocate pgtbl_ops or do any other finalization steps */
+	/* no need to allocate pgtbl or do any other finalization steps */
 	if (type == IOMMU_DOMAIN_IDENTITY || type == IOMMU_DOMAIN_BLOCKED)
 		dart_domain->finalized = true;

@@ -601,8 +591,8 @@ static void apple_dart_domain_free(struct iommu_domain *domain)
 {
 	struct apple_dart_domain *dart_domain = to_dart_domain(domain);

-	if (dart_domain->pgtbl_ops)
-		free_io_pgtable_ops(dart_domain->pgtbl_ops);
+	if (dart_domain->pgtbl.ops)
+		free_io_pgtable_ops(&dart_domain->pgtbl);

 	kfree(dart_domain);
 }
diff --git a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c
index c033b23ca4b2..97d24ee5c14d 100644
--- a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c
+++ b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c
@@ -2058,7 +2058,7 @@ static void arm_smmu_domain_free(struct iommu_domain *domain)
 	struct arm_smmu_domain *smmu_domain = to_smmu_domain(domain);
 	struct arm_smmu_device *smmu = smmu_domain->smmu;

-	free_io_pgtable_ops(smmu_domain->pgtbl_ops);
+	free_io_pgtable_ops(&smmu_domain->pgtbl);

 	/* Free the CD and ASID, if we allocated them */
 	if (smmu_domain->stage == ARM_SMMU_DOMAIN_S1) {
@@ -2171,7 +2171,6 @@ static int arm_smmu_domain_finalise(struct iommu_domain *domain,
 	unsigned long ias, oas;
 	enum io_pgtable_fmt fmt;
 	struct io_pgtable_cfg pgtbl_cfg;
-	struct io_pgtable_ops *pgtbl_ops;
 	int (*finalise_stage_fn)(struct arm_smmu_domain *,
				 struct arm_smmu_master *,
				 struct io_pgtable_cfg *);
@@ -2218,9 +2217,9 @@ static int arm_smmu_domain_finalise(struct iommu_domain *domain,
 		.iommu_dev = smmu->dev,
 	};

-	pgtbl_ops = alloc_io_pgtable_ops(&pgtbl_cfg, smmu_domain);
-	if (!pgtbl_ops)
-		return -ENOMEM;
+	ret = alloc_io_pgtable_ops(&smmu_domain->pgtbl, &pgtbl_cfg, smmu_domain);
+	if (ret)
+		return ret;

 	domain->pgsize_bitmap = pgtbl_cfg.pgsize_bitmap;
 	domain->geometry.aperture_end = (1UL << pgtbl_cfg.ias) - 1;
@@ -2228,11 +2227,10 @@ static int arm_smmu_domain_finalise(struct iommu_domain *domain,

 	ret = finalise_stage_fn(smmu_domain, master, &pgtbl_cfg);
 	if (ret < 0) {
-		free_io_pgtable_ops(pgtbl_ops);
+		free_io_pgtable_ops(&smmu_domain->pgtbl);
 		return ret;
 	}

-	smmu_domain->pgtbl_ops = pgtbl_ops;
 	return 0;
 }

@@ -2468,12 +2466,10 @@ static int arm_smmu_map_pages(struct iommu_domain *domain, unsigned long iova,
			      phys_addr_t paddr, size_t pgsize, size_t pgcount,
			      int prot, gfp_t gfp, size_t *mapped)
 {
-	struct io_pgtable_ops *ops = to_smmu_domain(domain)->pgtbl_ops;
-
-	if (!ops)
-		return -ENODEV;
+	struct arm_smmu_domain *smmu_domain = to_smmu_domain(domain);

-	return ops->map_pages(ops, iova, paddr, pgsize, pgcount, prot, gfp, mapped);
+	return iopt_map_pages(&smmu_domain->pgtbl, iova, paddr, pgsize, pgcount,
+			      prot, gfp, mapped);
 }

 static size_t arm_smmu_unmap_pages(struct iommu_domain *domain, unsigned long iova,
@@ -2481,12 +2477,9 @@ static size_t arm_smmu_unmap_pages(struct iommu_domain *domain, unsigned long io
				   struct iommu_iotlb_gather *gather)
 {
 	struct arm_smmu_domain *smmu_domain = to_smmu_domain(domain);
-	struct io_pgtable_ops *ops = smmu_domain->pgtbl_ops;

-	if (!ops)
-		return 0;
-
-	return ops->unmap_pages(ops, iova, pgsize, pgcount, gather);
+	return iopt_unmap_pages(&smmu_domain->pgtbl, iova, pgsize, pgcount,
+				gather);
 }

 static void arm_smmu_flush_iotlb_all(struct iommu_domain *domain)
@@ -2513,12 +2506,9 @@ static void arm_smmu_iotlb_sync(struct iommu_domain *domain,
 static phys_addr_t arm_smmu_iova_to_phys(struct iommu_domain *domain,
					 dma_addr_t iova)
 {
-	struct io_pgtable_ops *ops = to_smmu_domain(domain)->pgtbl_ops;
-
-	if (!ops)
-		return 0;
+	struct arm_smmu_domain *smmu_domain = to_smmu_domain(domain);

-	return ops->iova_to_phys(ops, iova);
+	return iopt_iova_to_phys(&smmu_domain->pgtbl, iova);
 }

 static struct platform_driver arm_smmu_driver;
diff --git a/drivers/iommu/arm/arm-smmu/arm-smmu-qcom.c b/drivers/iommu/arm/arm-smmu/arm-smmu-qcom.c
index 91d404deb115..0673841167be 100644
--- a/drivers/iommu/arm/arm-smmu/arm-smmu-qcom.c
+++ b/drivers/iommu/arm/arm-smmu/arm-smmu-qcom.c
@@ -122,8 +122,8 @@ static const struct io_pgtable_cfg *qcom_adreno_smmu_get_ttbr1_cfg(
 		const void *cookie)
 {
 	struct arm_smmu_domain *smmu_domain = (void *)cookie;
-	struct io_pgtable *pgtable =
-		io_pgtable_ops_to_pgtable(smmu_domain->pgtbl_ops);
+	struct io_pgtable_params *pgtable =
+		io_pgtable_ops_to_params(smmu_domain->pgtbl.ops);
 	return &pgtable->cfg;
 }

@@ -137,7 +137,8 @@ static int qcom_adreno_smmu_set_ttbr0_cfg(const void *cookie,
 		const struct io_pgtable_cfg *pgtbl_cfg)
 {
 	struct arm_smmu_domain *smmu_domain = (void *)cookie;
-	struct io_pgtable *pgtable = io_pgtable_ops_to_pgtable(smmu_domain->pgtbl_ops);
+	struct io_pgtable_params *pgtable =
+		io_pgtable_ops_to_params(smmu_domain->pgtbl.ops);
 	struct arm_smmu_cfg *cfg = &smmu_domain->cfg;
 	struct arm_smmu_cb *cb = &smmu_domain->smmu->cbs[cfg->cbndx];

diff --git a/drivers/iommu/arm/arm-smmu/arm-smmu.c b/drivers/iommu/arm/arm-smmu/arm-smmu.c
index f230d2ce977a..201055254d5b 100644
--- a/drivers/iommu/arm/arm-smmu/arm-smmu.c
+++ b/drivers/iommu/arm/arm-smmu/arm-smmu.c
@@ -614,7 +614,6 @@ static int arm_smmu_init_domain_context(struct iommu_domain *domain,
 {
 	int irq, start, ret = 0;
 	unsigned long ias, oas;
-	struct io_pgtable_ops *pgtbl_ops;
 	struct io_pgtable_cfg pgtbl_cfg;
 	enum io_pgtable_fmt fmt;
 	struct arm_smmu_domain *smmu_domain = to_smmu_domain(domain);
@@ -765,11 +764,9 @@ static int arm_smmu_init_domain_context(struct iommu_domain *domain,
 	if (smmu_domain->pgtbl_quirks)
 		pgtbl_cfg.quirks |= smmu_domain->pgtbl_quirks;

-	pgtbl_ops = alloc_io_pgtable_ops(&pgtbl_cfg, smmu_domain);
-	if (!pgtbl_ops) {
-		ret = -ENOMEM;
+	ret = alloc_io_pgtable_ops(&smmu_domain->pgtbl, &pgtbl_cfg, smmu_domain);
+	if (ret)
 		goto out_clear_smmu;
-	}

 	/* Update the domain's page sizes to reflect the page table format */
 	domain->pgsize_bitmap = pgtbl_cfg.pgsize_bitmap;
@@ -808,8 +805,6 @@ static int arm_smmu_init_domain_context(struct iommu_domain *domain,

 	mutex_unlock(&smmu_domain->init_mutex);

-	/* Publish page table ops for map/unmap */
-	smmu_domain->pgtbl_ops = pgtbl_ops;
 	return 0;

 out_clear_smmu:
@@ -846,7 +841,7 @@ static void arm_smmu_destroy_domain_context(struct iommu_domain *domain)
 		devm_free_irq(smmu->dev, irq, domain);
 	}

-	free_io_pgtable_ops(smmu_domain->pgtbl_ops);
+	free_io_pgtable_ops(&smmu_domain->pgtbl);
 	__arm_smmu_free_bitmap(smmu->context_map, cfg->cbndx);

 	arm_smmu_rpm_put(smmu);
@@ -1181,15 +1176,13 @@ static int arm_smmu_map_pages(struct iommu_domain *domain, unsigned long iova,
			      phys_addr_t paddr, size_t pgsize, size_t pgcount,
			      int prot, gfp_t gfp, size_t *mapped)
 {
-	struct io_pgtable_ops *ops = to_smmu_domain(domain)->pgtbl_ops;
-	struct arm_smmu_device *smmu = to_smmu_domain(domain)->smmu;
+	struct arm_smmu_domain *smmu_domain = to_smmu_domain(domain);
+	struct arm_smmu_device *smmu = smmu_domain->smmu;
 	int ret;

-	if (!ops)
-		return -ENODEV;
-
 	arm_smmu_rpm_get(smmu);
-	ret = ops->map_pages(ops, iova, paddr, pgsize, pgcount, prot, gfp, mapped);
+	ret = iopt_map_pages(&smmu_domain->pgtbl, iova, paddr, pgsize, pgcount,
+			     prot, gfp, mapped);
 	arm_smmu_rpm_put(smmu);

 	return ret;
@@ -1199,15 +1192,13 @@ static size_t arm_smmu_unmap_pages(struct iommu_domain *domain, unsigned long io
				   size_t pgsize, size_t pgcount,
				   struct iommu_iotlb_gather *iotlb_gather)
 {
-	struct io_pgtable_ops *ops = to_smmu_domain(domain)->pgtbl_ops;
-	struct arm_smmu_device *smmu = to_smmu_domain(domain)->smmu;
+	struct arm_smmu_domain *smmu_domain = to_smmu_domain(domain);
+	struct arm_smmu_device *smmu = smmu_domain->smmu;
 	size_t ret;

-	if (!ops)
-		return 0;
-
 	arm_smmu_rpm_get(smmu);
-	ret = ops->unmap_pages(ops, iova, pgsize, pgcount, iotlb_gather);
+	ret = iopt_unmap_pages(&smmu_domain->pgtbl, iova, pgsize, pgcount,
+			       iotlb_gather);
 	arm_smmu_rpm_put(smmu);

 	return ret;
@@ -1249,7 +1240,6 @@ static phys_addr_t arm_smmu_iova_to_phys_hard(struct iommu_domain *domain,
 	struct arm_smmu_domain *smmu_domain = to_smmu_domain(domain);
 	struct arm_smmu_device *smmu = smmu_domain->smmu;
 	struct arm_smmu_cfg *cfg = &smmu_domain->cfg;
-	struct io_pgtable_ops *ops= smmu_domain->pgtbl_ops;
 	struct device *dev = smmu->dev;
 	void __iomem *reg;
 	u32 tmp;
@@ -1277,7 +1267,7 @@ static phys_addr_t arm_smmu_iova_to_phys_hard(struct iommu_domain *domain,
			"iova to phys timed out on %pad. Falling back to software table walk.\n",
			&iova);
 		arm_smmu_rpm_put(smmu);
-		return ops->iova_to_phys(ops, iova);
+		return iopt_iova_to_phys(&smmu_domain->pgtbl, iova);
 	}

 	phys = arm_smmu_cb_readq(smmu, idx, ARM_SMMU_CB_PAR);
@@ -1299,16 +1289,12 @@ static phys_addr_t arm_smmu_iova_to_phys(struct iommu_domain *domain,
					 dma_addr_t iova)
 {
 	struct arm_smmu_domain *smmu_domain = to_smmu_domain(domain);
-	struct io_pgtable_ops *ops = smmu_domain->pgtbl_ops;
-
-	if (!ops)
-		return 0;

 	if (smmu_domain->smmu->features & ARM_SMMU_FEAT_TRANS_OPS &&
 	    smmu_domain->stage == ARM_SMMU_DOMAIN_S1)
 		return arm_smmu_iova_to_phys_hard(domain, iova);

-	return ops->iova_to_phys(ops, iova);
+	return iopt_iova_to_phys(&smmu_domain->pgtbl, iova);
 }

 static bool arm_smmu_capable(struct device *dev, enum iommu_cap cap)
diff --git a/drivers/iommu/arm/arm-smmu/qcom_iommu.c b/drivers/iommu/arm/arm-smmu/qcom_iommu.c
index 65eb8bdcbe50..56676dd84462 100644
--- a/drivers/iommu/arm/arm-smmu/qcom_iommu.c
+++ b/drivers/iommu/arm/arm-smmu/qcom_iommu.c
@@ -64,7 +64,7 @@ struct qcom_iommu_ctx {
 };

 struct qcom_iommu_domain {
-	struct io_pgtable_ops *pgtbl_ops;
+	struct io_pgtable pgtbl;
 	spinlock_t pgtbl_lock;
 	struct mutex init_mutex; /* Protects iommu pointer */
 	struct iommu_domain domain;
@@ -229,7 +229,6 @@ static int qcom_iommu_init_domain(struct iommu_domain *domain,
 {
 	struct qcom_iommu_domain *qcom_domain = to_qcom_iommu_domain(domain);
 	struct iommu_fwspec *fwspec = dev_iommu_fwspec_get(dev);
-	struct io_pgtable_ops *pgtbl_ops;
 	struct io_pgtable_cfg pgtbl_cfg;
 	int i, ret = 0;
 	u32 reg;
@@ -250,10 +249,9 @@ static int qcom_iommu_init_domain(struct iommu_domain *domain,
 	qcom_domain->iommu = qcom_iommu;
 	qcom_domain->fwspec = fwspec;

-	pgtbl_ops = alloc_io_pgtable_ops(&pgtbl_cfg, qcom_domain);
-	if (!pgtbl_ops) {
+	ret = alloc_io_pgtable_ops(&qcom_domain->pgtbl, &pgtbl_cfg, qcom_domain);
+	if (ret) {
 		dev_err(qcom_iommu->dev, "failed to allocate pagetable ops\n");
-		ret = -ENOMEM;
 		goto out_clear_iommu;
 	}

@@ -308,9 +306,6 @@ static int qcom_iommu_init_domain(struct iommu_domain *domain,

 	mutex_unlock(&qcom_domain->init_mutex);

-	/* Publish page table ops for map/unmap */
-	qcom_domain->pgtbl_ops = pgtbl_ops;
-
 	return 0;

 out_clear_iommu:
@@ -353,7 +348,7 @@ static void qcom_iommu_domain_free(struct iommu_domain *domain)
 		 * is on to avoid unclocked accesses in the TLB inv path:
 		 */
 		pm_runtime_get_sync(qcom_domain->iommu->dev);
-		free_io_pgtable_ops(qcom_domain->pgtbl_ops);
+		free_io_pgtable_ops(&qcom_domain->pgtbl);
 		pm_runtime_put_sync(qcom_domain->iommu->dev);
 	}

@@ -417,13 +412,10 @@ static int qcom_iommu_map(struct iommu_domain *domain, unsigned long iova,
 	int ret;
 	unsigned long flags;
 	struct qcom_iommu_domain *qcom_domain = to_qcom_iommu_domain(domain);
-	struct io_pgtable_ops *ops = qcom_domain->pgtbl_ops;
-
-	if (!ops)
-		return -ENODEV;

 	spin_lock_irqsave(&qcom_domain->pgtbl_lock, flags);
-	ret = ops->map_pages(ops, iova, paddr, pgsize, pgcount, prot, GFP_ATOMIC, mapped);
+	ret = iopt_map_pages(&qcom_domain->pgtbl, iova, paddr, pgsize, pgcount,
+			     prot, GFP_ATOMIC, mapped);
 	spin_unlock_irqrestore(&qcom_domain->pgtbl_lock, flags);
 	return ret;
 }
@@ -435,10 +427,6 @@ static size_t qcom_iommu_unmap(struct iommu_domain *domain, unsigned long iova,
 	size_t ret;
 	unsigned long flags;
 	struct qcom_iommu_domain *qcom_domain = to_qcom_iommu_domain(domain);
-	struct io_pgtable_ops *ops = qcom_domain->pgtbl_ops;
-
-	if (!ops)
-		return 0;

 	/* NOTE: unmap can be called after client device is powered off,
 	 * for example, with GPUs or anything involving dma-buf.  So we
@@ -447,7 +435,8 @@ static size_t qcom_iommu_unmap(struct iommu_domain *domain, unsigned long iova,
 	 */
 	pm_runtime_get_sync(qcom_domain->iommu->dev);
 	spin_lock_irqsave(&qcom_domain->pgtbl_lock, flags);
-	ret = ops->unmap_pages(ops, iova, pgsize, pgcount, gather);
+	ret = iopt_unmap_pages(&qcom_domain->pgtbl, iova, pgsize, pgcount,
+			       gather);
 	spin_unlock_irqrestore(&qcom_domain->pgtbl_lock, flags);
 	pm_runtime_put_sync(qcom_domain->iommu->dev);

@@ -457,13 +446,12 @@ static size_t qcom_iommu_unmap(struct iommu_domain *domain, unsigned long iova,
 static void qcom_iommu_flush_iotlb_all(struct iommu_domain *domain)
 {
 	struct qcom_iommu_domain *qcom_domain = to_qcom_iommu_domain(domain);
-	struct io_pgtable *pgtable = container_of(qcom_domain->pgtbl_ops,
-						  struct io_pgtable, ops);
-	if (!qcom_domain->pgtbl_ops)
+
+	if (!qcom_domain->pgtbl.ops)
 		return;

 	pm_runtime_get_sync(qcom_domain->iommu->dev);
-	qcom_iommu_tlb_sync(pgtable->cookie);
+	qcom_iommu_tlb_sync(qcom_domain->pgtbl.cookie);
 	pm_runtime_put_sync(qcom_domain->iommu->dev);
 }

@@ -479,13 +467,9 @@ static phys_addr_t qcom_iommu_iova_to_phys(struct iommu_domain *domain,
 	phys_addr_t ret;
 	unsigned long flags;
 	struct qcom_iommu_domain *qcom_domain = to_qcom_iommu_domain(domain);
-	struct io_pgtable_ops *ops = qcom_domain->pgtbl_ops;
-
-	if (!ops)
-		return 0;

 	spin_lock_irqsave(&qcom_domain->pgtbl_lock, flags);
-	ret = ops->iova_to_phys(ops, iova);
+	ret = iopt_iova_to_phys(&qcom_domain->pgtbl, iova);
 	spin_unlock_irqrestore(&qcom_domain->pgtbl_lock, flags);

 	return ret;
diff --git a/drivers/iommu/io-pgtable-arm-common.c b/drivers/iommu/io-pgtable-arm-common.c
index 4b3a9ce806ea..359086cace34 100644
--- a/drivers/iommu/io-pgtable-arm-common.c
+++ b/drivers/iommu/io-pgtable-arm-common.c
@@ -48,7 +48,8 @@ static void __arm_lpae_clear_pte(arm_lpae_iopte *ptep, struct io_pgtable_cfg *cf
 	__arm_lpae_sync_pte(ptep, 1, cfg);
 }

-static size_t __arm_lpae_unmap(struct arm_lpae_io_pgtable *data,
+static size_t __arm_lpae_unmap(struct io_pgtable *iop,
+			       struct arm_lpae_io_pgtable *data,
			       struct iommu_iotlb_gather *gather,
			       unsigned long iova, size_t size, size_t pgcount,
			       int lvl, arm_lpae_iopte *ptep);
@@ -74,7 +75,8 @@ static void __arm_lpae_init_pte(struct arm_lpae_io_pgtable *data,
 		__arm_lpae_sync_pte(ptep, num_entries, cfg);
 }

-static int arm_lpae_init_pte(struct arm_lpae_io_pgtable *data,
+static int arm_lpae_init_pte(struct io_pgtable *iop,
+			     struct arm_lpae_io_pgtable *data,
			     unsigned long iova, phys_addr_t paddr,
			     arm_lpae_iopte prot, int lvl, int num_entries,
			     arm_lpae_iopte *ptep)
@@ -95,8 +97,8 @@ static int arm_lpae_init_pte(struct arm_lpae_io_pgtable *data,
			size_t sz = ARM_LPAE_BLOCK_SIZE(lvl, data);

			tblp = ptep - ARM_LPAE_LVL_IDX(iova, lvl, data);
-			if (__arm_lpae_unmap(data, NULL, iova + i * sz, sz, 1,
-					     lvl, tblp) != sz) {
+			if (__arm_lpae_unmap(iop, data, NULL, iova + i * sz, sz,
+					     1, lvl, tblp) != sz) {
				WARN_ON(1);
				return -EINVAL;
			}
@@ -139,10 +141,10 @@ static arm_lpae_iopte arm_lpae_install_table(arm_lpae_iopte *table,
 	return old;
 }

-int __arm_lpae_map(struct arm_lpae_io_pgtable *data, unsigned long iova,
-		   phys_addr_t paddr, size_t size, size_t pgcount,
-		   arm_lpae_iopte prot, int lvl, arm_lpae_iopte *ptep,
-		   gfp_t gfp, size_t *mapped)
+int __arm_lpae_map(struct io_pgtable *iop, struct arm_lpae_io_pgtable *data,
+		   unsigned long iova, phys_addr_t paddr, size_t size,
+		   size_t pgcount, arm_lpae_iopte prot, int lvl,
+		   arm_lpae_iopte *ptep, gfp_t gfp, size_t *mapped)
 {
 	arm_lpae_iopte *cptep, pte;
 	size_t block_size = ARM_LPAE_BLOCK_SIZE(lvl, data);
@@ -158,7 +160,8 @@ int __arm_lpae_map(struct arm_lpae_io_pgtable *data, unsigned long iova,
 	if (size == block_size) {
 		max_entries = ARM_LPAE_PTES_PER_TABLE(data) - map_idx_start;
 		num_entries = min_t(int, pgcount, max_entries);
-		ret = arm_lpae_init_pte(data, iova, paddr, prot, lvl, num_entries, ptep);
+		ret = arm_lpae_init_pte(iop, data, iova, paddr, prot, lvl,
+					num_entries, ptep);
 		if (!ret)
 			*mapped += num_entries * size;
@@ -192,7 +195,7 @@ int __arm_lpae_map(struct arm_lpae_io_pgtable *data, unsigned long iova,
 	}

 	/* Rinse, repeat */
-	return __arm_lpae_map(data, iova, paddr, size, pgcount, prot, lvl + 1,
+	return __arm_lpae_map(iop, data, iova, paddr, size, pgcount, prot, lvl + 1,
			      cptep, gfp, mapped);
 }

@@ -260,13 +263,13 @@ static arm_lpae_iopte arm_lpae_prot_to_pte(struct arm_lpae_io_pgtable *data,
 	return pte;
 }

-int arm_lpae_map_pages(struct io_pgtable_ops *ops, unsigned long iova,
+int arm_lpae_map_pages(struct io_pgtable *iop, unsigned long iova,
		       phys_addr_t paddr, size_t pgsize, size_t pgcount,
		       int iommu_prot, gfp_t gfp, size_t *mapped)
 {
-	struct arm_lpae_io_pgtable *data = io_pgtable_ops_to_data(ops);
+	struct arm_lpae_io_pgtable *data = io_pgtable_ops_to_data(iop->ops);
 	struct io_pgtable_cfg *cfg = &data->iop.cfg;
-	arm_lpae_iopte *ptep = data->pgd;
+	arm_lpae_iopte *ptep = iop->pgd;
 	int ret, lvl = data->start_level;
 	arm_lpae_iopte prot;
 	long iaext = (s64)iova >> cfg->ias;
@@ -284,7 +287,7 @@ int arm_lpae_map_pages(struct io_pgtable_ops *ops, unsigned long iova,
 		return 0;

 	prot = arm_lpae_prot_to_pte(data, iommu_prot);
-	ret = __arm_lpae_map(data, iova, paddr, pgsize, pgcount, prot, lvl,
+	ret = __arm_lpae_map(iop, data, iova, paddr, pgsize, pgcount, prot, lvl,
			     ptep, gfp, mapped);
 	/*
 	 * Synchronise all PTE updates for the new mapping before there's
@@ -326,7 +329,8 @@ void __arm_lpae_free_pgtable(struct arm_lpae_io_pgtable *data, int lvl,
 	__arm_lpae_free_pages(start, table_size, &data->iop.cfg);
 }

-static size_t arm_lpae_split_blk_unmap(struct arm_lpae_io_pgtable *data,
+static size_t arm_lpae_split_blk_unmap(struct io_pgtable *iop,
+				       struct arm_lpae_io_pgtable *data,
				       struct iommu_iotlb_gather *gather,
				       unsigned long iova, size_t size,
				       arm_lpae_iopte blk_pte, int lvl,
@@ -378,21 +382,24 @@ static size_t arm_lpae_split_blk_unmap(struct arm_lpae_io_pgtable *data,
 		tablep = iopte_deref(pte, data);
 	} else if (unmap_idx_start >= 0) {
 		for (i = 0; i < num_entries; i++)
-			io_pgtable_tlb_add_page(&data->iop, gather, iova + i * size, size);
+			io_pgtable_tlb_add_page(cfg, iop, gather,
+						iova + i * size, size);

 		return num_entries * size;
 	}

-	return __arm_lpae_unmap(data, gather, iova, size, pgcount, lvl, tablep);
+	return __arm_lpae_unmap(iop, data, gather, iova, size, pgcount, lvl,
+				tablep);
 }

-static size_t __arm_lpae_unmap(struct arm_lpae_io_pgtable *data,
+static size_t __arm_lpae_unmap(struct io_pgtable *iop,
+			       struct arm_lpae_io_pgtable *data,
			       struct iommu_iotlb_gather *gather,
			       unsigned long iova, size_t size, size_t pgcount,
			       int lvl, arm_lpae_iopte *ptep)
 {
 	arm_lpae_iopte pte;
-	struct io_pgtable *iop = &data->iop;
+	struct io_pgtable_cfg *cfg = &data->iop.cfg;
 	int i = 0, num_entries, max_entries, unmap_idx_start;

 	/* Something went horribly wrong and we ran out of page table */
@@ -415,15 +422,16 @@ static size_t __arm_lpae_unmap(struct arm_lpae_io_pgtable *data,
			if (WARN_ON(!pte))
				break;

-			__arm_lpae_clear_pte(ptep, &iop->cfg);
+			__arm_lpae_clear_pte(ptep, cfg);

-			if (!iopte_leaf(pte, lvl, iop->cfg.fmt)) {
+			if (!iopte_leaf(pte, lvl, cfg->fmt)) {
				/* Also flush any partial walks */
-				io_pgtable_tlb_flush_walk(iop, iova + i * size, size,
+				io_pgtable_tlb_flush_walk(cfg, iop, iova + i * size, size,
							  ARM_LPAE_GRANULE(data));
				__arm_lpae_free_pgtable(data, lvl + 1, iopte_deref(pte, data));
			} else if (!iommu_iotlb_gather_queued(gather)) {
-				io_pgtable_tlb_add_page(iop, gather, iova + i * size, size);
+				io_pgtable_tlb_add_page(cfg, iop, gather,
+							iova + i * size, size);
			}

			ptep++;
@@ -431,27 +439,28 @@ static size_t __arm_lpae_unmap(struct arm_lpae_io_pgtable *data,
 		}

 		return i * size;
-	} else if (iopte_leaf(pte, lvl, iop->cfg.fmt)) {
+	} else if (iopte_leaf(pte, lvl, cfg->fmt)) {
 		/*
 		 * Insert a table at the next level to map the old region,
 		 * minus the part we want to unmap
 		 */
-		return arm_lpae_split_blk_unmap(data, gather, iova, size, pte,
-						lvl + 1, ptep, pgcount);
+		return arm_lpae_split_blk_unmap(iop, data, gather, iova, size,
+						pte, lvl + 1, ptep, pgcount);
 	}

 	/* Keep on walkin' */
 	ptep = iopte_deref(pte, data);
-	return __arm_lpae_unmap(data, gather, iova, size, pgcount, lvl + 1, ptep);
+	return __arm_lpae_unmap(iop, data, gather, iova, size,
+				pgcount, lvl + 1, ptep);
 }

-size_t arm_lpae_unmap_pages(struct io_pgtable_ops *ops, unsigned long iova,
+size_t arm_lpae_unmap_pages(struct io_pgtable *iop, unsigned long iova,
			    size_t pgsize, size_t pgcount,
			    struct iommu_iotlb_gather *gather)
 {
-	struct arm_lpae_io_pgtable *data = io_pgtable_ops_to_data(ops);
+	struct arm_lpae_io_pgtable *data = io_pgtable_ops_to_data(iop->ops);
 	struct io_pgtable_cfg *cfg = &data->iop.cfg;
-	arm_lpae_iopte *ptep = data->pgd;
+	arm_lpae_iopte *ptep = iop->pgd;
 	long iaext = (s64)iova >> cfg->ias;

 	if (WARN_ON(!pgsize || (pgsize & cfg->pgsize_bitmap) != pgsize || !pgcount))
@@ -462,15 +471,14 @@ size_t arm_lpae_unmap_pages(struct io_pgtable_ops *ops, unsigned long iova,
 	if (WARN_ON(iaext))
 		return 0;

-	return __arm_lpae_unmap(data, gather, iova, pgsize, pgcount,
-				data->start_level, ptep);
+	return __arm_lpae_unmap(iop, data, gather, iova, pgsize,
+				pgcount, data->start_level, ptep);
 }

-phys_addr_t arm_lpae_iova_to_phys(struct io_pgtable_ops *ops,
-				  unsigned long iova)
+static phys_addr_t arm_lpae_iova_to_phys(struct io_pgtable *iop, unsigned long iova)
 {
-	struct arm_lpae_io_pgtable *data = io_pgtable_ops_to_data(ops);
-	arm_lpae_iopte pte, *ptep = data->pgd;
+	struct arm_lpae_io_pgtable *data = io_pgtable_ops_to_data(iop->ops);
+	arm_lpae_iopte pte, *ptep = iop->pgd;
 	int lvl = data->start_level;

 	do {
diff --git a/drivers/iommu/io-pgtable-arm-v7s.c b/drivers/iommu/io-pgtable-arm-v7s.c
index 278b4299d757..2dd12fabfaee 100644
--- a/drivers/iommu/io-pgtable-arm-v7s.c
+++ b/drivers/iommu/io-pgtable-arm-v7s.c
@@ -40,7 +40,7 @@
 	container_of((x), struct arm_v7s_io_pgtable, iop)

 #define io_pgtable_ops_to_data(x)					\
-	io_pgtable_to_data(io_pgtable_ops_to_pgtable(x))
+	io_pgtable_to_data(io_pgtable_ops_to_params(x))

 /*
  * We have 32 bits total; 12 bits resolved at level 1, 8 bits at level 2,
@@ -162,11 +162,10 @@ typedef u32 arm_v7s_iopte;
 static bool selftest_running;

 struct arm_v7s_io_pgtable {
-	struct io_pgtable	iop;
+	struct io_pgtable_params	iop;

-	arm_v7s_iopte		*pgd;
-	struct kmem_cache	*l2_tables;
-	spinlock_t		split_lock;
+	struct kmem_cache	*l2_tables;
+	spinlock_t		split_lock;
 };

 static bool arm_v7s_pte_is_cont(arm_v7s_iopte pte, int lvl);
@@ -424,13 +423,14 @@ static bool arm_v7s_pte_is_cont(arm_v7s_iopte pte, int lvl)
 	return false;
 }

-static size_t __arm_v7s_unmap(struct arm_v7s_io_pgtable *,
+static size_t __arm_v7s_unmap(struct io_pgtable *, struct arm_v7s_io_pgtable *,
			      struct iommu_iotlb_gather *, unsigned long,
			      size_t, int, arm_v7s_iopte *);

-static int arm_v7s_init_pte(struct arm_v7s_io_pgtable *data,
-			    unsigned long iova, phys_addr_t paddr, int prot,
-			    int lvl, int num_entries, arm_v7s_iopte *ptep)
+static int arm_v7s_init_pte(struct io_pgtable *iop,
+			    struct arm_v7s_io_pgtable *data, unsigned long iova,
+			    phys_addr_t paddr, int prot, int lvl,
+			    int num_entries, arm_v7s_iopte *ptep)
 {
 	struct io_pgtable_cfg *cfg = &data->iop.cfg;
 	arm_v7s_iopte pte;
@@ -446,7 +446,7 @@ static int arm_v7s_init_pte(struct arm_v7s_io_pgtable *data,
			size_t sz = ARM_V7S_BLOCK_SIZE(lvl);

			tblp = ptep - ARM_V7S_LVL_IDX(iova, lvl, cfg);
-			if (WARN_ON(__arm_v7s_unmap(data, NULL, iova + i * sz,
+			if (WARN_ON(__arm_v7s_unmap(iop, data, NULL, iova + i * sz,
						    sz, lvl, tblp) != sz))
				return -EINVAL;
		} else if (ptep[i]) {
@@ -494,9 +494,9 @@ static arm_v7s_iopte arm_v7s_install_table(arm_v7s_iopte *table,
 	return old;
 }

-static int __arm_v7s_map(struct arm_v7s_io_pgtable *data, unsigned long iova,
-			 phys_addr_t paddr, size_t size, int prot,
-			 int lvl, arm_v7s_iopte *ptep, gfp_t gfp)
+static int __arm_v7s_map(struct io_pgtable *iop, struct arm_v7s_io_pgtable *data,
+			 unsigned long iova, phys_addr_t paddr, size_t size,
+			 int prot, int lvl, arm_v7s_iopte *ptep, gfp_t gfp)
 {
 	struct io_pgtable_cfg *cfg = &data->iop.cfg;
 	arm_v7s_iopte pte, *cptep;
@@ -507,7 +507,7 @@ static int __arm_v7s_map(struct arm_v7s_io_pgtable *data, unsigned long iova,

 	/* If we can install a leaf entry at this level, then do so */
 	if (num_entries)
-		return arm_v7s_init_pte(data, iova, paddr, prot,
+		return arm_v7s_init_pte(iop, data, iova, paddr, prot,
					lvl, num_entries, ptep);

 	/* We can't allocate tables at the final level */
@@ -538,14 +538,14 @@ static int __arm_v7s_map(struct arm_v7s_io_pgtable *data, unsigned long iova,
 	}

 	/* Rinse, repeat */
-	return __arm_v7s_map(data, iova, paddr, size, prot, lvl + 1, cptep, gfp);
+	return __arm_v7s_map(iop, data, iova, paddr, size, prot, lvl + 1, cptep, gfp);
 }

-static int arm_v7s_map_pages(struct io_pgtable_ops *ops, unsigned long iova,
+static int arm_v7s_map_pages(struct io_pgtable *iop, unsigned long iova,
			     phys_addr_t paddr, size_t pgsize, size_t pgcount,
			     int prot, gfp_t gfp, size_t *mapped)
 {
-	struct arm_v7s_io_pgtable *data = io_pgtable_ops_to_data(ops);
+	struct arm_v7s_io_pgtable *data = io_pgtable_ops_to_data(iop->ops);
 	int ret = -EINVAL;

 	if (WARN_ON(iova >= (1ULL << data->iop.cfg.ias) ||
@@ -557,8 +557,8 @@ static int arm_v7s_map_pages(struct io_pgtable_ops *ops, unsigned long iova,
 		return 0;

 	while (pgcount--) {
-		ret = __arm_v7s_map(data, iova, paddr, pgsize, prot, 1, data->pgd,
-				    gfp);
+		ret = __arm_v7s_map(iop, data, iova, paddr, pgsize, prot, 1,
+				    iop->pgd, gfp);
 		if (ret)
			break;

@@ -577,26 +577,26 @@ static int arm_v7s_map_pages(struct io_pgtable_ops *ops, unsigned long iova,

 static void arm_v7s_free_pgtable(struct io_pgtable *iop)
 {
-	struct arm_v7s_io_pgtable *data = io_pgtable_to_data(iop);
+	struct arm_v7s_io_pgtable *data = io_pgtable_ops_to_data(iop->ops);
+	arm_v7s_iopte *ptep = iop->pgd;
 	int i;

-	for (i = 0; i < ARM_V7S_PTES_PER_LVL(1, &data->iop.cfg); i++) {
-		arm_v7s_iopte pte = data->pgd[i];
-
-		if (ARM_V7S_PTE_IS_TABLE(pte, 1))
-			__arm_v7s_free_table(iopte_deref(pte, 1, data),
+	for (i = 0; i < ARM_V7S_PTES_PER_LVL(1, &data->iop.cfg); i++, ptep++) {
+		if (ARM_V7S_PTE_IS_TABLE(*ptep, 1))
+			__arm_v7s_free_table(iopte_deref(*ptep, 1, data),
					     2, data);
 	}
-	__arm_v7s_free_table(data->pgd, 1, data);
+	__arm_v7s_free_table(iop->pgd, 1, data);
 	kmem_cache_destroy(data->l2_tables);
 	kfree(data);
 }

-static arm_v7s_iopte arm_v7s_split_cont(struct arm_v7s_io_pgtable *data,
+static arm_v7s_iopte arm_v7s_split_cont(struct io_pgtable *iop,
+					struct arm_v7s_io_pgtable *data,
					unsigned long iova, int idx, int lvl,
					arm_v7s_iopte *ptep)
 {
-	struct io_pgtable *iop = &data->iop;
+	struct io_pgtable_cfg *cfg = &data->iop.cfg;
 	arm_v7s_iopte pte;
 	size_t size = ARM_V7S_BLOCK_SIZE(lvl);
 	int i;
@@ -611,14 +611,15 @@ static arm_v7s_iopte arm_v7s_split_cont(struct arm_v7s_io_pgtable *data,
 	for (i = 0; i < ARM_V7S_CONT_PAGES; i++)
 		ptep[i] = pte + i * size;

-	__arm_v7s_pte_sync(ptep, ARM_V7S_CONT_PAGES, &iop->cfg);
+	__arm_v7s_pte_sync(ptep, ARM_V7S_CONT_PAGES, cfg);

 	size *= ARM_V7S_CONT_PAGES;
-	io_pgtable_tlb_flush_walk(iop, iova, size, size);
+	io_pgtable_tlb_flush_walk(cfg, iop, iova, size, size);
 	return pte;
 }

-static size_t arm_v7s_split_blk_unmap(struct arm_v7s_io_pgtable *data,
+static size_t arm_v7s_split_blk_unmap(struct io_pgtable *iop,
+				      struct arm_v7s_io_pgtable *data,
				      struct iommu_iotlb_gather *gather,
				      unsigned long iova, size_t size,
				      arm_v7s_iopte blk_pte,
@@ -656,27 +657,28 @@ static size_t arm_v7s_split_blk_unmap(struct arm_v7s_io_pgtable *data,
			return 0;

 		tablep = iopte_deref(pte, 1, data);
-		return __arm_v7s_unmap(data, gather, iova, size, 2, tablep);
+		return __arm_v7s_unmap(iop, data, gather, iova, size, 2, tablep);
 	}

-	io_pgtable_tlb_add_page(&data->iop, gather, iova, size);
+	io_pgtable_tlb_add_page(cfg, iop, gather, iova, size);
 	return size;
 }

-static size_t __arm_v7s_unmap(struct arm_v7s_io_pgtable *data,
+static size_t __arm_v7s_unmap(struct io_pgtable *iop,
+			      struct arm_v7s_io_pgtable *data,
			      struct iommu_iotlb_gather *gather,
			      unsigned long iova, size_t size, int lvl,
			      arm_v7s_iopte *ptep)
{ arm_v7s_iopte pte[ARM_V7S_CONT_PAGES]; - struct io_pgtable *iop = &data->iop; + struct io_pgtable_cfg *cfg = &data->iop.cfg; int idx, i = 0, num_entries = size >> ARM_V7S_LVL_SHIFT(lvl); /* Something went horribly wrong and we ran out of page table */ if (WARN_ON(lvl > 2)) return 0; - idx = ARM_V7S_LVL_IDX(iova, lvl, &iop->cfg); + idx = ARM_V7S_LVL_IDX(iova, lvl, cfg); ptep += idx; do { pte[i] = READ_ONCE(ptep[i]); @@ -698,7 +700,7 @@ static size_t __arm_v7s_unmap(struct arm_v7s_io_pgtable *data, unsigned long flags; spin_lock_irqsave(&data->split_lock, flags); - pte[0] = arm_v7s_split_cont(data, iova, idx, lvl, ptep); + pte[0] = arm_v7s_split_cont(iop, data, iova, idx, lvl, ptep); spin_unlock_irqrestore(&data->split_lock, flags); } @@ -706,17 +708,18 @@ static size_t __arm_v7s_unmap(struct arm_v7s_io_pgtable *data, if (num_entries) { size_t blk_size = ARM_V7S_BLOCK_SIZE(lvl); - __arm_v7s_set_pte(ptep, 0, num_entries, &iop->cfg); + __arm_v7s_set_pte(ptep, 0, num_entries, cfg); for (i = 0; i < num_entries; i++) { if (ARM_V7S_PTE_IS_TABLE(pte[i], lvl)) { /* Also flush any partial walks */ - io_pgtable_tlb_flush_walk(iop, iova, blk_size, + io_pgtable_tlb_flush_walk(cfg, iop, iova, blk_size, ARM_V7S_BLOCK_SIZE(lvl + 1)); ptep = iopte_deref(pte[i], lvl, data); __arm_v7s_free_table(ptep, lvl + 1, data); } else if (!iommu_iotlb_gather_queued(gather)) { - io_pgtable_tlb_add_page(iop, gather, iova, blk_size); + io_pgtable_tlb_add_page(cfg, iop, gather, iova, + blk_size); } iova += blk_size; } @@ -726,27 +729,27 @@ static size_t __arm_v7s_unmap(struct arm_v7s_io_pgtable *data, * Insert a table at the next level to map the old region, * minus the part we want to unmap */ - return arm_v7s_split_blk_unmap(data, gather, iova, size, pte[0], - ptep); + return arm_v7s_split_blk_unmap(iop, data, gather, iova, size, + pte[0], ptep); } /* Keep on walkin' */ ptep = iopte_deref(pte[0], lvl, data); - return __arm_v7s_unmap(data, gather, iova, size, lvl + 1, ptep); + return 
__arm_v7s_unmap(iop, data, gather, iova, size, lvl + 1, ptep); } -static size_t arm_v7s_unmap_pages(struct io_pgtable_ops *ops, unsigned long iova, +static size_t arm_v7s_unmap_pages(struct io_pgtable *iop, unsigned long iova, size_t pgsize, size_t pgcount, struct iommu_iotlb_gather *gather) { - struct arm_v7s_io_pgtable *data = io_pgtable_ops_to_data(ops); + struct arm_v7s_io_pgtable *data = io_pgtable_ops_to_data(iop->ops); size_t unmapped = 0, ret; if (WARN_ON(iova >= (1ULL << data->iop.cfg.ias))) return 0; while (pgcount--) { - ret = __arm_v7s_unmap(data, gather, iova, pgsize, 1, data->pgd); + ret = __arm_v7s_unmap(iop, data, gather, iova, pgsize, 1, iop->pgd); if (!ret) break; @@ -757,11 +760,11 @@ static size_t arm_v7s_unmap_pages(struct io_pgtable_ops *ops, unsigned long iova return unmapped; } -static phys_addr_t arm_v7s_iova_to_phys(struct io_pgtable_ops *ops, +static phys_addr_t arm_v7s_iova_to_phys(struct io_pgtable *iop, unsigned long iova) { - struct arm_v7s_io_pgtable *data = io_pgtable_ops_to_data(ops); - arm_v7s_iopte *ptep = data->pgd, pte; + struct arm_v7s_io_pgtable *data = io_pgtable_ops_to_data(iop->ops); + arm_v7s_iopte *ptep = iop->pgd, pte; int lvl = 0; u32 mask; @@ -780,37 +783,37 @@ static phys_addr_t arm_v7s_iova_to_phys(struct io_pgtable_ops *ops, return iopte_to_paddr(pte, lvl, &data->iop.cfg) | (iova & ~mask); } -static struct io_pgtable *arm_v7s_alloc_pgtable(struct io_pgtable_cfg *cfg, - void *cookie) +static int arm_v7s_alloc_pgtable(struct io_pgtable *iop, + struct io_pgtable_cfg *cfg, void *cookie) { struct arm_v7s_io_pgtable *data; slab_flags_t slab_flag; phys_addr_t paddr; if (cfg->ias > (arm_v7s_is_mtk_enabled(cfg) ? 34 : ARM_V7S_ADDR_BITS)) - return NULL; + return -EINVAL; if (cfg->oas > (arm_v7s_is_mtk_enabled(cfg) ? 
35 : ARM_V7S_ADDR_BITS)) - return NULL; + return -EINVAL; if (cfg->quirks & ~(IO_PGTABLE_QUIRK_ARM_NS | IO_PGTABLE_QUIRK_NO_PERMS | IO_PGTABLE_QUIRK_ARM_MTK_EXT | IO_PGTABLE_QUIRK_ARM_MTK_TTBR_EXT)) - return NULL; + return -EINVAL; /* If ARM_MTK_4GB is enabled, the NO_PERMS is also expected. */ if (cfg->quirks & IO_PGTABLE_QUIRK_ARM_MTK_EXT && !(cfg->quirks & IO_PGTABLE_QUIRK_NO_PERMS)) - return NULL; + return -EINVAL; if ((cfg->quirks & IO_PGTABLE_QUIRK_ARM_MTK_TTBR_EXT) && !arm_v7s_is_mtk_enabled(cfg)) - return NULL; + return -EINVAL; data = kmalloc(sizeof(*data), GFP_KERNEL); if (!data) - return NULL; + return -ENOMEM; spin_lock_init(&data->split_lock); @@ -860,15 +863,15 @@ static struct io_pgtable *arm_v7s_alloc_pgtable(struct io_pgtable_cfg *cfg, ARM_V7S_NMRR_OR(7, ARM_V7S_RGN_WBWA); /* Looking good; allocate a pgd */ - data->pgd = __arm_v7s_alloc_table(1, GFP_KERNEL, data); - if (!data->pgd) + iop->pgd = __arm_v7s_alloc_table(1, GFP_KERNEL, data); + if (!iop->pgd) goto out_free_data; /* Ensure the empty pgd is visible before any actual TTBR write */ wmb(); /* TTBR */ - paddr = virt_to_phys(data->pgd); + paddr = virt_to_phys(iop->pgd); if (arm_v7s_is_mtk_enabled(cfg)) cfg->arm_v7s_cfg.ttbr = paddr | upper_32_bits(paddr); else @@ -878,12 +881,13 @@ static struct io_pgtable *arm_v7s_alloc_pgtable(struct io_pgtable_cfg *cfg, ARM_V7S_TTBR_ORGN_ATTR(ARM_V7S_RGN_WBWA)) : (ARM_V7S_TTBR_IRGN_ATTR(ARM_V7S_RGN_NC) | ARM_V7S_TTBR_ORGN_ATTR(ARM_V7S_RGN_NC))); - return &data->iop; + iop->ops = &data->iop.ops; + return 0; out_free_data: kmem_cache_destroy(data->l2_tables); kfree(data); - return NULL; + return -EINVAL; } struct io_pgtable_init_fns io_pgtable_arm_v7s_init_fns = { @@ -920,7 +924,7 @@ static const struct iommu_flush_ops dummy_tlb_ops __initconst = { .tlb_add_page = dummy_tlb_add_page, }; -#define __FAIL(ops) ({ \ +#define __FAIL() ({ \ WARN(1, "selftest: test failed\n"); \ selftest_running = false; \ -EFAULT; \ @@ -928,7 +932,7 @@ static const struct 
iommu_flush_ops dummy_tlb_ops __initconst = { static int __init arm_v7s_do_selftests(void) { - struct io_pgtable_ops *ops; + struct io_pgtable iop; struct io_pgtable_cfg cfg = { .fmt = ARM_V7S, .tlb = &dummy_tlb_ops, @@ -946,8 +950,7 @@ static int __init arm_v7s_do_selftests(void) cfg_cookie = &cfg; - ops = alloc_io_pgtable_ops(&cfg, &cfg); - if (!ops) { + if (alloc_io_pgtable_ops(&iop, &cfg, &cfg)) { pr_err("selftest: failed to allocate io pgtable ops\n"); return -EINVAL; } @@ -956,14 +959,14 @@ static int __init arm_v7s_do_selftests(void) * Initial sanity checks. * Empty page tables shouldn't provide any translations. */ - if (ops->iova_to_phys(ops, 42)) - return __FAIL(ops); + if (iopt_iova_to_phys(&iop, 42)) + return __FAIL(); - if (ops->iova_to_phys(ops, SZ_1G + 42)) - return __FAIL(ops); + if (iopt_iova_to_phys(&iop, SZ_1G + 42)) + return __FAIL(); - if (ops->iova_to_phys(ops, SZ_2G + 42)) - return __FAIL(ops); + if (iopt_iova_to_phys(&iop, SZ_2G + 42)) + return __FAIL(); /* * Distinct mappings of different granule sizes. 
@@ -971,20 +974,20 @@ static int __init arm_v7s_do_selftests(void)
 	iova = 0;
 	for_each_set_bit(i, &cfg.pgsize_bitmap, BITS_PER_LONG) {
 		size = 1UL << i;
-		if (ops->map_pages(ops, iova, iova, size, 1,
+		if (iopt_map_pages(&iop, iova, iova, size, 1,
 				   IOMMU_READ | IOMMU_WRITE |
 				   IOMMU_NOEXEC | IOMMU_CACHE,
 				   GFP_KERNEL, &mapped))
-			return __FAIL(ops);
+			return __FAIL();

 		/* Overlapping mappings */
-		if (!ops->map_pages(ops, iova, iova + size, size, 1,
+		if (!iopt_map_pages(&iop, iova, iova + size, size, 1,
 				    IOMMU_READ | IOMMU_NOEXEC,
 				    GFP_KERNEL, &mapped))
-			return __FAIL(ops);
+			return __FAIL();

-		if (ops->iova_to_phys(ops, iova + 42) != (iova + 42))
-			return __FAIL(ops);
+		if (iopt_iova_to_phys(&iop, iova + 42) != (iova + 42))
+			return __FAIL();

 		iova += SZ_16M;
 		loopnr++;
@@ -995,17 +998,17 @@ static int __init arm_v7s_do_selftests(void)
 	size = 1UL << __ffs(cfg.pgsize_bitmap);
 	while (i < loopnr) {
 		iova_start = i * SZ_16M;
-		if (ops->unmap_pages(ops, iova_start + size, size, 1, NULL) != size)
-			return __FAIL(ops);
+		if (iopt_unmap_pages(&iop, iova_start + size, size, 1, NULL) != size)
+			return __FAIL();

 		/* Remap of partial unmap */
-		if (ops->map_pages(ops, iova_start + size, size, size, 1,
+		if (iopt_map_pages(&iop, iova_start + size, size, size, 1,
 				   IOMMU_READ, GFP_KERNEL, &mapped))
-			return __FAIL(ops);
+			return __FAIL();

-		if (ops->iova_to_phys(ops, iova_start + size + 42)
+		if (iopt_iova_to_phys(&iop, iova_start + size + 42)
 		    != (size + 42))
-			return __FAIL(ops);
+			return __FAIL();

 		i++;
 	}
@@ -1014,24 +1017,24 @@ static int __init arm_v7s_do_selftests(void)
 	for_each_set_bit(i, &cfg.pgsize_bitmap, BITS_PER_LONG) {
 		size = 1UL << i;

-		if (ops->unmap_pages(ops, iova, size, 1, NULL) != size)
-			return __FAIL(ops);
+		if (iopt_unmap_pages(&iop, iova, size, 1, NULL) != size)
+			return __FAIL();

-		if (ops->iova_to_phys(ops, iova + 42))
-			return __FAIL(ops);
+		if (iopt_iova_to_phys(&iop, iova + 42))
+			return __FAIL();

 		/* Remap full block */
-		if (ops->map_pages(ops, iova, iova, size, 1, IOMMU_WRITE,
+		if (iopt_map_pages(&iop, iova, iova, size, 1, IOMMU_WRITE,
 				   GFP_KERNEL, &mapped))
-			return __FAIL(ops);
+			return __FAIL();

-		if (ops->iova_to_phys(ops, iova + 42) != (iova + 42))
-			return __FAIL(ops);
+		if (iopt_iova_to_phys(&iop, iova + 42) != (iova + 42))
+			return __FAIL();

 		iova += SZ_16M;
 	}

-	free_io_pgtable_ops(ops);
+	free_io_pgtable_ops(&iop);

 	selftest_running = false;
diff --git a/drivers/iommu/io-pgtable-arm.c b/drivers/iommu/io-pgtable-arm.c
index c412500efadf..bee8980c89eb 100644
--- a/drivers/iommu/io-pgtable-arm.c
+++ b/drivers/iommu/io-pgtable-arm.c
@@ -82,40 +82,40 @@ void __arm_lpae_sync_pte(arm_lpae_iopte *ptep, int num_entries,

 static void arm_lpae_free_pgtable(struct io_pgtable *iop)
 {
-	struct arm_lpae_io_pgtable *data = io_pgtable_to_data(iop);
+	struct arm_lpae_io_pgtable *data = io_pgtable_ops_to_data(iop->ops);

-	__arm_lpae_free_pgtable(data, data->start_level, data->pgd);
+	__arm_lpae_free_pgtable(data, data->start_level, iop->pgd);
 	kfree(data);
 }

-static struct io_pgtable *
-arm_64_lpae_alloc_pgtable_s1(struct io_pgtable_cfg *cfg, void *cookie)
+int arm_64_lpae_alloc_pgtable_s1(struct io_pgtable *iop,
+				 struct io_pgtable_cfg *cfg, void *cookie)
 {
 	struct arm_lpae_io_pgtable *data;

 	data = kzalloc(sizeof(*data), GFP_KERNEL);
 	if (!data)
-		return NULL;
+		return -ENOMEM;

 	if (arm_lpae_init_pgtable_s1(cfg, data))
 		goto out_free_data;

 	/* Looking good; allocate a pgd */
-	data->pgd = __arm_lpae_alloc_pages(ARM_LPAE_PGD_SIZE(data),
-					   GFP_KERNEL, cfg);
-	if (!data->pgd)
+	iop->pgd = __arm_lpae_alloc_pages(ARM_LPAE_PGD_SIZE(data),
+					  GFP_KERNEL, cfg);
+	if (!iop->pgd)
 		goto out_free_data;

 	/* Ensure the empty pgd is visible before any actual TTBR write */
 	wmb();

-	/* TTBR */
-	cfg->arm_lpae_s1_cfg.ttbr = virt_to_phys(data->pgd);
-	return &data->iop;
+	cfg->arm_lpae_s1_cfg.ttbr = virt_to_phys(iop->pgd);
+	iop->ops = &data->iop.ops;
+	return 0;

out_free_data:
 	kfree(data);
-	return NULL;
+	return -EINVAL;
 }

 static int arm_64_lpae_configure_s1(struct io_pgtable_cfg *cfg, size_t *pgd_size)
@@ -130,34 +130,35 @@ static int arm_64_lpae_configure_s1(struct io_pgtable_cfg *cfg, size_t *pgd_size
 	return 0;
 }

-static struct io_pgtable *
-arm_64_lpae_alloc_pgtable_s2(struct io_pgtable_cfg *cfg, void *cookie)
+int arm_64_lpae_alloc_pgtable_s2(struct io_pgtable *iop,
+				 struct io_pgtable_cfg *cfg, void *cookie)
 {
 	struct arm_lpae_io_pgtable *data;

 	data = kzalloc(sizeof(*data), GFP_KERNEL);
 	if (!data)
-		return NULL;
+		return -ENOMEM;

 	if (arm_lpae_init_pgtable_s2(cfg, data))
 		goto out_free_data;

 	/* Allocate pgd pages */
-	data->pgd = __arm_lpae_alloc_pages(ARM_LPAE_PGD_SIZE(data),
-					   GFP_KERNEL, cfg);
-	if (!data->pgd)
+	iop->pgd = __arm_lpae_alloc_pages(ARM_LPAE_PGD_SIZE(data),
+					  GFP_KERNEL, cfg);
+	if (!iop->pgd)
 		goto out_free_data;

 	/* Ensure the empty pgd is visible before any actual TTBR write */
 	wmb();

 	/* VTTBR */
-	cfg->arm_lpae_s2_cfg.vttbr = virt_to_phys(data->pgd);
-	return &data->iop;
+	cfg->arm_lpae_s2_cfg.vttbr = virt_to_phys(iop->pgd);
+	iop->ops = &data->iop.ops;
+	return 0;

out_free_data:
 	kfree(data);
-	return NULL;
+	return -EINVAL;
 }

 static int arm_64_lpae_configure_s2(struct io_pgtable_cfg *cfg, size_t *pgd_size)
@@ -172,46 +173,46 @@ static int arm_64_lpae_configure_s2(struct io_pgtable_cfg *cfg, size_t *pgd_size
 	return 0;
 }

-static struct io_pgtable *
-arm_32_lpae_alloc_pgtable_s1(struct io_pgtable_cfg *cfg, void *cookie)
+int arm_32_lpae_alloc_pgtable_s1(struct io_pgtable *iop,
+				 struct io_pgtable_cfg *cfg, void *cookie)
 {
 	if (cfg->ias > 32 || cfg->oas > 40)
-		return NULL;
+		return -EINVAL;

 	cfg->pgsize_bitmap &= (SZ_4K | SZ_2M | SZ_1G);
-	return arm_64_lpae_alloc_pgtable_s1(cfg, cookie);
+	return arm_64_lpae_alloc_pgtable_s1(iop, cfg, cookie);
 }

-static struct io_pgtable *
-arm_32_lpae_alloc_pgtable_s2(struct io_pgtable_cfg *cfg, void *cookie)
+int arm_32_lpae_alloc_pgtable_s2(struct io_pgtable *iop,
+				 struct io_pgtable_cfg *cfg, void *cookie)
 {
 	if (cfg->ias > 40 || cfg->oas > 40)
-		return NULL;
+		return -EINVAL;

 	cfg->pgsize_bitmap &= (SZ_4K | SZ_2M | SZ_1G);
-	return arm_64_lpae_alloc_pgtable_s2(cfg, cookie);
+	return arm_64_lpae_alloc_pgtable_s2(iop, cfg, cookie);
 }

-static struct io_pgtable *
-arm_mali_lpae_alloc_pgtable(struct io_pgtable_cfg *cfg, void *cookie)
+int arm_mali_lpae_alloc_pgtable(struct io_pgtable *iop,
+				struct io_pgtable_cfg *cfg, void *cookie)
 {
 	struct arm_lpae_io_pgtable *data;

 	/* No quirks for Mali (hopefully) */
 	if (cfg->quirks)
-		return NULL;
+		return -EINVAL;

 	if (cfg->ias > 48 || cfg->oas > 40)
-		return NULL;
+		return -EINVAL;

 	cfg->pgsize_bitmap &= (SZ_4K | SZ_2M | SZ_1G);

 	data = kzalloc(sizeof(*data), GFP_KERNEL);
 	if (!data)
-		return NULL;
+		return -ENOMEM;

 	if (arm_lpae_init_pgtable(cfg, data))
-		return NULL;
+		goto out_free_data;

 	/* Mali seems to need a full 4-level table regardless of IAS */
 	if (data->start_level > 0) {
@@ -233,25 +234,26 @@ arm_mali_lpae_alloc_pgtable(struct io_pgtable_cfg *cfg, void *cookie)
 		(ARM_MALI_LPAE_MEMATTR_IMP_DEF
 		 << ARM_LPAE_MAIR_ATTR_SHIFT(ARM_LPAE_MAIR_ATTR_IDX_DEV));

-	data->pgd = __arm_lpae_alloc_pages(ARM_LPAE_PGD_SIZE(data), GFP_KERNEL,
-					   cfg);
-	if (!data->pgd)
+	iop->pgd = __arm_lpae_alloc_pages(ARM_LPAE_PGD_SIZE(data), GFP_KERNEL,
+					  cfg);
+	if (!iop->pgd)
 		goto out_free_data;

 	/* Ensure the empty pgd is visible before TRANSTAB can be written */
 	wmb();

-	cfg->arm_mali_lpae_cfg.transtab = virt_to_phys(data->pgd) |
+	cfg->arm_mali_lpae_cfg.transtab = virt_to_phys(iop->pgd) |
 					  ARM_MALI_LPAE_TTBR_READ_INNER |
 					  ARM_MALI_LPAE_TTBR_ADRMODE_TABLE;
 	if (cfg->coherent_walk)
 		cfg->arm_mali_lpae_cfg.transtab |= ARM_MALI_LPAE_TTBR_SHARE_OUTER;

-	return &data->iop;
+	iop->ops = &data->iop.ops;
+	return 0;

out_free_data:
 	kfree(data);
-	return NULL;
+	return -EINVAL;
 }

 struct io_pgtable_init_fns io_pgtable_arm_64_lpae_s1_init_fns = {
@@ -310,21 +312,21 @@ static const struct iommu_flush_ops dummy_tlb_ops __initconst = {
 	.tlb_add_page	= dummy_tlb_add_page,
 };

-static void __init arm_lpae_dump_ops(struct io_pgtable_ops *ops)
+static void __init arm_lpae_dump_ops(struct io_pgtable *iop)
 {
-	struct arm_lpae_io_pgtable *data = io_pgtable_ops_to_data(ops);
+	struct arm_lpae_io_pgtable *data = io_pgtable_ops_to_data(iop->ops);
 	struct io_pgtable_cfg *cfg = &data->iop.cfg;

 	pr_err("cfg: pgsize_bitmap 0x%lx, ias %u-bit\n",
 		cfg->pgsize_bitmap, cfg->ias);
 	pr_err("data: %d levels, 0x%zx pgd_size, %u pg_shift, %u bits_per_level, pgd @ %p\n",
 		ARM_LPAE_MAX_LEVELS - data->start_level, ARM_LPAE_PGD_SIZE(data),
-		ilog2(ARM_LPAE_GRANULE(data)), data->bits_per_level, data->pgd);
+		ilog2(ARM_LPAE_GRANULE(data)), data->bits_per_level, iop->pgd);
 }

-#define __FAIL(ops, i)	({						\
+#define __FAIL(iop, i)	({						\
 		WARN(1, "selftest: test failed for fmt idx %d\n", (i));	\
-		arm_lpae_dump_ops(ops);					\
+		arm_lpae_dump_ops(iop);					\
 		selftest_running = false;				\
 		-EFAULT;						\
})
@@ -336,34 +338,34 @@ static int __init arm_lpae_run_tests(struct io_pgtable_cfg *cfg)
 		ARM_64_LPAE_S2,
 	};

-	int i, j;
+	int i, j, ret;
 	unsigned long iova;
 	size_t size, mapped;
-	struct io_pgtable_ops *ops;
+	struct io_pgtable iop;

 	selftest_running = true;

 	for (i = 0; i < ARRAY_SIZE(fmts); ++i) {
 		cfg_cookie = cfg;
 		cfg->fmt = fmts[i];
-		ops = alloc_io_pgtable_ops(cfg, cfg);
-		if (!ops) {
+		ret = alloc_io_pgtable_ops(&iop, cfg, cfg);
+		if (ret) {
 			pr_err("selftest: failed to allocate io pgtable ops\n");
-			return -ENOMEM;
+			return ret;
 		}

 		/*
 		 * Initial sanity checks.
 		 * Empty page tables shouldn't provide any translations.
 		 */
-		if (ops->iova_to_phys(ops, 42))
-			return __FAIL(ops, i);
+		if (iopt_iova_to_phys(&iop, 42))
+			return __FAIL(&iop, i);

-		if (ops->iova_to_phys(ops, SZ_1G + 42))
-			return __FAIL(ops, i);
+		if (iopt_iova_to_phys(&iop, SZ_1G + 42))
+			return __FAIL(&iop, i);

-		if (ops->iova_to_phys(ops, SZ_2G + 42))
-			return __FAIL(ops, i);
+		if (iopt_iova_to_phys(&iop, SZ_2G + 42))
+			return __FAIL(&iop, i);

 		/*
 		 * Distinct mappings of different granule sizes.
@@ -372,60 +374,60 @@ static int __init arm_lpae_run_tests(struct io_pgtable_cfg *cfg)
 		for_each_set_bit(j, &cfg->pgsize_bitmap, BITS_PER_LONG) {
 			size = 1UL << j;

-			if (ops->map_pages(ops, iova, iova, size, 1,
+			if (iopt_map_pages(&iop, iova, iova, size, 1,
 					   IOMMU_READ | IOMMU_WRITE |
 					   IOMMU_NOEXEC | IOMMU_CACHE,
 					   GFP_KERNEL, &mapped))
-				return __FAIL(ops, i);
+				return __FAIL(&iop, i);

 			/* Overlapping mappings */
-			if (!ops->map_pages(ops, iova, iova + size, size, 1,
+			if (!iopt_map_pages(&iop, iova, iova + size, size, 1,
 					    IOMMU_READ | IOMMU_NOEXEC,
 					    GFP_KERNEL, &mapped))
-				return __FAIL(ops, i);
+				return __FAIL(&iop, i);

-			if (ops->iova_to_phys(ops, iova + 42) != (iova + 42))
-				return __FAIL(ops, i);
+			if (iopt_iova_to_phys(&iop, iova + 42) != (iova + 42))
+				return __FAIL(&iop, i);

 			iova += SZ_1G;
 		}

 		/* Partial unmap */
 		size = 1UL << __ffs(cfg->pgsize_bitmap);
-		if (ops->unmap_pages(ops, SZ_1G + size, size, 1, NULL) != size)
-			return __FAIL(ops, i);
+		if (iopt_unmap_pages(&iop, SZ_1G + size, size, 1, NULL) != size)
+			return __FAIL(&iop, i);

 		/* Remap of partial unmap */
-		if (ops->map_pages(ops, SZ_1G + size, size, size, 1,
+		if (iopt_map_pages(&iop, SZ_1G + size, size, size, 1,
 				   IOMMU_READ, GFP_KERNEL, &mapped))
-			return __FAIL(ops, i);
+			return __FAIL(&iop, i);

-		if (ops->iova_to_phys(ops, SZ_1G + size + 42) != (size + 42))
-			return __FAIL(ops, i);
+		if (iopt_iova_to_phys(&iop, SZ_1G + size + 42) != (size + 42))
+			return __FAIL(&iop, i);

 		/* Full unmap */
 		iova = 0;
 		for_each_set_bit(j, &cfg->pgsize_bitmap, BITS_PER_LONG) {
 			size = 1UL << j;

-			if (ops->unmap_pages(ops, iova, size, 1, NULL) != size)
-				return __FAIL(ops, i);
+			if (iopt_unmap_pages(&iop, iova, size, 1, NULL) != size)
+				return __FAIL(&iop, i);

-			if (ops->iova_to_phys(ops, iova + 42))
-				return __FAIL(ops, i);
+			if (iopt_iova_to_phys(&iop, iova + 42))
+				return __FAIL(&iop, i);

 			/* Remap full block */
-			if (ops->map_pages(ops, iova, iova, size, 1,
+			if (iopt_map_pages(&iop, iova, iova, size, 1, IOMMU_WRITE,
 				   GFP_KERNEL, &mapped))
-				return __FAIL(ops, i);
+				return __FAIL(&iop, i);

-			if (ops->iova_to_phys(ops, iova + 42) != (iova + 42))
-				return __FAIL(ops, i);
+			if (iopt_iova_to_phys(&iop, iova + 42) != (iova + 42))
+				return __FAIL(&iop, i);

 			iova += SZ_1G;
 		}

-		free_io_pgtable_ops(ops);
+		free_io_pgtable_ops(&iop);
 	}

 	selftest_running = false;
diff --git a/drivers/iommu/io-pgtable-dart.c b/drivers/iommu/io-pgtable-dart.c
index f981b25d8c98..1bb2e91ed0a7 100644
--- a/drivers/iommu/io-pgtable-dart.c
+++ b/drivers/iommu/io-pgtable-dart.c
@@ -34,7 +34,7 @@
 	container_of((x), struct dart_io_pgtable, iop)

 #define io_pgtable_ops_to_data(x)					\
-	io_pgtable_to_data(io_pgtable_ops_to_pgtable(x))
+	io_pgtable_to_data(io_pgtable_ops_to_params(x))

 #define DART_GRANULE(d)						\
 	(sizeof(dart_iopte) << (d)->bits_per_level)
@@ -65,12 +65,10 @@
 #define iopte_deref(pte, d) __va(iopte_to_paddr(pte, d))

 struct dart_io_pgtable {
-	struct io_pgtable	iop;
+	struct io_pgtable_params	iop;

-	int			tbl_bits;
-	int			bits_per_level;
-
-	void			*pgd[DART_MAX_TABLES];
+	int			tbl_bits;
+	int			bits_per_level;
 };

 typedef u64 dart_iopte;
@@ -170,10 +168,14 @@ static dart_iopte dart_install_table(dart_iopte *table,
 	return old;
 }

-static int dart_get_table(struct dart_io_pgtable *data, unsigned long iova)
+static dart_iopte *dart_get_table(struct io_pgtable *iop,
+				  struct dart_io_pgtable *data,
+				  unsigned long iova)
 {
-	return (iova >> (3 * data->bits_per_level + ilog2(sizeof(dart_iopte)))) &
+	int tbl = (iova >> (3 * data->bits_per_level + ilog2(sizeof(dart_iopte)))) &
 		((1 << data->tbl_bits) - 1);
+
+	return iop->pgd + DART_GRANULE(data) * tbl;
 }

 static int dart_get_l1_index(struct dart_io_pgtable *data, unsigned long iova)
@@ -190,12 +192,12 @@ static int dart_get_l2_index(struct dart_io_pgtable *data, unsigned long iova)
 		((1 << data->bits_per_level) - 1);
 }

-static dart_iopte *dart_get_l2(struct dart_io_pgtable *data, unsigned long iova)
+static dart_iopte *dart_get_l2(struct io_pgtable *iop,
+			       struct dart_io_pgtable *data, unsigned long iova)
 {
 	dart_iopte pte, *ptep;
-	int tbl = dart_get_table(data, iova);

-	ptep = data->pgd[tbl];
+	ptep = dart_get_table(iop, data, iova);
 	if (!ptep)
 		return NULL;

@@ -233,14 +235,14 @@ static dart_iopte dart_prot_to_pte(struct dart_io_pgtable *data,
 	return pte;
 }

-static int dart_map_pages(struct io_pgtable_ops *ops, unsigned long iova,
+static int dart_map_pages(struct io_pgtable *iop, unsigned long iova,
 			  phys_addr_t paddr, size_t pgsize, size_t pgcount,
 			  int iommu_prot, gfp_t gfp, size_t *mapped)
 {
-	struct dart_io_pgtable *data = io_pgtable_ops_to_data(ops);
+	struct dart_io_pgtable *data = io_pgtable_ops_to_data(iop->ops);
 	struct io_pgtable_cfg *cfg = &data->iop.cfg;
 	size_t tblsz = DART_GRANULE(data);
-	int ret = 0, tbl, num_entries, max_entries, map_idx_start;
+	int ret = 0, num_entries, max_entries, map_idx_start;
 	dart_iopte pte, *cptep, *ptep;
 	dart_iopte prot;
@@ -254,9 +256,7 @@ static int dart_map_pages(struct io_pgtable_ops *ops, unsigned long iova,
 	if (!(iommu_prot & (IOMMU_READ | IOMMU_WRITE)))
 		return 0;

-	tbl = dart_get_table(data, iova);
-
-	ptep = data->pgd[tbl];
+	ptep = dart_get_table(iop, data, iova);
 	ptep += dart_get_l1_index(data, iova);
 	pte = READ_ONCE(*ptep);
@@ -295,11 +295,11 @@ static int dart_map_pages(struct io_pgtable_ops *ops, unsigned long iova,
 	return ret;
 }

-static size_t dart_unmap_pages(struct io_pgtable_ops *ops, unsigned long iova,
+static size_t dart_unmap_pages(struct io_pgtable *iop, unsigned long iova,
 			       size_t pgsize, size_t pgcount,
 			       struct iommu_iotlb_gather *gather)
 {
-	struct dart_io_pgtable *data = io_pgtable_ops_to_data(ops);
+	struct dart_io_pgtable *data = io_pgtable_ops_to_data(iop->ops);
 	struct io_pgtable_cfg *cfg = &data->iop.cfg;
 	int i = 0, num_entries, max_entries, unmap_idx_start;
 	dart_iopte pte, *ptep;
@@ -307,7 +307,7 @@ static size_t dart_unmap_pages(struct io_pgtable_ops *ops, unsigned long iova,
 	if (WARN_ON(pgsize != cfg->pgsize_bitmap || !pgcount))
 		return 0;

-	ptep = dart_get_l2(data, iova);
+	ptep = dart_get_l2(iop, data, iova);

 	/* Valid L2 IOPTE pointer? */
 	if (WARN_ON(!ptep))
@@ -328,7 +328,7 @@ static size_t dart_unmap_pages(struct io_pgtable_ops *ops, unsigned long iova,
 		*ptep = 0;

 		if (!iommu_iotlb_gather_queued(gather))
-			io_pgtable_tlb_add_page(&data->iop, gather,
+			io_pgtable_tlb_add_page(cfg, iop, gather,
 						iova + i * pgsize, pgsize);

 		ptep++;
@@ -338,13 +338,13 @@ static size_t dart_unmap_pages(struct io_pgtable_ops *ops, unsigned long iova,
 	return i * pgsize;
 }

-static phys_addr_t dart_iova_to_phys(struct io_pgtable_ops *ops,
+static phys_addr_t dart_iova_to_phys(struct io_pgtable *iop,
 				     unsigned long iova)
 {
-	struct dart_io_pgtable *data = io_pgtable_ops_to_data(ops);
+	struct dart_io_pgtable *data = io_pgtable_ops_to_data(iop->ops);
 	dart_iopte pte, *ptep;

-	ptep = dart_get_l2(data, iova);
+	ptep = dart_get_l2(iop, data, iova);

 	/* Valid L2 IOPTE pointer? */
 	if (!ptep)
@@ -394,56 +394,56 @@ dart_alloc_pgtable(struct io_pgtable_cfg *cfg)
 	return data;
 }

-static struct io_pgtable *
-apple_dart_alloc_pgtable(struct io_pgtable_cfg *cfg, void *cookie)
+static int apple_dart_alloc_pgtable(struct io_pgtable *iop,
+				    struct io_pgtable_cfg *cfg, void *cookie)
 {
 	struct dart_io_pgtable *data;
 	int i;

 	if (!cfg->coherent_walk)
-		return NULL;
+		return -EINVAL;

 	if (cfg->oas != 36 && cfg->oas != 42)
-		return NULL;
+		return -EINVAL;

 	if (cfg->ias > cfg->oas)
-		return NULL;
+		return -EINVAL;

 	if (!(cfg->pgsize_bitmap == SZ_4K || cfg->pgsize_bitmap == SZ_16K))
-		return NULL;
+		return -EINVAL;

 	data = dart_alloc_pgtable(cfg);
 	if (!data)
-		return NULL;
+		return -ENOMEM;

 	cfg->apple_dart_cfg.n_ttbrs = 1 << data->tbl_bits;

-	for (i = 0; i < cfg->apple_dart_cfg.n_ttbrs; ++i) {
-		data->pgd[i] = __dart_alloc_pages(DART_GRANULE(data), GFP_KERNEL,
-						  cfg);
-		if (!data->pgd[i])
-			goto out_free_data;
-		cfg->apple_dart_cfg.ttbr[i] = virt_to_phys(data->pgd[i]);
-	}
+	iop->pgd = __dart_alloc_pages(cfg->apple_dart_cfg.n_ttbrs *
+				      DART_GRANULE(data), GFP_KERNEL, cfg);
+	if (!iop->pgd)
+		goto out_free_data;
+
+	for (i = 0; i < cfg->apple_dart_cfg.n_ttbrs; ++i)
+		cfg->apple_dart_cfg.ttbr[i] = virt_to_phys(iop->pgd) +
+					      i * DART_GRANULE(data);

-	return &data->iop;
+	iop->ops = &data->iop.ops;
+	return 0;

out_free_data:
-	while (--i >= 0)
-		free_pages((unsigned long)data->pgd[i],
-			   get_order(DART_GRANULE(data)));
 	kfree(data);
-	return NULL;
+	return -ENOMEM;
 }

 static void apple_dart_free_pgtable(struct io_pgtable *iop)
 {
-	struct dart_io_pgtable *data = io_pgtable_to_data(iop);
+	struct dart_io_pgtable *data = io_pgtable_ops_to_data(iop->ops);
+	size_t n_ttbrs = 1 << data->tbl_bits;
 	dart_iopte *ptep, *end;
 	int i;

-	for (i = 0; i < (1 << data->tbl_bits) && data->pgd[i]; ++i) {
-		ptep = data->pgd[i];
+	for (i = 0; i < n_ttbrs; ++i) {
+		ptep = iop->pgd + DART_GRANULE(data) * i;
 		end = (void *)ptep + DART_GRANULE(data);

 		while (ptep != end) {
@@ -456,10 +456,9 @@ static void apple_dart_free_pgtable(struct io_pgtable *iop)
 				free_pages(page, get_order(DART_GRANULE(data)));
 			}
 		}
-		free_pages((unsigned long)data->pgd[i],
-			   get_order(DART_GRANULE(data)));
 	}
-
+	free_pages((unsigned long)iop->pgd,
+		   get_order(DART_GRANULE(data) * n_ttbrs));
 	kfree(data);
 }

diff --git a/drivers/iommu/io-pgtable.c b/drivers/iommu/io-pgtable.c
index 2aba691db1da..acc6802b2f50 100644
--- a/drivers/iommu/io-pgtable.c
+++ b/drivers/iommu/io-pgtable.c
@@ -34,27 +34,30 @@ io_pgtable_init_table[IO_PGTABLE_NUM_FMTS] = {
 #endif
 };

-struct io_pgtable_ops *alloc_io_pgtable_ops(struct io_pgtable_cfg *cfg,
-					    void *cookie)
+int alloc_io_pgtable_ops(struct io_pgtable *iop, struct io_pgtable_cfg *cfg,
+			 void *cookie)
 {
-	struct io_pgtable *iop;
+	int ret;
+	struct io_pgtable_params *params;
 	const struct io_pgtable_init_fns *fns;

 	if (cfg->fmt >= IO_PGTABLE_NUM_FMTS)
-		return NULL;
+		return -EINVAL;

 	fns = io_pgtable_init_table[cfg->fmt];
 	if (!fns)
-		return NULL;
+		return -EINVAL;

-	iop = fns->alloc(cfg, cookie);
-	if (!iop)
-		return NULL;
+	ret = fns->alloc(iop, cfg, cookie);
+	if (ret)
+		return ret;
+
+	params = io_pgtable_ops_to_params(iop->ops);

 	iop->cookie = cookie;
-	iop->cfg = *cfg;
+	params->cfg = *cfg;

-	return &iop->ops;
+	return 0;
 }
 EXPORT_SYMBOL_GPL(alloc_io_pgtable_ops);

@@ -62,16 +65,17 @@ EXPORT_SYMBOL_GPL(alloc_io_pgtable_ops);
  * It is the IOMMU driver's responsibility to ensure that the page table
  * is no longer accessible to the walker by this point.
  */
-void free_io_pgtable_ops(struct io_pgtable_ops *ops)
+void free_io_pgtable_ops(struct io_pgtable *iop)
 {
-	struct io_pgtable *iop;
+	struct io_pgtable_params *params;

-	if (!ops)
+	if (!iop)
 		return;

-	iop = io_pgtable_ops_to_pgtable(ops);
-	io_pgtable_tlb_flush_all(iop);
-	io_pgtable_init_table[iop->cfg.fmt]->free(iop);
+	params = io_pgtable_ops_to_params(iop->ops);
+	io_pgtable_tlb_flush_all(&params->cfg, iop);
+	io_pgtable_init_table[params->cfg.fmt]->free(iop);
+	memset(iop, 0, sizeof(*iop));
 }
 EXPORT_SYMBOL_GPL(free_io_pgtable_ops);

diff --git a/drivers/iommu/ipmmu-vmsa.c b/drivers/iommu/ipmmu-vmsa.c
index 4a1927489635..3ff21e6bf939 100644
--- a/drivers/iommu/ipmmu-vmsa.c
+++ b/drivers/iommu/ipmmu-vmsa.c
@@ -73,7 +73,7 @@ struct ipmmu_vmsa_domain {
 	struct iommu_domain io_domain;

 	struct io_pgtable_cfg cfg;
-	struct io_pgtable_ops *iop;
+	struct io_pgtable iop;

 	unsigned int context_id;
 	struct mutex mutex;			/* Protects mappings */
@@ -458,11 +458,11 @@ static int ipmmu_domain_init_context(struct ipmmu_vmsa_domain *domain)

 	domain->context_id = ret;

-	domain->iop = alloc_io_pgtable_ops(&domain->cfg, domain);
-	if (!domain->iop) {
+	ret = alloc_io_pgtable_ops(&domain->iop, &domain->cfg, domain);
+	if (ret) {
 		ipmmu_domain_free_context(domain->mmu->root,
 					  domain->context_id);
-		return -EINVAL;
+		return ret;
 	}

 	ipmmu_domain_setup_context(domain);
@@ -592,7 +592,7 @@ static void ipmmu_domain_free(struct iommu_domain *io_domain)
 	 * been detached.
 	 */
 	ipmmu_domain_destroy_context(domain);
-	free_io_pgtable_ops(domain->iop);
+	free_io_pgtable_ops(&domain->iop);
 	kfree(domain);
 }

@@ -664,8 +664,8 @@ static int ipmmu_map(struct iommu_domain *io_domain, unsigned long iova,
 {
 	struct ipmmu_vmsa_domain *domain = to_vmsa_domain(io_domain);

-	return domain->iop->map_pages(domain->iop, iova, paddr, pgsize, pgcount,
-				      prot, gfp, mapped);
+	return iopt_map_pages(&domain->iop, iova, paddr, pgsize, pgcount, prot,
+			      gfp, mapped);
 }

 static size_t ipmmu_unmap(struct iommu_domain *io_domain, unsigned long iova,
@@ -674,7 +674,7 @@ static size_t ipmmu_unmap(struct iommu_domain *io_domain, unsigned long iova,
 {
 	struct ipmmu_vmsa_domain *domain = to_vmsa_domain(io_domain);

-	return domain->iop->unmap_pages(domain->iop, iova, pgsize, pgcount, gather);
+	return iopt_unmap_pages(&domain->iop, iova, pgsize, pgcount, gather);
 }

 static void ipmmu_flush_iotlb_all(struct iommu_domain *io_domain)
@@ -698,7 +698,7 @@ static phys_addr_t ipmmu_iova_to_phys(struct iommu_domain *io_domain,

 	/* TODO: Is locking needed ? */

-	return domain->iop->iova_to_phys(domain->iop, iova);
+	return iopt_iova_to_phys(&domain->iop, iova);
 }

 static int ipmmu_init_platform_device(struct device *dev,
diff --git a/drivers/iommu/msm_iommu.c b/drivers/iommu/msm_iommu.c
index 2c05a84ec1bf..6dae6743e11b 100644
--- a/drivers/iommu/msm_iommu.c
+++ b/drivers/iommu/msm_iommu.c
@@ -41,7 +41,7 @@ struct msm_priv {
 	struct list_head list_attached;
 	struct iommu_domain domain;
 	struct io_pgtable_cfg	cfg;
-	struct io_pgtable_ops	*iop;
+	struct io_pgtable	iop;
 	struct device		*dev;
 	spinlock_t		pgtlock; /* pagetable lock */
 };
@@ -339,6 +339,7 @@ static void msm_iommu_domain_free(struct iommu_domain *domain)

 static int msm_iommu_domain_config(struct msm_priv *priv)
 {
+	int ret;
 	spin_lock_init(&priv->pgtlock);

 	priv->cfg = (struct io_pgtable_cfg) {
@@ -350,10 +351,10 @@ static int msm_iommu_domain_config(struct msm_priv *priv)
 		.iommu_dev = priv->dev,
 	};

-	priv->iop = alloc_io_pgtable_ops(&priv->cfg, priv);
-	if (!priv->iop) {
+	ret = alloc_io_pgtable_ops(&priv->iop, &priv->cfg, priv);
+	if (ret) {
 		dev_err(priv->dev, "Failed to allocate pgtable\n");
-		return -EINVAL;
+		return ret;
 	}

 	msm_iommu_ops.pgsize_bitmap = priv->cfg.pgsize_bitmap;
@@ -453,7 +454,7 @@ static void msm_iommu_detach_dev(struct iommu_domain *domain,
 	struct msm_iommu_ctx_dev *master;
 	int ret;

-	free_io_pgtable_ops(priv->iop);
+	free_io_pgtable_ops(&priv->iop);

 	spin_lock_irqsave(&msm_iommu_lock, flags);
 	list_for_each_entry(iommu, &priv->list_attached, dom_node) {
@@ -480,8 +481,8 @@ static int msm_iommu_map(struct iommu_domain *domain, unsigned long iova,
 	int ret;

 	spin_lock_irqsave(&priv->pgtlock, flags);
-	ret = priv->iop->map_pages(priv->iop, iova, pa, pgsize, pgcount, prot,
-				   GFP_ATOMIC, mapped);
+	ret = iopt_map_pages(&priv->iop, iova, pa, pgsize, pgcount, prot,
+			     GFP_ATOMIC, mapped);
 	spin_unlock_irqrestore(&priv->pgtlock, flags);

 	return ret;
@@ -504,7 +505,7 @@ static size_t msm_iommu_unmap(struct iommu_domain *domain, unsigned long iova,
 	size_t ret;
spin_lock_irqsave(&priv->pgtlock, flags); - ret = priv->iop->unmap_pages(priv->iop, iova, pgsize, pgcount, gather); + ret = iopt_unmap_pages(&priv->iop, iova, pgsize, pgcount, gather); spin_unlock_irqrestore(&priv->pgtlock, flags); return ret; diff --git a/drivers/iommu/mtk_iommu.c b/drivers/iommu/mtk_iommu.c index 0d754d94ae52..615d9ade575e 100644 --- a/drivers/iommu/mtk_iommu.c +++ b/drivers/iommu/mtk_iommu.c @@ -244,7 +244,7 @@ struct mtk_iommu_data { struct mtk_iommu_domain { struct io_pgtable_cfg cfg; - struct io_pgtable_ops *iop; + struct io_pgtable iop; struct mtk_iommu_bank_data *bank; struct iommu_domain domain; @@ -587,6 +587,7 @@ static int mtk_iommu_domain_finalise(struct mtk_iommu_domain *dom, { const struct mtk_iommu_iova_region *region; struct mtk_iommu_domain *m4u_dom; + int ret; /* Always use bank0 in sharing pgtable case */ m4u_dom = data->bank[0].m4u_dom; @@ -615,8 +616,8 @@ static int mtk_iommu_domain_finalise(struct mtk_iommu_domain *dom, else dom->cfg.oas = 35; - dom->iop = alloc_io_pgtable_ops(&dom->cfg, data); - if (!dom->iop) { + ret = alloc_io_pgtable_ops(&dom->iop, &dom->cfg, data); + if (ret) { dev_err(data->dev, "Failed to alloc io pgtable\n"); return -ENOMEM; } @@ -730,7 +731,7 @@ static int mtk_iommu_map(struct iommu_domain *domain, unsigned long iova, paddr |= BIT_ULL(32); /* Synchronize with the tlb_lock */ - return dom->iop->map_pages(dom->iop, iova, paddr, pgsize, pgcount, prot, gfp, mapped); + return iopt_map_pages(&dom->iop, iova, paddr, pgsize, pgcount, prot, gfp, mapped); } static size_t mtk_iommu_unmap(struct iommu_domain *domain, @@ -740,7 +741,7 @@ static size_t mtk_iommu_unmap(struct iommu_domain *domain, struct mtk_iommu_domain *dom = to_mtk_domain(domain); iommu_iotlb_gather_add_range(gather, iova, pgsize * pgcount); - return dom->iop->unmap_pages(dom->iop, iova, pgsize, pgcount, gather); + return iopt_unmap_pages(&dom->iop, iova, pgsize, pgcount, gather); } static void mtk_iommu_flush_iotlb_all(struct iommu_domain 
*domain) @@ -773,7 +774,7 @@ static phys_addr_t mtk_iommu_iova_to_phys(struct iommu_domain *domain, struct mtk_iommu_domain *dom = to_mtk_domain(domain); phys_addr_t pa; - pa = dom->iop->iova_to_phys(dom->iop, iova); + pa = iopt_iova_to_phys(&dom->iop, iova); if (IS_ENABLED(CONFIG_PHYS_ADDR_T_64BIT) && dom->bank->parent_data->enable_4GB && pa >= MTK_IOMMU_4GB_MODE_REMAP_BASE)

From patchwork Wed Feb 1 12:52:50 2023
X-Patchwork-Submitter: Jean-Philippe Brucker
X-Patchwork-Id: 13124365
From: Jean-Philippe Brucker
To: maz@kernel.org, catalin.marinas@arm.com, will@kernel.org, joro@8bytes.org
Cc: robin.murphy@arm.com, james.morse@arm.com, suzuki.poulose@arm.com, oliver.upton@linux.dev, yuzenghui@huawei.com, smostafa@google.com, dbrazdil@google.com, ryan.roberts@arm.com, linux-arm-kernel@lists.infradead.org, kvmarm@lists.linux.dev, iommu@lists.linux.dev, Jean-Philippe Brucker
Subject: [RFC PATCH 06/45] iommu/io-pgtable-arm: Extend __arm_lpae_free_pgtable() to only free child tables
Date: Wed, 1 Feb 2023 12:52:50 +0000
Message-Id: <20230201125328.2186498-7-jean-philippe@linaro.org>
In-Reply-To: <20230201125328.2186498-1-jean-philippe@linaro.org>
References: <20230201125328.2186498-1-jean-philippe@linaro.org>

The hypervisor side of io-pgtable-arm needs to free the top-level page table separately from the other tables (which are page-sized and will use a page queue).
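The split between "free the child tables" and "free the table itself" that this patch introduces can be sketched with a toy model (illustrative C only, not the kernel implementation; the struct, entry count, and function names here are simplified stand-ins for the real `arm_lpae_iopte` walk):

```c
/* Toy model of the only_children flag: the recursion always frees
 * child tables, but the top-level call may keep its own table so the
 * caller can release it through a different allocator. */
#include <stdbool.h>
#include <stdlib.h>

#define ENTRIES 4

struct table {
	struct table *child[ENTRIES];	/* NULL, or pointer to next level */
};

static int freed;			/* tables actually freed, for demonstration */

static void free_pgtable(struct table *t, int lvl, int max_lvl,
			 bool only_children)
{
	if (lvl < max_lvl) {
		for (int i = 0; i < ENTRIES; i++)
			if (t->child[i])
				/* children are always freed in full */
				free_pgtable(t->child[i], lvl + 1, max_lvl,
					     false);
	}
	/* Only the outermost caller may ask to skip this table itself */
	if (!only_children) {
		free(t);
		freed++;
	}
}
```

With `only_children = true`, the walk tears down every child table but leaves the root allocation alive, mirroring how a hypervisor-side caller could free page-sized child tables through one mechanism while releasing the separately allocated top level itself.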
Signed-off-by: Jean-Philippe Brucker --- include/linux/io-pgtable-arm.h | 2 +- drivers/iommu/io-pgtable-arm-common.c | 11 +++++++---- drivers/iommu/io-pgtable-arm.c | 2 +- 3 files changed, 9 insertions(+), 6 deletions(-) diff --git a/include/linux/io-pgtable-arm.h b/include/linux/io-pgtable-arm.h index 5199bd9851b6..2b3e69386d08 100644 --- a/include/linux/io-pgtable-arm.h +++ b/include/linux/io-pgtable-arm.h @@ -166,7 +166,7 @@ static inline bool iopte_leaf(arm_lpae_iopte pte, int lvl, /* Generic functions */ void __arm_lpae_free_pgtable(struct arm_lpae_io_pgtable *data, int lvl, - arm_lpae_iopte *ptep); + arm_lpae_iopte *ptep, bool only_children); int arm_lpae_init_pgtable(struct io_pgtable_cfg *cfg, struct arm_lpae_io_pgtable *data); diff --git a/drivers/iommu/io-pgtable-arm-common.c b/drivers/iommu/io-pgtable-arm-common.c index 359086cace34..009c35d4095f 100644 --- a/drivers/iommu/io-pgtable-arm-common.c +++ b/drivers/iommu/io-pgtable-arm-common.c @@ -299,7 +299,7 @@ int arm_lpae_map_pages(struct io_pgtable *iop, unsigned long iova, } void __arm_lpae_free_pgtable(struct arm_lpae_io_pgtable *data, int lvl, - arm_lpae_iopte *ptep) + arm_lpae_iopte *ptep, bool only_children) { arm_lpae_iopte *start, *end; unsigned long table_size; @@ -323,10 +323,12 @@ void __arm_lpae_free_pgtable(struct arm_lpae_io_pgtable *data, int lvl, if (!pte || iopte_leaf(pte, lvl, data->iop.cfg.fmt)) continue; - __arm_lpae_free_pgtable(data, lvl + 1, iopte_deref(pte, data)); + __arm_lpae_free_pgtable(data, lvl + 1, iopte_deref(pte, data), + false); } - __arm_lpae_free_pages(start, table_size, &data->iop.cfg); + if (!only_children) + __arm_lpae_free_pages(start, table_size, &data->iop.cfg); } static size_t arm_lpae_split_blk_unmap(struct io_pgtable *iop, @@ -428,7 +430,8 @@ static size_t __arm_lpae_unmap(struct io_pgtable *iop, /* Also flush any partial walks */ io_pgtable_tlb_flush_walk(cfg, iop, iova + i * size, size, ARM_LPAE_GRANULE(data)); - __arm_lpae_free_pgtable(data, lvl + 1, 
iopte_deref(pte, data)); + __arm_lpae_free_pgtable(data, lvl + 1, iopte_deref(pte, data), + false); } else if (!iommu_iotlb_gather_queued(gather)) { io_pgtable_tlb_add_page(cfg, iop, gather, iova + i * size, size); diff --git a/drivers/iommu/io-pgtable-arm.c b/drivers/iommu/io-pgtable-arm.c index bee8980c89eb..b7920637126c 100644 --- a/drivers/iommu/io-pgtable-arm.c +++ b/drivers/iommu/io-pgtable-arm.c @@ -84,7 +84,7 @@ static void arm_lpae_free_pgtable(struct io_pgtable *iop) { struct arm_lpae_io_pgtable *data = io_pgtable_ops_to_data(iop->ops); - __arm_lpae_free_pgtable(data, data->start_level, iop->pgd); + __arm_lpae_free_pgtable(data, data->start_level, iop->pgd, false); kfree(data); }

From patchwork Wed Feb 1 12:52:51 2023
X-Patchwork-Submitter: Jean-Philippe Brucker
X-Patchwork-Id: 13124382
From: Jean-Philippe Brucker
To: maz@kernel.org, catalin.marinas@arm.com, will@kernel.org, joro@8bytes.org
Cc: robin.murphy@arm.com, james.morse@arm.com, suzuki.poulose@arm.com, oliver.upton@linux.dev, yuzenghui@huawei.com, smostafa@google.com, dbrazdil@google.com, ryan.roberts@arm.com, linux-arm-kernel@lists.infradead.org, kvmarm@lists.linux.dev, iommu@lists.linux.dev, Jean-Philippe Brucker
Subject: [RFC PATCH 07/45] iommu/arm-smmu-v3: Move some definitions to arm64 include/
Date: Wed, 1 Feb 2023 12:52:51 +0000
Message-Id: <20230201125328.2186498-8-jean-philippe@linaro.org>
In-Reply-To: <20230201125328.2186498-1-jean-philippe@linaro.org>
References: <20230201125328.2186498-1-jean-philippe@linaro.org>

So that the KVM SMMUv3 driver can re-use
architectural definitions, command structures and feature bits, move them to the arm64 include/ Signed-off-by: Jean-Philippe Brucker --- arch/arm64/include/asm/arm-smmu-v3-regs.h | 479 +++++++++++++++++++ drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.h | 481 +------------------- 2 files changed, 485 insertions(+), 475 deletions(-) create mode 100644 arch/arm64/include/asm/arm-smmu-v3-regs.h diff --git a/arch/arm64/include/asm/arm-smmu-v3-regs.h b/arch/arm64/include/asm/arm-smmu-v3-regs.h new file mode 100644 index 000000000000..646a734f2554 --- /dev/null +++ b/arch/arm64/include/asm/arm-smmu-v3-regs.h @@ -0,0 +1,479 @@ +/* SPDX-License-Identifier: GPL-2.0-only */ +#ifndef _ARM_SMMU_V3_REGS_H +#define _ARM_SMMU_V3_REGS_H + +#include + +/* MMIO registers */ +#define ARM_SMMU_IDR0 0x0 +#define IDR0_ST_LVL GENMASK(28, 27) +#define IDR0_ST_LVL_2LVL 1 +#define IDR0_STALL_MODEL GENMASK(25, 24) +#define IDR0_STALL_MODEL_STALL 0 +#define IDR0_STALL_MODEL_FORCE 2 +#define IDR0_TTENDIAN GENMASK(22, 21) +#define IDR0_TTENDIAN_MIXED 0 +#define IDR0_TTENDIAN_LE 2 +#define IDR0_TTENDIAN_BE 3 +#define IDR0_CD2L (1 << 19) +#define IDR0_VMID16 (1 << 18) +#define IDR0_PRI (1 << 16) +#define IDR0_SEV (1 << 14) +#define IDR0_MSI (1 << 13) +#define IDR0_ASID16 (1 << 12) +#define IDR0_ATS (1 << 10) +#define IDR0_HYP (1 << 9) +#define IDR0_COHACC (1 << 4) +#define IDR0_TTF GENMASK(3, 2) +#define IDR0_TTF_AARCH64 2 +#define IDR0_TTF_AARCH32_64 3 +#define IDR0_S1P (1 << 1) +#define IDR0_S2P (1 << 0) + +#define ARM_SMMU_IDR1 0x4 +#define IDR1_TABLES_PRESET (1 << 30) +#define IDR1_QUEUES_PRESET (1 << 29) +#define IDR1_REL (1 << 28) +#define IDR1_CMDQS GENMASK(25, 21) +#define IDR1_EVTQS GENMASK(20, 16) +#define IDR1_PRIQS GENMASK(15, 11) +#define IDR1_SSIDSIZE GENMASK(10, 6) +#define IDR1_SIDSIZE GENMASK(5, 0) + +#define ARM_SMMU_IDR3 0xc +#define IDR3_RIL (1 << 10) + +#define ARM_SMMU_IDR5 0x14 +#define IDR5_STALL_MAX GENMASK(31, 16) +#define IDR5_GRAN64K (1 << 6) +#define IDR5_GRAN16K (1 << 5) 
+#define IDR5_GRAN4K (1 << 4) +#define IDR5_OAS GENMASK(2, 0) +#define IDR5_OAS_32_BIT 0 +#define IDR5_OAS_36_BIT 1 +#define IDR5_OAS_40_BIT 2 +#define IDR5_OAS_42_BIT 3 +#define IDR5_OAS_44_BIT 4 +#define IDR5_OAS_48_BIT 5 +#define IDR5_OAS_52_BIT 6 +#define IDR5_VAX GENMASK(11, 10) +#define IDR5_VAX_52_BIT 1 + +#define ARM_SMMU_CR0 0x20 +#define CR0_ATSCHK (1 << 4) +#define CR0_CMDQEN (1 << 3) +#define CR0_EVTQEN (1 << 2) +#define CR0_PRIQEN (1 << 1) +#define CR0_SMMUEN (1 << 0) + +#define ARM_SMMU_CR0ACK 0x24 + +#define ARM_SMMU_CR1 0x28 +#define CR1_TABLE_SH GENMASK(11, 10) +#define CR1_TABLE_OC GENMASK(9, 8) +#define CR1_TABLE_IC GENMASK(7, 6) +#define CR1_QUEUE_SH GENMASK(5, 4) +#define CR1_QUEUE_OC GENMASK(3, 2) +#define CR1_QUEUE_IC GENMASK(1, 0) +/* CR1 cacheability fields don't quite follow the usual TCR-style encoding */ +#define CR1_CACHE_NC 0 +#define CR1_CACHE_WB 1 +#define CR1_CACHE_WT 2 + +#define ARM_SMMU_CR2 0x2c +#define CR2_PTM (1 << 2) +#define CR2_RECINVSID (1 << 1) +#define CR2_E2H (1 << 0) + +#define ARM_SMMU_GBPA 0x44 +#define GBPA_UPDATE (1 << 31) +#define GBPA_ABORT (1 << 20) + +#define ARM_SMMU_IRQ_CTRL 0x50 +#define IRQ_CTRL_EVTQ_IRQEN (1 << 2) +#define IRQ_CTRL_PRIQ_IRQEN (1 << 1) +#define IRQ_CTRL_GERROR_IRQEN (1 << 0) + +#define ARM_SMMU_IRQ_CTRLACK 0x54 + +#define ARM_SMMU_GERROR 0x60 +#define GERROR_SFM_ERR (1 << 8) +#define GERROR_MSI_GERROR_ABT_ERR (1 << 7) +#define GERROR_MSI_PRIQ_ABT_ERR (1 << 6) +#define GERROR_MSI_EVTQ_ABT_ERR (1 << 5) +#define GERROR_MSI_CMDQ_ABT_ERR (1 << 4) +#define GERROR_PRIQ_ABT_ERR (1 << 3) +#define GERROR_EVTQ_ABT_ERR (1 << 2) +#define GERROR_CMDQ_ERR (1 << 0) +#define GERROR_ERR_MASK 0x1fd + +#define ARM_SMMU_GERRORN 0x64 + +#define ARM_SMMU_GERROR_IRQ_CFG0 0x68 +#define ARM_SMMU_GERROR_IRQ_CFG1 0x70 +#define ARM_SMMU_GERROR_IRQ_CFG2 0x74 + +#define ARM_SMMU_STRTAB_BASE 0x80 +#define STRTAB_BASE_RA (1UL << 62) +#define STRTAB_BASE_ADDR_MASK GENMASK_ULL(51, 6) + +#define ARM_SMMU_STRTAB_BASE_CFG 0x88 
+#define STRTAB_BASE_CFG_FMT GENMASK(17, 16) +#define STRTAB_BASE_CFG_FMT_LINEAR 0 +#define STRTAB_BASE_CFG_FMT_2LVL 1 +#define STRTAB_BASE_CFG_SPLIT GENMASK(10, 6) +#define STRTAB_BASE_CFG_LOG2SIZE GENMASK(5, 0) + +#define Q_BASE_RWA (1UL << 62) +#define Q_BASE_ADDR_MASK GENMASK_ULL(51, 5) +#define Q_BASE_LOG2SIZE GENMASK(4, 0) + +#define ARM_SMMU_CMDQ_BASE 0x90 +#define ARM_SMMU_CMDQ_PROD 0x98 +#define ARM_SMMU_CMDQ_CONS 0x9c + +#define ARM_SMMU_EVTQ_BASE 0xa0 +#define ARM_SMMU_EVTQ_PROD 0xa8 +#define ARM_SMMU_EVTQ_CONS 0xac +#define ARM_SMMU_EVTQ_IRQ_CFG0 0xb0 +#define ARM_SMMU_EVTQ_IRQ_CFG1 0xb8 +#define ARM_SMMU_EVTQ_IRQ_CFG2 0xbc + +#define ARM_SMMU_PRIQ_BASE 0xc0 +#define ARM_SMMU_PRIQ_PROD 0xc8 +#define ARM_SMMU_PRIQ_CONS 0xcc +#define ARM_SMMU_PRIQ_IRQ_CFG0 0xd0 +#define ARM_SMMU_PRIQ_IRQ_CFG1 0xd8 +#define ARM_SMMU_PRIQ_IRQ_CFG2 0xdc + +#define ARM_SMMU_REG_SZ 0xe00 + +/* Common MSI config fields */ +#define MSI_CFG0_ADDR_MASK GENMASK_ULL(51, 2) +#define MSI_CFG2_SH GENMASK(5, 4) +#define MSI_CFG2_MEMATTR GENMASK(3, 0) + +/* Common memory attribute values */ +#define ARM_SMMU_SH_NSH 0 +#define ARM_SMMU_SH_OSH 2 +#define ARM_SMMU_SH_ISH 3 +#define ARM_SMMU_MEMATTR_DEVICE_nGnRE 0x1 +#define ARM_SMMU_MEMATTR_OIWB 0xf + +/* + * Stream table. 
+ * + * Linear: Enough to cover 1 << IDR1.SIDSIZE entries + * 2lvl: 128k L1 entries, + * 256 lazy entries per table (each table covers a PCI bus) + */ +#define STRTAB_L1_SZ_SHIFT 20 +#define STRTAB_SPLIT 8 + +#define STRTAB_L1_DESC_DWORDS 1 +#define STRTAB_L1_DESC_SPAN GENMASK_ULL(4, 0) +#define STRTAB_L1_DESC_L2PTR_MASK GENMASK_ULL(51, 6) + +#define STRTAB_STE_DWORDS 8 +#define STRTAB_STE_0_V (1UL << 0) +#define STRTAB_STE_0_CFG GENMASK_ULL(3, 1) +#define STRTAB_STE_0_CFG_ABORT 0 +#define STRTAB_STE_0_CFG_BYPASS 4 +#define STRTAB_STE_0_CFG_S1_TRANS 5 +#define STRTAB_STE_0_CFG_S2_TRANS 6 + +#define STRTAB_STE_0_S1FMT GENMASK_ULL(5, 4) +#define STRTAB_STE_0_S1FMT_LINEAR 0 +#define STRTAB_STE_0_S1FMT_64K_L2 2 +#define STRTAB_STE_0_S1CTXPTR_MASK GENMASK_ULL(51, 6) +#define STRTAB_STE_0_S1CDMAX GENMASK_ULL(63, 59) + +#define STRTAB_STE_1_S1DSS GENMASK_ULL(1, 0) +#define STRTAB_STE_1_S1DSS_TERMINATE 0x0 +#define STRTAB_STE_1_S1DSS_BYPASS 0x1 +#define STRTAB_STE_1_S1DSS_SSID0 0x2 + +#define STRTAB_STE_1_S1C_CACHE_NC 0UL +#define STRTAB_STE_1_S1C_CACHE_WBRA 1UL +#define STRTAB_STE_1_S1C_CACHE_WT 2UL +#define STRTAB_STE_1_S1C_CACHE_WB 3UL +#define STRTAB_STE_1_S1CIR GENMASK_ULL(3, 2) +#define STRTAB_STE_1_S1COR GENMASK_ULL(5, 4) +#define STRTAB_STE_1_S1CSH GENMASK_ULL(7, 6) + +#define STRTAB_STE_1_S1STALLD (1UL << 27) + +#define STRTAB_STE_1_EATS GENMASK_ULL(29, 28) +#define STRTAB_STE_1_EATS_ABT 0UL +#define STRTAB_STE_1_EATS_TRANS 1UL +#define STRTAB_STE_1_EATS_S1CHK 2UL + +#define STRTAB_STE_1_STRW GENMASK_ULL(31, 30) +#define STRTAB_STE_1_STRW_NSEL1 0UL +#define STRTAB_STE_1_STRW_EL2 2UL + +#define STRTAB_STE_1_SHCFG GENMASK_ULL(45, 44) +#define STRTAB_STE_1_SHCFG_INCOMING 1UL + +#define STRTAB_STE_2_S2VMID GENMASK_ULL(15, 0) +#define STRTAB_STE_2_VTCR GENMASK_ULL(50, 32) +#define STRTAB_STE_2_VTCR_S2T0SZ GENMASK_ULL(5, 0) +#define STRTAB_STE_2_VTCR_S2SL0 GENMASK_ULL(7, 6) +#define STRTAB_STE_2_VTCR_S2IR0 GENMASK_ULL(9, 8) +#define STRTAB_STE_2_VTCR_S2OR0 
GENMASK_ULL(11, 10) +#define STRTAB_STE_2_VTCR_S2SH0 GENMASK_ULL(13, 12) +#define STRTAB_STE_2_VTCR_S2TG GENMASK_ULL(15, 14) +#define STRTAB_STE_2_VTCR_S2PS GENMASK_ULL(18, 16) +#define STRTAB_STE_2_S2AA64 (1UL << 51) +#define STRTAB_STE_2_S2ENDI (1UL << 52) +#define STRTAB_STE_2_S2PTW (1UL << 54) +#define STRTAB_STE_2_S2R (1UL << 58) + +#define STRTAB_STE_3_S2TTB_MASK GENMASK_ULL(51, 4) + +/* + * Context descriptors. + * + * Linear: when less than 1024 SSIDs are supported + * 2lvl: at most 1024 L1 entries, + * 1024 lazy entries per table. + */ +#define CTXDESC_SPLIT 10 +#define CTXDESC_L2_ENTRIES (1 << CTXDESC_SPLIT) + +#define CTXDESC_L1_DESC_DWORDS 1 +#define CTXDESC_L1_DESC_V (1UL << 0) +#define CTXDESC_L1_DESC_L2PTR_MASK GENMASK_ULL(51, 12) + +#define CTXDESC_CD_DWORDS 8 +#define CTXDESC_CD_0_TCR_T0SZ GENMASK_ULL(5, 0) +#define CTXDESC_CD_0_TCR_TG0 GENMASK_ULL(7, 6) +#define CTXDESC_CD_0_TCR_IRGN0 GENMASK_ULL(9, 8) +#define CTXDESC_CD_0_TCR_ORGN0 GENMASK_ULL(11, 10) +#define CTXDESC_CD_0_TCR_SH0 GENMASK_ULL(13, 12) +#define CTXDESC_CD_0_TCR_EPD0 (1ULL << 14) +#define CTXDESC_CD_0_TCR_EPD1 (1ULL << 30) + +#define CTXDESC_CD_0_ENDI (1UL << 15) +#define CTXDESC_CD_0_V (1UL << 31) + +#define CTXDESC_CD_0_TCR_IPS GENMASK_ULL(34, 32) +#define CTXDESC_CD_0_TCR_TBI0 (1ULL << 38) + +#define CTXDESC_CD_0_AA64 (1UL << 41) +#define CTXDESC_CD_0_S (1UL << 44) +#define CTXDESC_CD_0_R (1UL << 45) +#define CTXDESC_CD_0_A (1UL << 46) +#define CTXDESC_CD_0_ASET (1UL << 47) +#define CTXDESC_CD_0_ASID GENMASK_ULL(63, 48) + +#define CTXDESC_CD_1_TTB0_MASK GENMASK_ULL(51, 4) + +/* Command queue */ +#define CMDQ_ENT_SZ_SHIFT 4 +#define CMDQ_ENT_DWORDS ((1 << CMDQ_ENT_SZ_SHIFT) >> 3) +#define CMDQ_MAX_SZ_SHIFT (Q_MAX_SZ_SHIFT - CMDQ_ENT_SZ_SHIFT) + +#define CMDQ_CONS_ERR GENMASK(30, 24) +#define CMDQ_ERR_CERROR_NONE_IDX 0 +#define CMDQ_ERR_CERROR_ILL_IDX 1 +#define CMDQ_ERR_CERROR_ABT_IDX 2 +#define CMDQ_ERR_CERROR_ATC_INV_IDX 3 + +#define CMDQ_0_OP GENMASK_ULL(7, 0) +#define 
CMDQ_0_SSV (1UL << 11) + +#define CMDQ_PREFETCH_0_SID GENMASK_ULL(63, 32) +#define CMDQ_PREFETCH_1_SIZE GENMASK_ULL(4, 0) +#define CMDQ_PREFETCH_1_ADDR_MASK GENMASK_ULL(63, 12) + +#define CMDQ_CFGI_0_SSID GENMASK_ULL(31, 12) +#define CMDQ_CFGI_0_SID GENMASK_ULL(63, 32) +#define CMDQ_CFGI_1_LEAF (1UL << 0) +#define CMDQ_CFGI_1_RANGE GENMASK_ULL(4, 0) + +#define CMDQ_TLBI_0_NUM GENMASK_ULL(16, 12) +#define CMDQ_TLBI_RANGE_NUM_MAX 31 +#define CMDQ_TLBI_0_SCALE GENMASK_ULL(24, 20) +#define CMDQ_TLBI_0_VMID GENMASK_ULL(47, 32) +#define CMDQ_TLBI_0_ASID GENMASK_ULL(63, 48) +#define CMDQ_TLBI_1_LEAF (1UL << 0) +#define CMDQ_TLBI_1_TTL GENMASK_ULL(9, 8) +#define CMDQ_TLBI_1_TG GENMASK_ULL(11, 10) +#define CMDQ_TLBI_1_VA_MASK GENMASK_ULL(63, 12) +#define CMDQ_TLBI_1_IPA_MASK GENMASK_ULL(51, 12) + +#define CMDQ_ATC_0_SSID GENMASK_ULL(31, 12) +#define CMDQ_ATC_0_SID GENMASK_ULL(63, 32) +#define CMDQ_ATC_0_GLOBAL (1UL << 9) +#define CMDQ_ATC_1_SIZE GENMASK_ULL(5, 0) +#define CMDQ_ATC_1_ADDR_MASK GENMASK_ULL(63, 12) + +#define CMDQ_PRI_0_SSID GENMASK_ULL(31, 12) +#define CMDQ_PRI_0_SID GENMASK_ULL(63, 32) +#define CMDQ_PRI_1_GRPID GENMASK_ULL(8, 0) +#define CMDQ_PRI_1_RESP GENMASK_ULL(13, 12) + +#define CMDQ_RESUME_0_RESP_TERM 0UL +#define CMDQ_RESUME_0_RESP_RETRY 1UL +#define CMDQ_RESUME_0_RESP_ABORT 2UL +#define CMDQ_RESUME_0_RESP GENMASK_ULL(13, 12) +#define CMDQ_RESUME_0_SID GENMASK_ULL(63, 32) +#define CMDQ_RESUME_1_STAG GENMASK_ULL(15, 0) + +#define CMDQ_SYNC_0_CS GENMASK_ULL(13, 12) +#define CMDQ_SYNC_0_CS_NONE 0 +#define CMDQ_SYNC_0_CS_IRQ 1 +#define CMDQ_SYNC_0_CS_SEV 2 +#define CMDQ_SYNC_0_MSH GENMASK_ULL(23, 22) +#define CMDQ_SYNC_0_MSIATTR GENMASK_ULL(27, 24) +#define CMDQ_SYNC_0_MSIDATA GENMASK_ULL(63, 32) +#define CMDQ_SYNC_1_MSIADDR_MASK GENMASK_ULL(51, 2) + +/* Event queue */ +#define EVTQ_ENT_SZ_SHIFT 5 +#define EVTQ_ENT_DWORDS ((1 << EVTQ_ENT_SZ_SHIFT) >> 3) +#define EVTQ_MAX_SZ_SHIFT (Q_MAX_SZ_SHIFT - EVTQ_ENT_SZ_SHIFT) + +#define EVTQ_0_ID GENMASK_ULL(7, 0) 
+ +#define EVT_ID_TRANSLATION_FAULT 0x10 +#define EVT_ID_ADDR_SIZE_FAULT 0x11 +#define EVT_ID_ACCESS_FAULT 0x12 +#define EVT_ID_PERMISSION_FAULT 0x13 + +#define EVTQ_0_SSV (1UL << 11) +#define EVTQ_0_SSID GENMASK_ULL(31, 12) +#define EVTQ_0_SID GENMASK_ULL(63, 32) +#define EVTQ_1_STAG GENMASK_ULL(15, 0) +#define EVTQ_1_STALL (1UL << 31) +#define EVTQ_1_PnU (1UL << 33) +#define EVTQ_1_InD (1UL << 34) +#define EVTQ_1_RnW (1UL << 35) +#define EVTQ_1_S2 (1UL << 39) +#define EVTQ_1_CLASS GENMASK_ULL(41, 40) +#define EVTQ_1_TT_READ (1UL << 44) +#define EVTQ_2_ADDR GENMASK_ULL(63, 0) +#define EVTQ_3_IPA GENMASK_ULL(51, 12) + +/* PRI queue */ +#define PRIQ_ENT_SZ_SHIFT 4 +#define PRIQ_ENT_DWORDS ((1 << PRIQ_ENT_SZ_SHIFT) >> 3) +#define PRIQ_MAX_SZ_SHIFT (Q_MAX_SZ_SHIFT - PRIQ_ENT_SZ_SHIFT) + +#define PRIQ_0_SID GENMASK_ULL(31, 0) +#define PRIQ_0_SSID GENMASK_ULL(51, 32) +#define PRIQ_0_PERM_PRIV (1UL << 58) +#define PRIQ_0_PERM_EXEC (1UL << 59) +#define PRIQ_0_PERM_READ (1UL << 60) +#define PRIQ_0_PERM_WRITE (1UL << 61) +#define PRIQ_0_PRG_LAST (1UL << 62) +#define PRIQ_0_SSID_V (1UL << 63) + +#define PRIQ_1_PRG_IDX GENMASK_ULL(8, 0) +#define PRIQ_1_ADDR_MASK GENMASK_ULL(63, 12) + +/* Synthesized features */ +#define ARM_SMMU_FEAT_2_LVL_STRTAB (1 << 0) +#define ARM_SMMU_FEAT_2_LVL_CDTAB (1 << 1) +#define ARM_SMMU_FEAT_TT_LE (1 << 2) +#define ARM_SMMU_FEAT_TT_BE (1 << 3) +#define ARM_SMMU_FEAT_PRI (1 << 4) +#define ARM_SMMU_FEAT_ATS (1 << 5) +#define ARM_SMMU_FEAT_SEV (1 << 6) +#define ARM_SMMU_FEAT_MSI (1 << 7) +#define ARM_SMMU_FEAT_COHERENCY (1 << 8) +#define ARM_SMMU_FEAT_TRANS_S1 (1 << 9) +#define ARM_SMMU_FEAT_TRANS_S2 (1 << 10) +#define ARM_SMMU_FEAT_STALLS (1 << 11) +#define ARM_SMMU_FEAT_HYP (1 << 12) +#define ARM_SMMU_FEAT_STALL_FORCE (1 << 13) +#define ARM_SMMU_FEAT_VAX (1 << 14) +#define ARM_SMMU_FEAT_RANGE_INV (1 << 15) +#define ARM_SMMU_FEAT_BTM (1 << 16) +#define ARM_SMMU_FEAT_SVA (1 << 17) +#define ARM_SMMU_FEAT_E2H (1 << 18) + +enum pri_resp { + 
PRI_RESP_DENY = 0, + PRI_RESP_FAIL = 1, + PRI_RESP_SUCC = 2, +}; + +struct arm_smmu_cmdq_ent { + /* Common fields */ + u8 opcode; + bool substream_valid; + + /* Command-specific fields */ + union { + #define CMDQ_OP_PREFETCH_CFG 0x1 + struct { + u32 sid; + } prefetch; + + #define CMDQ_OP_CFGI_STE 0x3 + #define CMDQ_OP_CFGI_ALL 0x4 + #define CMDQ_OP_CFGI_CD 0x5 + #define CMDQ_OP_CFGI_CD_ALL 0x6 + struct { + u32 sid; + u32 ssid; + union { + bool leaf; + u8 span; + }; + } cfgi; + + #define CMDQ_OP_TLBI_NH_ASID 0x11 + #define CMDQ_OP_TLBI_NH_VA 0x12 + #define CMDQ_OP_TLBI_EL2_ALL 0x20 + #define CMDQ_OP_TLBI_EL2_ASID 0x21 + #define CMDQ_OP_TLBI_EL2_VA 0x22 + #define CMDQ_OP_TLBI_S12_VMALL 0x28 + #define CMDQ_OP_TLBI_S2_IPA 0x2a + #define CMDQ_OP_TLBI_NSNH_ALL 0x30 + struct { + u8 num; + u8 scale; + u16 asid; + u16 vmid; + bool leaf; + u8 ttl; + u8 tg; + u64 addr; + } tlbi; + + #define CMDQ_OP_ATC_INV 0x40 + #define ATC_INV_SIZE_ALL 52 + struct { + u32 sid; + u32 ssid; + u64 addr; + u8 size; + bool global; + } atc; + + #define CMDQ_OP_PRI_RESP 0x41 + struct { + u32 sid; + u32 ssid; + u16 grpid; + enum pri_resp resp; + } pri; + + #define CMDQ_OP_RESUME 0x44 + struct { + u32 sid; + u16 stag; + u8 resp; + } resume; + + #define CMDQ_OP_CMD_SYNC 0x46 + struct { + u64 msiaddr; + } sync; + }; +}; + +#endif /* _ARM_SMMU_V3_REGS_H */ diff --git a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.h b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.h index cec3c8103404..32ce835ab4eb 100644 --- a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.h +++ b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.h @@ -8,164 +8,13 @@ #ifndef _ARM_SMMU_V3_H #define _ARM_SMMU_V3_H -#include #include #include #include #include #include -/* MMIO registers */ -#define ARM_SMMU_IDR0 0x0 -#define IDR0_ST_LVL GENMASK(28, 27) -#define IDR0_ST_LVL_2LVL 1 -#define IDR0_STALL_MODEL GENMASK(25, 24) -#define IDR0_STALL_MODEL_STALL 0 -#define IDR0_STALL_MODEL_FORCE 2 -#define IDR0_TTENDIAN GENMASK(22, 21) -#define IDR0_TTENDIAN_MIXED 0 
-#define IDR0_TTENDIAN_LE 2 -#define IDR0_TTENDIAN_BE 3 -#define IDR0_CD2L (1 << 19) -#define IDR0_VMID16 (1 << 18) -#define IDR0_PRI (1 << 16) -#define IDR0_SEV (1 << 14) -#define IDR0_MSI (1 << 13) -#define IDR0_ASID16 (1 << 12) -#define IDR0_ATS (1 << 10) -#define IDR0_HYP (1 << 9) -#define IDR0_COHACC (1 << 4) -#define IDR0_TTF GENMASK(3, 2) -#define IDR0_TTF_AARCH64 2 -#define IDR0_TTF_AARCH32_64 3 -#define IDR0_S1P (1 << 1) -#define IDR0_S2P (1 << 0) - -#define ARM_SMMU_IDR1 0x4 -#define IDR1_TABLES_PRESET (1 << 30) -#define IDR1_QUEUES_PRESET (1 << 29) -#define IDR1_REL (1 << 28) -#define IDR1_CMDQS GENMASK(25, 21) -#define IDR1_EVTQS GENMASK(20, 16) -#define IDR1_PRIQS GENMASK(15, 11) -#define IDR1_SSIDSIZE GENMASK(10, 6) -#define IDR1_SIDSIZE GENMASK(5, 0) - -#define ARM_SMMU_IDR3 0xc -#define IDR3_RIL (1 << 10) - -#define ARM_SMMU_IDR5 0x14 -#define IDR5_STALL_MAX GENMASK(31, 16) -#define IDR5_GRAN64K (1 << 6) -#define IDR5_GRAN16K (1 << 5) -#define IDR5_GRAN4K (1 << 4) -#define IDR5_OAS GENMASK(2, 0) -#define IDR5_OAS_32_BIT 0 -#define IDR5_OAS_36_BIT 1 -#define IDR5_OAS_40_BIT 2 -#define IDR5_OAS_42_BIT 3 -#define IDR5_OAS_44_BIT 4 -#define IDR5_OAS_48_BIT 5 -#define IDR5_OAS_52_BIT 6 -#define IDR5_VAX GENMASK(11, 10) -#define IDR5_VAX_52_BIT 1 - -#define ARM_SMMU_CR0 0x20 -#define CR0_ATSCHK (1 << 4) -#define CR0_CMDQEN (1 << 3) -#define CR0_EVTQEN (1 << 2) -#define CR0_PRIQEN (1 << 1) -#define CR0_SMMUEN (1 << 0) - -#define ARM_SMMU_CR0ACK 0x24 - -#define ARM_SMMU_CR1 0x28 -#define CR1_TABLE_SH GENMASK(11, 10) -#define CR1_TABLE_OC GENMASK(9, 8) -#define CR1_TABLE_IC GENMASK(7, 6) -#define CR1_QUEUE_SH GENMASK(5, 4) -#define CR1_QUEUE_OC GENMASK(3, 2) -#define CR1_QUEUE_IC GENMASK(1, 0) -/* CR1 cacheability fields don't quite follow the usual TCR-style encoding */ -#define CR1_CACHE_NC 0 -#define CR1_CACHE_WB 1 -#define CR1_CACHE_WT 2 - -#define ARM_SMMU_CR2 0x2c -#define CR2_PTM (1 << 2) -#define CR2_RECINVSID (1 << 1) -#define CR2_E2H (1 << 0) - 
-#define ARM_SMMU_GBPA 0x44 -#define GBPA_UPDATE (1 << 31) -#define GBPA_ABORT (1 << 20) - -#define ARM_SMMU_IRQ_CTRL 0x50 -#define IRQ_CTRL_EVTQ_IRQEN (1 << 2) -#define IRQ_CTRL_PRIQ_IRQEN (1 << 1) -#define IRQ_CTRL_GERROR_IRQEN (1 << 0) - -#define ARM_SMMU_IRQ_CTRLACK 0x54 - -#define ARM_SMMU_GERROR 0x60 -#define GERROR_SFM_ERR (1 << 8) -#define GERROR_MSI_GERROR_ABT_ERR (1 << 7) -#define GERROR_MSI_PRIQ_ABT_ERR (1 << 6) -#define GERROR_MSI_EVTQ_ABT_ERR (1 << 5) -#define GERROR_MSI_CMDQ_ABT_ERR (1 << 4) -#define GERROR_PRIQ_ABT_ERR (1 << 3) -#define GERROR_EVTQ_ABT_ERR (1 << 2) -#define GERROR_CMDQ_ERR (1 << 0) -#define GERROR_ERR_MASK 0x1fd - -#define ARM_SMMU_GERRORN 0x64 - -#define ARM_SMMU_GERROR_IRQ_CFG0 0x68 -#define ARM_SMMU_GERROR_IRQ_CFG1 0x70 -#define ARM_SMMU_GERROR_IRQ_CFG2 0x74 - -#define ARM_SMMU_STRTAB_BASE 0x80 -#define STRTAB_BASE_RA (1UL << 62) -#define STRTAB_BASE_ADDR_MASK GENMASK_ULL(51, 6) - -#define ARM_SMMU_STRTAB_BASE_CFG 0x88 -#define STRTAB_BASE_CFG_FMT GENMASK(17, 16) -#define STRTAB_BASE_CFG_FMT_LINEAR 0 -#define STRTAB_BASE_CFG_FMT_2LVL 1 -#define STRTAB_BASE_CFG_SPLIT GENMASK(10, 6) -#define STRTAB_BASE_CFG_LOG2SIZE GENMASK(5, 0) - -#define ARM_SMMU_CMDQ_BASE 0x90 -#define ARM_SMMU_CMDQ_PROD 0x98 -#define ARM_SMMU_CMDQ_CONS 0x9c - -#define ARM_SMMU_EVTQ_BASE 0xa0 -#define ARM_SMMU_EVTQ_PROD 0xa8 -#define ARM_SMMU_EVTQ_CONS 0xac -#define ARM_SMMU_EVTQ_IRQ_CFG0 0xb0 -#define ARM_SMMU_EVTQ_IRQ_CFG1 0xb8 -#define ARM_SMMU_EVTQ_IRQ_CFG2 0xbc - -#define ARM_SMMU_PRIQ_BASE 0xc0 -#define ARM_SMMU_PRIQ_PROD 0xc8 -#define ARM_SMMU_PRIQ_CONS 0xcc -#define ARM_SMMU_PRIQ_IRQ_CFG0 0xd0 -#define ARM_SMMU_PRIQ_IRQ_CFG1 0xd8 -#define ARM_SMMU_PRIQ_IRQ_CFG2 0xdc - -#define ARM_SMMU_REG_SZ 0xe00 - -/* Common MSI config fields */ -#define MSI_CFG0_ADDR_MASK GENMASK_ULL(51, 2) -#define MSI_CFG2_SH GENMASK(5, 4) -#define MSI_CFG2_MEMATTR GENMASK(3, 0) - -/* Common memory attribute values */ -#define ARM_SMMU_SH_NSH 0 -#define ARM_SMMU_SH_OSH 2 -#define 
ARM_SMMU_SH_ISH 3 -#define ARM_SMMU_MEMATTR_DEVICE_nGnRE 0x1 -#define ARM_SMMU_MEMATTR_OIWB 0xf +#include #define Q_IDX(llq, p) ((p) & ((1 << (llq)->max_n_shift) - 1)) #define Q_WRP(llq, p) ((p) & (1 << (llq)->max_n_shift)) @@ -175,10 +24,6 @@ Q_IDX(&((q)->llq), p) * \ (q)->ent_dwords) -#define Q_BASE_RWA (1UL << 62) -#define Q_BASE_ADDR_MASK GENMASK_ULL(51, 5) -#define Q_BASE_LOG2SIZE GENMASK(4, 0) - /* Ensure DMA allocations are naturally aligned */ #ifdef CONFIG_CMA_ALIGNMENT #define Q_MAX_SZ_SHIFT (PAGE_SHIFT + CONFIG_CMA_ALIGNMENT) @@ -186,132 +31,6 @@ #define Q_MAX_SZ_SHIFT (PAGE_SHIFT + MAX_ORDER - 1) #endif -/* - * Stream table. - * - * Linear: Enough to cover 1 << IDR1.SIDSIZE entries - * 2lvl: 128k L1 entries, - * 256 lazy entries per table (each table covers a PCI bus) - */ -#define STRTAB_L1_SZ_SHIFT 20 -#define STRTAB_SPLIT 8 - -#define STRTAB_L1_DESC_DWORDS 1 -#define STRTAB_L1_DESC_SPAN GENMASK_ULL(4, 0) -#define STRTAB_L1_DESC_L2PTR_MASK GENMASK_ULL(51, 6) - -#define STRTAB_STE_DWORDS 8 -#define STRTAB_STE_0_V (1UL << 0) -#define STRTAB_STE_0_CFG GENMASK_ULL(3, 1) -#define STRTAB_STE_0_CFG_ABORT 0 -#define STRTAB_STE_0_CFG_BYPASS 4 -#define STRTAB_STE_0_CFG_S1_TRANS 5 -#define STRTAB_STE_0_CFG_S2_TRANS 6 - -#define STRTAB_STE_0_S1FMT GENMASK_ULL(5, 4) -#define STRTAB_STE_0_S1FMT_LINEAR 0 -#define STRTAB_STE_0_S1FMT_64K_L2 2 -#define STRTAB_STE_0_S1CTXPTR_MASK GENMASK_ULL(51, 6) -#define STRTAB_STE_0_S1CDMAX GENMASK_ULL(63, 59) - -#define STRTAB_STE_1_S1DSS GENMASK_ULL(1, 0) -#define STRTAB_STE_1_S1DSS_TERMINATE 0x0 -#define STRTAB_STE_1_S1DSS_BYPASS 0x1 -#define STRTAB_STE_1_S1DSS_SSID0 0x2 - -#define STRTAB_STE_1_S1C_CACHE_NC 0UL -#define STRTAB_STE_1_S1C_CACHE_WBRA 1UL -#define STRTAB_STE_1_S1C_CACHE_WT 2UL -#define STRTAB_STE_1_S1C_CACHE_WB 3UL -#define STRTAB_STE_1_S1CIR GENMASK_ULL(3, 2) -#define STRTAB_STE_1_S1COR GENMASK_ULL(5, 4) -#define STRTAB_STE_1_S1CSH GENMASK_ULL(7, 6) - -#define STRTAB_STE_1_S1STALLD (1UL << 27) - -#define 
STRTAB_STE_1_EATS GENMASK_ULL(29, 28) -#define STRTAB_STE_1_EATS_ABT 0UL -#define STRTAB_STE_1_EATS_TRANS 1UL -#define STRTAB_STE_1_EATS_S1CHK 2UL - -#define STRTAB_STE_1_STRW GENMASK_ULL(31, 30) -#define STRTAB_STE_1_STRW_NSEL1 0UL -#define STRTAB_STE_1_STRW_EL2 2UL - -#define STRTAB_STE_1_SHCFG GENMASK_ULL(45, 44) -#define STRTAB_STE_1_SHCFG_INCOMING 1UL - -#define STRTAB_STE_2_S2VMID GENMASK_ULL(15, 0) -#define STRTAB_STE_2_VTCR GENMASK_ULL(50, 32) -#define STRTAB_STE_2_VTCR_S2T0SZ GENMASK_ULL(5, 0) -#define STRTAB_STE_2_VTCR_S2SL0 GENMASK_ULL(7, 6) -#define STRTAB_STE_2_VTCR_S2IR0 GENMASK_ULL(9, 8) -#define STRTAB_STE_2_VTCR_S2OR0 GENMASK_ULL(11, 10) -#define STRTAB_STE_2_VTCR_S2SH0 GENMASK_ULL(13, 12) -#define STRTAB_STE_2_VTCR_S2TG GENMASK_ULL(15, 14) -#define STRTAB_STE_2_VTCR_S2PS GENMASK_ULL(18, 16) -#define STRTAB_STE_2_S2AA64 (1UL << 51) -#define STRTAB_STE_2_S2ENDI (1UL << 52) -#define STRTAB_STE_2_S2PTW (1UL << 54) -#define STRTAB_STE_2_S2R (1UL << 58) - -#define STRTAB_STE_3_S2TTB_MASK GENMASK_ULL(51, 4) - -/* - * Context descriptors. - * - * Linear: when less than 1024 SSIDs are supported - * 2lvl: at most 1024 L1 entries, - * 1024 lazy entries per table. 
- */ -#define CTXDESC_SPLIT 10 -#define CTXDESC_L2_ENTRIES (1 << CTXDESC_SPLIT) - -#define CTXDESC_L1_DESC_DWORDS 1 -#define CTXDESC_L1_DESC_V (1UL << 0) -#define CTXDESC_L1_DESC_L2PTR_MASK GENMASK_ULL(51, 12) - -#define CTXDESC_CD_DWORDS 8 -#define CTXDESC_CD_0_TCR_T0SZ GENMASK_ULL(5, 0) -#define CTXDESC_CD_0_TCR_TG0 GENMASK_ULL(7, 6) -#define CTXDESC_CD_0_TCR_IRGN0 GENMASK_ULL(9, 8) -#define CTXDESC_CD_0_TCR_ORGN0 GENMASK_ULL(11, 10) -#define CTXDESC_CD_0_TCR_SH0 GENMASK_ULL(13, 12) -#define CTXDESC_CD_0_TCR_EPD0 (1ULL << 14) -#define CTXDESC_CD_0_TCR_EPD1 (1ULL << 30) - -#define CTXDESC_CD_0_ENDI (1UL << 15) -#define CTXDESC_CD_0_V (1UL << 31) - -#define CTXDESC_CD_0_TCR_IPS GENMASK_ULL(34, 32) -#define CTXDESC_CD_0_TCR_TBI0 (1ULL << 38) - -#define CTXDESC_CD_0_AA64 (1UL << 41) -#define CTXDESC_CD_0_S (1UL << 44) -#define CTXDESC_CD_0_R (1UL << 45) -#define CTXDESC_CD_0_A (1UL << 46) -#define CTXDESC_CD_0_ASET (1UL << 47) -#define CTXDESC_CD_0_ASID GENMASK_ULL(63, 48) - -#define CTXDESC_CD_1_TTB0_MASK GENMASK_ULL(51, 4) - -/* - * When the SMMU only supports linear context descriptor tables, pick a - * reasonable size limit (64kB). 
- */ -#define CTXDESC_LINEAR_CDMAX ilog2(SZ_64K / (CTXDESC_CD_DWORDS << 3)) - -/* Command queue */ -#define CMDQ_ENT_SZ_SHIFT 4 -#define CMDQ_ENT_DWORDS ((1 << CMDQ_ENT_SZ_SHIFT) >> 3) -#define CMDQ_MAX_SZ_SHIFT (Q_MAX_SZ_SHIFT - CMDQ_ENT_SZ_SHIFT) - -#define CMDQ_CONS_ERR GENMASK(30, 24) -#define CMDQ_ERR_CERROR_NONE_IDX 0 -#define CMDQ_ERR_CERROR_ILL_IDX 1 -#define CMDQ_ERR_CERROR_ABT_IDX 2 -#define CMDQ_ERR_CERROR_ATC_INV_IDX 3 - #define CMDQ_PROD_OWNED_FLAG Q_OVERFLOW_FLAG /* @@ -321,98 +40,11 @@ */ #define CMDQ_BATCH_ENTRIES BITS_PER_LONG -#define CMDQ_0_OP GENMASK_ULL(7, 0) -#define CMDQ_0_SSV (1UL << 11) - -#define CMDQ_PREFETCH_0_SID GENMASK_ULL(63, 32) -#define CMDQ_PREFETCH_1_SIZE GENMASK_ULL(4, 0) -#define CMDQ_PREFETCH_1_ADDR_MASK GENMASK_ULL(63, 12) - -#define CMDQ_CFGI_0_SSID GENMASK_ULL(31, 12) -#define CMDQ_CFGI_0_SID GENMASK_ULL(63, 32) -#define CMDQ_CFGI_1_LEAF (1UL << 0) -#define CMDQ_CFGI_1_RANGE GENMASK_ULL(4, 0) - -#define CMDQ_TLBI_0_NUM GENMASK_ULL(16, 12) -#define CMDQ_TLBI_RANGE_NUM_MAX 31 -#define CMDQ_TLBI_0_SCALE GENMASK_ULL(24, 20) -#define CMDQ_TLBI_0_VMID GENMASK_ULL(47, 32) -#define CMDQ_TLBI_0_ASID GENMASK_ULL(63, 48) -#define CMDQ_TLBI_1_LEAF (1UL << 0) -#define CMDQ_TLBI_1_TTL GENMASK_ULL(9, 8) -#define CMDQ_TLBI_1_TG GENMASK_ULL(11, 10) -#define CMDQ_TLBI_1_VA_MASK GENMASK_ULL(63, 12) -#define CMDQ_TLBI_1_IPA_MASK GENMASK_ULL(51, 12) - -#define CMDQ_ATC_0_SSID GENMASK_ULL(31, 12) -#define CMDQ_ATC_0_SID GENMASK_ULL(63, 32) -#define CMDQ_ATC_0_GLOBAL (1UL << 9) -#define CMDQ_ATC_1_SIZE GENMASK_ULL(5, 0) -#define CMDQ_ATC_1_ADDR_MASK GENMASK_ULL(63, 12) - -#define CMDQ_PRI_0_SSID GENMASK_ULL(31, 12) -#define CMDQ_PRI_0_SID GENMASK_ULL(63, 32) -#define CMDQ_PRI_1_GRPID GENMASK_ULL(8, 0) -#define CMDQ_PRI_1_RESP GENMASK_ULL(13, 12) - -#define CMDQ_RESUME_0_RESP_TERM 0UL -#define CMDQ_RESUME_0_RESP_RETRY 1UL -#define CMDQ_RESUME_0_RESP_ABORT 2UL -#define CMDQ_RESUME_0_RESP GENMASK_ULL(13, 12) -#define CMDQ_RESUME_0_SID 
GENMASK_ULL(63, 32) -#define CMDQ_RESUME_1_STAG GENMASK_ULL(15, 0) - -#define CMDQ_SYNC_0_CS GENMASK_ULL(13, 12) -#define CMDQ_SYNC_0_CS_NONE 0 -#define CMDQ_SYNC_0_CS_IRQ 1 -#define CMDQ_SYNC_0_CS_SEV 2 -#define CMDQ_SYNC_0_MSH GENMASK_ULL(23, 22) -#define CMDQ_SYNC_0_MSIATTR GENMASK_ULL(27, 24) -#define CMDQ_SYNC_0_MSIDATA GENMASK_ULL(63, 32) -#define CMDQ_SYNC_1_MSIADDR_MASK GENMASK_ULL(51, 2) - -/* Event queue */ -#define EVTQ_ENT_SZ_SHIFT 5 -#define EVTQ_ENT_DWORDS ((1 << EVTQ_ENT_SZ_SHIFT) >> 3) -#define EVTQ_MAX_SZ_SHIFT (Q_MAX_SZ_SHIFT - EVTQ_ENT_SZ_SHIFT) - -#define EVTQ_0_ID GENMASK_ULL(7, 0) - -#define EVT_ID_TRANSLATION_FAULT 0x10 -#define EVT_ID_ADDR_SIZE_FAULT 0x11 -#define EVT_ID_ACCESS_FAULT 0x12 -#define EVT_ID_PERMISSION_FAULT 0x13 - -#define EVTQ_0_SSV (1UL << 11) -#define EVTQ_0_SSID GENMASK_ULL(31, 12) -#define EVTQ_0_SID GENMASK_ULL(63, 32) -#define EVTQ_1_STAG GENMASK_ULL(15, 0) -#define EVTQ_1_STALL (1UL << 31) -#define EVTQ_1_PnU (1UL << 33) -#define EVTQ_1_InD (1UL << 34) -#define EVTQ_1_RnW (1UL << 35) -#define EVTQ_1_S2 (1UL << 39) -#define EVTQ_1_CLASS GENMASK_ULL(41, 40) -#define EVTQ_1_TT_READ (1UL << 44) -#define EVTQ_2_ADDR GENMASK_ULL(63, 0) -#define EVTQ_3_IPA GENMASK_ULL(51, 12) - -/* PRI queue */ -#define PRIQ_ENT_SZ_SHIFT 4 -#define PRIQ_ENT_DWORDS ((1 << PRIQ_ENT_SZ_SHIFT) >> 3) -#define PRIQ_MAX_SZ_SHIFT (Q_MAX_SZ_SHIFT - PRIQ_ENT_SZ_SHIFT) - -#define PRIQ_0_SID GENMASK_ULL(31, 0) -#define PRIQ_0_SSID GENMASK_ULL(51, 32) -#define PRIQ_0_PERM_PRIV (1UL << 58) -#define PRIQ_0_PERM_EXEC (1UL << 59) -#define PRIQ_0_PERM_READ (1UL << 60) -#define PRIQ_0_PERM_WRITE (1UL << 61) -#define PRIQ_0_PRG_LAST (1UL << 62) -#define PRIQ_0_SSID_V (1UL << 63) - -#define PRIQ_1_PRG_IDX GENMASK_ULL(8, 0) -#define PRIQ_1_ADDR_MASK GENMASK_ULL(63, 12) +/* + * When the SMMU only supports linear context descriptor tables, pick a + * reasonable size limit (64kB). 
+ */ +#define CTXDESC_LINEAR_CDMAX ilog2(SZ_64K / (CTXDESC_CD_DWORDS << 3)) /* High-level queue structures */ #define ARM_SMMU_POLL_TIMEOUT_US 1000000 /* 1s! */ @@ -421,88 +53,6 @@ #define MSI_IOVA_BASE 0x8000000 #define MSI_IOVA_LENGTH 0x100000 -enum pri_resp { - PRI_RESP_DENY = 0, - PRI_RESP_FAIL = 1, - PRI_RESP_SUCC = 2, -}; - -struct arm_smmu_cmdq_ent { - /* Common fields */ - u8 opcode; - bool substream_valid; - - /* Command-specific fields */ - union { - #define CMDQ_OP_PREFETCH_CFG 0x1 - struct { - u32 sid; - } prefetch; - - #define CMDQ_OP_CFGI_STE 0x3 - #define CMDQ_OP_CFGI_ALL 0x4 - #define CMDQ_OP_CFGI_CD 0x5 - #define CMDQ_OP_CFGI_CD_ALL 0x6 - struct { - u32 sid; - u32 ssid; - union { - bool leaf; - u8 span; - }; - } cfgi; - - #define CMDQ_OP_TLBI_NH_ASID 0x11 - #define CMDQ_OP_TLBI_NH_VA 0x12 - #define CMDQ_OP_TLBI_EL2_ALL 0x20 - #define CMDQ_OP_TLBI_EL2_ASID 0x21 - #define CMDQ_OP_TLBI_EL2_VA 0x22 - #define CMDQ_OP_TLBI_S12_VMALL 0x28 - #define CMDQ_OP_TLBI_S2_IPA 0x2a - #define CMDQ_OP_TLBI_NSNH_ALL 0x30 - struct { - u8 num; - u8 scale; - u16 asid; - u16 vmid; - bool leaf; - u8 ttl; - u8 tg; - u64 addr; - } tlbi; - - #define CMDQ_OP_ATC_INV 0x40 - #define ATC_INV_SIZE_ALL 52 - struct { - u32 sid; - u32 ssid; - u64 addr; - u8 size; - bool global; - } atc; - - #define CMDQ_OP_PRI_RESP 0x41 - struct { - u32 sid; - u32 ssid; - u16 grpid; - enum pri_resp resp; - } pri; - - #define CMDQ_OP_RESUME 0x44 - struct { - u32 sid; - u16 stag; - u8 resp; - } resume; - - #define CMDQ_OP_CMD_SYNC 0x46 - struct { - u64 msiaddr; - } sync; - }; -}; - struct arm_smmu_ll_queue { union { u64 val; @@ -621,25 +171,6 @@ struct arm_smmu_device { void __iomem *base; void __iomem *page1; -#define ARM_SMMU_FEAT_2_LVL_STRTAB (1 << 0) -#define ARM_SMMU_FEAT_2_LVL_CDTAB (1 << 1) -#define ARM_SMMU_FEAT_TT_LE (1 << 2) -#define ARM_SMMU_FEAT_TT_BE (1 << 3) -#define ARM_SMMU_FEAT_PRI (1 << 4) -#define ARM_SMMU_FEAT_ATS (1 << 5) -#define ARM_SMMU_FEAT_SEV (1 << 6) -#define 
ARM_SMMU_FEAT_MSI		(1 << 7)
-#define ARM_SMMU_FEAT_COHERENCY		(1 << 8)
-#define ARM_SMMU_FEAT_TRANS_S1		(1 << 9)
-#define ARM_SMMU_FEAT_TRANS_S2		(1 << 10)
-#define ARM_SMMU_FEAT_STALLS		(1 << 11)
-#define ARM_SMMU_FEAT_HYP		(1 << 12)
-#define ARM_SMMU_FEAT_STALL_FORCE	(1 << 13)
-#define ARM_SMMU_FEAT_VAX		(1 << 14)
-#define ARM_SMMU_FEAT_RANGE_INV		(1 << 15)
-#define ARM_SMMU_FEAT_BTM		(1 << 16)
-#define ARM_SMMU_FEAT_SVA		(1 << 17)
-#define ARM_SMMU_FEAT_E2H		(1 << 18)
 	u32 features;
 
 #define ARM_SMMU_OPT_SKIP_PREFETCH	(1 << 0)

From patchwork Wed Feb 1 12:52:52 2023
From: Jean-Philippe Brucker
To: maz@kernel.org, catalin.marinas@arm.com, will@kernel.org, joro@8bytes.org
Cc: robin.murphy@arm.com, james.morse@arm.com, suzuki.poulose@arm.com,
 oliver.upton@linux.dev, yuzenghui@huawei.com, smostafa@google.com,
 dbrazdil@google.com, ryan.roberts@arm.com,
 linux-arm-kernel@lists.infradead.org, kvmarm@lists.linux.dev,
 iommu@lists.linux.dev, Jean-Philippe Brucker
Subject: [RFC PATCH 08/45] KVM: arm64: pkvm: Add pkvm_udelay()
Date: Wed, 1 Feb 2023 12:52:52 +0000
Message-Id: <20230201125328.2186498-9-jean-philippe@linaro.org>
In-Reply-To: <20230201125328.2186498-1-jean-philippe@linaro.org>
References: <20230201125328.2186498-1-jean-philippe@linaro.org>

Add a simple delay loop for drivers. This could use more work. It
should be possible to insert a wfe and save power, but I haven't
studied whether it is safe to do so with the host in control of the
event stream. The SMMU driver will use wfe anyway for frequent waits
(provided the implementation can send command queue events).
Signed-off-by: Jean-Philippe Brucker
---
 arch/arm64/kvm/hyp/include/nvhe/pkvm.h |  3 ++
 arch/arm64/kvm/hyp/nvhe/setup.c        |  4 +++
 arch/arm64/kvm/hyp/nvhe/timer-sr.c     | 43 ++++++++++++++++++++++++++
 3 files changed, 50 insertions(+)

diff --git a/arch/arm64/kvm/hyp/include/nvhe/pkvm.h b/arch/arm64/kvm/hyp/include/nvhe/pkvm.h
index 6160d1a34fa2..746dc1c05a8e 100644
--- a/arch/arm64/kvm/hyp/include/nvhe/pkvm.h
+++ b/arch/arm64/kvm/hyp/include/nvhe/pkvm.h
@@ -109,4 +109,7 @@ bool kvm_handle_pvm_hvc64(struct kvm_vcpu *vcpu, u64 *exit_code);
 struct pkvm_hyp_vcpu *pkvm_mpidr_to_hyp_vcpu(struct pkvm_hyp_vm *vm, u64 mpidr);
 
+int pkvm_timer_init(void);
+void pkvm_udelay(unsigned long usecs);
+
 #endif /* __ARM64_KVM_NVHE_PKVM_H__ */

diff --git a/arch/arm64/kvm/hyp/nvhe/setup.c b/arch/arm64/kvm/hyp/nvhe/setup.c
index 8a357637ce81..629e74c46b35 100644
--- a/arch/arm64/kvm/hyp/nvhe/setup.c
+++ b/arch/arm64/kvm/hyp/nvhe/setup.c
@@ -300,6 +300,10 @@ void __noreturn __pkvm_init_finalise(void)
 	};
 	pkvm_pgtable.mm_ops = &pkvm_pgtable_mm_ops;
 
+	ret = pkvm_timer_init();
+	if (ret)
+		goto out;
+
 	ret = fix_host_ownership();
 	if (ret)
 		goto out;

diff --git a/arch/arm64/kvm/hyp/nvhe/timer-sr.c b/arch/arm64/kvm/hyp/nvhe/timer-sr.c
index 9072e71693ba..202df9003a0d 100644
--- a/arch/arm64/kvm/hyp/nvhe/timer-sr.c
+++ b/arch/arm64/kvm/hyp/nvhe/timer-sr.c
@@ -10,6 +10,10 @@
 #include
 
+#include
+
+static u32 timer_freq;
+
 void __kvm_timer_set_cntvoff(u64 cntvoff)
 {
 	write_sysreg(cntvoff, cntvoff_el2);
@@ -46,3 +50,42 @@ void __timer_enable_traps(struct kvm_vcpu *vcpu)
 		val |= CNTHCTL_EL1PCTEN;
 	write_sysreg(val, cnthctl_el2);
 }
+
+static u64 pkvm_ticks_get(void)
+{
+	return __arch_counter_get_cntvct();
+}
+
+#define SEC_TO_US 1000000
+
+int pkvm_timer_init(void)
+{
+	timer_freq = read_sysreg(cntfrq_el0);
+	/*
+	 * TODO: The highest privileged level is supposed to initialize this
+	 * register. But on some systems (which?), this information is only
+	 * contained in the device-tree, so we'll need to find it out some
+	 * other way.
+	 */
+	if (!timer_freq || timer_freq < SEC_TO_US)
+		return -ENODEV;
+	return 0;
+}
+
+
+#define pkvm_time_us_to_ticks(us)	((u64)(us) * timer_freq / SEC_TO_US)
+
+void pkvm_udelay(unsigned long usecs)
+{
+	u64 ticks = pkvm_time_us_to_ticks(usecs);
+	u64 start = pkvm_ticks_get();
+
+	while (true) {
+		u64 cur = pkvm_ticks_get();
+
+		if ((cur - start) >= ticks || cur < start)
+			break;
+		/* TODO wfe */
+		cpu_relax();
+	}
+}

From patchwork Wed Feb 1 12:52:53 2023
From: Jean-Philippe Brucker
To: maz@kernel.org, catalin.marinas@arm.com, will@kernel.org, joro@8bytes.org
Cc: robin.murphy@arm.com, james.morse@arm.com, suzuki.poulose@arm.com,
 oliver.upton@linux.dev, yuzenghui@huawei.com, smostafa@google.com,
 dbrazdil@google.com, ryan.roberts@arm.com,
 linux-arm-kernel@lists.infradead.org, kvmarm@lists.linux.dev,
 iommu@lists.linux.dev, Jean-Philippe Brucker
Subject: [RFC PATCH 09/45] KVM: arm64: pkvm: Add pkvm_create_hyp_device_mapping()
Date: Wed, 1 Feb 2023 12:52:53 +0000
Message-Id: <20230201125328.2186498-10-jean-philippe@linaro.org>
In-Reply-To: <20230201125328.2186498-1-jean-philippe@linaro.org>
References: <20230201125328.2186498-1-jean-philippe@linaro.org>

Add a function to map an MMIO region in the hypervisor and remove it
from the host. Hypervisor device drivers use this to reserve their
regions during setup.
Signed-off-by: Jean-Philippe Brucker --- arch/arm64/kvm/hyp/include/nvhe/mm.h | 1 + arch/arm64/kvm/hyp/nvhe/setup.c | 17 +++++++++++++++++ 2 files changed, 18 insertions(+) diff --git a/arch/arm64/kvm/hyp/include/nvhe/mm.h b/arch/arm64/kvm/hyp/include/nvhe/mm.h index d5ec972b5c1e..84db840f2057 100644 --- a/arch/arm64/kvm/hyp/include/nvhe/mm.h +++ b/arch/arm64/kvm/hyp/include/nvhe/mm.h @@ -27,5 +27,6 @@ int __pkvm_create_private_mapping(phys_addr_t phys, size_t size, enum kvm_pgtable_prot prot, unsigned long *haddr); int pkvm_alloc_private_va_range(size_t size, unsigned long *haddr); +int pkvm_create_hyp_device_mapping(u64 base, u64 size, void __iomem *haddr); #endif /* __KVM_HYP_MM_H */ diff --git a/arch/arm64/kvm/hyp/nvhe/setup.c b/arch/arm64/kvm/hyp/nvhe/setup.c index 629e74c46b35..de7d60c3c20b 100644 --- a/arch/arm64/kvm/hyp/nvhe/setup.c +++ b/arch/arm64/kvm/hyp/nvhe/setup.c @@ -259,6 +259,23 @@ static int fix_host_ownership(void) return 0; } +/* Map the MMIO region into the hypervisor and remove it from host */ +int pkvm_create_hyp_device_mapping(u64 base, u64 size, void __iomem *haddr) +{ + int ret; + + ret = __pkvm_create_private_mapping(base, size, PAGE_HYP_DEVICE, haddr); + if (ret) + return ret; + + /* lock not needed during setup */ + ret = host_stage2_set_owner_locked(base, size, PKVM_ID_HYP); + if (ret) + return ret; + + return ret; +} + static int fix_hyp_pgtable_refcnt(void) { struct kvm_pgtable_walker walker = { From patchwork Wed Feb 1 12:52:54 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Jean-Philippe Brucker X-Patchwork-Id: 13124381 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from bombadil.infradead.org (bombadil.infradead.org [198.137.202.133]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.lore.kernel.org (Postfix) with 
ESMTPS id 8CA55C636CD for ; Wed, 1 Feb 2023 14:04:20 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=lists.infradead.org; s=bombadil.20210309; h=Sender: Content-Transfer-Encoding:Content-Type:List-Subscribe:List-Help:List-Post: List-Archive:List-Unsubscribe:List-Id:MIME-Version:References:In-Reply-To: Message-Id:Date:Subject:Cc:To:From:Reply-To:Content-ID:Content-Description: Resent-Date:Resent-From:Resent-Sender:Resent-To:Resent-Cc:Resent-Message-ID: List-Owner; bh=0jrb3PA0Z7d9onYui07EBkWRqOGV6i6CQWoZLGslras=; b=qiNSS96bt3xRWQ QYdOPgj5AKws3dS8breoexWvNOo5jepMZl3fKTcbJx8RIAsC5F1Xss2CeDOuQv1+yyumzFz7jd/vQ Ds1+WvWnxTd4LOGBfnjiJurzcjKekPk7dOdTWpyT+Dfq84HjI8iMsQldKOr1n+Ct55PxixnEo9xZ0 Syp/QOCe7xgsib4+r44fYz73MmK+itqE4s4FZoDUUzYTTZGsw4mfXh78xHlt/iTUlSz+/pNZwLaim Ussy3tW8YtRyYSR1+lvxmXsNOVE561kUZBBNlo/0qqI7AkHqrzyoYRe4RO/N3cd5ZWygzjBQu79FR cuJhC97OHGYE7gTAjvQA==; Received: from localhost ([::1] helo=bombadil.infradead.org) by bombadil.infradead.org with esmtp (Exim 4.94.2 #2 (Red Hat Linux)) id 1pNDi0-00CBwU-Qi; Wed, 01 Feb 2023 14:03:33 +0000 Received: from mail-wr1-x42d.google.com ([2a00:1450:4864:20::42d]) by bombadil.infradead.org with esmtps (Exim 4.94.2 #2 (Red Hat Linux)) id 1pNCi2-00BnIT-E7 for linux-arm-kernel@lists.infradead.org; Wed, 01 Feb 2023 12:59:35 +0000 Received: by mail-wr1-x42d.google.com with SMTP id bk16so17209936wrb.11 for ; Wed, 01 Feb 2023 04:59:28 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=linaro.org; s=google; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:from:to:cc:subject:date :message-id:reply-to; bh=JL+q7powKXjRQBM9aUxd/Ixujr3/xsGD0eW5qeS08kg=; b=XvIuhKnbdaSJmJH8iDjgXLwNO9bmY/agjJa9GYHK5fvNc0rbxK9E8+MePeO1hYYVkj BLoKe4FnYGZ+IcTOkZe7A/26QsSpzsLPFCxLtdFiID61kSy1LAZHhSepPc/I0WjajANX zqcV2727uc8XO/ZvNCcb3Dwjpmq+TZk1Mq07dsMCgGcMTplN5QoiX9TyV73FJGwuWPl5 VVfuWMPqHgHHXae2ZU0wPy/5nRKQ4LHuLzsStjXWsObRhMnAWkK+9/ljtH6QUPqObP0o 
1KVNZv0lWLfSW5q9HV60q+XYtLC9dKPNWlqVRI8Ylf60E6JZU9bJG0pBGfBHyapeYBxY IujA== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20210112; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc :subject:date:message-id:reply-to; bh=JL+q7powKXjRQBM9aUxd/Ixujr3/xsGD0eW5qeS08kg=; b=P/yiMDUhKzp18Vq3P1HHLiKRYEWUnTgSLVGp9ojfkbMOfwnF+5sJnQWk24QGJ83GWi /keCbBVRUs2ESwF9GQy4gEGA82rB4lImnCt8E+pN2JRvX7AIOTV/CJ3bx6Hax97vRzss WESouzIwFBB+GY8qZAvETjWPpZuLvu/Ry7IlwKsFa0aIf3R6EIjkHxBA281eHsDeIlgm 4B9MixX8MQc8Mk5OUyAE3TAZcdAgHd98i9rcIlLyyLucO0SgVYQJpR+fqFUfjBvNO1KJ Zh3n0s0bwsEkwBXA8jiSZwy6dAUCbO6H3doyIndTBZ1Kp3rs9N3IGSlfq18TuRPPpySo lXDw== X-Gm-Message-State: AO0yUKVPYLO7gLpDk4442P6j9kQ3RRuxcVb8hvysk4DdF2tail4mZJKR 4c7eNKb3Zpm9pAJcKtrazRdbpw== X-Google-Smtp-Source: AK7set/b2J59uvwFQEz7jZcXjC6ArgctU3mXzzUT7UxY8LgLmGH7l73VT4ZIMiNBTG1OX7/4xi4JWw== X-Received: by 2002:a05:6000:188f:b0:2bf:cb6a:a7e with SMTP id a15-20020a056000188f00b002bfcb6a0a7emr2957190wri.42.1675256367258; Wed, 01 Feb 2023 04:59:27 -0800 (PST) Received: from localhost.localdomain (054592b0.skybroadband.com. 
From: Jean-Philippe Brucker
To: maz@kernel.org, catalin.marinas@arm.com, will@kernel.org, joro@8bytes.org
Cc: robin.murphy@arm.com, james.morse@arm.com, suzuki.poulose@arm.com, oliver.upton@linux.dev, yuzenghui@huawei.com, smostafa@google.com, dbrazdil@google.com, ryan.roberts@arm.com, linux-arm-kernel@lists.infradead.org, kvmarm@lists.linux.dev, iommu@lists.linux.dev, Jean-Philippe Brucker
Subject: [RFC PATCH 10/45] KVM: arm64: pkvm: Expose pkvm_map/unmap_donated_memory()
Date: Wed, 1 Feb 2023 12:52:54 +0000
Message-Id: <20230201125328.2186498-11-jean-philippe@linaro.org>
In-Reply-To: <20230201125328.2186498-1-jean-philippe@linaro.org>
References: <20230201125328.2186498-1-jean-philippe@linaro.org>

Allow the IOMMU driver to use pkvm_map/unmap_donated_memory().

Signed-off-by: Jean-Philippe Brucker
---
 arch/arm64/kvm/hyp/include/nvhe/mem_protect.h |  3 +++
 arch/arm64/kvm/hyp/nvhe/pkvm.c                | 18 +++++++++---------
 2 files changed, 12 insertions(+), 9 deletions(-)

diff --git a/arch/arm64/kvm/hyp/include/nvhe/mem_protect.h b/arch/arm64/kvm/hyp/include/nvhe/mem_protect.h
index 38e5e9b259fc..40decbe4cc70 100644
--- a/arch/arm64/kvm/hyp/include/nvhe/mem_protect.h
+++ b/arch/arm64/kvm/hyp/include/nvhe/mem_protect.h
@@ -86,6 +86,9 @@ void reclaim_guest_pages(struct pkvm_hyp_vm *vm, struct kvm_hyp_memcache *mc);
 int refill_memcache(struct kvm_hyp_memcache *mc, unsigned long min_pages,
 		    struct kvm_hyp_memcache *host_mc);
+void *pkvm_map_donated_memory(unsigned long host_va, size_t size);
+void pkvm_unmap_donated_memory(void *va, size_t size);
+
 static __always_inline void __load_host_stage2(void)
 {
 	if (static_branch_likely(&kvm_protected_mode_initialized))
diff --git a/arch/arm64/kvm/hyp/nvhe/pkvm.c b/arch/arm64/kvm/hyp/nvhe/pkvm.c
index 905c05c7e9bf..a3711979bbd3 100644
--- a/arch/arm64/kvm/hyp/nvhe/pkvm.c
+++ b/arch/arm64/kvm/hyp/nvhe/pkvm.c
@@ -592,7 +592,7 @@ static void *map_donated_memory_noclear(unsigned long host_va, size_t size)
 	return va;
 }
 
-static void *map_donated_memory(unsigned long host_va, size_t size)
+void *pkvm_map_donated_memory(unsigned long host_va, size_t size)
 {
 	void *va = map_donated_memory_noclear(host_va, size);
 
@@ -608,7 +608,7 @@ static void __unmap_donated_memory(void *va, size_t size)
 			       PAGE_ALIGN(size) >> PAGE_SHIFT));
 }
 
-static void unmap_donated_memory(void *va, size_t size)
+void pkvm_unmap_donated_memory(void *va, size_t size)
 {
 	if (!va)
 		return;
@@ -668,11 +668,11 @@ int __pkvm_init_vm(struct kvm *host_kvm, unsigned long vm_hva,
 
 	ret = -ENOMEM;
 
-	hyp_vm = map_donated_memory(vm_hva, vm_size);
+	hyp_vm = pkvm_map_donated_memory(vm_hva, vm_size);
 	if (!hyp_vm)
 		goto err_remove_mappings;
 
-	last_ran = map_donated_memory(last_ran_hva, last_ran_size);
+	last_ran = pkvm_map_donated_memory(last_ran_hva, last_ran_size);
 	if (!last_ran)
 		goto err_remove_mappings;
 
@@ -699,9 +699,9 @@ int __pkvm_init_vm(struct kvm *host_kvm, unsigned long vm_hva,
 err_unlock:
 	hyp_spin_unlock(&vm_table_lock);
 err_remove_mappings:
-	unmap_donated_memory(hyp_vm, vm_size);
-	unmap_donated_memory(last_ran, last_ran_size);
-	unmap_donated_memory(pgd, pgd_size);
+	pkvm_unmap_donated_memory(hyp_vm, vm_size);
+	pkvm_unmap_donated_memory(last_ran, last_ran_size);
+	pkvm_unmap_donated_memory(pgd, pgd_size);
 err_unpin_kvm:
 	hyp_unpin_shared_mem(host_kvm, host_kvm + 1);
 	return ret;
@@ -726,7 +726,7 @@ int __pkvm_init_vcpu(pkvm_handle_t handle, struct kvm_vcpu *host_vcpu,
 	unsigned int idx;
 	int ret;
 
-	hyp_vcpu = map_donated_memory(vcpu_hva, sizeof(*hyp_vcpu));
+	hyp_vcpu = pkvm_map_donated_memory(vcpu_hva, sizeof(*hyp_vcpu));
 	if (!hyp_vcpu)
 		return -ENOMEM;
 
@@ -754,7 +754,7 @@ int __pkvm_init_vcpu(pkvm_handle_t handle, struct kvm_vcpu *host_vcpu,
 	hyp_spin_unlock(&vm_table_lock);
 
 	if (ret)
-		unmap_donated_memory(hyp_vcpu, sizeof(*hyp_vcpu));
+		pkvm_unmap_donated_memory(hyp_vcpu, sizeof(*hyp_vcpu));
 
 	return ret;
 }

From patchwork Wed Feb 1 12:52:55 2023
X-Patchwork-Submitter: Jean-Philippe Brucker
X-Patchwork-Id: 13124410
From: Jean-Philippe Brucker
To: maz@kernel.org, catalin.marinas@arm.com, will@kernel.org, joro@8bytes.org
Cc: robin.murphy@arm.com, james.morse@arm.com, suzuki.poulose@arm.com, oliver.upton@linux.dev, yuzenghui@huawei.com, smostafa@google.com, dbrazdil@google.com, ryan.roberts@arm.com, linux-arm-kernel@lists.infradead.org, kvmarm@lists.linux.dev, iommu@lists.linux.dev, Jean-Philippe Brucker
Subject: [RFC PATCH 11/45] KVM: arm64: pkvm: Expose pkvm_admit_host_page()
Date: Wed, 1 Feb 2023 12:52:55 +0000
Message-Id: <20230201125328.2186498-12-jean-philippe@linaro.org>
In-Reply-To: <20230201125328.2186498-1-jean-philippe@linaro.org>
References: <20230201125328.2186498-1-jean-philippe@linaro.org>

Since the IOMMU driver will need admit_host_page(), make it non-static. As a
result we can drop refill_memcache() and call admit_host_page() directly from
pkvm_refill_memcache().

Signed-off-by: Jean-Philippe Brucker
---
 arch/arm64/kvm/hyp/include/nvhe/mem_protect.h |  2 --
 arch/arm64/kvm/hyp/include/nvhe/mm.h          |  1 +
 arch/arm64/kvm/hyp/nvhe/hyp-main.c            | 16 ++++++++---
 arch/arm64/kvm/hyp/nvhe/mm.c                  | 27 +++++--------------
 4 files changed, 20 insertions(+), 26 deletions(-)

diff --git a/arch/arm64/kvm/hyp/include/nvhe/mem_protect.h b/arch/arm64/kvm/hyp/include/nvhe/mem_protect.h
index 40decbe4cc70..d4f4ffbb7dbb 100644
--- a/arch/arm64/kvm/hyp/include/nvhe/mem_protect.h
+++ b/arch/arm64/kvm/hyp/include/nvhe/mem_protect.h
@@ -83,8 +83,6 @@ void handle_host_mem_abort(struct kvm_cpu_context *host_ctxt);
 int hyp_pin_shared_mem(void *from, void *to);
 void hyp_unpin_shared_mem(void *from, void *to);
 void reclaim_guest_pages(struct pkvm_hyp_vm *vm, struct kvm_hyp_memcache *mc);
-int refill_memcache(struct kvm_hyp_memcache *mc, unsigned long min_pages,
-		    struct kvm_hyp_memcache *host_mc);
 void *pkvm_map_donated_memory(unsigned long host_va, size_t size);
 void pkvm_unmap_donated_memory(void *va, size_t size);
diff --git a/arch/arm64/kvm/hyp/include/nvhe/mm.h b/arch/arm64/kvm/hyp/include/nvhe/mm.h
index 84db840f2057..a8c46a0ebc4a 100644
--- a/arch/arm64/kvm/hyp/include/nvhe/mm.h
+++ b/arch/arm64/kvm/hyp/include/nvhe/mm.h
@@ -26,6 +26,7 @@ int pkvm_create_mappings_locked(void *from, void *to, enum kvm_pgtable_prot prot
 int __pkvm_create_private_mapping(phys_addr_t phys, size_t size,
 				  enum kvm_pgtable_prot prot,
 				  unsigned long *haddr);
+void *pkvm_admit_host_page(struct kvm_hyp_memcache *mc);
 int pkvm_alloc_private_va_range(size_t size, unsigned long *haddr);
 int pkvm_create_hyp_device_mapping(u64 base, u64 size, void __iomem *haddr);
diff --git a/arch/arm64/kvm/hyp/nvhe/hyp-main.c b/arch/arm64/kvm/hyp/nvhe/hyp-main.c
index e8328f54200e..29ce7b09edbb 100644
--- a/arch/arm64/kvm/hyp/nvhe/hyp-main.c
+++ b/arch/arm64/kvm/hyp/nvhe/hyp-main.c
@@ -766,14 +766,24 @@ static void handle___kvm_vcpu_run(struct kvm_cpu_context *host_ctxt)
 	cpu_reg(host_ctxt, 1) = ret;
 }
+static void *admit_host_page(void *arg)
+{
+	return pkvm_admit_host_page(arg);
+}
+
 static int pkvm_refill_memcache(struct pkvm_hyp_vcpu *hyp_vcpu)
 {
+	int ret;
 	struct pkvm_hyp_vm *hyp_vm = pkvm_hyp_vcpu_to_hyp_vm(hyp_vcpu);
 	u64 nr_pages = VTCR_EL2_LVLS(hyp_vm->kvm.arch.vtcr) - 1;
-	struct kvm_vcpu *host_vcpu = hyp_vcpu->host_vcpu;
+	struct kvm_hyp_memcache host_mc = hyp_vcpu->host_vcpu->arch.pkvm_memcache;
+
+	ret = __topup_hyp_memcache(&hyp_vcpu->vcpu.arch.pkvm_memcache,
+				   nr_pages, admit_host_page,
+				   hyp_virt_to_phys, &host_mc);
 
-	return refill_memcache(&hyp_vcpu->vcpu.arch.pkvm_memcache, nr_pages,
-			       &host_vcpu->arch.pkvm_memcache);
+	hyp_vcpu->host_vcpu->arch.pkvm_memcache = host_mc;
+	return ret;
 }
 
 static void handle___pkvm_host_map_guest(struct kvm_cpu_context *host_ctxt)
diff --git a/arch/arm64/kvm/hyp/nvhe/mm.c b/arch/arm64/kvm/hyp/nvhe/mm.c
index 318298eb3d6b..9daaf2b2b191 100644
--- a/arch/arm64/kvm/hyp/nvhe/mm.c
+++ b/arch/arm64/kvm/hyp/nvhe/mm.c
@@ -340,35 +340,20 @@ int hyp_create_idmap(u32 hyp_va_bits)
 	return __pkvm_create_mappings(start, end - start, start, PAGE_HYP_EXEC);
 }
 
-static void *admit_host_page(void *arg)
+void *pkvm_admit_host_page(struct kvm_hyp_memcache *mc)
 {
-	struct kvm_hyp_memcache *host_mc = arg;
-
-	if (!host_mc->nr_pages)
+	if (!mc->nr_pages)
 		return NULL;
 
 	/*
 	 * The host still owns the pages in its memcache, so we need to go
 	 * through a full host-to-hyp donation cycle to change it. Fortunately,
 	 * __pkvm_host_donate_hyp() takes care of races for us, so if it
-	 * succeeds we're good to go.
+	 * succeeds we're good to go. Because mc is a copy of the memcache
+	 * struct, the host cannot modify mc->head between donate and pop.
	 */
-	if (__pkvm_host_donate_hyp(hyp_phys_to_pfn(host_mc->head), 1))
+	if (__pkvm_host_donate_hyp(hyp_phys_to_pfn(mc->head), 1))
 		return NULL;
 
-	return pop_hyp_memcache(host_mc, hyp_phys_to_virt);
-}
-
-/* Refill our local memcache by poping pages from the one provided by the host. */
-int refill_memcache(struct kvm_hyp_memcache *mc, unsigned long min_pages,
-		    struct kvm_hyp_memcache *host_mc)
-{
-	struct kvm_hyp_memcache tmp = *host_mc;
-	int ret;
-
-	ret = __topup_hyp_memcache(mc, min_pages, admit_host_page,
-				   hyp_virt_to_phys, &tmp);
-	*host_mc = tmp;
-
-	return ret;
+	return pop_hyp_memcache(mc, hyp_phys_to_virt);
 }
From patchwork Wed Feb 1 12:52:56 2023
X-Patchwork-Submitter: Jean-Philippe Brucker
X-Patchwork-Id: 13124367
From: Jean-Philippe Brucker
To: maz@kernel.org, catalin.marinas@arm.com, will@kernel.org, joro@8bytes.org
Cc: robin.murphy@arm.com, james.morse@arm.com, suzuki.poulose@arm.com, oliver.upton@linux.dev, yuzenghui@huawei.com, smostafa@google.com, dbrazdil@google.com, ryan.roberts@arm.com, linux-arm-kernel@lists.infradead.org, kvmarm@lists.linux.dev, iommu@lists.linux.dev, Jean-Philippe Brucker
Subject: [RFC PATCH 12/45] KVM: arm64: pkvm: Unify pkvm_teardown_donated_memory()
Date: Wed, 1 Feb 2023 12:52:56 +0000
Message-Id: <20230201125328.2186498-13-jean-philippe@linaro.org>
In-Reply-To: <20230201125328.2186498-1-jean-philippe@linaro.org>
References: <20230201125328.2186498-1-jean-philippe@linaro.org>

Tearing down donated memory requires clearing the memory, pushing the pages
into the reclaim memcache, and moving the mapping into the host stage-2. Keep
these operations in a single function.

Signed-off-by: Jean-Philippe Brucker
---
 arch/arm64/kvm/hyp/include/nvhe/mem_protect.h |  2 +
 arch/arm64/kvm/hyp/nvhe/mem_protect.c         |  3 +-
 arch/arm64/kvm/hyp/nvhe/pkvm.c                | 50 +++++++------------
 3 files changed, 22 insertions(+), 33 deletions(-)

diff --git a/arch/arm64/kvm/hyp/include/nvhe/mem_protect.h b/arch/arm64/kvm/hyp/include/nvhe/mem_protect.h
index d4f4ffbb7dbb..021825aee854 100644
--- a/arch/arm64/kvm/hyp/include/nvhe/mem_protect.h
+++ b/arch/arm64/kvm/hyp/include/nvhe/mem_protect.h
@@ -86,6 +86,8 @@ void reclaim_guest_pages(struct pkvm_hyp_vm *vm, struct kvm_hyp_memcache *mc);
 void *pkvm_map_donated_memory(unsigned long host_va, size_t size);
 void pkvm_unmap_donated_memory(void *va, size_t size);
+void pkvm_teardown_donated_memory(struct kvm_hyp_memcache *mc, void *addr,
+				  size_t dirty_size);
 
 static __always_inline void __load_host_stage2(void)
 {
diff --git a/arch/arm64/kvm/hyp/nvhe/mem_protect.c b/arch/arm64/kvm/hyp/nvhe/mem_protect.c
index 410361f41e38..cad5736026d5 100644
--- a/arch/arm64/kvm/hyp/nvhe/mem_protect.c
+++ b/arch/arm64/kvm/hyp/nvhe/mem_protect.c
@@ -314,8 +314,7 @@ void reclaim_guest_pages(struct pkvm_hyp_vm *vm, struct kvm_hyp_memcache *mc)
 	addr = hyp_alloc_pages(&vm->pool, 0);
 	while (addr) {
 		memset(hyp_virt_to_page(addr), 0, sizeof(struct hyp_page));
-		push_hyp_memcache(mc, addr, hyp_virt_to_phys);
-		WARN_ON(__pkvm_hyp_donate_host(hyp_virt_to_pfn(addr), 1));
+		pkvm_teardown_donated_memory(mc, addr, 0);
 		addr = hyp_alloc_pages(&vm->pool, 0);
 	}
 }
diff --git a/arch/arm64/kvm/hyp/nvhe/pkvm.c b/arch/arm64/kvm/hyp/nvhe/pkvm.c
index a3711979bbd3..c51a8a592849 100644
--- a/arch/arm64/kvm/hyp/nvhe/pkvm.c
+++ b/arch/arm64/kvm/hyp/nvhe/pkvm.c
@@ -602,27 +602,28 @@ void *pkvm_map_donated_memory(unsigned long host_va, size_t size)
 	return va;
 }
 
-static void __unmap_donated_memory(void *va, size_t size)
+void pkvm_teardown_donated_memory(struct kvm_hyp_memcache *mc, void *va,
+				  size_t dirty_size)
 {
-	WARN_ON(__pkvm_hyp_donate_host(hyp_virt_to_pfn(va),
-				       PAGE_ALIGN(size) >> PAGE_SHIFT));
-}
+	size_t size = max(PAGE_ALIGN(dirty_size), PAGE_SIZE);
 
-void pkvm_unmap_donated_memory(void *va, size_t size)
-{
 	if (!va)
 		return;
 
-	memset(va, 0, size);
-	__unmap_donated_memory(va, size);
+	memset(va, 0, dirty_size);
+
+	if (mc) {
+		for (void *start = va; start < va + size; start += PAGE_SIZE)
+			push_hyp_memcache(mc, start, hyp_virt_to_phys);
+	}
+
+	WARN_ON(__pkvm_hyp_donate_host(hyp_virt_to_pfn(va),
+				       size >> PAGE_SHIFT));
 }
 
-static void unmap_donated_memory_noclear(void *va, size_t size)
+void pkvm_unmap_donated_memory(void *va, size_t size)
 {
-	if (!va)
-		return;
-
-	__unmap_donated_memory(va, size);
+	pkvm_teardown_donated_memory(NULL, va, size);
 }
 
 /*
@@ -759,18 +760,6 @@ int __pkvm_init_vcpu(pkvm_handle_t handle, struct kvm_vcpu *host_vcpu,
 	return ret;
 }
 
-static void
-teardown_donated_memory(struct kvm_hyp_memcache *mc, void *addr, size_t size)
-{
-	size = PAGE_ALIGN(size);
-	memset(addr, 0, size);
-
-	for (void *start = addr; start < addr + size; start += PAGE_SIZE)
-		push_hyp_memcache(mc, start, hyp_virt_to_phys);
-
-	unmap_donated_memory_noclear(addr, size);
-}
-
 int __pkvm_teardown_vm(pkvm_handle_t handle)
 {
 	size_t vm_size, last_ran_size;
@@ -813,19 +802,18 @@ int __pkvm_teardown_vm(pkvm_handle_t handle)
 		vcpu_mc = &hyp_vcpu->vcpu.arch.pkvm_memcache;
 		while (vcpu_mc->nr_pages) {
 			addr = pop_hyp_memcache(vcpu_mc, hyp_phys_to_virt);
-			push_hyp_memcache(mc, addr, hyp_virt_to_phys);
-			unmap_donated_memory_noclear(addr, PAGE_SIZE);
+			pkvm_teardown_donated_memory(mc, addr, 0);
 		}
 
-		teardown_donated_memory(mc, hyp_vcpu, sizeof(*hyp_vcpu));
+		pkvm_teardown_donated_memory(mc, hyp_vcpu, sizeof(*hyp_vcpu));
 	}
 
 	last_ran_size = pkvm_get_last_ran_size();
-	teardown_donated_memory(mc, hyp_vm->kvm.arch.mmu.last_vcpu_ran,
-				last_ran_size);
+	pkvm_teardown_donated_memory(mc, hyp_vm->kvm.arch.mmu.last_vcpu_ran,
+				     last_ran_size);
 
 	vm_size = pkvm_get_hyp_vm_size(hyp_vm->kvm.created_vcpus);
-	teardown_donated_memory(mc, hyp_vm, vm_size);
+	pkvm_teardown_donated_memory(mc, hyp_vm, vm_size);
 
 	hyp_unpin_shared_mem(host_kvm, host_kvm + 1);
 	return 0;
From patchwork Wed Feb 1 12:52:57 2023
X-Patchwork-Submitter: Jean-Philippe Brucker
X-Patchwork-Id: 13124389

From: Jean-Philippe Brucker
To: maz@kernel.org, catalin.marinas@arm.com, will@kernel.org, joro@8bytes.org
Cc: robin.murphy@arm.com, james.morse@arm.com, suzuki.poulose@arm.com, oliver.upton@linux.dev, yuzenghui@huawei.com, smostafa@google.com, dbrazdil@google.com, ryan.roberts@arm.com, linux-arm-kernel@lists.infradead.org, kvmarm@lists.linux.dev, iommu@lists.linux.dev, Jean-Philippe Brucker
Subject: [RFC PATCH 13/45] KVM: arm64: pkvm: Add hyp_page_ref_inc_return()
Date: Wed, 1 Feb 2023 12:52:57 +0000
Message-Id: <20230201125328.2186498-14-jean-philippe@linaro.org>
In-Reply-To: <20230201125328.2186498-1-jean-philippe@linaro.org>
References: <20230201125328.2186498-1-jean-philippe@linaro.org>
Add a page_ref_inc() helper that returns an error on saturation instead of
BUG()ing. The IOMMU API places no limit on the number of times a page can be
mapped, but pKVM caps the refcount at 2^16, so error out gracefully.

Signed-off-by: Jean-Philippe Brucker
---
 arch/arm64/kvm/hyp/include/nvhe/memory.h | 15 +++++++++++++--
 1 file changed, 13 insertions(+), 2 deletions(-)

diff --git a/arch/arm64/kvm/hyp/include/nvhe/memory.h b/arch/arm64/kvm/hyp/include/nvhe/memory.h
index a8d4a5b919d2..c40fff5d6d22 100644
--- a/arch/arm64/kvm/hyp/include/nvhe/memory.h
+++ b/arch/arm64/kvm/hyp/include/nvhe/memory.h
@@ -57,10 +57,21 @@ static inline int hyp_page_count(void *addr)
 	return p->refcount;
 }
 
+/*
+ * Increase the refcount and return its new value.
+ * If the refcount is saturated, return a negative error
+ */
+static inline int hyp_page_ref_inc_return(struct hyp_page *p)
+{
+	if (p->refcount == USHRT_MAX)
+		return -EOVERFLOW;
+
+	return ++p->refcount;
+}
+
 static inline void hyp_page_ref_inc(struct hyp_page *p)
 {
-	BUG_ON(p->refcount == USHRT_MAX);
-	p->refcount++;
+	BUG_ON(hyp_page_ref_inc_return(p) <= 0);
 }
 
 static inline void hyp_page_ref_dec(struct hyp_page *p)
From: Jean-Philippe Brucker
Subject: [RFC PATCH 14/45] KVM: arm64: pkvm: Prevent host donation of device memory
Date: Wed, 1 Feb 2023 12:52:58 +0000
Message-Id: <20230201125328.2186498-15-jean-philippe@linaro.org>

For the moment donating device memory cannot be supported. IOMMU
support requires tracking host-owned pages that are mapped in the
IOMMU, but the vmemmap portion of MMIO is not backed by physical pages,
and ownership information in the host stage-2 page tables is not kept
by host_stage2_try().
__check_page_state_visitor() already ensures that MMIO pages present in
the host stage-2 are not donated, so we're just extending that check to
pages that haven't been accessed by the host yet (typical of an MSI
doorbell), or that have been recycled by host_stage2_try().

Signed-off-by: Jean-Philippe Brucker
---
 arch/arm64/kvm/hyp/nvhe/mem_protect.c | 4 ++++
 1 file changed, 4 insertions(+)

diff --git a/arch/arm64/kvm/hyp/nvhe/mem_protect.c b/arch/arm64/kvm/hyp/nvhe/mem_protect.c
index cad5736026d5..856673291d70 100644
--- a/arch/arm64/kvm/hyp/nvhe/mem_protect.c
+++ b/arch/arm64/kvm/hyp/nvhe/mem_protect.c
@@ -719,6 +719,10 @@ static int host_request_owned_transition(u64 *completer_addr,
 	u64 size = tx->nr_pages * PAGE_SIZE;
 	u64 addr = tx->initiator.addr;
 
+	/* We don't support donating device memory at the moment */
+	if (!range_is_memory(addr, addr + size))
+		return -EINVAL;
+
 	*completer_addr = tx->initiator.host.completer_addr;
 	return __host_check_page_state_range(addr, size, PKVM_PAGE_OWNED);
 }
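The gate added above hinges on a whole-range RAM membership test. The following is an illustrative user-space model of what a `range_is_memory()`-style check does — the region list and the single-region containment rule are assumptions for the sketch, not the kernel's actual memblock data:

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

/* Hypothetical stand-in for the RAM regions the hypervisor learns at
 * boot: a list of [start, end) physical ranges. */
struct mem_region {
	uint64_t start;
	uint64_t end;
};

static const struct mem_region ram[] = {
	{ 0x40000000ULL,  0x80000000ULL },
	{ 0x880000000ULL, 0x900000000ULL },
};

/* Models the intent of range_is_memory(): the entire [start, end)
 * range must lie inside known RAM for a donation to proceed; any
 * overlap with MMIO space is rejected with -EINVAL by the caller. */
static bool range_is_memory(uint64_t start, uint64_t end)
{
	for (size_t i = 0; i < sizeof(ram) / sizeof(ram[0]); i++) {
		if (start >= ram[i].start && end <= ram[i].end)
			return true;
	}
	return false;
}
```

Note this rejects a range that merely *touches* device memory, which matches the commit message: pages the host never accessed (an MSI doorbell, say) still fail the check.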
From: Jean-Philippe Brucker
Subject: [RFC PATCH 15/45] KVM: arm64: pkvm: Add __pkvm_host_share/unshare_dma()
Date: Wed, 1 Feb 2023 12:52:59 +0000
Message-Id: <20230201125328.2186498-16-jean-philippe@linaro.org>
Host pages mapped in the SMMU must not be donated to the guest or
hypervisor, since the host could then use DMA to break confidentiality.
Mark them shared in the host stage-2 page tables, and keep a refcount
in the hyp vmemmap.

Signed-off-by: Jean-Philippe Brucker
---
 arch/arm64/kvm/hyp/include/nvhe/mem_protect.h |   3 +
 arch/arm64/kvm/hyp/nvhe/mem_protect.c         | 185 ++++++++++++++++++
 2 files changed, 188 insertions(+)

diff --git a/arch/arm64/kvm/hyp/include/nvhe/mem_protect.h b/arch/arm64/kvm/hyp/include/nvhe/mem_protect.h
index 021825aee854..a363d58a998b 100644
--- a/arch/arm64/kvm/hyp/include/nvhe/mem_protect.h
+++ b/arch/arm64/kvm/hyp/include/nvhe/mem_protect.h
@@ -58,6 +58,7 @@ enum pkvm_component_id {
 	PKVM_ID_HOST,
 	PKVM_ID_HYP,
 	PKVM_ID_GUEST,
+	PKVM_ID_IOMMU,
 };
 
 extern unsigned long hyp_nr_cpus;
@@ -72,6 +73,8 @@ int __pkvm_host_share_guest(u64 pfn, u64 gfn, struct pkvm_hyp_vcpu *vcpu);
 int __pkvm_host_donate_guest(u64 pfn, u64 gfn, struct pkvm_hyp_vcpu *vcpu);
 int __pkvm_guest_share_host(struct pkvm_hyp_vcpu *hyp_vcpu, u64 ipa);
 int __pkvm_guest_unshare_host(struct pkvm_hyp_vcpu *hyp_vcpu, u64 ipa);
+int __pkvm_host_share_dma(u64 phys_addr, size_t size, bool is_ram);
+int __pkvm_host_unshare_dma(u64 phys_addr, size_t size);
 bool addr_is_memory(phys_addr_t phys);
 int host_stage2_idmap_locked(phys_addr_t addr, u64 size,
 			     enum kvm_pgtable_prot prot);
diff --git a/arch/arm64/kvm/hyp/nvhe/mem_protect.c b/arch/arm64/kvm/hyp/nvhe/mem_protect.c
index 856673291d70..dcf08ce03790 100644
--- a/arch/arm64/kvm/hyp/nvhe/mem_protect.c
+++ b/arch/arm64/kvm/hyp/nvhe/mem_protect.c
@@ -1148,6 +1148,9 @@ static int check_share(struct pkvm_mem_share *share)
 	case PKVM_ID_GUEST:
 		ret = guest_ack_share(completer_addr, tx,
 				      share->completer_prot);
 		break;
+	case PKVM_ID_IOMMU:
+		ret = 0;
+		break;
 	default:
 		ret = -EINVAL;
 	}
@@ -1185,6 +1188,9 @@ static int __do_share(struct pkvm_mem_share *share)
 	case PKVM_ID_GUEST:
 		ret = guest_complete_share(completer_addr, tx,
 					   share->completer_prot);
 		break;
+	case PKVM_ID_IOMMU:
+		ret = 0;
+		break;
 	default:
 		ret = -EINVAL;
 	}
@@ -1239,6 +1245,9 @@ static int check_unshare(struct pkvm_mem_share *share)
 	case PKVM_ID_HYP:
 		ret = hyp_ack_unshare(completer_addr, tx);
 		break;
+	case PKVM_ID_IOMMU:
+		ret = 0;
+		break;
 	default:
 		ret = -EINVAL;
 	}
@@ -1273,6 +1282,9 @@ static int __do_unshare(struct pkvm_mem_share *share)
 	case PKVM_ID_HYP:
 		ret = hyp_complete_unshare(completer_addr, tx);
 		break;
+	case PKVM_ID_IOMMU:
+		ret = 0;
+		break;
 	default:
 		ret = -EINVAL;
 	}
@@ -1633,6 +1645,179 @@ void hyp_unpin_shared_mem(void *from, void *to)
 	host_unlock_component();
 }
 
+static int __host_check_page_dma_shared(phys_addr_t phys_addr)
+{
+	int ret;
+	u64 hyp_addr;
+
+	/*
+	 * The page is already refcounted. Make sure it's owned by the host, and
+	 * not part of the hyp pool.
+	 */
+	ret = __host_check_page_state_range(phys_addr, PAGE_SIZE,
+					    PKVM_PAGE_SHARED_OWNED);
+	if (ret)
+		return ret;
+
+	/*
+	 * Refcounted and owned by host, means it's either mapped in the
+	 * SMMU, or it's some VM/VCPU state shared with the hypervisor.
+	 * The host has no reason to use a page for both.
+	 */
+	hyp_addr = (u64)hyp_phys_to_virt(phys_addr);
+	return __hyp_check_page_state_range(hyp_addr, PAGE_SIZE, PKVM_NOPAGE);
+}
+
+static int __pkvm_host_share_dma_page(phys_addr_t phys_addr, bool is_ram)
+{
+	int ret;
+	struct hyp_page *p = hyp_phys_to_page(phys_addr);
+	struct pkvm_mem_share share = {
+		.tx	= {
+			.nr_pages	= 1,
+			.initiator	= {
+				.id	= PKVM_ID_HOST,
+				.addr	= phys_addr,
+			},
+			.completer	= {
+				.id	= PKVM_ID_IOMMU,
+			},
+		},
+	};
+
+	hyp_assert_lock_held(&host_mmu.lock);
+	hyp_assert_lock_held(&pkvm_pgd_lock);
+
+	/*
+	 * Some differences between handling of RAM and device memory:
+	 * - The hyp vmemmap area for device memory is not backed by physical
+	 *   pages in the hyp page tables.
+	 * - Device memory is unmapped automatically under memory pressure
+	 *   (host_stage2_try()) and the ownership information would be
+	 *   discarded.
+	 * We don't need to deal with that at the moment, because the host
+	 * cannot share or donate device memory, only RAM.
+	 *
+	 * Since 'is_ram' is only a hint provided by the host, we do need to
+	 * make sure of it.
+	 */
+	if (!is_ram)
+		return addr_is_memory(phys_addr) ?
+			-EINVAL : 0;
+
+	ret = hyp_page_ref_inc_return(p);
+	BUG_ON(ret == 0);
+	if (ret < 0)
+		return ret;
+	else if (ret == 1)
+		ret = do_share(&share);
+	else
+		ret = __host_check_page_dma_shared(phys_addr);
+
+	if (ret)
+		hyp_page_ref_dec(p);
+
+	return ret;
+}
+
+static int __pkvm_host_unshare_dma_page(phys_addr_t phys_addr)
+{
+	struct hyp_page *p = hyp_phys_to_page(phys_addr);
+	struct pkvm_mem_share share = {
+		.tx	= {
+			.nr_pages	= 1,
+			.initiator	= {
+				.id	= PKVM_ID_HOST,
+				.addr	= phys_addr,
+			},
+			.completer	= {
+				.id	= PKVM_ID_IOMMU,
+			},
+		},
+	};
+
+	hyp_assert_lock_held(&host_mmu.lock);
+	hyp_assert_lock_held(&pkvm_pgd_lock);
+
+	if (!addr_is_memory(phys_addr))
+		return 0;
+
+	if (!hyp_page_ref_dec_and_test(p))
+		return 0;
+
+	return do_unshare(&share);
+}
+
+/*
+ * __pkvm_host_share_dma - Mark host memory as used for DMA
+ * @phys_addr:	physical address of the DMA region
+ * @size:	size of the DMA region
+ * @is_ram:	whether it is RAM or device memory
+ *
+ * We must not allow the host to donate pages that are mapped in the IOMMU for
+ * DMA. So:
+ * 1. Mark the host S2 entry as being owned by IOMMU
+ * 2. Refcount it, since a page may be mapped in multiple device address
+ *    spaces.
+ *
+ * At some point we may end up needing more than the current 16 bits for
+ * refcounting, for example if all devices and sub-devices map the same MSI
+ * doorbell page. It will do for now.
+ */
+int __pkvm_host_share_dma(phys_addr_t phys_addr, size_t size, bool is_ram)
+{
+	int i;
+	int ret;
+	size_t nr_pages = size >> PAGE_SHIFT;
+
+	if (WARN_ON(!PAGE_ALIGNED(phys_addr | size)))
+		return -EINVAL;
+
+	host_lock_component();
+	hyp_lock_component();
+
+	for (i = 0; i < nr_pages; i++) {
+		ret = __pkvm_host_share_dma_page(phys_addr + i * PAGE_SIZE,
+						 is_ram);
+		if (ret)
+			break;
+	}
+
+	if (ret) {
+		for (--i; i >= 0; --i)
+			__pkvm_host_unshare_dma_page(phys_addr + i * PAGE_SIZE);
+	}
+
+	hyp_unlock_component();
+	host_unlock_component();
+
+	return ret;
+}
+
+int __pkvm_host_unshare_dma(phys_addr_t phys_addr, size_t size)
+{
+	int i;
+	int ret;
+	size_t nr_pages = size >> PAGE_SHIFT;
+
+	host_lock_component();
+	hyp_lock_component();
+
+	/*
+	 * We end up here after the caller successfully unmapped the page from
+	 * the IOMMU table. Which means that a ref is held, the page is shared
+	 * in the host s2, there can be no failure.
+	 */
+	for (i = 0; i < nr_pages; i++) {
+		ret = __pkvm_host_unshare_dma_page(phys_addr + i * PAGE_SIZE);
+		if (ret)
+			break;
+	}
+
+	hyp_unlock_component();
+	host_unlock_component();
+
+	return ret;
+}
+
 int __pkvm_host_share_guest(u64 pfn, u64 gfn, struct pkvm_hyp_vcpu *vcpu)
 {
 	int ret;
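The share/unshare pairing reduces to a small state machine: the first mapping performs the stage-2 share, further mappings only take a reference, and the last unmap undoes the share. A toy single-page model of that flow (deliberately simplified: no locking, and the `do_share()`/`do_unshare()` transitions collapsed into a boolean flag):

```c
#include <assert.h>
#include <stdbool.h>

/* Toy model of one page's DMA-share state. */
struct page_model {
	unsigned short refcount;	/* hyp vmemmap refcount */
	bool shared_with_iommu;		/* host stage-2 SHARED_OWNED */
};

/* First map transitions the page to shared; later maps (e.g. the same
 * page in another device's address space) only take a reference,
 * mirroring __pkvm_host_share_dma_page(). */
static void model_share_dma(struct page_model *p)
{
	if (++p->refcount == 1)
		p->shared_with_iommu = true;
}

/* The last unmap drops the shared state, mirroring
 * __pkvm_host_unshare_dma_page(). */
static void model_unshare_dma(struct page_model *p)
{
	if (--p->refcount == 0)
		p->shared_with_iommu = false;
}
```

While `refcount > 0` the page stays `SHARED_OWNED` in the host stage-2, which is exactly what blocks a later donation of the same page.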
From: Jean-Philippe Brucker
Subject: [RFC PATCH 16/45] KVM: arm64: Introduce IOMMU driver infrastructure
Date: Wed, 1 Feb 2023 12:53:00 +0000
Message-Id: <20230201125328.2186498-17-jean-philippe@linaro.org>

From: David Brazdil

Bootstrap infrastructure for IOMMU drivers by introducing a
kvm_iommu_ops struct in EL2 that is populated based on an iommu_driver
parameter to the __pkvm_init hypercall and selected in EL1 early init.
An 'init' operation is called in __pkvm_init_finalise, giving the
driver an opportunity to initialize itself in EL2 and create any EL2
mappings that it will need.
'init' is specifically called before 'finalize_host_mappings' so that:
  (a) pages mapped by the driver change owner to hyp,
  (b) ownership changes in 'finalize_host_mappings' get reflected in
      IOMMU mappings (added in a future patch).

Signed-off-by: David Brazdil
[JPB: add remove(), move to include/nvhe]
Signed-off-by: Jean-Philippe Brucker
---
 arch/arm64/include/asm/kvm_host.h       |  4 ++++
 arch/arm64/include/asm/kvm_hyp.h        |  3 ++-
 arch/arm64/kvm/hyp/include/nvhe/iommu.h | 11 +++++++++++
 arch/arm64/kvm/arm.c                    | 25 +++++++++++++++++++++----
 arch/arm64/kvm/hyp/nvhe/hyp-main.c      |  6 +++++-
 arch/arm64/kvm/hyp/nvhe/setup.c         | 24 +++++++++++++++++++++++-
 6 files changed, 66 insertions(+), 7 deletions(-)
 create mode 100644 arch/arm64/kvm/hyp/include/nvhe/iommu.h

diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
index 02850cf3f0de..b8e032bda022 100644
--- a/arch/arm64/include/asm/kvm_host.h
+++ b/arch/arm64/include/asm/kvm_host.h
@@ -377,6 +377,10 @@ extern s64 kvm_nvhe_sym(hyp_physvirt_offset);
 extern u64 kvm_nvhe_sym(hyp_cpu_logical_map)[NR_CPUS];
 #define hyp_cpu_logical_map CHOOSE_NVHE_SYM(hyp_cpu_logical_map)
 
+enum kvm_iommu_driver {
+	KVM_IOMMU_DRIVER_NONE,
+};
+
 struct vcpu_reset_state {
 	unsigned long pc;
 	unsigned long r0;
diff --git a/arch/arm64/include/asm/kvm_hyp.h b/arch/arm64/include/asm/kvm_hyp.h
index 1b597b7db99b..0226a719e28f 100644
--- a/arch/arm64/include/asm/kvm_hyp.h
+++ b/arch/arm64/include/asm/kvm_hyp.h
@@ -114,7 +114,8 @@ void __noreturn __hyp_do_panic(struct kvm_cpu_context *host_ctxt, u64 spsr,
 void __pkvm_init_switch_pgd(phys_addr_t phys, unsigned long size,
 			    phys_addr_t pgd, void *sp, void *cont_fn);
 int __pkvm_init(phys_addr_t phys, unsigned long size, unsigned long nr_cpus,
-		unsigned long *per_cpu_base, u32 hyp_va_bits);
+		unsigned long *per_cpu_base, u32 hyp_va_bits,
+		enum kvm_iommu_driver iommu_driver);
 void __noreturn __host_enter(struct kvm_cpu_context *host_ctxt);
 
 #endif
diff --git a/arch/arm64/kvm/hyp/include/nvhe/iommu.h b/arch/arm64/kvm/hyp/include/nvhe/iommu.h
new file mode 100644
index 000000000000..c728c8e913da
--- /dev/null
+++ b/arch/arm64/kvm/hyp/include/nvhe/iommu.h
@@ -0,0 +1,11 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+#ifndef __ARM64_KVM_NVHE_IOMMU_H__
+#define __ARM64_KVM_NVHE_IOMMU_H__
+
+struct kvm_iommu_ops {
+	int (*init)(void);
+};
+
+extern struct kvm_iommu_ops kvm_iommu_ops;
+
+#endif /* __ARM64_KVM_NVHE_IOMMU_H__ */
diff --git a/arch/arm64/kvm/arm.c b/arch/arm64/kvm/arm.c
index c96fd7deea14..31faae76d519 100644
--- a/arch/arm64/kvm/arm.c
+++ b/arch/arm64/kvm/arm.c
@@ -1899,6 +1899,15 @@ static bool init_psci_relay(void)
 	return true;
 }
 
+static int init_stage2_iommu(void)
+{
+	return KVM_IOMMU_DRIVER_NONE;
+}
+
+static void remove_stage2_iommu(enum kvm_iommu_driver iommu)
+{
+}
+
 static int init_subsystems(void)
 {
 	int err = 0;
@@ -1957,7 +1966,7 @@ static void teardown_hyp_mode(void)
 	}
 }
 
-static int do_pkvm_init(u32 hyp_va_bits)
+static int do_pkvm_init(u32 hyp_va_bits, enum kvm_iommu_driver iommu_driver)
 {
 	void *per_cpu_base = kvm_ksym_ref(kvm_nvhe_sym(kvm_arm_hyp_percpu_base));
 	int ret;
@@ -1966,7 +1975,7 @@ static int do_pkvm_init(u32 hyp_va_bits)
 	cpu_hyp_init_context();
 	ret = kvm_call_hyp_nvhe(__pkvm_init, hyp_mem_base, hyp_mem_size,
 				num_possible_cpus(), kern_hyp_va(per_cpu_base),
-				hyp_va_bits);
+				hyp_va_bits, iommu_driver);
 	cpu_hyp_init_features();
 
 	/*
@@ -1996,15 +2005,23 @@ static void kvm_hyp_init_symbols(void)
 static int kvm_hyp_init_protection(u32 hyp_va_bits)
 {
 	void *addr = phys_to_virt(hyp_mem_base);
+	enum kvm_iommu_driver iommu;
 	int ret;
 
 	ret = create_hyp_mappings(addr, addr + hyp_mem_size, PAGE_HYP);
 	if (ret)
 		return ret;
 
-	ret = do_pkvm_init(hyp_va_bits);
-	if (ret)
+	ret = init_stage2_iommu();
+	if (ret < 0)
 		return ret;
+	iommu = ret;
+
+	ret = do_pkvm_init(hyp_va_bits, iommu);
+	if (ret) {
+		remove_stage2_iommu(iommu);
+		return ret;
+	}
 
 	free_hyp_pgds();
diff --git a/arch/arm64/kvm/hyp/nvhe/hyp-main.c b/arch/arm64/kvm/hyp/nvhe/hyp-main.c
index 29ce7b09edbb..37e308337fec 100644
--- a/arch/arm64/kvm/hyp/nvhe/hyp-main.c
+++ b/arch/arm64/kvm/hyp/nvhe/hyp-main.c
@@ -14,6 +14,7 @@
 #include
 #include
+#include
 #include
 #include
 #include
@@ -34,6 +35,8 @@ static DEFINE_PER_CPU(struct user_fpsimd_state, loaded_host_fpsimd_state);
 
 DEFINE_PER_CPU(struct kvm_nvhe_init_params, kvm_init_params);
 
+struct kvm_iommu_ops kvm_iommu_ops;
+
 void __kvm_hyp_host_forward_smc(struct kvm_cpu_context *host_ctxt);
 
 typedef void (*hyp_entry_exit_handler_fn)(struct pkvm_hyp_vcpu *);
@@ -958,6 +961,7 @@ static void handle___pkvm_init(struct kvm_cpu_context *host_ctxt)
 	DECLARE_REG(unsigned long, nr_cpus, host_ctxt, 3);
 	DECLARE_REG(unsigned long *, per_cpu_base, host_ctxt, 4);
 	DECLARE_REG(u32, hyp_va_bits, host_ctxt, 5);
+	DECLARE_REG(enum kvm_iommu_driver, iommu_driver, host_ctxt, 6);
 
 	/*
 	 * __pkvm_init() will return only if an error occurred, otherwise it
@@ -965,7 +969,7 @@ static void handle___pkvm_init(struct kvm_cpu_context *host_ctxt)
 	 * with the host context directly.
 	 */
 	cpu_reg(host_ctxt, 1) = __pkvm_init(phys, size, nr_cpus, per_cpu_base,
-					    hyp_va_bits);
+					    hyp_va_bits, iommu_driver);
 }
 
 static void handle___pkvm_cpu_set_vector(struct kvm_cpu_context *host_ctxt)
diff --git a/arch/arm64/kvm/hyp/nvhe/setup.c b/arch/arm64/kvm/hyp/nvhe/setup.c
index de7d60c3c20b..3e73c066d560 100644
--- a/arch/arm64/kvm/hyp/nvhe/setup.c
+++ b/arch/arm64/kvm/hyp/nvhe/setup.c
@@ -11,6 +11,7 @@
 #include
 #include
+#include
 #include
 #include
 #include
@@ -288,6 +289,16 @@ static int fix_hyp_pgtable_refcnt(void)
 				 &walker);
 }
 
+static int select_iommu_ops(enum kvm_iommu_driver driver)
+{
+	switch (driver) {
+	case KVM_IOMMU_DRIVER_NONE:
+		return 0;
+	}
+
+	return -EINVAL;
+}
+
 void __noreturn __pkvm_init_finalise(void)
 {
 	struct kvm_host_data *host_data = this_cpu_ptr(&kvm_host_data);
@@ -321,6 +332,12 @@ void __noreturn __pkvm_init_finalise(void)
 	if (ret)
 		goto out;
 
+	if (kvm_iommu_ops.init) {
+		ret = kvm_iommu_ops.init();
+		if (ret)
+			goto out;
+	}
+
 	ret = fix_host_ownership();
 	if (ret)
 		goto out;
@@ -345,7 +362,8 @@ void __noreturn __pkvm_init_finalise(void)
 }
 
 int __pkvm_init(phys_addr_t phys, unsigned long size, unsigned long nr_cpus,
-		unsigned long *per_cpu_base, u32 hyp_va_bits)
+		unsigned long *per_cpu_base, u32 hyp_va_bits,
+		enum kvm_iommu_driver iommu_driver)
 {
 	struct kvm_nvhe_init_params *params;
 	void *virt = hyp_phys_to_virt(phys);
@@ -368,6 +386,10 @@ int __pkvm_init(phys_addr_t phys, unsigned long size, unsigned long nr_cpus,
 	if (ret)
 		return ret;
 
+	ret = select_iommu_ops(iommu_driver);
+	if (ret)
+		return ret;
+
 	update_nvhe_init_params();
 
 	/* Jump in the idmap page to switch to the new page-tables */
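The ops-selection pattern used here is small enough to model on its own. The sketch below mirrors the `select_iommu_ops()`/`kvm_iommu_ops.init` flow in plain C; the zero-initialised global and the init guard are the load-bearing details (with only `KVM_IOMMU_DRIVER_NONE` defined, exactly as in this patch — real driver values arrive in later patches):

```c
#include <assert.h>
#include <errno.h>
#include <stddef.h>

enum kvm_iommu_driver {
	KVM_IOMMU_DRIVER_NONE,
};

struct kvm_iommu_ops {
	int (*init)(void);
};

/* Zero-initialised, like the EL2 global: no driver means no ops. */
static struct kvm_iommu_ops kvm_iommu_ops;

/* Binds the ops struct matching the enum the host passed to
 * __pkvm_init; an unknown value is rejected with -EINVAL. */
static int select_iommu_ops(enum kvm_iommu_driver driver)
{
	switch (driver) {
	case KVM_IOMMU_DRIVER_NONE:
		return 0;
	}

	return -EINVAL;
}

/* As in __pkvm_init_finalise(): the driver init hook runs only if a
 * driver was actually selected. */
static int run_iommu_init(void)
{
	if (kvm_iommu_ops.init)
		return kvm_iommu_ops.init();
	return 0;
}
```

Selecting the driver inside `__pkvm_init` matters because the enum is validated at EL2 before the host is deprivileged, so the host cannot later swap in different ops.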
From patchwork Wed Feb 1 12:53:01 2023
From: Jean-Philippe Brucker
To: maz@kernel.org, catalin.marinas@arm.com, will@kernel.org, joro@8bytes.org
Cc: robin.murphy@arm.com, james.morse@arm.com, suzuki.poulose@arm.com, oliver.upton@linux.dev, yuzenghui@huawei.com, smostafa@google.com, dbrazdil@google.com, ryan.roberts@arm.com, linux-arm-kernel@lists.infradead.org, kvmarm@lists.linux.dev, iommu@lists.linux.dev, Jean-Philippe Brucker
Subject: [RFC PATCH 17/45] KVM: arm64: pkvm: Add IOMMU hypercalls
Date: Wed, 1 Feb 2023 12:53:01 +0000
Message-Id: <20230201125328.2186498-18-jean-philippe@linaro.org>
In-Reply-To: <20230201125328.2186498-1-jean-philippe@linaro.org>
References: <20230201125328.2186498-1-jean-philippe@linaro.org>

The unprivileged host IOMMU driver forwards some of the IOMMU API calls to the hypervisor, which installs and populates the page tables. Note that this is not a stable ABI: these hypercalls change with the kernel, just like internal function calls.
Signed-off-by: Jean-Philippe Brucker
---
 virt/kvm/Kconfig                        |  3 +
 arch/arm64/include/asm/kvm_asm.h        |  7 +++
 arch/arm64/kvm/hyp/include/nvhe/iommu.h | 68 ++++++++++++++++++++++
 arch/arm64/kvm/hyp/nvhe/hyp-main.c      | 77 +++++++++++++++++++++++++
 4 files changed, 155 insertions(+)

diff --git a/virt/kvm/Kconfig b/virt/kvm/Kconfig
index 9fb1ff6f19e5..99b0ddc50443 100644
--- a/virt/kvm/Kconfig
+++ b/virt/kvm/Kconfig
@@ -92,3 +92,6 @@ config KVM_XFER_TO_GUEST_WORK

 config HAVE_KVM_PM_NOTIFIER
	bool
+
+config KVM_IOMMU
+	bool

diff --git a/arch/arm64/include/asm/kvm_asm.h b/arch/arm64/include/asm/kvm_asm.h
index 12aa0ccc3b3d..e2ced352b49c 100644
--- a/arch/arm64/include/asm/kvm_asm.h
+++ b/arch/arm64/include/asm/kvm_asm.h
@@ -81,6 +81,13 @@ enum __kvm_host_smccc_func {
	__KVM_HOST_SMCCC_FUNC___pkvm_vcpu_load,
	__KVM_HOST_SMCCC_FUNC___pkvm_vcpu_put,
	__KVM_HOST_SMCCC_FUNC___pkvm_vcpu_sync_state,
+	__KVM_HOST_SMCCC_FUNC___pkvm_host_iommu_alloc_domain,
+	__KVM_HOST_SMCCC_FUNC___pkvm_host_iommu_free_domain,
+	__KVM_HOST_SMCCC_FUNC___pkvm_host_iommu_attach_dev,
+	__KVM_HOST_SMCCC_FUNC___pkvm_host_iommu_detach_dev,
+	__KVM_HOST_SMCCC_FUNC___pkvm_host_iommu_map_pages,
+	__KVM_HOST_SMCCC_FUNC___pkvm_host_iommu_unmap_pages,
+	__KVM_HOST_SMCCC_FUNC___pkvm_host_iommu_iova_to_phys,
 };

 #define DECLARE_KVM_VHE_SYM(sym) extern char sym[]

diff --git a/arch/arm64/kvm/hyp/include/nvhe/iommu.h b/arch/arm64/kvm/hyp/include/nvhe/iommu.h
index c728c8e913da..26a95717b613 100644
--- a/arch/arm64/kvm/hyp/include/nvhe/iommu.h
+++ b/arch/arm64/kvm/hyp/include/nvhe/iommu.h
@@ -2,6 +2,74 @@
 #ifndef __ARM64_KVM_NVHE_IOMMU_H__
 #define __ARM64_KVM_NVHE_IOMMU_H__

+#if IS_ENABLED(CONFIG_KVM_IOMMU)
+/* Hypercall handlers */
+int kvm_iommu_alloc_domain(pkvm_handle_t iommu_id, pkvm_handle_t domain_id,
+			   unsigned long pgd_hva);
+int kvm_iommu_free_domain(pkvm_handle_t iommu_id, pkvm_handle_t domain_id);
+int kvm_iommu_attach_dev(pkvm_handle_t iommu_id, pkvm_handle_t domain_id,
+			 u32 endpoint_id);
+int kvm_iommu_detach_dev(pkvm_handle_t iommu_id, pkvm_handle_t domain_id,
+			 u32 endpoint_id);
+int kvm_iommu_map_pages(pkvm_handle_t iommu_id, pkvm_handle_t domain_id,
+			unsigned long iova, phys_addr_t paddr, size_t pgsize,
+			size_t pgcount, int prot);
+int kvm_iommu_unmap_pages(pkvm_handle_t iommu_id, pkvm_handle_t domain_id,
+			  unsigned long iova, size_t pgsize, size_t pgcount);
+phys_addr_t kvm_iommu_iova_to_phys(pkvm_handle_t iommu_id,
+				   pkvm_handle_t domain_id, unsigned long iova);
+#else /* !CONFIG_KVM_IOMMU */
+static inline int kvm_iommu_alloc_domain(pkvm_handle_t iommu_id,
+					 pkvm_handle_t domain_id,
+					 unsigned long pgd_hva)
+{
+	return -ENODEV;
+}
+
+static inline int kvm_iommu_free_domain(pkvm_handle_t iommu_id,
+					pkvm_handle_t domain_id)
+{
+	return -ENODEV;
+}
+
+static inline int kvm_iommu_attach_dev(pkvm_handle_t iommu_id,
+				       pkvm_handle_t domain_id,
+				       u32 endpoint_id)
+{
+	return -ENODEV;
+}
+
+static inline int kvm_iommu_detach_dev(pkvm_handle_t iommu_id,
+				       pkvm_handle_t domain_id,
+				       u32 endpoint_id)
+{
+	return -ENODEV;
+}
+
+static inline int kvm_iommu_map_pages(pkvm_handle_t iommu_id,
+				      pkvm_handle_t domain_id,
+				      unsigned long iova, phys_addr_t paddr,
+				      size_t pgsize, size_t pgcount, int prot)
+{
+	return -ENODEV;
+}
+
+static inline int kvm_iommu_unmap_pages(pkvm_handle_t iommu_id,
+					pkvm_handle_t domain_id,
+					unsigned long iova, size_t pgsize,
+					size_t pgcount)
+{
+	return 0;
+}
+
+static inline phys_addr_t kvm_iommu_iova_to_phys(pkvm_handle_t iommu_id,
+						 pkvm_handle_t domain_id,
+						 unsigned long iova)
+{
+	return 0;
+}
+#endif /* CONFIG_KVM_IOMMU */
+
 struct kvm_iommu_ops {
	int (*init)(void);
 };

diff --git a/arch/arm64/kvm/hyp/nvhe/hyp-main.c b/arch/arm64/kvm/hyp/nvhe/hyp-main.c
index 37e308337fec..34ec46b890f0 100644
--- a/arch/arm64/kvm/hyp/nvhe/hyp-main.c
+++ b/arch/arm64/kvm/hyp/nvhe/hyp-main.c
@@ -1059,6 +1059,76 @@ static void handle___pkvm_teardown_vm(struct kvm_cpu_context *host_ctxt)
	cpu_reg(host_ctxt, 1) = __pkvm_teardown_vm(handle);
 }

+static void handle___pkvm_host_iommu_alloc_domain(struct kvm_cpu_context *host_ctxt)
+{
+	DECLARE_REG(pkvm_handle_t, iommu, host_ctxt, 1);
+	DECLARE_REG(pkvm_handle_t, domain, host_ctxt, 2);
+	DECLARE_REG(unsigned long, pgd_hva, host_ctxt, 3);
+
+	cpu_reg(host_ctxt, 1) = kvm_iommu_alloc_domain(iommu, domain, pgd_hva);
+}
+
+static void handle___pkvm_host_iommu_free_domain(struct kvm_cpu_context *host_ctxt)
+{
+	DECLARE_REG(pkvm_handle_t, iommu, host_ctxt, 1);
+	DECLARE_REG(pkvm_handle_t, domain, host_ctxt, 2);
+
+	cpu_reg(host_ctxt, 1) = kvm_iommu_free_domain(iommu, domain);
+}
+
+static void handle___pkvm_host_iommu_attach_dev(struct kvm_cpu_context *host_ctxt)
+{
+	DECLARE_REG(pkvm_handle_t, iommu, host_ctxt, 1);
+	DECLARE_REG(pkvm_handle_t, domain, host_ctxt, 2);
+	DECLARE_REG(unsigned int, endpoint, host_ctxt, 3);
+
+	cpu_reg(host_ctxt, 1) = kvm_iommu_attach_dev(iommu, domain, endpoint);
+}
+
+static void handle___pkvm_host_iommu_detach_dev(struct kvm_cpu_context *host_ctxt)
+{
+	DECLARE_REG(pkvm_handle_t, iommu, host_ctxt, 1);
+	DECLARE_REG(pkvm_handle_t, domain, host_ctxt, 2);
+	DECLARE_REG(unsigned int, endpoint, host_ctxt, 3);
+
+	cpu_reg(host_ctxt, 1) = kvm_iommu_detach_dev(iommu, domain, endpoint);
+}
+
+static void handle___pkvm_host_iommu_map_pages(struct kvm_cpu_context *host_ctxt)
+{
+	DECLARE_REG(pkvm_handle_t, iommu, host_ctxt, 1);
+	DECLARE_REG(pkvm_handle_t, domain, host_ctxt, 2);
+	DECLARE_REG(unsigned long, iova, host_ctxt, 3);
+	DECLARE_REG(phys_addr_t, paddr, host_ctxt, 4);
+	DECLARE_REG(size_t, pgsize, host_ctxt, 5);
+	DECLARE_REG(size_t, pgcount, host_ctxt, 6);
+	DECLARE_REG(unsigned int, prot, host_ctxt, 7);
+
+	cpu_reg(host_ctxt, 1) = kvm_iommu_map_pages(iommu, domain, iova, paddr,
+						    pgsize, pgcount, prot);
+}
+
+static void handle___pkvm_host_iommu_unmap_pages(struct kvm_cpu_context *host_ctxt)
+{
+	DECLARE_REG(pkvm_handle_t, iommu, host_ctxt, 1);
+	DECLARE_REG(pkvm_handle_t, domain, host_ctxt, 2);
+	DECLARE_REG(unsigned long, iova, host_ctxt, 3);
+	DECLARE_REG(size_t, pgsize, host_ctxt, 4);
+	DECLARE_REG(size_t, pgcount, host_ctxt, 5);
+
+	cpu_reg(host_ctxt, 1) = kvm_iommu_unmap_pages(iommu, domain, iova,
+						      pgsize, pgcount);
+}
+
+static void handle___pkvm_host_iommu_iova_to_phys(struct kvm_cpu_context *host_ctxt)
+{
+	DECLARE_REG(pkvm_handle_t, iommu, host_ctxt, 1);
+	DECLARE_REG(pkvm_handle_t, domain, host_ctxt, 2);
+	DECLARE_REG(unsigned long, iova, host_ctxt, 3);
+
+	cpu_reg(host_ctxt, 1) = kvm_iommu_iova_to_phys(iommu, domain, iova);
+}
+
 typedef void (*hcall_t)(struct kvm_cpu_context *);

 #define HANDLE_FUNC(x) [__KVM_HOST_SMCCC_FUNC_##x] = (hcall_t)handle_##x
@@ -1093,6 +1163,13 @@ static const hcall_t host_hcall[] = {
	HANDLE_FUNC(__pkvm_vcpu_load),
	HANDLE_FUNC(__pkvm_vcpu_put),
	HANDLE_FUNC(__pkvm_vcpu_sync_state),
+	HANDLE_FUNC(__pkvm_host_iommu_alloc_domain),
+	HANDLE_FUNC(__pkvm_host_iommu_free_domain),
+	HANDLE_FUNC(__pkvm_host_iommu_attach_dev),
+	HANDLE_FUNC(__pkvm_host_iommu_detach_dev),
+	HANDLE_FUNC(__pkvm_host_iommu_map_pages),
+	HANDLE_FUNC(__pkvm_host_iommu_unmap_pages),
+	HANDLE_FUNC(__pkvm_host_iommu_iova_to_phys),
 };

 static void handle_host_hcall(struct kvm_cpu_context *host_ctxt)
From patchwork Wed Feb 1 12:53:02 2023
From: Jean-Philippe Brucker
To: maz@kernel.org, catalin.marinas@arm.com, will@kernel.org, joro@8bytes.org
Cc: robin.murphy@arm.com, james.morse@arm.com, suzuki.poulose@arm.com, oliver.upton@linux.dev, yuzenghui@huawei.com, smostafa@google.com, dbrazdil@google.com, ryan.roberts@arm.com, linux-arm-kernel@lists.infradead.org, kvmarm@lists.linux.dev, iommu@lists.linux.dev, Jean-Philippe Brucker
Subject: [RFC PATCH 18/45] KVM: arm64: iommu: Add per-cpu page queue
Date: Wed, 1 Feb 2023 12:53:02 +0000
Message-Id: <20230201125328.2186498-19-jean-philippe@linaro.org>
In-Reply-To: <20230201125328.2186498-1-jean-philippe@linaro.org>
References: <20230201125328.2186498-1-jean-philippe@linaro.org>
The hyp driver will need to allocate pages when handling some hypercalls, to populate page, stream and domain tables. Add a per-cpu page queue that will contain host pages to be donated and reclaimed. When the driver needs a new page, it sets the needs_page bit and returns to the host with an error. The host pushes a page and retries the hypercall.

The queue is per-cpu to ensure that IOMMU map()/unmap() requests from different CPUs don't step on each other. It is populated on demand rather than upfront to avoid wasting memory, as these allocations should be relatively rare.

Signed-off-by: Jean-Philippe Brucker
---
 arch/arm64/kvm/hyp/nvhe/Makefile        |  2 +
 arch/arm64/kvm/hyp/include/nvhe/iommu.h |  4 ++
 include/kvm/iommu.h                     | 15 +++++++
 arch/arm64/kvm/hyp/nvhe/iommu/iommu.c   | 52 +++++++++++++++++++++++++
 4 files changed, 73 insertions(+)
 create mode 100644 include/kvm/iommu.h
 create mode 100644 arch/arm64/kvm/hyp/nvhe/iommu/iommu.c

diff --git a/arch/arm64/kvm/hyp/nvhe/Makefile b/arch/arm64/kvm/hyp/nvhe/Makefile
index 530347cdebe3..f7dfc88c9f5b 100644
--- a/arch/arm64/kvm/hyp/nvhe/Makefile
+++ b/arch/arm64/kvm/hyp/nvhe/Makefile
@@ -28,6 +28,8 @@ hyp-obj-y += ../vgic-v3-sr.o ../aarch32.o ../vgic-v2-cpuif-proxy.o ../entry.o \
 hyp-obj-$(CONFIG_DEBUG_LIST) += list_debug.o
 hyp-obj-y += $(lib-objs)

+hyp-obj-$(CONFIG_KVM_IOMMU) += iommu/iommu.o
+
 ##
 ## Build rules for compiling nVHE hyp code
 ## Output of this folder is `kvm_nvhe.o`, a partially linked object

diff --git a/arch/arm64/kvm/hyp/include/nvhe/iommu.h b/arch/arm64/kvm/hyp/include/nvhe/iommu.h
index 26a95717b613..4959c30977b8 100644
--- a/arch/arm64/kvm/hyp/include/nvhe/iommu.h
+++ b/arch/arm64/kvm/hyp/include/nvhe/iommu.h
@@ -3,6 +3,10 @@
 #define __ARM64_KVM_NVHE_IOMMU_H__

 #if IS_ENABLED(CONFIG_KVM_IOMMU)
+int kvm_iommu_init(void);
+void *kvm_iommu_donate_page(void);
+void kvm_iommu_reclaim_page(void *p);
+
 /* Hypercall handlers */
 int kvm_iommu_alloc_domain(pkvm_handle_t iommu_id, pkvm_handle_t domain_id,
			    unsigned long pgd_hva);

diff --git a/include/kvm/iommu.h b/include/kvm/iommu.h
new file mode 100644
index 000000000000..12b06a5df889
--- /dev/null
+++ b/include/kvm/iommu.h
@@ -0,0 +1,15 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+#ifndef __KVM_IOMMU_H
+#define __KVM_IOMMU_H
+
+#include
+
+struct kvm_hyp_iommu_memcache {
+	struct kvm_hyp_memcache pages;
+	bool needs_page;
+} ____cacheline_aligned_in_smp;
+
+extern struct kvm_hyp_iommu_memcache *kvm_nvhe_sym(kvm_hyp_iommu_memcaches);
+#define kvm_hyp_iommu_memcaches kvm_nvhe_sym(kvm_hyp_iommu_memcaches)
+
+#endif /* __KVM_IOMMU_H */

diff --git a/arch/arm64/kvm/hyp/nvhe/iommu/iommu.c b/arch/arm64/kvm/hyp/nvhe/iommu/iommu.c
new file mode 100644
index 000000000000..1a9184fbbd27
--- /dev/null
+++ b/arch/arm64/kvm/hyp/nvhe/iommu/iommu.c
@@ -0,0 +1,52 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * IOMMU operations for pKVM
+ *
+ * Copyright (C) 2022 Linaro Ltd.
+ */
+
+#include
+#include
+#include
+#include
+#include
+
+struct kvm_hyp_iommu_memcache __ro_after_init *kvm_hyp_iommu_memcaches;
+
+void *kvm_iommu_donate_page(void)
+{
+	void *p;
+	int cpu = hyp_smp_processor_id();
+	struct kvm_hyp_memcache tmp = kvm_hyp_iommu_memcaches[cpu].pages;
+
+	if (!tmp.nr_pages) {
+		kvm_hyp_iommu_memcaches[cpu].needs_page = true;
+		return NULL;
+	}
+
+	p = pkvm_admit_host_page(&tmp);
+	if (!p)
+		return NULL;
+
+	kvm_hyp_iommu_memcaches[cpu].pages = tmp;
+	memset(p, 0, PAGE_SIZE);
+	return p;
+}
+
+void kvm_iommu_reclaim_page(void *p)
+{
+	int cpu = hyp_smp_processor_id();
+
+	pkvm_teardown_donated_memory(&kvm_hyp_iommu_memcaches[cpu].pages, p,
+				     PAGE_SIZE);
+}
+
+int kvm_iommu_init(void)
+{
+	enum kvm_pgtable_prot prot;
+
+	/* The memcache is shared with the host */
+	prot = pkvm_mkstate(PAGE_HYP, PKVM_PAGE_SHARED_OWNED);
+	return pkvm_create_mappings(kvm_hyp_iommu_memcaches,
+				    kvm_hyp_iommu_memcaches + NR_CPUS, prot);
+}
[5.69.146.176]) by smtp.gmail.com with ESMTPSA id m15-20020a056000024f00b002bfae16ee2fsm17972811wrz.111.2023.02.01.04.59.33 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Wed, 01 Feb 2023 04:59:34 -0800 (PST) From: Jean-Philippe Brucker To: maz@kernel.org, catalin.marinas@arm.com, will@kernel.org, joro@8bytes.org Cc: robin.murphy@arm.com, james.morse@arm.com, suzuki.poulose@arm.com, oliver.upton@linux.dev, yuzenghui@huawei.com, smostafa@google.com, dbrazdil@google.com, ryan.roberts@arm.com, linux-arm-kernel@lists.infradead.org, kvmarm@lists.linux.dev, iommu@lists.linux.dev, Jean-Philippe Brucker Subject: [RFC PATCH 19/45] KVM: arm64: iommu: Add domains Date: Wed, 1 Feb 2023 12:53:03 +0000 Message-Id: <20230201125328.2186498-20-jean-philippe@linaro.org> X-Mailer: git-send-email 2.39.0 In-Reply-To: <20230201125328.2186498-1-jean-philippe@linaro.org> References: <20230201125328.2186498-1-jean-philippe@linaro.org> MIME-Version: 1.0 X-CRM114-Version: 20100106-BlameMichelson ( TRE 0.8.0 (BSD) ) MR-646709E3 X-CRM114-CacheID: sfid-20230201_125901_650132_A7DABA93 X-CRM114-Status: GOOD ( 24.49 ) X-BeenThere: linux-arm-kernel@lists.infradead.org X-Mailman-Version: 2.1.34 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Sender: "linux-arm-kernel" Errors-To: linux-arm-kernel-bounces+linux-arm-kernel=archiver.kernel.org@lists.infradead.org The IOMMU domain abstraction allows to share the same page tables between multiple devices. That may be necessary due to hardware constraints, if multiple devices cannot be isolated by the IOMMU (conventional PCI bus for example). It may also help with optimizing resource or TLB use. For pKVM in particular, it may be useful to reduce the amount of memory required for page tables. All devices owned by the host kernel could be attached to the same domain (though that requires host changes). 
Each IOMMU device holds an array of domains, and the host allocates domain IDs that index this array. The alloc() operation initializes the domain and prepares the page tables. The attach() operation initializes the device table that holds the PGD and its configuration. Signed-off-by: Jean-Philippe Brucker Signed-off-by: Mostafa Saleh --- arch/arm64/kvm/hyp/include/nvhe/iommu.h | 16 +++ include/kvm/iommu.h | 55 ++++++++ arch/arm64/kvm/hyp/nvhe/iommu/iommu.c | 161 ++++++++++++++++++++++++ 3 files changed, 232 insertions(+) diff --git a/arch/arm64/kvm/hyp/include/nvhe/iommu.h b/arch/arm64/kvm/hyp/include/nvhe/iommu.h index 4959c30977b8..76d3fa6ce331 100644 --- a/arch/arm64/kvm/hyp/include/nvhe/iommu.h +++ b/arch/arm64/kvm/hyp/include/nvhe/iommu.h @@ -2,8 +2,12 @@ #ifndef __ARM64_KVM_NVHE_IOMMU_H__ #define __ARM64_KVM_NVHE_IOMMU_H__ +#include +#include + #if IS_ENABLED(CONFIG_KVM_IOMMU) int kvm_iommu_init(void); +int kvm_iommu_init_device(struct kvm_hyp_iommu *iommu); void *kvm_iommu_donate_page(void); void kvm_iommu_reclaim_page(void *p); @@ -74,8 +78,20 @@ static inline phys_addr_t kvm_iommu_iova_to_phys(pkvm_handle_t iommu_id, } #endif /* CONFIG_KVM_IOMMU */ +struct kvm_iommu_tlb_cookie { + struct kvm_hyp_iommu *iommu; + pkvm_handle_t domain_id; +}; + struct kvm_iommu_ops { int (*init)(void); + struct kvm_hyp_iommu *(*get_iommu_by_id)(pkvm_handle_t smmu_id); + int (*alloc_iopt)(struct io_pgtable *iopt, unsigned long pgd_hva); + int (*free_iopt)(struct io_pgtable *iopt); + int (*attach_dev)(struct kvm_hyp_iommu *iommu, pkvm_handle_t domain_id, + struct kvm_hyp_iommu_domain *domain, u32 endpoint_id); + int (*detach_dev)(struct kvm_hyp_iommu *iommu, pkvm_handle_t domain_id, + struct kvm_hyp_iommu_domain *domain, u32 endpoint_id); }; extern struct kvm_iommu_ops kvm_iommu_ops; diff --git a/include/kvm/iommu.h b/include/kvm/iommu.h index 12b06a5df889..2bbe5f7bf726 100644 --- a/include/kvm/iommu.h +++ b/include/kvm/iommu.h @@ -3,6 +3,23 @@ #define __KVM_IOMMU_H #include 
+#include + +/* + * Parameters from the trusted host: + * @pgtable_cfg: page table configuration + * @domains: root domain table + * @nr_domains: max number of domains (exclusive) + * + * Other members are filled and used at runtime by the IOMMU driver. + */ +struct kvm_hyp_iommu { + struct io_pgtable_cfg pgtable_cfg; + void **domains; + size_t nr_domains; + + struct io_pgtable_params *pgtable; +}; struct kvm_hyp_iommu_memcache { struct kvm_hyp_memcache pages; @@ -12,4 +29,42 @@ struct kvm_hyp_iommu_memcache { extern struct kvm_hyp_iommu_memcache *kvm_nvhe_sym(kvm_hyp_iommu_memcaches); #define kvm_hyp_iommu_memcaches kvm_nvhe_sym(kvm_hyp_iommu_memcaches) +struct kvm_hyp_iommu_domain { + void *pgd; + u32 refs; +}; + +/* + * At the moment the number of domains is limited by the ASID and VMID size on + * Arm. With single-stage translation, that size is 2^8 or 2^16. On a lot of + * platforms the number of devices is actually the limiting factor and we'll + * only need a handful of domains, but with PASID or SR-IOV support that limit + * can be reached. + * + * In practice we're rarely going to need a lot of domains. To avoid allocating + * a large domain table, we use a two-level table, indexed by domain ID. With + * 4kB pages and 16-bytes domains, the leaf table contains 256 domains, and the + * root table 256 pointers. With 64kB pages, the leaf table contains 4096 + * domains and the root table 16 pointers. In this case, or when using 8-bit + * VMIDs, it may be more advantageous to use a single level. But using two + * levels allows to easily extend the domain size. 
+ */ +#define KVM_IOMMU_MAX_DOMAINS (1 << 16) + +/* Number of entries in the level-2 domain table */ +#define KVM_IOMMU_DOMAINS_PER_PAGE \ + (PAGE_SIZE / sizeof(struct kvm_hyp_iommu_domain)) + +/* Number of entries in the root domain table */ +#define KVM_IOMMU_DOMAINS_ROOT_ENTRIES \ + (KVM_IOMMU_MAX_DOMAINS / KVM_IOMMU_DOMAINS_PER_PAGE) + +#define KVM_IOMMU_DOMAINS_ROOT_SIZE \ + (KVM_IOMMU_DOMAINS_ROOT_ENTRIES * sizeof(void *)) + +/* Bits [16:split] index the root table, bits [split-1:0] index the leaf table */ +#define KVM_IOMMU_DOMAIN_ID_SPLIT ilog2(KVM_IOMMU_DOMAINS_PER_PAGE) + +#define KVM_IOMMU_DOMAIN_ID_LEAF_MASK ((1 << KVM_IOMMU_DOMAIN_ID_SPLIT) - 1) + #endif /* __KVM_IOMMU_H */ diff --git a/arch/arm64/kvm/hyp/nvhe/iommu/iommu.c b/arch/arm64/kvm/hyp/nvhe/iommu/iommu.c index 1a9184fbbd27..7404ea77ed9f 100644 --- a/arch/arm64/kvm/hyp/nvhe/iommu/iommu.c +++ b/arch/arm64/kvm/hyp/nvhe/iommu/iommu.c @@ -13,6 +13,22 @@ struct kvm_hyp_iommu_memcache __ro_after_init *kvm_hyp_iommu_memcaches; +/* + * Serialize access to domains and IOMMU driver internal structures (command + * queue, device tables) + */ +static hyp_spinlock_t iommu_lock; + +#define domain_to_iopt(_iommu, _domain, _domain_id) \ + (struct io_pgtable) { \ + .ops = &(_iommu)->pgtable->ops, \ + .pgd = (_domain)->pgd, \ + .cookie = &(struct kvm_iommu_tlb_cookie) { \ + .iommu = (_iommu), \ + .domain_id = (_domain_id), \ + }, \ + } + void *kvm_iommu_donate_page(void) { void *p; @@ -41,10 +57,155 @@ void kvm_iommu_reclaim_page(void *p) PAGE_SIZE); } +static struct kvm_hyp_iommu_domain * +handle_to_domain(pkvm_handle_t iommu_id, pkvm_handle_t domain_id, + struct kvm_hyp_iommu **out_iommu) +{ + int idx; + struct kvm_hyp_iommu *iommu; + struct kvm_hyp_iommu_domain *domains; + + iommu = kvm_iommu_ops.get_iommu_by_id(iommu_id); + if (!iommu) + return NULL; + + if (domain_id >= iommu->nr_domains) + return NULL; + domain_id = array_index_nospec(domain_id, iommu->nr_domains); + + idx = domain_id >> 
KVM_IOMMU_DOMAIN_ID_SPLIT; + domains = iommu->domains[idx]; + if (!domains) { + domains = kvm_iommu_donate_page(); + if (!domains) + return NULL; + iommu->domains[idx] = domains; + } + + *out_iommu = iommu; + return &domains[domain_id & KVM_IOMMU_DOMAIN_ID_LEAF_MASK]; +} + +int kvm_iommu_alloc_domain(pkvm_handle_t iommu_id, pkvm_handle_t domain_id, + unsigned long pgd_hva) +{ + int ret = -EINVAL; + struct io_pgtable iopt; + struct kvm_hyp_iommu *iommu; + struct kvm_hyp_iommu_domain *domain; + + hyp_spin_lock(&iommu_lock); + domain = handle_to_domain(iommu_id, domain_id, &iommu); + if (!domain) + goto out_unlock; + + if (domain->refs) + goto out_unlock; + + iopt = domain_to_iopt(iommu, domain, domain_id); + ret = kvm_iommu_ops.alloc_iopt(&iopt, pgd_hva); + if (ret) + goto out_unlock; + + domain->refs = 1; + domain->pgd = iopt.pgd; +out_unlock: + hyp_spin_unlock(&iommu_lock); + return ret; +} + +int kvm_iommu_free_domain(pkvm_handle_t iommu_id, pkvm_handle_t domain_id) +{ + int ret = -EINVAL; + struct io_pgtable iopt; + struct kvm_hyp_iommu *iommu; + struct kvm_hyp_iommu_domain *domain; + + hyp_spin_lock(&iommu_lock); + domain = handle_to_domain(iommu_id, domain_id, &iommu); + if (!domain) + goto out_unlock; + + if (domain->refs != 1) + goto out_unlock; + + iopt = domain_to_iopt(iommu, domain, domain_id); + ret = kvm_iommu_ops.free_iopt(&iopt); + + memset(domain, 0, sizeof(*domain)); + +out_unlock: + hyp_spin_unlock(&iommu_lock); + return ret; +} + +int kvm_iommu_attach_dev(pkvm_handle_t iommu_id, pkvm_handle_t domain_id, + u32 endpoint_id) +{ + int ret = -EINVAL; + struct kvm_hyp_iommu *iommu; + struct kvm_hyp_iommu_domain *domain; + + hyp_spin_lock(&iommu_lock); + domain = handle_to_domain(iommu_id, domain_id, &iommu); + if (!domain || !domain->refs || domain->refs == UINT_MAX) + goto out_unlock; + + ret = kvm_iommu_ops.attach_dev(iommu, domain_id, domain, endpoint_id); + if (ret) + goto out_unlock; + + domain->refs++; +out_unlock: + hyp_spin_unlock(&iommu_lock); + 
+	return ret;
+}
+
+int kvm_iommu_detach_dev(pkvm_handle_t iommu_id, pkvm_handle_t domain_id,
+			 u32 endpoint_id)
+{
+	int ret = -EINVAL;
+	struct kvm_hyp_iommu *iommu;
+	struct kvm_hyp_iommu_domain *domain;
+
+	hyp_spin_lock(&iommu_lock);
+	domain = handle_to_domain(iommu_id, domain_id, &iommu);
+	if (!domain || domain->refs <= 1)
+		goto out_unlock;
+
+	ret = kvm_iommu_ops.detach_dev(iommu, domain_id, domain, endpoint_id);
+	if (ret)
+		goto out_unlock;
+
+	domain->refs--;
+out_unlock:
+	hyp_spin_unlock(&iommu_lock);
+	return ret;
+}
+
+int kvm_iommu_init_device(struct kvm_hyp_iommu *iommu)
+{
+	void *domains;
+
+	domains = iommu->domains;
+	iommu->domains = kern_hyp_va(domains);
+	return pkvm_create_mappings(iommu->domains, iommu->domains +
+				    KVM_IOMMU_DOMAINS_ROOT_ENTRIES, PAGE_HYP);
+}
+
 int kvm_iommu_init(void)
 {
 	enum kvm_pgtable_prot prot;
 
+	hyp_spin_lock_init(&iommu_lock);
+
+	if (WARN_ON(!kvm_iommu_ops.get_iommu_by_id ||
+		    !kvm_iommu_ops.alloc_iopt ||
+		    !kvm_iommu_ops.free_iopt ||
+		    !kvm_iommu_ops.attach_dev ||
+		    !kvm_iommu_ops.detach_dev))
+		return -ENODEV;
+
 	/* The memcache is shared with the host */
 	prot = pkvm_mkstate(PAGE_HYP, PKVM_PAGE_SHARED_OWNED);
 	return pkvm_create_mappings(kvm_hyp_iommu_memcaches,

From patchwork Wed Feb 1 12:52:45 2023
From: Jean-Philippe Brucker
To: maz@kernel.org, catalin.marinas@arm.com, will@kernel.org, joro@8bytes.org
Cc: robin.murphy@arm.com, james.morse@arm.com, suzuki.poulose@arm.com, oliver.upton@linux.dev, yuzenghui@huawei.com, smostafa@google.com, dbrazdil@google.com, ryan.roberts@arm.com, linux-arm-kernel@lists.infradead.org, kvmarm@lists.linux.dev, iommu@lists.linux.dev, Jean-Philippe Brucker
Subject: [RFC PATCH 20/45] KVM: arm64: iommu: Add map() and unmap() operations
Date: Wed, 1 Feb 2023 12:53:04 +0000
Message-Id: <20230201125328.2186498-21-jean-philippe@linaro.org>
In-Reply-To: <20230201125328.2186498-1-jean-philippe@linaro.org>
References: <20230201125328.2186498-1-jean-philippe@linaro.org>

Handle map() and unmap() hypercalls by calling the io-pgtable library.
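Before touching the page tables, the map and unmap hypercalls validate that pgsize * pgcount does not overflow and that neither the IOVA range nor the physical range wraps around. A standalone sketch of that validation pattern follows; the helper name and plain -1 error value are illustrative, not part of the patch (the kernel returns -EOVERFLOW):

```c
#include <stddef.h>
#include <stdint.h>

/*
 * Hedged sketch, not the kernel code: mirror the range checks done by
 * kvm_iommu_map_pages() before any page-table walk.
 */
static int check_map_range(uint64_t iova, uint64_t paddr,
                           size_t pgsize, size_t pgcount, size_t *out_size)
{
	size_t size;

	/* GCC/Clang builtin, same one the hypercall uses */
	if (__builtin_mul_overflow(pgsize, pgcount, &size))
		return -1; /* -EOVERFLOW in the kernel */

	/* neither iova + size nor paddr + size may wrap */
	if (iova + size < iova || paddr + size < paddr)
		return -1;

	*out_size = size;
	return 0;
}
```

Rejecting the wrap cases up front means the mapping loop can trust that iova and paddr increments never overflow mid-walk.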
Signed-off-by: Jean-Philippe Brucker
---
 arch/arm64/kvm/hyp/nvhe/iommu/iommu.c | 144 ++++++++++++++++++++++++++
 1 file changed, 144 insertions(+)

diff --git a/arch/arm64/kvm/hyp/nvhe/iommu/iommu.c b/arch/arm64/kvm/hyp/nvhe/iommu/iommu.c
index 7404ea77ed9f..0550e7bdf179 100644
--- a/arch/arm64/kvm/hyp/nvhe/iommu/iommu.c
+++ b/arch/arm64/kvm/hyp/nvhe/iommu/iommu.c
@@ -183,6 +183,150 @@ int kvm_iommu_detach_dev(pkvm_handle_t iommu_id, pkvm_handle_t domain_id,
 	return ret;
 }
 
+static int __kvm_iommu_unmap_pages(struct io_pgtable *iopt, unsigned long iova,
+				   size_t pgsize, size_t pgcount)
+{
+	int ret;
+	size_t unmapped;
+	phys_addr_t paddr;
+	size_t total_unmapped = 0;
+	size_t size = pgsize * pgcount;
+
+	while (total_unmapped < size) {
+		paddr = iopt_iova_to_phys(iopt, iova);
+		if (paddr == 0)
+			return -EINVAL;
+
+		/*
+		 * One page/block at a time, because the range provided may not
+		 * be physically contiguous, and we need to unshare all physical
+		 * pages.
+		 */
+		unmapped = iopt_unmap_pages(iopt, iova, pgsize, 1, NULL);
+		if (!unmapped)
+			return -EINVAL;
+
+		ret = __pkvm_host_unshare_dma(paddr, pgsize);
+		if (ret)
+			return ret;
+
+		iova += unmapped;
+		pgcount -= unmapped / pgsize;
+		total_unmapped += unmapped;
+	}
+
+	return 0;
+}
+
+#define IOMMU_PROT_MASK (IOMMU_READ | IOMMU_WRITE | IOMMU_CACHE |\
+			 IOMMU_NOEXEC | IOMMU_MMIO)
+
+int kvm_iommu_map_pages(pkvm_handle_t iommu_id, pkvm_handle_t domain_id,
+			unsigned long iova, phys_addr_t paddr, size_t pgsize,
+			size_t pgcount, int prot)
+{
+	size_t size;
+	size_t granule;
+	int ret = -EINVAL;
+	size_t mapped = 0;
+	struct io_pgtable iopt;
+	struct kvm_hyp_iommu *iommu;
+	size_t pgcount_orig = pgcount;
+	unsigned long iova_orig = iova;
+	struct kvm_hyp_iommu_domain *domain;
+
+	if (prot & ~IOMMU_PROT_MASK)
+		return -EINVAL;
+
+	if (__builtin_mul_overflow(pgsize, pgcount, &size) ||
+	    iova + size < iova || paddr + size < paddr)
+		return -EOVERFLOW;
+
+	hyp_spin_lock(&iommu_lock);
+
+	domain = handle_to_domain(iommu_id, domain_id, &iommu);
+	if (!domain)
+		goto err_unlock;
+
+	granule = 1 << __ffs(iommu->pgtable->cfg.pgsize_bitmap);
+	if (!IS_ALIGNED(iova | paddr | pgsize, granule))
+		goto err_unlock;
+
+	ret = __pkvm_host_share_dma(paddr, size, !(prot & IOMMU_MMIO));
+	if (ret)
+		goto err_unlock;
+
+	iopt = domain_to_iopt(iommu, domain, domain_id);
+	while (pgcount) {
+		ret = iopt_map_pages(&iopt, iova, paddr, pgsize, pgcount, prot,
+				     0, &mapped);
+		WARN_ON(!IS_ALIGNED(mapped, pgsize));
+		pgcount -= mapped / pgsize;
+		if (ret)
+			goto err_unmap;
+		iova += mapped;
+		paddr += mapped;
+	}
+
+	hyp_spin_unlock(&iommu_lock);
+	return 0;
+
+err_unmap:
+	__kvm_iommu_unmap_pages(&iopt, iova_orig, pgsize, pgcount_orig - pgcount);
+err_unlock:
+	hyp_spin_unlock(&iommu_lock);
+	return ret;
+}
+
+int kvm_iommu_unmap_pages(pkvm_handle_t iommu_id, pkvm_handle_t domain_id,
+			  unsigned long iova, size_t pgsize, size_t pgcount)
+{
+	size_t size;
+	size_t granule;
+	int ret = -EINVAL;
+	struct io_pgtable iopt;
+	struct kvm_hyp_iommu *iommu;
+	struct kvm_hyp_iommu_domain *domain;
+
+	if (__builtin_mul_overflow(pgsize, pgcount, &size) ||
+	    iova + size < iova)
+		return -EOVERFLOW;
+
+	hyp_spin_lock(&iommu_lock);
+	domain = handle_to_domain(iommu_id, domain_id, &iommu);
+	if (!domain)
+		goto out_unlock;
+
+	granule = 1 << __ffs(iommu->pgtable->cfg.pgsize_bitmap);
+	if (!IS_ALIGNED(iova | pgsize, granule))
+		goto out_unlock;
+
+	iopt = domain_to_iopt(iommu, domain, domain_id);
+	ret = __kvm_iommu_unmap_pages(&iopt, iova, pgsize, pgcount);
+out_unlock:
+	hyp_spin_unlock(&iommu_lock);
+	return ret;
+}
+
+phys_addr_t kvm_iommu_iova_to_phys(pkvm_handle_t iommu_id,
+				   pkvm_handle_t domain_id, unsigned long iova)
+{
+	phys_addr_t phys = 0;
+	struct io_pgtable iopt;
+	struct kvm_hyp_iommu *iommu;
+	struct kvm_hyp_iommu_domain *domain;
+
+	hyp_spin_lock(&iommu_lock);
+	domain = handle_to_domain(iommu_id, domain_id, &iommu);
+	if (domain) {
+		iopt = domain_to_iopt(iommu, domain, domain_id);
+
+		phys = iopt_iova_to_phys(&iopt, iova);
+	}
+	hyp_spin_unlock(&iommu_lock);
+	return phys;
+}
+
 int kvm_iommu_init_device(struct kvm_hyp_iommu *iommu)
 {
 	void *domains;

From patchwork Wed Feb 1 12:53:05 2023
From: Jean-Philippe Brucker
To: maz@kernel.org, catalin.marinas@arm.com, will@kernel.org, joro@8bytes.org
Cc: robin.murphy@arm.com, james.morse@arm.com, suzuki.poulose@arm.com, oliver.upton@linux.dev, yuzenghui@huawei.com, smostafa@google.com, dbrazdil@google.com, ryan.roberts@arm.com, linux-arm-kernel@lists.infradead.org, kvmarm@lists.linux.dev, iommu@lists.linux.dev, Jean-Philippe Brucker
Subject: [RFC PATCH 21/45] KVM: arm64: iommu: Add SMMUv3 driver
Date: Wed, 1 Feb 2023 12:53:05 +0000
Message-Id: <20230201125328.2186498-22-jean-philippe@linaro.org>
In-Reply-To: <20230201125328.2186498-1-jean-philippe@linaro.org>
References: <20230201125328.2186498-1-jean-philippe@linaro.org>
Add the skeleton for an Arm SMMUv3 driver at EL2.

Signed-off-by: Jean-Philippe Brucker
---
 drivers/iommu/Kconfig                       | 10 ++++++++
 arch/arm64/kvm/hyp/nvhe/Makefile            |  1 +
 arch/arm64/include/asm/kvm_host.h           |  1 +
 arch/arm64/kvm/hyp/include/nvhe/iommu.h     |  9 +++++++
 include/kvm/arm_smmu_v3.h                   | 22 +++++++++++++++++
 arch/arm64/kvm/hyp/nvhe/iommu/arm-smmu-v3.c | 27 +++++++++++++++++++++
 arch/arm64/kvm/hyp/nvhe/setup.c             |  2 ++
 7 files changed, 72 insertions(+)
 create mode 100644 include/kvm/arm_smmu_v3.h
 create mode 100644 arch/arm64/kvm/hyp/nvhe/iommu/arm-smmu-v3.c

diff --git a/drivers/iommu/Kconfig b/drivers/iommu/Kconfig
index 79707685d54a..1689d416ccd8 100644
--- a/drivers/iommu/Kconfig
+++ b/drivers/iommu/Kconfig
@@ -410,6 +410,16 @@ config ARM_SMMU_V3_SVA
 	  Say Y here if your system supports SVA extensions such as PCIe PASID
 	  and PRI.
 
+config ARM_SMMU_V3_PKVM
+	bool "ARM SMMUv3 support for protected Virtual Machines"
+	depends on KVM && ARM64
+	select KVM_IOMMU
+	help
+	  Enable a SMMUv3 driver in the KVM hypervisor, to protect VMs against
+	  memory accesses from devices owned by the host.
+
+	  Say Y here if you intend to enable KVM in protected mode.
+
 config S390_IOMMU
 	def_bool y if S390 && PCI
 	depends on S390 && PCI

diff --git a/arch/arm64/kvm/hyp/nvhe/Makefile b/arch/arm64/kvm/hyp/nvhe/Makefile
index f7dfc88c9f5b..349c874762c8 100644
--- a/arch/arm64/kvm/hyp/nvhe/Makefile
+++ b/arch/arm64/kvm/hyp/nvhe/Makefile
@@ -29,6 +29,7 @@ hyp-obj-$(CONFIG_DEBUG_LIST) += list_debug.o
 hyp-obj-y += $(lib-objs)
 
 hyp-obj-$(CONFIG_KVM_IOMMU) += iommu/iommu.o
+hyp-obj-$(CONFIG_ARM_SMMU_V3_PKVM) += iommu/arm-smmu-v3.o
 
 ##
 ## Build rules for compiling nVHE hyp code

diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
index b8e032bda022..c98ce17f8148 100644
--- a/arch/arm64/include/asm/kvm_host.h
+++ b/arch/arm64/include/asm/kvm_host.h
@@ -379,6 +379,7 @@ extern u64 kvm_nvhe_sym(hyp_cpu_logical_map)[NR_CPUS];
 
 enum kvm_iommu_driver {
 	KVM_IOMMU_DRIVER_NONE,
+	KVM_IOMMU_DRIVER_SMMUV3,
 };
 
 struct vcpu_reset_state {

diff --git a/arch/arm64/kvm/hyp/include/nvhe/iommu.h b/arch/arm64/kvm/hyp/include/nvhe/iommu.h
index 76d3fa6ce331..0ba59d20bef3 100644
--- a/arch/arm64/kvm/hyp/include/nvhe/iommu.h
+++ b/arch/arm64/kvm/hyp/include/nvhe/iommu.h
@@ -5,6 +5,15 @@
 #include
 #include
 
+#if IS_ENABLED(CONFIG_ARM_SMMU_V3_PKVM)
+int kvm_arm_smmu_v3_register(void);
+#else /* CONFIG_ARM_SMMU_V3_PKVM */
+static inline int kvm_arm_smmu_v3_register(void)
+{
+	return -EINVAL;
+}
+#endif /* CONFIG_ARM_SMMU_V3_PKVM */
+
 #if IS_ENABLED(CONFIG_KVM_IOMMU)
 int kvm_iommu_init(void);
 int kvm_iommu_init_device(struct kvm_hyp_iommu *iommu);

diff --git a/include/kvm/arm_smmu_v3.h b/include/kvm/arm_smmu_v3.h
new file mode 100644
index 000000000000..ebe488b2f93c
--- /dev/null
+++ b/include/kvm/arm_smmu_v3.h
@@ -0,0 +1,22 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+#ifndef __KVM_ARM_SMMU_V3_H
+#define __KVM_ARM_SMMU_V3_H
+
+#include
+#include
+
+#if IS_ENABLED(CONFIG_ARM_SMMU_V3_PKVM)
+
+struct hyp_arm_smmu_v3_device {
+	struct kvm_hyp_iommu iommu;
+};
+
+extern size_t kvm_nvhe_sym(kvm_hyp_arm_smmu_v3_count);
+#define kvm_hyp_arm_smmu_v3_count kvm_nvhe_sym(kvm_hyp_arm_smmu_v3_count)
+
+extern struct hyp_arm_smmu_v3_device *kvm_nvhe_sym(kvm_hyp_arm_smmu_v3_smmus);
+#define kvm_hyp_arm_smmu_v3_smmus kvm_nvhe_sym(kvm_hyp_arm_smmu_v3_smmus)
+
+#endif /* CONFIG_ARM_SMMU_V3_PKVM */
+
+#endif /* __KVM_ARM_SMMU_V3_H */

diff --git a/arch/arm64/kvm/hyp/nvhe/iommu/arm-smmu-v3.c b/arch/arm64/kvm/hyp/nvhe/iommu/arm-smmu-v3.c
new file mode 100644
index 000000000000..c167e4dbd28d
--- /dev/null
+++ b/arch/arm64/kvm/hyp/nvhe/iommu/arm-smmu-v3.c
@@ -0,0 +1,27 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * pKVM hyp driver for the Arm SMMUv3
+ *
+ * Copyright (C) 2022 Linaro Ltd.
+ */
+#include
+#include
+#include
+
+size_t __ro_after_init kvm_hyp_arm_smmu_v3_count;
+struct hyp_arm_smmu_v3_device __ro_after_init *kvm_hyp_arm_smmu_v3_smmus;
+
+static int smmu_init(void)
+{
+	return -ENOSYS;
+}
+
+static struct kvm_iommu_ops smmu_ops = {
+	.init = smmu_init,
+};
+
+int kvm_arm_smmu_v3_register(void)
+{
+	kvm_iommu_ops = smmu_ops;
+	return 0;
+}

diff --git a/arch/arm64/kvm/hyp/nvhe/setup.c b/arch/arm64/kvm/hyp/nvhe/setup.c
index 3e73c066d560..a25de8c5d489 100644
--- a/arch/arm64/kvm/hyp/nvhe/setup.c
+++ b/arch/arm64/kvm/hyp/nvhe/setup.c
@@ -294,6 +294,8 @@ static int select_iommu_ops(enum kvm_iommu_driver driver)
 	switch (driver) {
 	case KVM_IOMMU_DRIVER_NONE:
 		return 0;
+	case KVM_IOMMU_DRIVER_SMMUV3:
+		return kvm_arm_smmu_v3_register();
 	}
 
 	return -EINVAL;

From patchwork Wed Feb 1 12:53:06 2023
From: Jean-Philippe Brucker
To: maz@kernel.org, catalin.marinas@arm.com, will@kernel.org, joro@8bytes.org
Cc: robin.murphy@arm.com, james.morse@arm.com, suzuki.poulose@arm.com, oliver.upton@linux.dev, yuzenghui@huawei.com, smostafa@google.com, dbrazdil@google.com, ryan.roberts@arm.com, linux-arm-kernel@lists.infradead.org, kvmarm@lists.linux.dev, iommu@lists.linux.dev, Jean-Philippe Brucker
Subject: [RFC PATCH 22/45] KVM: arm64: smmu-v3: Initialize registers
Date: Wed, 1 Feb 2023 12:53:06 +0000
Message-Id: <20230201125328.2186498-23-jean-philippe@linaro.org>
In-Reply-To: <20230201125328.2186498-1-jean-philippe@linaro.org>
References: <20230201125328.2186498-1-jean-philippe@linaro.org>

Ensure all writable registers are properly initialized. We do not touch registers that will not be read by the SMMU due to disabled features, such as event queue registers.
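For context, updates to the SMMU control register follow a write-then-acknowledge handshake: software writes ARM_SMMU_CR0 and polls ARM_SMMU_CR0ACK until the SMMU reflects the new value. A hypothetical user-space model of that handshake follows; the fake_* names, the simulated register pair, and the immediate acknowledgement are assumptions for illustration, while the real driver uses writel_relaxed()/readl_relaxed() on MMIO and pkvm_udelay() between polls:

```c
#include <stdint.h>

/* Simulated CR0/CR0ACK pair; real hardware exposes these as MMIO. */
struct fake_smmu {
	uint32_t cr0;
	uint32_t cr0ack;
};

/* Simulated hardware that acknowledges the write immediately. */
static void fake_write_cr0(struct fake_smmu *s, uint32_t val)
{
	s->cr0 = val;
	s->cr0ack = val;
}

/* Poll CR0ACK until it matches @val, bounded by a timeout. */
static int wait_cr0_ack(const struct fake_smmu *s, uint32_t val,
			int timeout_us)
{
	int i;

	for (i = 0; i < timeout_us; i++) {
		if (s->cr0ack == val)
			return 0;
		/* a real driver would pkvm_udelay(1) here */
	}
	return -1; /* -ETIMEDOUT in the kernel */
}
```

Bounding the poll matters at EL2: the hypervisor cannot sleep, so a wedged SMMU must surface as a timeout error rather than a hang.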
Signed-off-by: Jean-Philippe Brucker
---
 include/kvm/arm_smmu_v3.h                   |  11 +++
 arch/arm64/kvm/hyp/nvhe/iommu/arm-smmu-v3.c | 103 +++++++++++++++++++-
 2 files changed, 113 insertions(+), 1 deletion(-)

diff --git a/include/kvm/arm_smmu_v3.h b/include/kvm/arm_smmu_v3.h
index ebe488b2f93c..d4b1e487b7d7 100644
--- a/include/kvm/arm_smmu_v3.h
+++ b/include/kvm/arm_smmu_v3.h
@@ -7,8 +7,19 @@
 
 #if IS_ENABLED(CONFIG_ARM_SMMU_V3_PKVM)
 
+/*
+ * Parameters from the trusted host:
+ * @mmio_addr		base address of the SMMU registers
+ * @mmio_size		size of the registers resource
+ *
+ * Other members are filled and used at runtime by the SMMU driver.
+ */
 struct hyp_arm_smmu_v3_device {
 	struct kvm_hyp_iommu iommu;
+	phys_addr_t mmio_addr;
+	size_t mmio_size;
+
+	void __iomem *base;
 };
 
 extern size_t kvm_nvhe_sym(kvm_hyp_arm_smmu_v3_count);

diff --git a/arch/arm64/kvm/hyp/nvhe/iommu/arm-smmu-v3.c b/arch/arm64/kvm/hyp/nvhe/iommu/arm-smmu-v3.c
index c167e4dbd28d..75a6aa01b057 100644
--- a/arch/arm64/kvm/hyp/nvhe/iommu/arm-smmu-v3.c
+++ b/arch/arm64/kvm/hyp/nvhe/iommu/arm-smmu-v3.c
@@ -4,16 +4,117 @@
 *
 * Copyright (C) 2022 Linaro Ltd.
 */
+#include
 #include
 #include
 #include
+#include
+#include
+
+#define ARM_SMMU_POLL_TIMEOUT_US	1000000 /* 1s! */
 
 size_t __ro_after_init kvm_hyp_arm_smmu_v3_count;
 struct hyp_arm_smmu_v3_device __ro_after_init *kvm_hyp_arm_smmu_v3_smmus;
 
+#define for_each_smmu(smmu) \
+	for ((smmu) = kvm_hyp_arm_smmu_v3_smmus; \
+	     (smmu) != &kvm_hyp_arm_smmu_v3_smmus[kvm_hyp_arm_smmu_v3_count]; \
+	     (smmu)++)
+
+/*
+ * Wait until @cond is true.
+ * Return 0 on success, or -ETIMEDOUT
+ */
+#define smmu_wait(_cond)					\
+({								\
+	int __i = 0;						\
+	int __ret = 0;						\
+								\
+	while (!(_cond)) {					\
+		if (++__i > ARM_SMMU_POLL_TIMEOUT_US) {		\
+			__ret = -ETIMEDOUT;			\
+			break;					\
+		}						\
+		pkvm_udelay(1);					\
+	}							\
+	__ret;							\
+})
+
+static int smmu_write_cr0(struct hyp_arm_smmu_v3_device *smmu, u32 val)
+{
+	writel_relaxed(val, smmu->base + ARM_SMMU_CR0);
+	return smmu_wait(readl_relaxed(smmu->base + ARM_SMMU_CR0ACK) == val);
+}
+
+static int smmu_init_registers(struct hyp_arm_smmu_v3_device *smmu)
+{
+	u64 val, old;
+
+	if (!(readl_relaxed(smmu->base + ARM_SMMU_GBPA) & GBPA_ABORT))
+		return -EINVAL;
+
+	/* Initialize all RW registers that will be read by the SMMU */
+	smmu_write_cr0(smmu, 0);
+
+	val = FIELD_PREP(CR1_TABLE_SH, ARM_SMMU_SH_ISH) |
+	      FIELD_PREP(CR1_TABLE_OC, CR1_CACHE_WB) |
+	      FIELD_PREP(CR1_TABLE_IC, CR1_CACHE_WB) |
+	      FIELD_PREP(CR1_QUEUE_SH, ARM_SMMU_SH_ISH) |
+	      FIELD_PREP(CR1_QUEUE_OC, CR1_CACHE_WB) |
+	      FIELD_PREP(CR1_QUEUE_IC, CR1_CACHE_WB);
+	writel_relaxed(val, smmu->base + ARM_SMMU_CR1);
+	writel_relaxed(CR2_PTM, smmu->base + ARM_SMMU_CR2);
+	writel_relaxed(0, smmu->base + ARM_SMMU_IRQ_CTRL);
+
+	val = readl_relaxed(smmu->base + ARM_SMMU_GERROR);
+	old = readl_relaxed(smmu->base + ARM_SMMU_GERRORN);
+	/* Service Failure Mode is fatal */
+	if ((val ^ old) & GERROR_SFM_ERR)
+		return -EIO;
+	/* Clear pending errors */
+	writel_relaxed(val, smmu->base + ARM_SMMU_GERRORN);
+
+	return 0;
+}
+
+static int smmu_init_device(struct hyp_arm_smmu_v3_device *smmu)
+{
+	int ret;
+
+	if (!PAGE_ALIGNED(smmu->mmio_addr | smmu->mmio_size))
+		return -EINVAL;
+
+	ret = pkvm_create_hyp_device_mapping(smmu->mmio_addr, smmu->mmio_size,
+					     &smmu->base);
+	if (IS_ERR(smmu->base))
+		return PTR_ERR(smmu->base);
+
+	ret = smmu_init_registers(smmu);
+	if (ret)
+		return ret;
+
+	return 0;
+}
+
 static int smmu_init(void)
 {
-	return -ENOSYS;
+	int ret;
+	struct hyp_arm_smmu_v3_device *smmu;
+
+	ret = pkvm_create_mappings(kvm_hyp_arm_smmu_v3_smmus,
+				   kvm_hyp_arm_smmu_v3_smmus +
+				   kvm_hyp_arm_smmu_v3_count,
+				   PAGE_HYP);
+	if (ret)
+		return ret;
+
+	for_each_smmu(smmu) {
+		ret = smmu_init_device(smmu);
+		if (ret)
+			return ret;
+	}
+
+	return 0;
 }
 
 static struct kvm_iommu_ops smmu_ops = {

From patchwork Wed Feb 1 12:53:07 2023
From: Jean-Philippe Brucker
To: maz@kernel.org, catalin.marinas@arm.com, will@kernel.org, joro@8bytes.org
Cc: robin.murphy@arm.com, james.morse@arm.com, suzuki.poulose@arm.com, oliver.upton@linux.dev, yuzenghui@huawei.com, smostafa@google.com, dbrazdil@google.com, ryan.roberts@arm.com, linux-arm-kernel@lists.infradead.org, kvmarm@lists.linux.dev, iommu@lists.linux.dev, Jean-Philippe Brucker
Subject: [RFC PATCH 23/45] KVM: arm64: smmu-v3: Setup command queue
Date: Wed, 1 Feb 2023 12:53:07 +0000
Message-Id: <20230201125328.2186498-24-jean-philippe@linaro.org>

Map the command queue allocated by the host into the hypervisor address
space. When the host mappings are finalized, the queue is unmapped from
the host.
Signed-off-by: Jean-Philippe Brucker
---
 include/kvm/arm_smmu_v3.h                   |   4 +
 arch/arm64/kvm/hyp/nvhe/iommu/arm-smmu-v3.c | 148 ++++++++++++++++++++
 2 files changed, 152 insertions(+)

diff --git a/include/kvm/arm_smmu_v3.h b/include/kvm/arm_smmu_v3.h
index d4b1e487b7d7..da36737bc1e0 100644
--- a/include/kvm/arm_smmu_v3.h
+++ b/include/kvm/arm_smmu_v3.h
@@ -18,8 +18,12 @@ struct hyp_arm_smmu_v3_device {
 	struct kvm_hyp_iommu	iommu;
 	phys_addr_t		mmio_addr;
 	size_t			mmio_size;
+	unsigned long		features;
 	void __iomem		*base;
+	u32			cmdq_prod;
+	u64			*cmdq_base;
+	size_t			cmdq_log2size;
 };
 
 extern size_t kvm_nvhe_sym(kvm_hyp_arm_smmu_v3_count);
diff --git a/arch/arm64/kvm/hyp/nvhe/iommu/arm-smmu-v3.c b/arch/arm64/kvm/hyp/nvhe/iommu/arm-smmu-v3.c
index 75a6aa01b057..36ee5724f36f 100644
--- a/arch/arm64/kvm/hyp/nvhe/iommu/arm-smmu-v3.c
+++ b/arch/arm64/kvm/hyp/nvhe/iommu/arm-smmu-v3.c
@@ -40,12 +40,119 @@ struct hyp_arm_smmu_v3_device __ro_after_init *kvm_hyp_arm_smmu_v3_smmus;
 	__ret;							\
 })
 
+#define smmu_wait_event(_smmu, _cond)				\
+({								\
+	if ((_smmu)->features & ARM_SMMU_FEAT_SEV) {		\
+		while (!(_cond))				\
+			wfe();					\
+	}							\
+	smmu_wait(_cond);					\
+})
+
 static int smmu_write_cr0(struct hyp_arm_smmu_v3_device *smmu, u32 val)
 {
 	writel_relaxed(val, smmu->base + ARM_SMMU_CR0);
 	return smmu_wait(readl_relaxed(smmu->base + ARM_SMMU_CR0ACK) == val);
 }
 
+#define Q_WRAP(smmu, reg)	((reg) & (1 << (smmu)->cmdq_log2size))
+#define Q_IDX(smmu, reg)	((reg) & ((1 << (smmu)->cmdq_log2size) - 1))
+
+static bool smmu_cmdq_full(struct hyp_arm_smmu_v3_device *smmu)
+{
+	u64 cons = readl_relaxed(smmu->base + ARM_SMMU_CMDQ_CONS);
+
+	return Q_IDX(smmu, smmu->cmdq_prod) == Q_IDX(smmu, cons) &&
+	       Q_WRAP(smmu, smmu->cmdq_prod) != Q_WRAP(smmu, cons);
+}
+
+static bool smmu_cmdq_empty(struct hyp_arm_smmu_v3_device *smmu)
+{
+	u64 cons = readl_relaxed(smmu->base + ARM_SMMU_CMDQ_CONS);
+
+	return Q_IDX(smmu, smmu->cmdq_prod) == Q_IDX(smmu, cons) &&
+	       Q_WRAP(smmu, smmu->cmdq_prod) == Q_WRAP(smmu, cons);
+}
+
+static int smmu_add_cmd(struct hyp_arm_smmu_v3_device *smmu,
+			struct arm_smmu_cmdq_ent *ent)
+{
+	int i;
+	int ret;
+	u64 cmd[CMDQ_ENT_DWORDS] = {};
+	int idx = Q_IDX(smmu, smmu->cmdq_prod);
+	u64 *slot = smmu->cmdq_base + idx * CMDQ_ENT_DWORDS;
+
+	ret = smmu_wait_event(smmu, !smmu_cmdq_full(smmu));
+	if (ret)
+		return ret;
+
+	cmd[0] |= FIELD_PREP(CMDQ_0_OP, ent->opcode);
+
+	switch (ent->opcode) {
+	case CMDQ_OP_CFGI_ALL:
+		cmd[1] |= FIELD_PREP(CMDQ_CFGI_1_RANGE, 31);
+		break;
+	case CMDQ_OP_CFGI_STE:
+		cmd[0] |= FIELD_PREP(CMDQ_CFGI_0_SID, ent->cfgi.sid);
+		cmd[1] |= FIELD_PREP(CMDQ_CFGI_1_LEAF, ent->cfgi.leaf);
+		break;
+	case CMDQ_OP_TLBI_NSNH_ALL:
+		break;
+	case CMDQ_OP_TLBI_S12_VMALL:
+		cmd[0] |= FIELD_PREP(CMDQ_TLBI_0_VMID, ent->tlbi.vmid);
+		break;
+	case CMDQ_OP_TLBI_S2_IPA:
+		cmd[0] |= FIELD_PREP(CMDQ_TLBI_0_NUM, ent->tlbi.num);
+		cmd[0] |= FIELD_PREP(CMDQ_TLBI_0_SCALE, ent->tlbi.scale);
+		cmd[0] |= FIELD_PREP(CMDQ_TLBI_0_VMID, ent->tlbi.vmid);
+		cmd[1] |= FIELD_PREP(CMDQ_TLBI_1_LEAF, ent->tlbi.leaf);
+		cmd[1] |= FIELD_PREP(CMDQ_TLBI_1_TTL, ent->tlbi.ttl);
+		cmd[1] |= FIELD_PREP(CMDQ_TLBI_1_TG, ent->tlbi.tg);
+		cmd[1] |= ent->tlbi.addr & CMDQ_TLBI_1_IPA_MASK;
+		break;
+	case CMDQ_OP_CMD_SYNC:
+		cmd[0] |= FIELD_PREP(CMDQ_SYNC_0_CS, CMDQ_SYNC_0_CS_SEV);
+		break;
+	default:
+		return -EINVAL;
+	}
+
+	for (i = 0; i < CMDQ_ENT_DWORDS; i++)
+		slot[i] = cpu_to_le64(cmd[i]);
+
+	smmu->cmdq_prod++;
+	writel(Q_IDX(smmu, smmu->cmdq_prod) | Q_WRAP(smmu, smmu->cmdq_prod),
+	       smmu->base + ARM_SMMU_CMDQ_PROD);
+	return 0;
+}
+
+static int smmu_sync_cmd(struct hyp_arm_smmu_v3_device *smmu)
+{
+	int ret;
+	struct arm_smmu_cmdq_ent cmd = {
+		.opcode = CMDQ_OP_CMD_SYNC,
+	};
+
+	ret = smmu_add_cmd(smmu, &cmd);
+	if (ret)
+		return ret;
+
+	return smmu_wait_event(smmu, smmu_cmdq_empty(smmu));
+}
+
+__maybe_unused
+static int smmu_send_cmd(struct hyp_arm_smmu_v3_device *smmu,
+			 struct arm_smmu_cmdq_ent *cmd)
+{
+	int ret = smmu_add_cmd(smmu, cmd);
+
+	if (ret)
+		return ret;
+
+	return smmu_sync_cmd(smmu);
+}
+
 static int smmu_init_registers(struct hyp_arm_smmu_v3_device *smmu)
 {
 	u64 val, old;
@@ -77,6 +184,43 @@ static int smmu_init_registers(struct hyp_arm_smmu_v3_device *smmu)
 	return 0;
 }
 
+/* Transfer ownership of structures from host to hyp */
+static void *smmu_take_pages(u64 base, size_t size)
+{
+	void *hyp_ptr;
+
+	hyp_ptr = hyp_phys_to_virt(base);
+	if (pkvm_create_mappings(hyp_ptr, hyp_ptr + size, PAGE_HYP))
+		return NULL;
+
+	return hyp_ptr;
+}
+
+static int smmu_init_cmdq(struct hyp_arm_smmu_v3_device *smmu)
+{
+	u64 cmdq_base;
+	size_t cmdq_nr_entries, cmdq_size;
+
+	cmdq_base = readq_relaxed(smmu->base + ARM_SMMU_CMDQ_BASE);
+	if (cmdq_base & ~(Q_BASE_RWA | Q_BASE_ADDR_MASK | Q_BASE_LOG2SIZE))
+		return -EINVAL;
+
+	smmu->cmdq_log2size = cmdq_base & Q_BASE_LOG2SIZE;
+	cmdq_nr_entries = 1 << smmu->cmdq_log2size;
+	cmdq_size = cmdq_nr_entries * CMDQ_ENT_DWORDS * 8;
+
+	cmdq_base &= Q_BASE_ADDR_MASK;
+	smmu->cmdq_base = smmu_take_pages(cmdq_base, cmdq_size);
+	if (!smmu->cmdq_base)
+		return -EINVAL;
+
+	memset(smmu->cmdq_base, 0, cmdq_size);
+	writel_relaxed(0, smmu->base + ARM_SMMU_CMDQ_PROD);
+	writel_relaxed(0, smmu->base + ARM_SMMU_CMDQ_CONS);
+
+	return 0;
+}
+
 static int smmu_init_device(struct hyp_arm_smmu_v3_device *smmu)
 {
 	int ret;
@@ -93,6 +237,10 @@ static int smmu_init_device(struct hyp_arm_smmu_v3_device *smmu)
 	if (ret)
 		return ret;
 
+	ret = smmu_init_cmdq(smmu);
+	if (ret)
+		return ret;
+
 	return 0;
 }

From patchwork Wed Feb 1 12:53:08 2023
From: Jean-Philippe Brucker
To: maz@kernel.org, catalin.marinas@arm.com, will@kernel.org, joro@8bytes.org
Cc: robin.murphy@arm.com, james.morse@arm.com, suzuki.poulose@arm.com, oliver.upton@linux.dev, yuzenghui@huawei.com, smostafa@google.com, dbrazdil@google.com, ryan.roberts@arm.com, linux-arm-kernel@lists.infradead.org, kvmarm@lists.linux.dev, iommu@lists.linux.dev, Jean-Philippe Brucker
Subject: [RFC PATCH 24/45] KVM: arm64: smmu-v3: Setup stream table
Date: Wed, 1 Feb 2023 12:53:08 +0000
Message-Id: <20230201125328.2186498-25-jean-philippe@linaro.org>

Map the stream table allocated by the host into the hypervisor address
space. When the host mappings are finalized, the table is unmapped from
the host. Depending on the host configuration, the stream table may have
one or two levels. Populate the level-2 stream table lazily.
Signed-off-by: Jean-Philippe Brucker
---
 include/kvm/arm_smmu_v3.h                   |   4 +
 arch/arm64/kvm/hyp/nvhe/iommu/arm-smmu-v3.c | 133 +++++++++++++++++++-
 2 files changed, 136 insertions(+), 1 deletion(-)

diff --git a/include/kvm/arm_smmu_v3.h b/include/kvm/arm_smmu_v3.h
index da36737bc1e0..fc67a3bf5709 100644
--- a/include/kvm/arm_smmu_v3.h
+++ b/include/kvm/arm_smmu_v3.h
@@ -24,6 +24,10 @@ struct hyp_arm_smmu_v3_device {
 	u32			cmdq_prod;
 	u64			*cmdq_base;
 	size_t			cmdq_log2size;
+	u64			*strtab_base;
+	size_t			strtab_num_entries;
+	size_t			strtab_num_l1_entries;
+	u8			strtab_split;
 };
 
 extern size_t kvm_nvhe_sym(kvm_hyp_arm_smmu_v3_count);
diff --git a/arch/arm64/kvm/hyp/nvhe/iommu/arm-smmu-v3.c b/arch/arm64/kvm/hyp/nvhe/iommu/arm-smmu-v3.c
index 36ee5724f36f..021bebebd40c 100644
--- a/arch/arm64/kvm/hyp/nvhe/iommu/arm-smmu-v3.c
+++ b/arch/arm64/kvm/hyp/nvhe/iommu/arm-smmu-v3.c
@@ -141,7 +141,6 @@ static int smmu_sync_cmd(struct hyp_arm_smmu_v3_device *smmu)
 	return smmu_wait_event(smmu, smmu_cmdq_empty(smmu));
 }
 
-__maybe_unused
 static int smmu_send_cmd(struct hyp_arm_smmu_v3_device *smmu,
 			 struct arm_smmu_cmdq_ent *cmd)
 {
@@ -153,6 +152,82 @@ static int smmu_send_cmd(struct hyp_arm_smmu_v3_device *smmu,
 	return smmu_sync_cmd(smmu);
 }
 
+__maybe_unused
+static int smmu_sync_ste(struct hyp_arm_smmu_v3_device *smmu, u32 sid)
+{
+	struct arm_smmu_cmdq_ent cmd = {
+		.opcode = CMDQ_OP_CFGI_STE,
+		.cfgi.sid = sid,
+		.cfgi.leaf = true,
+	};
+
+	return smmu_send_cmd(smmu, &cmd);
+}
+
+static int smmu_alloc_l2_strtab(struct hyp_arm_smmu_v3_device *smmu, u32 idx)
+{
+	void *table;
+	u64 l2ptr, span;
+
+	/* Leaf tables must be page-sized */
+	if (smmu->strtab_split + ilog2(STRTAB_STE_DWORDS) + 3 != PAGE_SHIFT)
+		return -EINVAL;
+
+	span = smmu->strtab_split + 1;
+	if (WARN_ON(span < 1 || span > 11))
+		return -EINVAL;
+
+	table = kvm_iommu_donate_page();
+	if (!table)
+		return -ENOMEM;
+
+	l2ptr = hyp_virt_to_phys(table);
+	if (l2ptr & (~STRTAB_L1_DESC_L2PTR_MASK | ~PAGE_MASK))
+		return -EINVAL;
+
+	/* Ensure the empty stream table is visible before the descriptor write */
+	wmb();
+
+	if ((cmpxchg64_relaxed(&smmu->strtab_base[idx], 0, l2ptr | span) != 0))
+		kvm_iommu_reclaim_page(table);
+
+	return 0;
+}
+
+__maybe_unused
+static u64 *smmu_get_ste_ptr(struct hyp_arm_smmu_v3_device *smmu, u32 sid)
+{
+	u32 idx;
+	int ret;
+	u64 l1std, span, *base;
+
+	if (sid >= smmu->strtab_num_entries)
+		return NULL;
+	sid = array_index_nospec(sid, smmu->strtab_num_entries);
+
+	if (!smmu->strtab_split)
+		return smmu->strtab_base + sid * STRTAB_STE_DWORDS;
+
+	idx = sid >> smmu->strtab_split;
+	l1std = smmu->strtab_base[idx];
+	if (!l1std) {
+		ret = smmu_alloc_l2_strtab(smmu, idx);
+		if (ret)
+			return NULL;
+		l1std = smmu->strtab_base[idx];
+		if (WARN_ON(!l1std))
+			return NULL;
+	}
+
+	span = l1std & STRTAB_L1_DESC_SPAN;
+	idx = sid & ((1 << smmu->strtab_split) - 1);
+	if (!span || idx >= (1 << (span - 1)))
+		return NULL;
+
+	base = hyp_phys_to_virt(l1std & STRTAB_L1_DESC_L2PTR_MASK);
+	return base + idx * STRTAB_STE_DWORDS;
+}
+
 static int smmu_init_registers(struct hyp_arm_smmu_v3_device *smmu)
 {
 	u64 val, old;
@@ -221,6 +296,58 @@ static int smmu_init_cmdq(struct hyp_arm_smmu_v3_device *smmu)
 	return 0;
 }
 
+static int smmu_init_strtab(struct hyp_arm_smmu_v3_device *smmu)
+{
+	u64 strtab_base;
+	size_t strtab_size;
+	u32 strtab_cfg, fmt;
+	int split, log2size;
+
+	strtab_base = readq_relaxed(smmu->base + ARM_SMMU_STRTAB_BASE);
+	if (strtab_base & ~(STRTAB_BASE_ADDR_MASK | STRTAB_BASE_RA))
+		return -EINVAL;
+
+	strtab_cfg = readl_relaxed(smmu->base + ARM_SMMU_STRTAB_BASE_CFG);
+	if (strtab_cfg & ~(STRTAB_BASE_CFG_FMT | STRTAB_BASE_CFG_SPLIT |
+			   STRTAB_BASE_CFG_LOG2SIZE))
+		return -EINVAL;
+
+	fmt = FIELD_GET(STRTAB_BASE_CFG_FMT, strtab_cfg);
+	split = FIELD_GET(STRTAB_BASE_CFG_SPLIT, strtab_cfg);
+	log2size = FIELD_GET(STRTAB_BASE_CFG_LOG2SIZE, strtab_cfg);
+
+	smmu->strtab_split = split;
+	smmu->strtab_num_entries = 1 << log2size;
+
+	switch (fmt) {
+	case STRTAB_BASE_CFG_FMT_LINEAR:
+		if (split)
+			return -EINVAL;
+		smmu->strtab_num_l1_entries = smmu->strtab_num_entries;
+		strtab_size = smmu->strtab_num_l1_entries *
+			      STRTAB_STE_DWORDS * 8;
+		break;
+	case STRTAB_BASE_CFG_FMT_2LVL:
+		if (split != 6 && split != 8 && split != 10)
+			return -EINVAL;
+		smmu->strtab_num_l1_entries = 1 << max(0, log2size - split);
+		strtab_size = smmu->strtab_num_l1_entries *
+			      STRTAB_L1_DESC_DWORDS * 8;
+		break;
+	default:
+		return -EINVAL;
+	}
+
+	strtab_base &= STRTAB_BASE_ADDR_MASK;
+	smmu->strtab_base = smmu_take_pages(strtab_base, strtab_size);
+	if (!smmu->strtab_base)
+		return -EINVAL;
+
+	/* Disable all STEs */
+	memset(smmu->strtab_base, 0, strtab_size);
+	return 0;
+}
+
 static int smmu_init_device(struct hyp_arm_smmu_v3_device *smmu)
 {
 	int ret;
@@ -241,6 +368,10 @@ static int smmu_init_device(struct hyp_arm_smmu_v3_device *smmu)
 	if (ret)
 		return ret;
 
+	ret = smmu_init_strtab(smmu);
+	if (ret)
+		return ret;
+
 	return 0;
 }

From patchwork Wed Feb 1 12:53:09 2023
From: Jean-Philippe Brucker
To: maz@kernel.org, catalin.marinas@arm.com, will@kernel.org, joro@8bytes.org
Cc: robin.murphy@arm.com, james.morse@arm.com, suzuki.poulose@arm.com, oliver.upton@linux.dev, yuzenghui@huawei.com, smostafa@google.com, dbrazdil@google.com, ryan.roberts@arm.com, linux-arm-kernel@lists.infradead.org, kvmarm@lists.linux.dev, iommu@lists.linux.dev, Jean-Philippe Brucker
Subject: [RFC PATCH 25/45] KVM: arm64: smmu-v3: Reset the device
Date: Wed, 1 Feb 2023 12:53:09 +0000
Message-Id: <20230201125328.2186498-26-jean-philippe@linaro.org>

Now that all structures are initialized, send global invalidations and
reset the SMMUv3 device.
Signed-off-by: Jean-Philippe Brucker
---
 arch/arm64/kvm/hyp/nvhe/iommu/arm-smmu-v3.c | 36 ++++++++++++++++++++-
 1 file changed, 35 insertions(+), 1 deletion(-)

diff --git a/arch/arm64/kvm/hyp/nvhe/iommu/arm-smmu-v3.c b/arch/arm64/kvm/hyp/nvhe/iommu/arm-smmu-v3.c
index 021bebebd40c..81040339ccfe 100644
--- a/arch/arm64/kvm/hyp/nvhe/iommu/arm-smmu-v3.c
+++ b/arch/arm64/kvm/hyp/nvhe/iommu/arm-smmu-v3.c
@@ -348,6 +348,40 @@ static int smmu_init_strtab(struct hyp_arm_smmu_v3_device *smmu)
 	return 0;
 }
 
+static int smmu_reset_device(struct hyp_arm_smmu_v3_device *smmu)
+{
+	int ret;
+	struct arm_smmu_cmdq_ent cfgi_cmd = {
+		.opcode = CMDQ_OP_CFGI_ALL,
+	};
+	struct arm_smmu_cmdq_ent tlbi_cmd = {
+		.opcode = CMDQ_OP_TLBI_NSNH_ALL,
+	};
+
+	/* Invalidate all cached configs and TLBs */
+	ret = smmu_write_cr0(smmu, CR0_CMDQEN);
+	if (ret)
+		return ret;
+
+	ret = smmu_add_cmd(smmu, &cfgi_cmd);
+	if (ret)
+		goto err_disable_cmdq;
+
+	ret = smmu_add_cmd(smmu, &tlbi_cmd);
+	if (ret)
+		goto err_disable_cmdq;
+
+	ret = smmu_sync_cmd(smmu);
+	if (ret)
+		goto err_disable_cmdq;
+
+	/* Enable translation */
+	return smmu_write_cr0(smmu, CR0_SMMUEN | CR0_CMDQEN | CR0_ATSCHK);
+
+err_disable_cmdq:
+	return smmu_write_cr0(smmu, 0);
+}
+
 static int smmu_init_device(struct hyp_arm_smmu_v3_device *smmu)
 {
 	int ret;
@@ -372,7 +406,7 @@ static int smmu_init_device(struct hyp_arm_smmu_v3_device *smmu)
 	if (ret)
 		return ret;
 
-	return 0;
+	return smmu_reset_device(smmu);
 }
 
 static int smmu_init(void)

From patchwork Wed Feb 1 12:53:10 2023
From: Jean-Philippe Brucker
To: maz@kernel.org, catalin.marinas@arm.com, will@kernel.org, joro@8bytes.org
Cc: robin.murphy@arm.com, james.morse@arm.com, suzuki.poulose@arm.com, oliver.upton@linux.dev, yuzenghui@huawei.com, smostafa@google.com, dbrazdil@google.com, ryan.roberts@arm.com, linux-arm-kernel@lists.infradead.org, kvmarm@lists.linux.dev, iommu@lists.linux.dev, Jean-Philippe Brucker
Subject: [RFC PATCH 26/45] KVM: arm64: smmu-v3: Support io-pgtable
Date: Wed, 1 Feb 2023 12:53:10 +0000
Message-Id: <20230201125328.2186498-27-jean-philippe@linaro.org>

Implement the hypervisor version of io-pgtable allocation functions,
mirroring drivers/iommu/io-pgtable-arm.c. Page allocation uses the IOMMU
memcache filled by the host, except for the PGD which may be larger than
a page.
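The memcache arrangement the commit message describes can be modeled as a simple fixed-page pool: the host donates pages in advance, and the hypervisor draws from the pool instead of allocating. This toy model is entirely illustrative (all names invented; the real code uses `kvm_iommu_donate_page()`/`kvm_iommu_reclaim_page()`):

```c
#include <assert.h>
#include <stddef.h>

#define POOL_CAP 4

static void *pool[POOL_CAP];
static int pool_top;

static char page_a[4096], page_b[4096];	/* stand-ins for donated pages */

/* Host side: push a page into the pool */
static void pool_donate(void *page)
{
	assert(pool_top < POOL_CAP);
	pool[pool_top++] = page;
}

/* Hyp side: take a page, or NULL when the host must top up the pool */
static void *pool_take(void)
{
	return pool_top ? pool[--pool_top] : NULL;
}

int main(void)
{
	assert(pool_take() == NULL);		/* empty pool: allocation fails */

	pool_donate(page_a);
	pool_donate(page_b);
	assert(pool_take() == page_b);		/* LIFO reuse of donated pages */
	assert(pool_take() == page_a);
	assert(pool_take() == NULL);		/* drained again */
	return 0;
}
```

The key property the sketch shows is that an allocation can fail cleanly (returning NULL) and the hypervisor then reports back so the host refills the pool, rather than the hypervisor ever allocating host memory itself.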
Signed-off-by: Jean-Philippe Brucker
---
 arch/arm64/kvm/hyp/nvhe/Makefile              |  2 +
 arch/arm64/kvm/hyp/include/nvhe/iommu.h       |  7 ++
 include/linux/io-pgtable-arm.h                |  6 ++
 .../arm64/kvm/hyp/nvhe/iommu/io-pgtable-arm.c | 97 +++++++++++++++++++
 4 files changed, 112 insertions(+)
 create mode 100644 arch/arm64/kvm/hyp/nvhe/iommu/io-pgtable-arm.c

diff --git a/arch/arm64/kvm/hyp/nvhe/Makefile b/arch/arm64/kvm/hyp/nvhe/Makefile
index 349c874762c8..8359909bd796 100644
--- a/arch/arm64/kvm/hyp/nvhe/Makefile
+++ b/arch/arm64/kvm/hyp/nvhe/Makefile
@@ -30,6 +30,8 @@ hyp-obj-y += $(lib-objs)
 hyp-obj-$(CONFIG_KVM_IOMMU) += iommu/iommu.o
 hyp-obj-$(CONFIG_ARM_SMMU_V3_PKVM) += iommu/arm-smmu-v3.o
+hyp-obj-$(CONFIG_ARM_SMMU_V3_PKVM) += iommu/io-pgtable-arm.o \
+	 ../../../../../drivers/iommu/io-pgtable-arm-common.o

 ##
 ## Build rules for compiling nVHE hyp code

diff --git a/arch/arm64/kvm/hyp/include/nvhe/iommu.h b/arch/arm64/kvm/hyp/include/nvhe/iommu.h
index 0ba59d20bef3..c7744cca6e13 100644
--- a/arch/arm64/kvm/hyp/include/nvhe/iommu.h
+++ b/arch/arm64/kvm/hyp/include/nvhe/iommu.h
@@ -6,7 +6,14 @@
 #include

 #if IS_ENABLED(CONFIG_ARM_SMMU_V3_PKVM)
+#include
+
 int kvm_arm_smmu_v3_register(void);
+
+int kvm_arm_io_pgtable_init(struct io_pgtable_cfg *cfg,
+			    struct arm_lpae_io_pgtable *data);
+int kvm_arm_io_pgtable_alloc(struct io_pgtable *iop, unsigned long pgd_hva);
+int kvm_arm_io_pgtable_free(struct io_pgtable *iop);
 #else /* CONFIG_ARM_SMMU_V3_PKVM */
 static inline int kvm_arm_smmu_v3_register(void)
 {

diff --git a/include/linux/io-pgtable-arm.h b/include/linux/io-pgtable-arm.h
index 2b3e69386d08..b89b8ec57721 100644
--- a/include/linux/io-pgtable-arm.h
+++ b/include/linux/io-pgtable-arm.h
@@ -161,8 +161,14 @@ static inline bool iopte_leaf(arm_lpae_iopte pte, int lvl,
 	return iopte_type(pte) == ARM_LPAE_PTE_TYPE_BLOCK;
 }

+#ifdef __KVM_NVHE_HYPERVISOR__
+#include
+#define __arm_lpae_virt_to_phys	hyp_virt_to_phys
+#define __arm_lpae_phys_to_virt	hyp_phys_to_virt
+#else
 #define __arm_lpae_virt_to_phys	__pa
 #define __arm_lpae_phys_to_virt	__va
+#endif

 /* Generic functions */
 void __arm_lpae_free_pgtable(struct arm_lpae_io_pgtable *data, int lvl,

diff --git a/arch/arm64/kvm/hyp/nvhe/iommu/io-pgtable-arm.c b/arch/arm64/kvm/hyp/nvhe/iommu/io-pgtable-arm.c
new file mode 100644
index 000000000000..a46490acb45c
--- /dev/null
+++ b/arch/arm64/kvm/hyp/nvhe/iommu/io-pgtable-arm.c
@@ -0,0 +1,97 @@
+// SPDX-License-Identifier: GPL-2.0-only
+/*
+ * Copyright (C) 2022 Arm Ltd.
+ */
+#include
+#include
+#include
+#include
+#include
+#include
+
+#include
+#include
+
+bool __ro_after_init selftest_running;
+
+void *__arm_lpae_alloc_pages(size_t size, gfp_t gfp, struct io_pgtable_cfg *cfg)
+{
+	void *addr = kvm_iommu_donate_page();
+
+	BUG_ON(size != PAGE_SIZE);
+
+	if (addr && !cfg->coherent_walk)
+		kvm_flush_dcache_to_poc(addr, size);
+
+	return addr;
+}
+
+void __arm_lpae_free_pages(void *addr, size_t size, struct io_pgtable_cfg *cfg)
+{
+	BUG_ON(size != PAGE_SIZE);
+
+	if (!cfg->coherent_walk)
+		kvm_flush_dcache_to_poc(addr, size);
+
+	kvm_iommu_reclaim_page(addr);
+}
+
+void __arm_lpae_sync_pte(arm_lpae_iopte *ptep, int num_entries,
+			 struct io_pgtable_cfg *cfg)
+{
+	if (!cfg->coherent_walk)
+		kvm_flush_dcache_to_poc(ptep, sizeof(*ptep) * num_entries);
+}
+
+int kvm_arm_io_pgtable_init(struct io_pgtable_cfg *cfg,
+			    struct arm_lpae_io_pgtable *data)
+{
+	int ret = arm_lpae_init_pgtable_s2(cfg, data);
+
+	if (ret)
+		return ret;
+
+	data->iop.cfg = *cfg;
+	return 0;
+}
+
+int kvm_arm_io_pgtable_alloc(struct io_pgtable *iopt, unsigned long pgd_hva)
+{
+	size_t pgd_size, alignment;
+	struct arm_lpae_io_pgtable *data = io_pgtable_ops_to_data(iopt->ops);
+
+	pgd_size = ARM_LPAE_PGD_SIZE(data);
+	/*
+	 * If it has eight or more entries, the table must be aligned on
+	 * its size. Otherwise 64 bytes.
+	 */
+	alignment = max(pgd_size, 8 * sizeof(arm_lpae_iopte));
+	if (!IS_ALIGNED(pgd_hva, alignment))
+		return -EINVAL;
+
+	iopt->pgd = pkvm_map_donated_memory(pgd_hva, pgd_size);
+	if (!iopt->pgd)
+		return -ENOMEM;
+
+	if (!data->iop.cfg.coherent_walk)
+		kvm_flush_dcache_to_poc(iopt->pgd, pgd_size);
+
+	/* Ensure the empty pgd is visible before any actual TTBR write */
+	wmb();
+
+	return 0;
+}
+
+int kvm_arm_io_pgtable_free(struct io_pgtable *iopt)
+{
+	struct arm_lpae_io_pgtable *data = io_pgtable_ops_to_data(iopt->ops);
+	size_t pgd_size = ARM_LPAE_PGD_SIZE(data);
+
+	if (!data->iop.cfg.coherent_walk)
+		kvm_flush_dcache_to_poc(iopt->pgd, pgd_size);
+
+	/* Free all tables but the pgd */
+	__arm_lpae_free_pgtable(data, data->start_level, iopt->pgd, true);
+	pkvm_unmap_donated_memory(iopt->pgd, pgd_size);
+	return 0;
+}

From patchwork Wed Feb 1 12:53:11 2023
From: Jean-Philippe Brucker
To: maz@kernel.org, catalin.marinas@arm.com, will@kernel.org, joro@8bytes.org
Cc: robin.murphy@arm.com, james.morse@arm.com, suzuki.poulose@arm.com, oliver.upton@linux.dev, yuzenghui@huawei.com, smostafa@google.com, dbrazdil@google.com, ryan.roberts@arm.com, linux-arm-kernel@lists.infradead.org, kvmarm@lists.linux.dev, iommu@lists.linux.dev
Subject: [RFC PATCH 27/45] KVM: arm64: smmu-v3: Setup domains and page table configuration
Date: Wed, 1 Feb 2023 12:53:11 +0000
Message-Id: <20230201125328.2186498-28-jean-philippe@linaro.org>
In-Reply-To: <20230201125328.2186498-1-jean-philippe@linaro.org>
References: <20230201125328.2186498-1-jean-philippe@linaro.org>
Setup the stream table entries when the host issues the attach_dev() and detach_dev() hypercalls. The driver holds one io-pgtable configuration for all domains.

Signed-off-by: Jean-Philippe Brucker
---
 include/kvm/arm_smmu_v3.h                   |   2 +
 arch/arm64/kvm/hyp/nvhe/iommu/arm-smmu-v3.c | 178 +++++++++++++++++++-
 2 files changed, 177 insertions(+), 3 deletions(-)

diff --git a/include/kvm/arm_smmu_v3.h b/include/kvm/arm_smmu_v3.h
index fc67a3bf5709..ed139b0e9612 100644
--- a/include/kvm/arm_smmu_v3.h
+++ b/include/kvm/arm_smmu_v3.h
@@ -3,6 +3,7 @@
 #define __KVM_ARM_SMMU_V3_H

 #include
+#include
 #include

 #if IS_ENABLED(CONFIG_ARM_SMMU_V3_PKVM)
@@ -28,6 +29,7 @@ struct hyp_arm_smmu_v3_device {
 	size_t strtab_num_entries;
 	size_t strtab_num_l1_entries;
 	u8 strtab_split;
+	struct arm_lpae_io_pgtable pgtable;
 };

 extern size_t kvm_nvhe_sym(kvm_hyp_arm_smmu_v3_count);

diff --git a/arch/arm64/kvm/hyp/nvhe/iommu/arm-smmu-v3.c b/arch/arm64/kvm/hyp/nvhe/iommu/arm-smmu-v3.c
index 81040339ccfe..56e313203a16 100644
--- a/arch/arm64/kvm/hyp/nvhe/iommu/arm-smmu-v3.c
+++ b/arch/arm64/kvm/hyp/nvhe/iommu/arm-smmu-v3.c
@@ -152,7 +152,6 @@ static int smmu_send_cmd(struct hyp_arm_smmu_v3_device *smmu,
 	return smmu_sync_cmd(smmu);
 }

-__maybe_unused
 static int smmu_sync_ste(struct hyp_arm_smmu_v3_device *smmu, u32 sid)
 {
 	struct arm_smmu_cmdq_ent cmd = {
@@ -194,7 +193,6 @@ static int smmu_alloc_l2_strtab(struct hyp_arm_smmu_v3_device *smmu, u32 idx)
 	return 0;
 }

-__maybe_unused
 static u64 *smmu_get_ste_ptr(struct hyp_arm_smmu_v3_device *smmu, u32 sid)
 {
 	u32 idx;
@@ -382,6 +380,68 @@ static int smmu_reset_device(struct hyp_arm_smmu_v3_device *smmu)
 	return smmu_write_cr0(smmu, 0);
 }

+static struct hyp_arm_smmu_v3_device *to_smmu(struct kvm_hyp_iommu *iommu)
+{
+	return container_of(iommu, struct hyp_arm_smmu_v3_device, iommu);
+}
+
+static void smmu_tlb_flush_all(void *cookie)
+{
+	struct kvm_iommu_tlb_cookie *data = cookie;
+	struct hyp_arm_smmu_v3_device *smmu = to_smmu(data->iommu);
+	struct arm_smmu_cmdq_ent cmd = {
+		.opcode = CMDQ_OP_TLBI_S12_VMALL,
+		.tlbi.vmid = data->domain_id,
+	};
+
+	WARN_ON(smmu_send_cmd(smmu, &cmd));
+}
+
+static void smmu_tlb_inv_range(struct kvm_iommu_tlb_cookie *data,
+			       unsigned long iova, size_t size, size_t granule,
+			       bool leaf)
+{
+	struct hyp_arm_smmu_v3_device *smmu = to_smmu(data->iommu);
+	unsigned long end = iova + size;
+	struct arm_smmu_cmdq_ent cmd = {
+		.opcode = CMDQ_OP_TLBI_S2_IPA,
+		.tlbi.vmid = data->domain_id,
+		.tlbi.leaf = leaf,
+	};
+
+	/*
+	 * There are no mappings at high addresses since we don't use TTB1, so
+	 * no overflow possible.
+	 */
+	BUG_ON(end < iova);
+
+	while (iova < end) {
+		cmd.tlbi.addr = iova;
+		WARN_ON(smmu_send_cmd(smmu, &cmd));
+		BUG_ON(iova + granule < iova);
+		iova += granule;
+	}
+}
+
+static void smmu_tlb_flush_walk(unsigned long iova, size_t size,
+				size_t granule, void *cookie)
+{
+	smmu_tlb_inv_range(cookie, iova, size, granule, false);
+}
+
+static void smmu_tlb_add_page(struct iommu_iotlb_gather *gather,
+			      unsigned long iova, size_t granule,
+			      void *cookie)
+{
+	smmu_tlb_inv_range(cookie, iova, granule, granule, true);
+}
+
+static const struct iommu_flush_ops smmu_tlb_ops = {
+	.tlb_flush_all	= smmu_tlb_flush_all,
+	.tlb_flush_walk	= smmu_tlb_flush_walk,
+	.tlb_add_page	= smmu_tlb_add_page,
+};
+
 static int smmu_init_device(struct hyp_arm_smmu_v3_device *smmu)
 {
 	int ret;
@@ -394,6 +454,14 @@ static int smmu_init_device(struct hyp_arm_smmu_v3_device *smmu)
 	if (IS_ERR(smmu->base))
 		return PTR_ERR(smmu->base);

+	smmu->iommu.pgtable_cfg.tlb = &smmu_tlb_ops;
+
+	ret = kvm_arm_io_pgtable_init(&smmu->iommu.pgtable_cfg, &smmu->pgtable);
+	if (ret)
+		return ret;
+
+	smmu->iommu.pgtable = &smmu->pgtable.iop;
+
 	ret = smmu_init_registers(smmu);
 	if (ret)
 		return ret;
@@ -406,7 +474,11 @@ static int smmu_init_device(struct hyp_arm_smmu_v3_device *smmu)
 	if (ret)
 		return ret;

-	return smmu_reset_device(smmu);
+	ret = smmu_reset_device(smmu);
+	if (ret)
+		return ret;
+
+	return kvm_iommu_init_device(&smmu->iommu);
 }

 static int smmu_init(void)
@@ -414,6 +486,10 @@
 	int ret;
 	struct hyp_arm_smmu_v3_device *smmu;

+	ret = kvm_iommu_init();
+	if (ret)
+		return ret;
+
 	ret = pkvm_create_mappings(kvm_hyp_arm_smmu_v3_smmus,
 				   kvm_hyp_arm_smmu_v3_smmus +
 				   kvm_hyp_arm_smmu_v3_count,
@@ -430,8 +506,104 @@ static int smmu_init(void)
 	return 0;
 }

+static struct kvm_hyp_iommu *smmu_id_to_iommu(pkvm_handle_t smmu_id)
+{
+	if (smmu_id >= kvm_hyp_arm_smmu_v3_count)
+		return NULL;
+	smmu_id = array_index_nospec(smmu_id, kvm_hyp_arm_smmu_v3_count);
+
+	return &kvm_hyp_arm_smmu_v3_smmus[smmu_id].iommu;
+}
+
+static int smmu_attach_dev(struct kvm_hyp_iommu *iommu, pkvm_handle_t domain_id,
+			   struct kvm_hyp_iommu_domain *domain, u32 sid)
+{
+	int i;
+	int ret;
+	u64 *dst;
+	struct io_pgtable_cfg *cfg;
+	u64 ts, sl, ic, oc, sh, tg, ps;
+	u64 ent[STRTAB_STE_DWORDS] = {};
+	struct hyp_arm_smmu_v3_device *smmu = to_smmu(iommu);
+
+	dst = smmu_get_ste_ptr(smmu, sid);
+	if (!dst || dst[0])
+		return -EINVAL;
+
+	cfg = &smmu->pgtable.iop.cfg;
+	ps = cfg->arm_lpae_s2_cfg.vtcr.ps;
+	tg = cfg->arm_lpae_s2_cfg.vtcr.tg;
+	sh = cfg->arm_lpae_s2_cfg.vtcr.sh;
+	oc = cfg->arm_lpae_s2_cfg.vtcr.orgn;
+	ic = cfg->arm_lpae_s2_cfg.vtcr.irgn;
+	sl = cfg->arm_lpae_s2_cfg.vtcr.sl;
+	ts = cfg->arm_lpae_s2_cfg.vtcr.tsz;
+
+	ent[0] = STRTAB_STE_0_V |
+		 FIELD_PREP(STRTAB_STE_0_CFG, STRTAB_STE_0_CFG_S2_TRANS);
+	ent[2] = FIELD_PREP(STRTAB_STE_2_VTCR,
+			FIELD_PREP(STRTAB_STE_2_VTCR_S2PS, ps) |
+			FIELD_PREP(STRTAB_STE_2_VTCR_S2TG, tg) |
+			FIELD_PREP(STRTAB_STE_2_VTCR_S2SH0, sh) |
+			FIELD_PREP(STRTAB_STE_2_VTCR_S2OR0, oc) |
+			FIELD_PREP(STRTAB_STE_2_VTCR_S2IR0, ic) |
+			FIELD_PREP(STRTAB_STE_2_VTCR_S2SL0, sl) |
+			FIELD_PREP(STRTAB_STE_2_VTCR_S2T0SZ, ts)) |
+		 FIELD_PREP(STRTAB_STE_2_S2VMID, domain_id) |
+		 STRTAB_STE_2_S2AA64;
+	ent[3] = hyp_virt_to_phys(domain->pgd) & STRTAB_STE_3_S2TTB_MASK;
+
+	/*
+	 * The SMMU may cache a disabled STE.
+	 * Initialize all fields, sync, then enable it.
+	 */
+	for (i = 1; i < STRTAB_STE_DWORDS; i++)
+		dst[i] = cpu_to_le64(ent[i]);
+
+	ret = smmu_sync_ste(smmu, sid);
+	if (ret)
+		return ret;
+
+	WRITE_ONCE(dst[0], cpu_to_le64(ent[0]));
+	ret = smmu_sync_ste(smmu, sid);
+	if (ret)
+		dst[0] = 0;
+
+	return ret;
+}
+
+static int smmu_detach_dev(struct kvm_hyp_iommu *iommu, pkvm_handle_t domain_id,
+			   struct kvm_hyp_iommu_domain *domain, u32 sid)
+{
+	u64 ttb;
+	u64 *dst;
+	int i, ret;
+	struct hyp_arm_smmu_v3_device *smmu = to_smmu(iommu);
+
+	dst = smmu_get_ste_ptr(smmu, sid);
+	if (!dst)
+		return -ENODEV;
+
+	ttb = dst[3] & STRTAB_STE_3_S2TTB_MASK;
+
+	dst[0] = 0;
+	ret = smmu_sync_ste(smmu, sid);
+	if (ret)
+		return ret;
+
+	for (i = 1; i < STRTAB_STE_DWORDS; i++)
+		dst[i] = 0;
+
+	return smmu_sync_ste(smmu, sid);
+}
+
 static struct kvm_iommu_ops smmu_ops = {
 	.init			= smmu_init,
+	.get_iommu_by_id	= smmu_id_to_iommu,
+	.alloc_iopt		= kvm_arm_io_pgtable_alloc,
+	.free_iopt		= kvm_arm_io_pgtable_free,
+	.attach_dev		= smmu_attach_dev,
+	.detach_dev		= smmu_detach_dev,
 };

 int kvm_arm_smmu_v3_register(void)

From patchwork Wed Feb 1 12:53:12 2023
From: Jean-Philippe Brucker
To: maz@kernel.org, catalin.marinas@arm.com, will@kernel.org, joro@8bytes.org
Cc: robin.murphy@arm.com, james.morse@arm.com, suzuki.poulose@arm.com, oliver.upton@linux.dev, yuzenghui@huawei.com, smostafa@google.com, dbrazdil@google.com, ryan.roberts@arm.com, linux-arm-kernel@lists.infradead.org, kvmarm@lists.linux.dev, iommu@lists.linux.dev
Subject: [RFC PATCH 28/45] iommu/arm-smmu-v3: Extract driver-specific bits from probe function
Date: Wed, 1 Feb 2023 12:53:12 +0000
Message-Id: <20230201125328.2186498-29-jean-philippe@linaro.org>
In-Reply-To: <20230201125328.2186498-1-jean-philippe@linaro.org>
References: <20230201125328.2186498-1-jean-philippe@linaro.org>

As we're about to share the arm_smmu_device_hw_probe() function with the KVM driver, extract two bits that are specific to the normal driver.
Signed-off-by: Jean-Philippe Brucker
---
 drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c | 15 +++++++++------
 1 file changed, 9 insertions(+), 6 deletions(-)

diff --git a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c
index 97d24ee5c14d..bcbd691ca96a 100644
--- a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c
+++ b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c
@@ -3454,7 +3454,7 @@ static int arm_smmu_device_hw_probe(struct arm_smmu_device *smmu)

 	if (reg & IDR0_MSI) {
 		smmu->features |= ARM_SMMU_FEAT_MSI;
-		if (coherent && !disable_msipolling)
+		if (coherent)
 			smmu->options |= ARM_SMMU_OPT_MSIPOLL;
 	}

@@ -3598,11 +3598,6 @@ static int arm_smmu_device_hw_probe(struct arm_smmu_device *smmu)
 		smmu->oas = 48;
 	}

-	if (arm_smmu_ops.pgsize_bitmap == -1UL)
-		arm_smmu_ops.pgsize_bitmap = smmu->pgsize_bitmap;
-	else
-		arm_smmu_ops.pgsize_bitmap |= smmu->pgsize_bitmap;
-
 	/* Set the DMA mask for our table walker */
 	if (dma_set_mask_and_coherent(smmu->dev, DMA_BIT_MASK(smmu->oas)))
 		dev_warn(smmu->dev,
@@ -3803,6 +3798,14 @@ static int arm_smmu_device_probe(struct platform_device *pdev)
 	if (ret)
 		return ret;

+	if (disable_msipolling)
+		smmu->options &= ~ARM_SMMU_OPT_MSIPOLL;
+
+	if (arm_smmu_ops.pgsize_bitmap == -1UL)
+		arm_smmu_ops.pgsize_bitmap = smmu->pgsize_bitmap;
+	else
+		arm_smmu_ops.pgsize_bitmap |= smmu->pgsize_bitmap;
+
 	/* Initialise in-memory data structures */
 	ret = arm_smmu_init_structures(smmu);
 	if (ret)

From patchwork Wed Feb 1 12:53:13 2023
From: Jean-Philippe Brucker
To: maz@kernel.org, catalin.marinas@arm.com, will@kernel.org, joro@8bytes.org
Cc: robin.murphy@arm.com, james.morse@arm.com, suzuki.poulose@arm.com, oliver.upton@linux.dev, yuzenghui@huawei.com, smostafa@google.com, dbrazdil@google.com, ryan.roberts@arm.com, linux-arm-kernel@lists.infradead.org, kvmarm@lists.linux.dev, iommu@lists.linux.dev
Subject: [RFC PATCH 29/45] iommu/arm-smmu-v3: Move some functions to arm-smmu-v3-common.c
Date: Wed, 1 Feb 2023 12:53:13 +0000
Message-Id: <20230201125328.2186498-30-jean-philippe@linaro.org>
In-Reply-To: <20230201125328.2186498-1-jean-philippe@linaro.org>
References: <20230201125328.2186498-1-jean-philippe@linaro.org>

Move functions that can be shared between the normal and KVM drivers to arm-smmu-v3-common.c. Only straightforward moves here; more subtle factoring will be done in the next patches.
Signed-off-by: Jean-Philippe Brucker
---
 drivers/iommu/arm/arm-smmu-v3/Makefile        |   1 +
 drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.h   |   9 +
 .../arm/arm-smmu-v3/arm-smmu-v3-common.c      | 296 ++++++++++++++++++
 drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c   | 293 -----------------
 4 files changed, 306 insertions(+), 293 deletions(-)
 create mode 100644 drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3-common.c

diff --git a/drivers/iommu/arm/arm-smmu-v3/Makefile b/drivers/iommu/arm/arm-smmu-v3/Makefile
index 54feb1ecccad..c4fcc796213c 100644
--- a/drivers/iommu/arm/arm-smmu-v3/Makefile
+++ b/drivers/iommu/arm/arm-smmu-v3/Makefile
@@ -1,5 +1,6 @@
 # SPDX-License-Identifier: GPL-2.0
 obj-$(CONFIG_ARM_SMMU_V3) += arm_smmu_v3.o
 arm_smmu_v3-objs-y += arm-smmu-v3.o
+arm_smmu_v3-objs-y += arm-smmu-v3-common.o
 arm_smmu_v3-objs-$(CONFIG_ARM_SMMU_V3_SVA) += arm-smmu-v3-sva.o
 arm_smmu_v3-objs := $(arm_smmu_v3-objs-y)

diff --git a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.h b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.h
index 32ce835ab4eb..59e8101d4ff5 100644
--- a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.h
+++ b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.h
@@ -269,6 +269,15 @@ extern struct xarray arm_smmu_asid_xa;
 extern struct mutex arm_smmu_asid_lock;
 extern struct arm_smmu_ctx_desc quiet_cd;

+int arm_smmu_write_reg_sync(struct arm_smmu_device *smmu, u32 val,
+			    unsigned int reg_off, unsigned int ack_off);
+int arm_smmu_update_gbpa(struct arm_smmu_device *smmu, u32 set, u32 clr);
+int arm_smmu_device_disable(struct arm_smmu_device *smmu);
+bool arm_smmu_capable(struct device *dev, enum iommu_cap cap);
+struct iommu_group *arm_smmu_device_group(struct device *dev);
+int arm_smmu_of_xlate(struct device *dev, struct of_phandle_args *args);
+int arm_smmu_device_hw_probe(struct arm_smmu_device *smmu);
+
 int arm_smmu_write_ctx_desc(struct arm_smmu_domain *smmu_domain, int ssid,
			    struct arm_smmu_ctx_desc *cd);
 void arm_smmu_tlb_inv_asid(struct arm_smmu_device *smmu, u16 asid);

diff --git a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3-common.c b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3-common.c
new file mode 100644
index 000000000000..5e43329c0826
--- /dev/null
+++ b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3-common.c
@@ -0,0 +1,296 @@
+// SPDX-License-Identifier: GPL-2.0
+#include
+#include
+#include
+
+#include "arm-smmu-v3.h"
+
+int arm_smmu_device_hw_probe(struct arm_smmu_device *smmu)
+{
+	u32 reg;
+	bool coherent = smmu->features & ARM_SMMU_FEAT_COHERENCY;
+
+	/* IDR0 */
+	reg = readl_relaxed(smmu->base + ARM_SMMU_IDR0);
+
+	/* 2-level structures */
+	if (FIELD_GET(IDR0_ST_LVL, reg) == IDR0_ST_LVL_2LVL)
+		smmu->features |= ARM_SMMU_FEAT_2_LVL_STRTAB;
+
+	if (reg & IDR0_CD2L)
+		smmu->features |= ARM_SMMU_FEAT_2_LVL_CDTAB;
+
+	/*
+	 * Translation table endianness.
+	 * We currently require the same endianness as the CPU, but this
+	 * could be changed later by adding a new IO_PGTABLE_QUIRK.
+	 */
+	switch (FIELD_GET(IDR0_TTENDIAN, reg)) {
+	case IDR0_TTENDIAN_MIXED:
+		smmu->features |= ARM_SMMU_FEAT_TT_LE | ARM_SMMU_FEAT_TT_BE;
+		break;
+#ifdef __BIG_ENDIAN
+	case IDR0_TTENDIAN_BE:
+		smmu->features |= ARM_SMMU_FEAT_TT_BE;
+		break;
+#else
+	case IDR0_TTENDIAN_LE:
+		smmu->features |= ARM_SMMU_FEAT_TT_LE;
+		break;
+#endif
+	default:
+		dev_err(smmu->dev, "unknown/unsupported TT endianness!\n");
+		return -ENXIO;
+	}
+
+	/* Boolean feature flags */
+	if (IS_ENABLED(CONFIG_PCI_PRI) && reg & IDR0_PRI)
+		smmu->features |= ARM_SMMU_FEAT_PRI;
+
+	if (IS_ENABLED(CONFIG_PCI_ATS) && reg & IDR0_ATS)
+		smmu->features |= ARM_SMMU_FEAT_ATS;
+
+	if (reg & IDR0_SEV)
+		smmu->features |= ARM_SMMU_FEAT_SEV;
+
+	if (reg & IDR0_MSI) {
+		smmu->features |= ARM_SMMU_FEAT_MSI;
+		if (coherent)
+			smmu->options |= ARM_SMMU_OPT_MSIPOLL;
+	}
+
+	if (reg & IDR0_HYP) {
+		smmu->features |= ARM_SMMU_FEAT_HYP;
+		if (cpus_have_cap(ARM64_HAS_VIRT_HOST_EXTN))
+			smmu->features |= ARM_SMMU_FEAT_E2H;
+	}
+
+	/*
+	 * The coherency feature as set by FW is used in preference to the ID
+	 * register, but warn on mismatch.
+	 */
+	if (!!(reg & IDR0_COHACC) != coherent)
+		dev_warn(smmu->dev, "IDR0.COHACC overridden by FW configuration (%s)\n",
+			 coherent ? "true" : "false");
+
+	switch (FIELD_GET(IDR0_STALL_MODEL, reg)) {
+	case IDR0_STALL_MODEL_FORCE:
+		smmu->features |= ARM_SMMU_FEAT_STALL_FORCE;
+		fallthrough;
+	case IDR0_STALL_MODEL_STALL:
+		smmu->features |= ARM_SMMU_FEAT_STALLS;
+	}
+
+	if (reg & IDR0_S1P)
+		smmu->features |= ARM_SMMU_FEAT_TRANS_S1;
+
+	if (reg & IDR0_S2P)
+		smmu->features |= ARM_SMMU_FEAT_TRANS_S2;
+
+	if (!(reg & (IDR0_S1P | IDR0_S2P))) {
+		dev_err(smmu->dev, "no translation support!\n");
+		return -ENXIO;
+	}
+
+	/* We only support the AArch64 table format at present */
+	switch (FIELD_GET(IDR0_TTF, reg)) {
+	case IDR0_TTF_AARCH32_64:
+		smmu->ias = 40;
+		fallthrough;
+	case IDR0_TTF_AARCH64:
+		break;
+	default:
+		dev_err(smmu->dev, "AArch64 table format not supported!\n");
+		return -ENXIO;
+	}
+
+	/* ASID/VMID sizes */
+	smmu->asid_bits = reg & IDR0_ASID16 ? 16 : 8;
+	smmu->vmid_bits = reg & IDR0_VMID16 ? 16 : 8;
+
+	/* IDR1 */
+	reg = readl_relaxed(smmu->base + ARM_SMMU_IDR1);
+	if (reg & (IDR1_TABLES_PRESET | IDR1_QUEUES_PRESET | IDR1_REL)) {
+		dev_err(smmu->dev, "embedded implementation not supported\n");
+		return -ENXIO;
+	}
+
+	/* Queue sizes, capped to ensure natural alignment */
+	smmu->cmdq.q.llq.max_n_shift = min_t(u32, CMDQ_MAX_SZ_SHIFT,
+					     FIELD_GET(IDR1_CMDQS, reg));
+	if (smmu->cmdq.q.llq.max_n_shift <= ilog2(CMDQ_BATCH_ENTRIES)) {
+		/*
+		 * We don't support splitting up batches, so one batch of
+		 * commands plus an extra sync needs to fit inside the command
+		 * queue. There's also no way we can handle the weird alignment
+		 * restrictions on the base pointer for a unit-length queue.
+		 */
+		dev_err(smmu->dev, "command queue size <= %d entries not supported\n",
+			CMDQ_BATCH_ENTRIES);
+		return -ENXIO;
+	}
+
+	smmu->evtq.q.llq.max_n_shift = min_t(u32, EVTQ_MAX_SZ_SHIFT,
+					     FIELD_GET(IDR1_EVTQS, reg));
+	smmu->priq.q.llq.max_n_shift = min_t(u32, PRIQ_MAX_SZ_SHIFT,
+					     FIELD_GET(IDR1_PRIQS, reg));
+
+	/* SID/SSID sizes */
+	smmu->ssid_bits = FIELD_GET(IDR1_SSIDSIZE, reg);
+	smmu->sid_bits = FIELD_GET(IDR1_SIDSIZE, reg);
+	smmu->iommu.max_pasids = 1UL << smmu->ssid_bits;
+
+	/*
+	 * If the SMMU supports fewer bits than would fill a single L2 stream
+	 * table, use a linear table instead.
+	 */
+	if (smmu->sid_bits <= STRTAB_SPLIT)
+		smmu->features &= ~ARM_SMMU_FEAT_2_LVL_STRTAB;
+
+	/* IDR3 */
+	reg = readl_relaxed(smmu->base + ARM_SMMU_IDR3);
+	if (FIELD_GET(IDR3_RIL, reg))
+		smmu->features |= ARM_SMMU_FEAT_RANGE_INV;
+
+	/* IDR5 */
+	reg = readl_relaxed(smmu->base + ARM_SMMU_IDR5);
+
+	/* Maximum number of outstanding stalls */
+	smmu->evtq.max_stalls = FIELD_GET(IDR5_STALL_MAX, reg);
+
+	/* Page sizes */
+	if (reg & IDR5_GRAN64K)
+		smmu->pgsize_bitmap |= SZ_64K | SZ_512M;
+	if (reg & IDR5_GRAN16K)
+		smmu->pgsize_bitmap |= SZ_16K | SZ_32M;
+	if (reg & IDR5_GRAN4K)
+		smmu->pgsize_bitmap |= SZ_4K | SZ_2M | SZ_1G;
+
+	/* Input address size */
+	if (FIELD_GET(IDR5_VAX, reg) == IDR5_VAX_52_BIT)
+		smmu->features |= ARM_SMMU_FEAT_VAX;
+
+	/* Output address size */
+	switch (FIELD_GET(IDR5_OAS, reg)) {
+	case IDR5_OAS_32_BIT:
+		smmu->oas = 32;
+		break;
+	case IDR5_OAS_36_BIT:
+		smmu->oas = 36;
+		break;
+	case IDR5_OAS_40_BIT:
+		smmu->oas = 40;
+		break;
+	case IDR5_OAS_42_BIT:
+		smmu->oas = 42;
+		break;
+	case IDR5_OAS_44_BIT:
+		smmu->oas = 44;
+		break;
+	case IDR5_OAS_52_BIT:
+		smmu->oas = 52;
+		smmu->pgsize_bitmap |= 1ULL << 42; /* 4TB */
+		break;
+	default:
+		dev_info(smmu->dev,
+			 "unknown output address size.
Truncating to 48-bit\n"); + fallthrough; + case IDR5_OAS_48_BIT: + smmu->oas = 48; + } + + /* Set the DMA mask for our table walker */ + if (dma_set_mask_and_coherent(smmu->dev, DMA_BIT_MASK(smmu->oas))) + dev_warn(smmu->dev, + "failed to set DMA mask for table walker\n"); + + smmu->ias = max(smmu->ias, smmu->oas); + + if (arm_smmu_sva_supported(smmu)) + smmu->features |= ARM_SMMU_FEAT_SVA; + + dev_info(smmu->dev, "ias %lu-bit, oas %lu-bit (features 0x%08x)\n", + smmu->ias, smmu->oas, smmu->features); + return 0; +} + +int arm_smmu_write_reg_sync(struct arm_smmu_device *smmu, u32 val, + unsigned int reg_off, unsigned int ack_off) +{ + u32 reg; + + writel_relaxed(val, smmu->base + reg_off); + return readl_relaxed_poll_timeout(smmu->base + ack_off, reg, reg == val, + 1, ARM_SMMU_POLL_TIMEOUT_US); +} + +/* GBPA is "special" */ +int arm_smmu_update_gbpa(struct arm_smmu_device *smmu, u32 set, u32 clr) +{ + int ret; + u32 reg, __iomem *gbpa = smmu->base + ARM_SMMU_GBPA; + + ret = readl_relaxed_poll_timeout(gbpa, reg, !(reg & GBPA_UPDATE), + 1, ARM_SMMU_POLL_TIMEOUT_US); + if (ret) + return ret; + + reg &= ~clr; + reg |= set; + writel_relaxed(reg | GBPA_UPDATE, gbpa); + ret = readl_relaxed_poll_timeout(gbpa, reg, !(reg & GBPA_UPDATE), + 1, ARM_SMMU_POLL_TIMEOUT_US); + + if (ret) + dev_err(smmu->dev, "GBPA not responding to update\n"); + return ret; +} + +int arm_smmu_device_disable(struct arm_smmu_device *smmu) +{ + int ret; + + ret = arm_smmu_write_reg_sync(smmu, 0, ARM_SMMU_CR0, ARM_SMMU_CR0ACK); + if (ret) + dev_err(smmu->dev, "failed to clear cr0\n"); + + return ret; +} + +bool arm_smmu_capable(struct device *dev, enum iommu_cap cap) +{ + struct arm_smmu_master *master = dev_iommu_priv_get(dev); + + switch (cap) { + case IOMMU_CAP_CACHE_COHERENCY: + /* Assume that a coherent TCU implies coherent TBUs */ + return master->smmu->features & ARM_SMMU_FEAT_COHERENCY; + case IOMMU_CAP_NOEXEC: + return true; + default: + return false; + } +} + + +struct iommu_group 
*arm_smmu_device_group(struct device *dev) +{ + struct iommu_group *group; + + /* + * We don't support devices sharing stream IDs other than PCI RID + * aliases, since the necessary ID-to-device lookup becomes rather + * impractical given a potential sparse 32-bit stream ID space. + */ + if (dev_is_pci(dev)) + group = pci_device_group(dev); + else + group = generic_device_group(dev); + + return group; +} + +int arm_smmu_of_xlate(struct device *dev, struct of_phandle_args *args) +{ + return iommu_fwspec_add_ids(dev, args->args, 1); +} diff --git a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c index bcbd691ca96a..08fd79f66d29 100644 --- a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c +++ b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c @@ -17,13 +17,11 @@ #include #include #include -#include #include #include #include #include #include -#include #include #include @@ -1642,8 +1640,6 @@ static irqreturn_t arm_smmu_priq_thread(int irq, void *dev) return IRQ_HANDLED; } -static int arm_smmu_device_disable(struct arm_smmu_device *smmu); - static irqreturn_t arm_smmu_gerror_handler(int irq, void *dev) { u32 gerror, gerrorn, active; @@ -1990,21 +1986,6 @@ static const struct iommu_flush_ops arm_smmu_flush_ops = { }; /* IOMMU API */ -static bool arm_smmu_capable(struct device *dev, enum iommu_cap cap) -{ - struct arm_smmu_master *master = dev_iommu_priv_get(dev); - - switch (cap) { - case IOMMU_CAP_CACHE_COHERENCY: - /* Assume that a coherent TCU implies coherent TBUs */ - return master->smmu->features & ARM_SMMU_FEAT_COHERENCY; - case IOMMU_CAP_NOEXEC: - return true; - default: - return false; - } -} - static struct iommu_domain *arm_smmu_domain_alloc(unsigned type) { struct arm_smmu_domain *smmu_domain; @@ -2698,23 +2679,6 @@ static void arm_smmu_release_device(struct device *dev) kfree(master); } -static struct iommu_group *arm_smmu_device_group(struct device *dev) -{ - struct iommu_group *group; - - /* - * We don't support devices 
sharing stream IDs other than PCI RID - * aliases, since the necessary ID-to-device lookup becomes rather - * impractical given a potential sparse 32-bit stream ID space. - */ - if (dev_is_pci(dev)) - group = pci_device_group(dev); - else - group = generic_device_group(dev); - - return group; -} - static int arm_smmu_enable_nesting(struct iommu_domain *domain) { struct arm_smmu_domain *smmu_domain = to_smmu_domain(domain); @@ -2730,11 +2694,6 @@ static int arm_smmu_enable_nesting(struct iommu_domain *domain) return ret; } -static int arm_smmu_of_xlate(struct device *dev, struct of_phandle_args *args) -{ - return iommu_fwspec_add_ids(dev, args->args, 1); -} - static void arm_smmu_get_resv_regions(struct device *dev, struct list_head *head) { @@ -3081,38 +3040,6 @@ static int arm_smmu_init_structures(struct arm_smmu_device *smmu) return arm_smmu_init_strtab(smmu); } -static int arm_smmu_write_reg_sync(struct arm_smmu_device *smmu, u32 val, - unsigned int reg_off, unsigned int ack_off) -{ - u32 reg; - - writel_relaxed(val, smmu->base + reg_off); - return readl_relaxed_poll_timeout(smmu->base + ack_off, reg, reg == val, - 1, ARM_SMMU_POLL_TIMEOUT_US); -} - -/* GBPA is "special" */ -static int arm_smmu_update_gbpa(struct arm_smmu_device *smmu, u32 set, u32 clr) -{ - int ret; - u32 reg, __iomem *gbpa = smmu->base + ARM_SMMU_GBPA; - - ret = readl_relaxed_poll_timeout(gbpa, reg, !(reg & GBPA_UPDATE), - 1, ARM_SMMU_POLL_TIMEOUT_US); - if (ret) - return ret; - - reg &= ~clr; - reg |= set; - writel_relaxed(reg | GBPA_UPDATE, gbpa); - ret = readl_relaxed_poll_timeout(gbpa, reg, !(reg & GBPA_UPDATE), - 1, ARM_SMMU_POLL_TIMEOUT_US); - - if (ret) - dev_err(smmu->dev, "GBPA not responding to update\n"); - return ret; -} - static void arm_smmu_free_msis(void *data) { struct device *dev = data; @@ -3258,17 +3185,6 @@ static int arm_smmu_setup_irqs(struct arm_smmu_device *smmu) return 0; } -static int arm_smmu_device_disable(struct arm_smmu_device *smmu) -{ - int ret; - - ret = 
arm_smmu_write_reg_sync(smmu, 0, ARM_SMMU_CR0, ARM_SMMU_CR0ACK); - if (ret) - dev_err(smmu->dev, "failed to clear cr0\n"); - - return ret; -} - static int arm_smmu_device_reset(struct arm_smmu_device *smmu, bool bypass) { int ret; @@ -3404,215 +3320,6 @@ static int arm_smmu_device_reset(struct arm_smmu_device *smmu, bool bypass) return 0; } -static int arm_smmu_device_hw_probe(struct arm_smmu_device *smmu) -{ - u32 reg; - bool coherent = smmu->features & ARM_SMMU_FEAT_COHERENCY; - - /* IDR0 */ - reg = readl_relaxed(smmu->base + ARM_SMMU_IDR0); - - /* 2-level structures */ - if (FIELD_GET(IDR0_ST_LVL, reg) == IDR0_ST_LVL_2LVL) - smmu->features |= ARM_SMMU_FEAT_2_LVL_STRTAB; - - if (reg & IDR0_CD2L) - smmu->features |= ARM_SMMU_FEAT_2_LVL_CDTAB; - - /* - * Translation table endianness. - * We currently require the same endianness as the CPU, but this - * could be changed later by adding a new IO_PGTABLE_QUIRK. - */ - switch (FIELD_GET(IDR0_TTENDIAN, reg)) { - case IDR0_TTENDIAN_MIXED: - smmu->features |= ARM_SMMU_FEAT_TT_LE | ARM_SMMU_FEAT_TT_BE; - break; -#ifdef __BIG_ENDIAN - case IDR0_TTENDIAN_BE: - smmu->features |= ARM_SMMU_FEAT_TT_BE; - break; -#else - case IDR0_TTENDIAN_LE: - smmu->features |= ARM_SMMU_FEAT_TT_LE; - break; -#endif - default: - dev_err(smmu->dev, "unknown/unsupported TT endianness!\n"); - return -ENXIO; - } - - /* Boolean feature flags */ - if (IS_ENABLED(CONFIG_PCI_PRI) && reg & IDR0_PRI) - smmu->features |= ARM_SMMU_FEAT_PRI; - - if (IS_ENABLED(CONFIG_PCI_ATS) && reg & IDR0_ATS) - smmu->features |= ARM_SMMU_FEAT_ATS; - - if (reg & IDR0_SEV) - smmu->features |= ARM_SMMU_FEAT_SEV; - - if (reg & IDR0_MSI) { - smmu->features |= ARM_SMMU_FEAT_MSI; - if (coherent) - smmu->options |= ARM_SMMU_OPT_MSIPOLL; - } - - if (reg & IDR0_HYP) { - smmu->features |= ARM_SMMU_FEAT_HYP; - if (cpus_have_cap(ARM64_HAS_VIRT_HOST_EXTN)) - smmu->features |= ARM_SMMU_FEAT_E2H; - } - - /* - * The coherency feature as set by FW is used in preference to the ID - * 
register, but warn on mismatch. - */ - if (!!(reg & IDR0_COHACC) != coherent) - dev_warn(smmu->dev, "IDR0.COHACC overridden by FW configuration (%s)\n", - coherent ? "true" : "false"); - - switch (FIELD_GET(IDR0_STALL_MODEL, reg)) { - case IDR0_STALL_MODEL_FORCE: - smmu->features |= ARM_SMMU_FEAT_STALL_FORCE; - fallthrough; - case IDR0_STALL_MODEL_STALL: - smmu->features |= ARM_SMMU_FEAT_STALLS; - } - - if (reg & IDR0_S1P) - smmu->features |= ARM_SMMU_FEAT_TRANS_S1; - - if (reg & IDR0_S2P) - smmu->features |= ARM_SMMU_FEAT_TRANS_S2; - - if (!(reg & (IDR0_S1P | IDR0_S2P))) { - dev_err(smmu->dev, "no translation support!\n"); - return -ENXIO; - } - - /* We only support the AArch64 table format at present */ - switch (FIELD_GET(IDR0_TTF, reg)) { - case IDR0_TTF_AARCH32_64: - smmu->ias = 40; - fallthrough; - case IDR0_TTF_AARCH64: - break; - default: - dev_err(smmu->dev, "AArch64 table format not supported!\n"); - return -ENXIO; - } - - /* ASID/VMID sizes */ - smmu->asid_bits = reg & IDR0_ASID16 ? 16 : 8; - smmu->vmid_bits = reg & IDR0_VMID16 ? 16 : 8; - - /* IDR1 */ - reg = readl_relaxed(smmu->base + ARM_SMMU_IDR1); - if (reg & (IDR1_TABLES_PRESET | IDR1_QUEUES_PRESET | IDR1_REL)) { - dev_err(smmu->dev, "embedded implementation not supported\n"); - return -ENXIO; - } - - /* Queue sizes, capped to ensure natural alignment */ - smmu->cmdq.q.llq.max_n_shift = min_t(u32, CMDQ_MAX_SZ_SHIFT, - FIELD_GET(IDR1_CMDQS, reg)); - if (smmu->cmdq.q.llq.max_n_shift <= ilog2(CMDQ_BATCH_ENTRIES)) { - /* - * We don't support splitting up batches, so one batch of - * commands plus an extra sync needs to fit inside the command - * queue. There's also no way we can handle the weird alignment - * restrictions on the base pointer for a unit-length queue. 
- */ - dev_err(smmu->dev, "command queue size <= %d entries not supported\n", - CMDQ_BATCH_ENTRIES); - return -ENXIO; - } - - smmu->evtq.q.llq.max_n_shift = min_t(u32, EVTQ_MAX_SZ_SHIFT, - FIELD_GET(IDR1_EVTQS, reg)); - smmu->priq.q.llq.max_n_shift = min_t(u32, PRIQ_MAX_SZ_SHIFT, - FIELD_GET(IDR1_PRIQS, reg)); - - /* SID/SSID sizes */ - smmu->ssid_bits = FIELD_GET(IDR1_SSIDSIZE, reg); - smmu->sid_bits = FIELD_GET(IDR1_SIDSIZE, reg); - smmu->iommu.max_pasids = 1UL << smmu->ssid_bits; - - /* - * If the SMMU supports fewer bits than would fill a single L2 stream - * table, use a linear table instead. - */ - if (smmu->sid_bits <= STRTAB_SPLIT) - smmu->features &= ~ARM_SMMU_FEAT_2_LVL_STRTAB; - - /* IDR3 */ - reg = readl_relaxed(smmu->base + ARM_SMMU_IDR3); - if (FIELD_GET(IDR3_RIL, reg)) - smmu->features |= ARM_SMMU_FEAT_RANGE_INV; - - /* IDR5 */ - reg = readl_relaxed(smmu->base + ARM_SMMU_IDR5); - - /* Maximum number of outstanding stalls */ - smmu->evtq.max_stalls = FIELD_GET(IDR5_STALL_MAX, reg); - - /* Page sizes */ - if (reg & IDR5_GRAN64K) - smmu->pgsize_bitmap |= SZ_64K | SZ_512M; - if (reg & IDR5_GRAN16K) - smmu->pgsize_bitmap |= SZ_16K | SZ_32M; - if (reg & IDR5_GRAN4K) - smmu->pgsize_bitmap |= SZ_4K | SZ_2M | SZ_1G; - - /* Input address size */ - if (FIELD_GET(IDR5_VAX, reg) == IDR5_VAX_52_BIT) - smmu->features |= ARM_SMMU_FEAT_VAX; - - /* Output address size */ - switch (FIELD_GET(IDR5_OAS, reg)) { - case IDR5_OAS_32_BIT: - smmu->oas = 32; - break; - case IDR5_OAS_36_BIT: - smmu->oas = 36; - break; - case IDR5_OAS_40_BIT: - smmu->oas = 40; - break; - case IDR5_OAS_42_BIT: - smmu->oas = 42; - break; - case IDR5_OAS_44_BIT: - smmu->oas = 44; - break; - case IDR5_OAS_52_BIT: - smmu->oas = 52; - smmu->pgsize_bitmap |= 1ULL << 42; /* 4TB */ - break; - default: - dev_info(smmu->dev, - "unknown output address size. 
Truncating to 48-bit\n"); - fallthrough; - case IDR5_OAS_48_BIT: - smmu->oas = 48; - } - - /* Set the DMA mask for our table walker */ - if (dma_set_mask_and_coherent(smmu->dev, DMA_BIT_MASK(smmu->oas))) - dev_warn(smmu->dev, - "failed to set DMA mask for table walker\n"); - - smmu->ias = max(smmu->ias, smmu->oas); - - if (arm_smmu_sva_supported(smmu)) - smmu->features |= ARM_SMMU_FEAT_SVA; - - dev_info(smmu->dev, "ias %lu-bit, oas %lu-bit (features 0x%08x)\n", - smmu->ias, smmu->oas, smmu->features); - return 0; -} - #ifdef CONFIG_ACPI static void acpi_smmu_get_options(u32 model, struct arm_smmu_device *smmu) { From patchwork Wed Feb 1 12:53:14 2023 X-Patchwork-Submitter: Jean-Philippe Brucker X-Patchwork-Id: 13124396
From: Jean-Philippe Brucker To: maz@kernel.org, catalin.marinas@arm.com, will@kernel.org, joro@8bytes.org Cc: robin.murphy@arm.com, james.morse@arm.com, suzuki.poulose@arm.com, oliver.upton@linux.dev, yuzenghui@huawei.com, smostafa@google.com, dbrazdil@google.com, ryan.roberts@arm.com, linux-arm-kernel@lists.infradead.org, kvmarm@lists.linux.dev, iommu@lists.linux.dev, Jean-Philippe Brucker Subject: [RFC PATCH 30/45] iommu/arm-smmu-v3: Move queue and table allocation to arm-smmu-v3-common.c Date: Wed, 1 Feb 2023 12:53:14 +0000 Message-Id: <20230201125328.2186498-31-jean-philippe@linaro.org> In-Reply-To: <20230201125328.2186498-1-jean-philippe@linaro.org> References: <20230201125328.2186498-1-jean-philippe@linaro.org> Move more code to arm-smmu-v3-common.c, so that the KVM driver can reuse it.
Signed-off-by: Jean-Philippe Brucker --- drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.h | 8 + .../arm/arm-smmu-v3/arm-smmu-v3-common.c | 190 ++++++++++++++++ drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c | 215 ++---------------- 3 files changed, 219 insertions(+), 194 deletions(-) diff --git a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.h b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.h index 59e8101d4ff5..8ab84282f62a 100644 --- a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.h +++ b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.h @@ -277,6 +277,14 @@ bool arm_smmu_capable(struct device *dev, enum iommu_cap cap); struct iommu_group *arm_smmu_device_group(struct device *dev); int arm_smmu_of_xlate(struct device *dev, struct of_phandle_args *args); int arm_smmu_device_hw_probe(struct arm_smmu_device *smmu); +int arm_smmu_init_one_queue(struct arm_smmu_device *smmu, + struct arm_smmu_queue *q, + void __iomem *page, + unsigned long prod_off, + unsigned long cons_off, + size_t dwords, const char *name); +int arm_smmu_init_l2_strtab(struct arm_smmu_device *smmu, u32 sid); +int arm_smmu_init_strtab(struct arm_smmu_device *smmu); int arm_smmu_write_ctx_desc(struct arm_smmu_domain *smmu_domain, int ssid, struct arm_smmu_ctx_desc *cd); diff --git a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3-common.c b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3-common.c index 5e43329c0826..9226971b6e53 100644 --- a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3-common.c +++ b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3-common.c @@ -294,3 +294,193 @@ int arm_smmu_of_xlate(struct device *dev, struct of_phandle_args *args) { return iommu_fwspec_add_ids(dev, args->args, 1); } + +int arm_smmu_init_one_queue(struct arm_smmu_device *smmu, + struct arm_smmu_queue *q, + void __iomem *page, + unsigned long prod_off, + unsigned long cons_off, + size_t dwords, const char *name) +{ + size_t qsz; + + do { + qsz = ((1 << q->llq.max_n_shift) * dwords) << 3; + q->base = dmam_alloc_coherent(smmu->dev, qsz, &q->base_dma, + GFP_KERNEL); 
+ if (q->base || qsz < PAGE_SIZE) + break; + + q->llq.max_n_shift--; + } while (1); + + if (!q->base) { + dev_err(smmu->dev, + "failed to allocate queue (0x%zx bytes) for %s\n", + qsz, name); + return -ENOMEM; + } + + if (!WARN_ON(q->base_dma & (qsz - 1))) { + dev_info(smmu->dev, "allocated %u entries for %s\n", + 1 << q->llq.max_n_shift, name); + } + + q->prod_reg = page + prod_off; + q->cons_reg = page + cons_off; + q->ent_dwords = dwords; + + q->q_base = Q_BASE_RWA; + q->q_base |= q->base_dma & Q_BASE_ADDR_MASK; + q->q_base |= FIELD_PREP(Q_BASE_LOG2SIZE, q->llq.max_n_shift); + + q->llq.prod = q->llq.cons = 0; + return 0; +} + +/* Stream table initialization functions */ +static void +arm_smmu_write_strtab_l1_desc(__le64 *dst, struct arm_smmu_strtab_l1_desc *desc) +{ + u64 val = 0; + + val |= FIELD_PREP(STRTAB_L1_DESC_SPAN, desc->span); + val |= desc->l2ptr_dma & STRTAB_L1_DESC_L2PTR_MASK; + + /* Ensure the SMMU sees a zeroed table after reading this pointer */ + WRITE_ONCE(*dst, cpu_to_le64(val)); +} + +int arm_smmu_init_l2_strtab(struct arm_smmu_device *smmu, u32 sid) +{ + size_t size; + void *strtab; + struct arm_smmu_strtab_cfg *cfg = &smmu->strtab_cfg; + struct arm_smmu_strtab_l1_desc *desc = &cfg->l1_desc[sid >> STRTAB_SPLIT]; + + if (desc->l2ptr) + return 0; + + size = 1 << (STRTAB_SPLIT + ilog2(STRTAB_STE_DWORDS) + 3); + strtab = &cfg->strtab[(sid >> STRTAB_SPLIT) * STRTAB_L1_DESC_DWORDS]; + + desc->span = STRTAB_SPLIT + 1; + desc->l2ptr = dmam_alloc_coherent(smmu->dev, size, &desc->l2ptr_dma, + GFP_KERNEL); + if (!desc->l2ptr) { + dev_err(smmu->dev, + "failed to allocate l2 stream table for SID %u\n", + sid); + return -ENOMEM; + } + + arm_smmu_write_strtab_l1_desc(strtab, desc); + return 0; +} + +static int arm_smmu_init_l1_strtab(struct arm_smmu_device *smmu) +{ + unsigned int i; + struct arm_smmu_strtab_cfg *cfg = &smmu->strtab_cfg; + void *strtab = smmu->strtab_cfg.strtab; + + cfg->l1_desc = devm_kcalloc(smmu->dev, cfg->num_l1_ents, + 
sizeof(*cfg->l1_desc), GFP_KERNEL); + if (!cfg->l1_desc) + return -ENOMEM; + + for (i = 0; i < cfg->num_l1_ents; ++i) { + arm_smmu_write_strtab_l1_desc(strtab, &cfg->l1_desc[i]); + strtab += STRTAB_L1_DESC_DWORDS << 3; + } + + return 0; +} + +static int arm_smmu_init_strtab_2lvl(struct arm_smmu_device *smmu) +{ + void *strtab; + u64 reg; + u32 size, l1size; + struct arm_smmu_strtab_cfg *cfg = &smmu->strtab_cfg; + + /* Calculate the L1 size, capped to the SIDSIZE. */ + size = STRTAB_L1_SZ_SHIFT - (ilog2(STRTAB_L1_DESC_DWORDS) + 3); + size = min(size, smmu->sid_bits - STRTAB_SPLIT); + cfg->num_l1_ents = 1 << size; + + size += STRTAB_SPLIT; + if (size < smmu->sid_bits) + dev_warn(smmu->dev, + "2-level strtab only covers %u/%u bits of SID\n", + size, smmu->sid_bits); + + l1size = cfg->num_l1_ents * (STRTAB_L1_DESC_DWORDS << 3); + strtab = dmam_alloc_coherent(smmu->dev, l1size, &cfg->strtab_dma, + GFP_KERNEL); + if (!strtab) { + dev_err(smmu->dev, + "failed to allocate l1 stream table (%u bytes)\n", + l1size); + return -ENOMEM; + } + cfg->strtab = strtab; + + /* Configure strtab_base_cfg for 2 levels */ + reg = FIELD_PREP(STRTAB_BASE_CFG_FMT, STRTAB_BASE_CFG_FMT_2LVL); + reg |= FIELD_PREP(STRTAB_BASE_CFG_LOG2SIZE, size); + reg |= FIELD_PREP(STRTAB_BASE_CFG_SPLIT, STRTAB_SPLIT); + cfg->strtab_base_cfg = reg; + + return arm_smmu_init_l1_strtab(smmu); +} + +static int arm_smmu_init_strtab_linear(struct arm_smmu_device *smmu) +{ + void *strtab; + u64 reg; + u32 size; + struct arm_smmu_strtab_cfg *cfg = &smmu->strtab_cfg; + + size = (1 << smmu->sid_bits) * (STRTAB_STE_DWORDS << 3); + strtab = dmam_alloc_coherent(smmu->dev, size, &cfg->strtab_dma, + GFP_KERNEL); + if (!strtab) { + dev_err(smmu->dev, + "failed to allocate linear stream table (%u bytes)\n", + size); + return -ENOMEM; + } + cfg->strtab = strtab; + cfg->num_l1_ents = 1 << smmu->sid_bits; + + /* Configure strtab_base_cfg for a linear table covering all SIDs */ + reg = FIELD_PREP(STRTAB_BASE_CFG_FMT, 
STRTAB_BASE_CFG_FMT_LINEAR); + reg |= FIELD_PREP(STRTAB_BASE_CFG_LOG2SIZE, smmu->sid_bits); + cfg->strtab_base_cfg = reg; + + return 0; +} + +int arm_smmu_init_strtab(struct arm_smmu_device *smmu) +{ + u64 reg; + int ret; + + if (smmu->features & ARM_SMMU_FEAT_2_LVL_STRTAB) + ret = arm_smmu_init_strtab_2lvl(smmu); + else + ret = arm_smmu_init_strtab_linear(smmu); + + if (ret) + return ret; + + /* Set the strtab base address */ + reg = smmu->strtab_cfg.strtab_dma & STRTAB_BASE_ADDR_MASK; + reg |= STRTAB_BASE_RA; + smmu->strtab_cfg.strtab_base = reg; + + /* Allocate the first VMID for stage-2 bypass STEs */ + set_bit(0, smmu->vmid_map); + return 0; +} diff --git a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c index 08fd79f66d29..2baaf064a324 100644 --- a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c +++ b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c @@ -1209,18 +1209,6 @@ bool arm_smmu_free_asid(struct arm_smmu_ctx_desc *cd) } /* Stream table manipulation functions */ -static void -arm_smmu_write_strtab_l1_desc(__le64 *dst, struct arm_smmu_strtab_l1_desc *desc) -{ - u64 val = 0; - - val |= FIELD_PREP(STRTAB_L1_DESC_SPAN, desc->span); - val |= desc->l2ptr_dma & STRTAB_L1_DESC_L2PTR_MASK; - - /* See comment in arm_smmu_write_ctx_desc() */ - WRITE_ONCE(*dst, cpu_to_le64(val)); -} - static void arm_smmu_sync_ste_for_sid(struct arm_smmu_device *smmu, u32 sid) { struct arm_smmu_cmdq_ent cmd = { @@ -1395,34 +1383,6 @@ static void arm_smmu_init_bypass_stes(__le64 *strtab, unsigned int nent, bool fo } } -static int arm_smmu_init_l2_strtab(struct arm_smmu_device *smmu, u32 sid) -{ - size_t size; - void *strtab; - struct arm_smmu_strtab_cfg *cfg = &smmu->strtab_cfg; - struct arm_smmu_strtab_l1_desc *desc = &cfg->l1_desc[sid >> STRTAB_SPLIT]; - - if (desc->l2ptr) - return 0; - - size = 1 << (STRTAB_SPLIT + ilog2(STRTAB_STE_DWORDS) + 3); - strtab = &cfg->strtab[(sid >> STRTAB_SPLIT) * STRTAB_L1_DESC_DWORDS]; - - desc->span = STRTAB_SPLIT + 
1; - desc->l2ptr = dmam_alloc_coherent(smmu->dev, size, &desc->l2ptr_dma, - GFP_KERNEL); - if (!desc->l2ptr) { - dev_err(smmu->dev, - "failed to allocate l2 stream table for SID %u\n", - sid); - return -ENOMEM; - } - - arm_smmu_init_bypass_stes(desc->l2ptr, 1 << STRTAB_SPLIT, false); - arm_smmu_write_strtab_l1_desc(strtab, desc); - return 0; -} - static struct arm_smmu_master * arm_smmu_find_master(struct arm_smmu_device *smmu, u32 sid) { @@ -2515,13 +2475,24 @@ static bool arm_smmu_sid_in_range(struct arm_smmu_device *smmu, u32 sid) static int arm_smmu_init_sid_strtab(struct arm_smmu_device *smmu, u32 sid) { + int ret; + /* Check the SIDs are in range of the SMMU and our stream table */ if (!arm_smmu_sid_in_range(smmu, sid)) return -ERANGE; /* Ensure l2 strtab is initialised */ - if (smmu->features & ARM_SMMU_FEAT_2_LVL_STRTAB) - return arm_smmu_init_l2_strtab(smmu, sid); + if (smmu->features & ARM_SMMU_FEAT_2_LVL_STRTAB) { + struct arm_smmu_strtab_l1_desc *desc; + + ret = arm_smmu_init_l2_strtab(smmu, sid); + if (ret) + return ret; + + desc = &smmu->strtab_cfg.l1_desc[sid >> STRTAB_SPLIT]; + arm_smmu_init_bypass_stes(desc->l2ptr, 1 << STRTAB_SPLIT, + false); + } return 0; } @@ -2821,49 +2792,6 @@ static struct iommu_ops arm_smmu_ops = { }; /* Probing and initialisation functions */ -static int arm_smmu_init_one_queue(struct arm_smmu_device *smmu, - struct arm_smmu_queue *q, - void __iomem *page, - unsigned long prod_off, - unsigned long cons_off, - size_t dwords, const char *name) -{ - size_t qsz; - - do { - qsz = ((1 << q->llq.max_n_shift) * dwords) << 3; - q->base = dmam_alloc_coherent(smmu->dev, qsz, &q->base_dma, - GFP_KERNEL); - if (q->base || qsz < PAGE_SIZE) - break; - - q->llq.max_n_shift--; - } while (1); - - if (!q->base) { - dev_err(smmu->dev, - "failed to allocate queue (0x%zx bytes) for %s\n", - qsz, name); - return -ENOMEM; - } - - if (!WARN_ON(q->base_dma & (qsz - 1))) { - dev_info(smmu->dev, "allocated %u entries for %s\n", - 1 << 
q->llq.max_n_shift, name); - } - - q->prod_reg = page + prod_off; - q->cons_reg = page + cons_off; - q->ent_dwords = dwords; - - q->q_base = Q_BASE_RWA; - q->q_base |= q->base_dma & Q_BASE_ADDR_MASK; - q->q_base |= FIELD_PREP(Q_BASE_LOG2SIZE, q->llq.max_n_shift); - - q->llq.prod = q->llq.cons = 0; - return 0; -} - static int arm_smmu_cmdq_init(struct arm_smmu_device *smmu) { struct arm_smmu_cmdq *cmdq = &smmu->cmdq; @@ -2918,114 +2846,6 @@ static int arm_smmu_init_queues(struct arm_smmu_device *smmu) PRIQ_ENT_DWORDS, "priq"); } -static int arm_smmu_init_l1_strtab(struct arm_smmu_device *smmu) -{ - unsigned int i; - struct arm_smmu_strtab_cfg *cfg = &smmu->strtab_cfg; - void *strtab = smmu->strtab_cfg.strtab; - - cfg->l1_desc = devm_kcalloc(smmu->dev, cfg->num_l1_ents, - sizeof(*cfg->l1_desc), GFP_KERNEL); - if (!cfg->l1_desc) - return -ENOMEM; - - for (i = 0; i < cfg->num_l1_ents; ++i) { - arm_smmu_write_strtab_l1_desc(strtab, &cfg->l1_desc[i]); - strtab += STRTAB_L1_DESC_DWORDS << 3; - } - - return 0; -} - -static int arm_smmu_init_strtab_2lvl(struct arm_smmu_device *smmu) -{ - void *strtab; - u64 reg; - u32 size, l1size; - struct arm_smmu_strtab_cfg *cfg = &smmu->strtab_cfg; - - /* Calculate the L1 size, capped to the SIDSIZE. 
*/ - size = STRTAB_L1_SZ_SHIFT - (ilog2(STRTAB_L1_DESC_DWORDS) + 3); - size = min(size, smmu->sid_bits - STRTAB_SPLIT); - cfg->num_l1_ents = 1 << size; - - size += STRTAB_SPLIT; - if (size < smmu->sid_bits) - dev_warn(smmu->dev, - "2-level strtab only covers %u/%u bits of SID\n", - size, smmu->sid_bits); - - l1size = cfg->num_l1_ents * (STRTAB_L1_DESC_DWORDS << 3); - strtab = dmam_alloc_coherent(smmu->dev, l1size, &cfg->strtab_dma, - GFP_KERNEL); - if (!strtab) { - dev_err(smmu->dev, - "failed to allocate l1 stream table (%u bytes)\n", - l1size); - return -ENOMEM; - } - cfg->strtab = strtab; - - /* Configure strtab_base_cfg for 2 levels */ - reg = FIELD_PREP(STRTAB_BASE_CFG_FMT, STRTAB_BASE_CFG_FMT_2LVL); - reg |= FIELD_PREP(STRTAB_BASE_CFG_LOG2SIZE, size); - reg |= FIELD_PREP(STRTAB_BASE_CFG_SPLIT, STRTAB_SPLIT); - cfg->strtab_base_cfg = reg; - - return arm_smmu_init_l1_strtab(smmu); -} - -static int arm_smmu_init_strtab_linear(struct arm_smmu_device *smmu) -{ - void *strtab; - u64 reg; - u32 size; - struct arm_smmu_strtab_cfg *cfg = &smmu->strtab_cfg; - - size = (1 << smmu->sid_bits) * (STRTAB_STE_DWORDS << 3); - strtab = dmam_alloc_coherent(smmu->dev, size, &cfg->strtab_dma, - GFP_KERNEL); - if (!strtab) { - dev_err(smmu->dev, - "failed to allocate linear stream table (%u bytes)\n", - size); - return -ENOMEM; - } - cfg->strtab = strtab; - cfg->num_l1_ents = 1 << smmu->sid_bits; - - /* Configure strtab_base_cfg for a linear table covering all SIDs */ - reg = FIELD_PREP(STRTAB_BASE_CFG_FMT, STRTAB_BASE_CFG_FMT_LINEAR); - reg |= FIELD_PREP(STRTAB_BASE_CFG_LOG2SIZE, smmu->sid_bits); - cfg->strtab_base_cfg = reg; - - arm_smmu_init_bypass_stes(strtab, cfg->num_l1_ents, false); - return 0; -} - -static int arm_smmu_init_strtab(struct arm_smmu_device *smmu) -{ - u64 reg; - int ret; - - if (smmu->features & ARM_SMMU_FEAT_2_LVL_STRTAB) - ret = arm_smmu_init_strtab_2lvl(smmu); - else - ret = arm_smmu_init_strtab_linear(smmu); - - if (ret) - return ret; - - /* Set the 
strtab base address */ - reg = smmu->strtab_cfg.strtab_dma & STRTAB_BASE_ADDR_MASK; - reg |= STRTAB_BASE_RA; - smmu->strtab_cfg.strtab_base = reg; - - /* Allocate the first VMID for stage-2 bypass STEs */ - set_bit(0, smmu->vmid_map); - return 0; -} - static int arm_smmu_init_structures(struct arm_smmu_device *smmu) { int ret; @@ -3037,7 +2857,14 @@ static int arm_smmu_init_structures(struct arm_smmu_device *smmu) if (ret) return ret; - return arm_smmu_init_strtab(smmu); + ret = arm_smmu_init_strtab(smmu); + if (ret) + return ret; + + if (!(smmu->features & ARM_SMMU_FEAT_2_LVL_STRTAB)) + arm_smmu_init_bypass_stes(smmu->strtab_cfg.strtab, + smmu->strtab_cfg.num_l1_ents, false); + return 0; } static void arm_smmu_free_msis(void *data)
From patchwork Wed Feb 1 12:53:15 2023
X-Patchwork-Submitter: Jean-Philippe Brucker
X-Patchwork-Id: 13124397
From: Jean-Philippe Brucker
To: maz@kernel.org, catalin.marinas@arm.com, will@kernel.org, joro@8bytes.org
Cc: robin.murphy@arm.com, james.morse@arm.com, suzuki.poulose@arm.com, oliver.upton@linux.dev, yuzenghui@huawei.com, smostafa@google.com, dbrazdil@google.com, ryan.roberts@arm.com, linux-arm-kernel@lists.infradead.org, kvmarm@lists.linux.dev, iommu@lists.linux.dev, Jean-Philippe Brucker
Subject: [RFC PATCH 31/45] iommu/arm-smmu-v3: Move firmware probe to arm-smmu-v3-common
Date: Wed, 1 Feb 2023 12:53:15 +0000
Message-Id: <20230201125328.2186498-32-jean-philippe@linaro.org>
In-Reply-To: <20230201125328.2186498-1-jean-philippe@linaro.org>
References: <20230201125328.2186498-1-jean-philippe@linaro.org>

Move the FW probe functions to the common
source, and take the opportunity to clean up the 'bypass' behaviour a bit (see dc87a98db751 ("iommu/arm-smmu: Fall back to global bypass")) Signed-off-by: Jean-Philippe Brucker --- drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.h | 4 + .../arm/arm-smmu-v3/arm-smmu-v3-common.c | 107 ++++++++++++++++++ drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c | 106 +---------------- 3 files changed, 114 insertions(+), 103 deletions(-) diff --git a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.h b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.h index 8ab84282f62a..345aac378712 100644 --- a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.h +++ b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.h @@ -276,6 +276,10 @@ int arm_smmu_device_disable(struct arm_smmu_device *smmu); bool arm_smmu_capable(struct device *dev, enum iommu_cap cap); struct iommu_group *arm_smmu_device_group(struct device *dev); int arm_smmu_of_xlate(struct device *dev, struct of_phandle_args *args); + +struct platform_device; +int arm_smmu_fw_probe(struct platform_device *pdev, + struct arm_smmu_device *smmu, bool *bypass); int arm_smmu_device_hw_probe(struct arm_smmu_device *smmu); int arm_smmu_init_one_queue(struct arm_smmu_device *smmu, struct arm_smmu_queue *q, diff --git a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3-common.c b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3-common.c index 9226971b6e53..4e945df5d64f 100644 --- a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3-common.c +++ b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3-common.c @@ -1,10 +1,117 @@ // SPDX-License-Identifier: GPL-2.0 +#include #include #include +#include +#include +#include #include #include "arm-smmu-v3.h" +struct arm_smmu_option_prop { + u32 opt; + const char *prop; +}; + +static struct arm_smmu_option_prop arm_smmu_options[] = { + { ARM_SMMU_OPT_SKIP_PREFETCH, "hisilicon,broken-prefetch-cmd" }, + { ARM_SMMU_OPT_PAGE0_REGS_ONLY, "cavium,cn9900-broken-page1-regspace"}, + { 0, NULL}, +}; + +static void parse_driver_options(struct arm_smmu_device *smmu) +{ + int i = 0; + 
+ do { + if (of_property_read_bool(smmu->dev->of_node, + arm_smmu_options[i].prop)) { + smmu->options |= arm_smmu_options[i].opt; + dev_notice(smmu->dev, "option %s\n", + arm_smmu_options[i].prop); + } + } while (arm_smmu_options[++i].opt); +} + +static int arm_smmu_device_dt_probe(struct platform_device *pdev, + struct arm_smmu_device *smmu, + bool *bypass) +{ + struct device *dev = &pdev->dev; + u32 cells; + + *bypass = true; + if (of_property_read_u32(dev->of_node, "#iommu-cells", &cells)) + dev_err(dev, "missing #iommu-cells property\n"); + else if (cells != 1) + dev_err(dev, "invalid #iommu-cells value (%d)\n", cells); + else + *bypass = false; + + parse_driver_options(smmu); + + if (of_dma_is_coherent(dev->of_node)) + smmu->features |= ARM_SMMU_FEAT_COHERENCY; + + return 0; +} + +#ifdef CONFIG_ACPI +static void acpi_smmu_get_options(u32 model, struct arm_smmu_device *smmu) +{ + switch (model) { + case ACPI_IORT_SMMU_V3_CAVIUM_CN99XX: + smmu->options |= ARM_SMMU_OPT_PAGE0_REGS_ONLY; + break; + case ACPI_IORT_SMMU_V3_HISILICON_HI161X: + smmu->options |= ARM_SMMU_OPT_SKIP_PREFETCH; + break; + } + + dev_notice(smmu->dev, "option mask 0x%x\n", smmu->options); +} + +static int arm_smmu_device_acpi_probe(struct platform_device *pdev, + struct arm_smmu_device *smmu, + bool *bypass) +{ + struct acpi_iort_smmu_v3 *iort_smmu; + struct device *dev = smmu->dev; + struct acpi_iort_node *node; + + node = *(struct acpi_iort_node **)dev_get_platdata(dev); + + /* Retrieve SMMUv3 specific data */ + iort_smmu = (struct acpi_iort_smmu_v3 *)node->node_data; + + acpi_smmu_get_options(iort_smmu->model, smmu); + + if (iort_smmu->flags & ACPI_IORT_SMMU_V3_COHACC_OVERRIDE) + smmu->features |= ARM_SMMU_FEAT_COHERENCY; + + *bypass = false; + return 0; +} + +#else +static inline int arm_smmu_device_acpi_probe(struct platform_device *pdev, + struct arm_smmu_device *smmu, + bool *bypass) +{ + return -ENODEV; +} +#endif + +int arm_smmu_fw_probe(struct platform_device *pdev, + struct 
arm_smmu_device *smmu, bool *bypass) +{ + if (smmu->dev->of_node) + return arm_smmu_device_dt_probe(pdev, smmu, bypass); + else + return arm_smmu_device_acpi_probe(pdev, smmu, bypass); +} + int arm_smmu_device_hw_probe(struct arm_smmu_device *smmu) { u32 reg; diff --git a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c index 2baaf064a324..7cb171304953 100644 --- a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c +++ b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c @@ -9,7 +9,6 @@ * This driver is powered by bad coffee and bombay mix. */ -#include #include #include #include @@ -19,9 +18,6 @@ #include #include #include -#include -#include -#include #include #include @@ -64,11 +60,6 @@ static phys_addr_t arm_smmu_msi_cfg[ARM_SMMU_MAX_MSIS][3] = { }, }; -struct arm_smmu_option_prop { - u32 opt; - const char *prop; -}; - DEFINE_XARRAY_ALLOC1(arm_smmu_asid_xa); DEFINE_MUTEX(arm_smmu_asid_lock); @@ -78,26 +69,6 @@ DEFINE_MUTEX(arm_smmu_asid_lock); */ struct arm_smmu_ctx_desc quiet_cd = { 0 }; -static struct arm_smmu_option_prop arm_smmu_options[] = { - { ARM_SMMU_OPT_SKIP_PREFETCH, "hisilicon,broken-prefetch-cmd" }, - { ARM_SMMU_OPT_PAGE0_REGS_ONLY, "cavium,cn9900-broken-page1-regspace"}, - { 0, NULL}, -}; - -static void parse_driver_options(struct arm_smmu_device *smmu) -{ - int i = 0; - - do { - if (of_property_read_bool(smmu->dev->of_node, - arm_smmu_options[i].prop)) { - smmu->options |= arm_smmu_options[i].opt; - dev_notice(smmu->dev, "option %s\n", - arm_smmu_options[i].prop); - } - } while (arm_smmu_options[++i].opt); -} - /* Low-level queue manipulation functions */ static bool queue_has_space(struct arm_smmu_ll_queue *q, u32 n) { @@ -3147,70 +3118,6 @@ static int arm_smmu_device_reset(struct arm_smmu_device *smmu, bool bypass) return 0; } -#ifdef CONFIG_ACPI -static void acpi_smmu_get_options(u32 model, struct arm_smmu_device *smmu) -{ - switch (model) { - case ACPI_IORT_SMMU_V3_CAVIUM_CN99XX: - smmu->options |= 
ARM_SMMU_OPT_PAGE0_REGS_ONLY; - break; - case ACPI_IORT_SMMU_V3_HISILICON_HI161X: - smmu->options |= ARM_SMMU_OPT_SKIP_PREFETCH; - break; - } - - dev_notice(smmu->dev, "option mask 0x%x\n", smmu->options); -} - -static int arm_smmu_device_acpi_probe(struct platform_device *pdev, - struct arm_smmu_device *smmu) -{ - struct acpi_iort_smmu_v3 *iort_smmu; - struct device *dev = smmu->dev; - struct acpi_iort_node *node; - - node = *(struct acpi_iort_node **)dev_get_platdata(dev); - - /* Retrieve SMMUv3 specific data */ - iort_smmu = (struct acpi_iort_smmu_v3 *)node->node_data; - - acpi_smmu_get_options(iort_smmu->model, smmu); - - if (iort_smmu->flags & ACPI_IORT_SMMU_V3_COHACC_OVERRIDE) - smmu->features |= ARM_SMMU_FEAT_COHERENCY; - - return 0; -} -#else -static inline int arm_smmu_device_acpi_probe(struct platform_device *pdev, - struct arm_smmu_device *smmu) -{ - return -ENODEV; -} -#endif - -static int arm_smmu_device_dt_probe(struct platform_device *pdev, - struct arm_smmu_device *smmu) -{ - struct device *dev = &pdev->dev; - u32 cells; - int ret = -EINVAL; - - if (of_property_read_u32(dev->of_node, "#iommu-cells", &cells)) - dev_err(dev, "missing #iommu-cells property\n"); - else if (cells != 1) - dev_err(dev, "invalid #iommu-cells value (%d)\n", cells); - else - ret = 0; - - parse_driver_options(smmu); - - if (of_dma_is_coherent(dev->of_node)) - smmu->features |= ARM_SMMU_FEAT_COHERENCY; - - return ret; -} - static unsigned long arm_smmu_resource_size(struct arm_smmu_device *smmu) { if (smmu->options & ARM_SMMU_OPT_PAGE0_REGS_ONLY) @@ -3271,16 +3178,9 @@ static int arm_smmu_device_probe(struct platform_device *pdev) return -ENOMEM; smmu->dev = dev; - if (dev->of_node) { - ret = arm_smmu_device_dt_probe(pdev, smmu); - } else { - ret = arm_smmu_device_acpi_probe(pdev, smmu); - if (ret == -ENODEV) - return ret; - } - - /* Set bypass mode according to firmware probing result */ - bypass = !!ret; + ret = arm_smmu_fw_probe(pdev, smmu, &bypass); + if (ret) + return ret; 
/* Base address */ res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
From patchwork Wed Feb 1 12:53:16 2023
X-Patchwork-Submitter: Jean-Philippe Brucker
X-Patchwork-Id: 13124409
From: Jean-Philippe Brucker
To: maz@kernel.org, catalin.marinas@arm.com, will@kernel.org, joro@8bytes.org
Cc: robin.murphy@arm.com, james.morse@arm.com, suzuki.poulose@arm.com, oliver.upton@linux.dev, yuzenghui@huawei.com, smostafa@google.com, dbrazdil@google.com, ryan.roberts@arm.com, linux-arm-kernel@lists.infradead.org, kvmarm@lists.linux.dev, iommu@lists.linux.dev, Jean-Philippe Brucker
Subject: [RFC PATCH 32/45] iommu/arm-smmu-v3: Move IOMMU registration to arm-smmu-v3-common.c
Date: Wed, 1 Feb 2023 12:53:16 +0000
Message-Id: <20230201125328.2186498-33-jean-philippe@linaro.org>
In-Reply-To: <20230201125328.2186498-1-jean-philippe@linaro.org>
References: <20230201125328.2186498-1-jean-philippe@linaro.org>

The KVM driver will need to implement a few IOMMU ops, so move the helpers to arm-smmu-v3-common. Signed-off-by: Jean-Philippe Brucker --- drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.h | 4 +++ .../arm/arm-smmu-v3/arm-smmu-v3-common.c | 34 +++++++++++++++++++ drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c | 17 ++-------- 3 files changed, 40 insertions(+), 15 deletions(-) diff --git a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.h b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.h index 345aac378712..87034da361ca 100644 --- a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.h +++ b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.h @@ -290,6 +290,10 @@ int arm_smmu_init_one_queue(struct arm_smmu_device *smmu, int arm_smmu_init_l2_strtab(struct arm_smmu_device *smmu, u32 sid); int arm_smmu_init_strtab(struct arm_smmu_device *smmu); +int arm_smmu_register_iommu(struct arm_smmu_device *smmu, + struct iommu_ops *ops, phys_addr_t ioaddr); +void arm_smmu_unregister_iommu(struct arm_smmu_device *smmu); + int arm_smmu_write_ctx_desc(struct arm_smmu_domain *smmu_domain, int ssid, struct arm_smmu_ctx_desc *cd); void arm_smmu_tlb_inv_asid(struct arm_smmu_device *smmu, u16 asid); diff --git a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3-common.c b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3-common.c index 4e945df5d64f..7faf28c5a8b4 100644 --- a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3-common.c +++ b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3-common.c @@ -112,6 +112,13 @@ int arm_smmu_fw_probe(struct platform_device *pdev, return arm_smmu_device_acpi_probe(pdev, smmu, bypass); } +#ifdef CONFIG_ARM_SMMU_V3_SVA +bool __weak arm_smmu_sva_supported(struct arm_smmu_device *smmu) +{ + return false; +} +#endif + int arm_smmu_device_hw_probe(struct arm_smmu_device *smmu) { u32 reg; @@ -591,3 +598,30 @@ int arm_smmu_init_strtab(struct
arm_smmu_device *smmu) set_bit(0, smmu->vmid_map); return 0; } + +int arm_smmu_register_iommu(struct arm_smmu_device *smmu, + struct iommu_ops *ops, phys_addr_t ioaddr) +{ + int ret; + struct device *dev = smmu->dev; + + ret = iommu_device_sysfs_add(&smmu->iommu, dev, NULL, + "smmu3.%pa", &ioaddr); + if (ret) + return ret; + + ret = iommu_device_register(&smmu->iommu, ops, dev); + if (ret) { + dev_err(dev, "Failed to register iommu\n"); + iommu_device_sysfs_remove(&smmu->iommu); + return ret; + } + + return 0; +} + +void arm_smmu_unregister_iommu(struct arm_smmu_device *smmu) +{ + iommu_device_unregister(&smmu->iommu); + iommu_device_sysfs_remove(&smmu->iommu); +} diff --git a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c index 7cb171304953..a972c00700cc 100644 --- a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c +++ b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c @@ -3257,27 +3257,14 @@ static int arm_smmu_device_probe(struct platform_device *pdev) return ret; /* And we're up. Go go go! 
*/ - ret = iommu_device_sysfs_add(&smmu->iommu, dev, NULL, - "smmu3.%pa", &ioaddr); - if (ret) - return ret; - - ret = iommu_device_register(&smmu->iommu, &arm_smmu_ops, dev); - if (ret) { - dev_err(dev, "Failed to register iommu\n"); - iommu_device_sysfs_remove(&smmu->iommu); - return ret; - } - - return 0; + return arm_smmu_register_iommu(smmu, &arm_smmu_ops, ioaddr); } static int arm_smmu_device_remove(struct platform_device *pdev) { struct arm_smmu_device *smmu = platform_get_drvdata(pdev); - iommu_device_unregister(&smmu->iommu); - iommu_device_sysfs_remove(&smmu->iommu); + arm_smmu_unregister_iommu(smmu); arm_smmu_device_disable(smmu); iopf_queue_free(smmu->evtq.iopf);
From patchwork Wed Feb 1 12:53:17 2023
X-Patchwork-Submitter: Jean-Philippe Brucker
X-Patchwork-Id: 13124398
From: Jean-Philippe Brucker
To: maz@kernel.org, catalin.marinas@arm.com, will@kernel.org, joro@8bytes.org
Cc: robin.murphy@arm.com, james.morse@arm.com, suzuki.poulose@arm.com, oliver.upton@linux.dev, yuzenghui@huawei.com, smostafa@google.com, dbrazdil@google.com, ryan.roberts@arm.com, linux-arm-kernel@lists.infradead.org, kvmarm@lists.linux.dev, iommu@lists.linux.dev, Jean-Philippe Brucker
Subject: [RFC PATCH 33/45] iommu/arm-smmu-v3: Use single pages for level-2 stream tables
Date: Wed, 1 Feb 2023 12:53:17 +0000
Message-Id: <20230201125328.2186498-34-jean-philippe@linaro.org>
In-Reply-To: <20230201125328.2186498-1-jean-philippe@linaro.org>
References: <20230201125328.2186498-1-jean-philippe@linaro.org>

Rather than using a fixed split point for the stream tables, base it on the page size.
It's easier for the KVM driver to pass single pages to the hypervisor
when lazily allocating stream tables.

Signed-off-by: Jean-Philippe Brucker
---
 arch/arm64/include/asm/arm-smmu-v3-regs.h     |  1 -
 drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.h   |  1 +
 .../arm/arm-smmu-v3/arm-smmu-v3-common.c      | 21 ++++++++++++-------
 drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c   | 10 ++++-----
 4 files changed, 19 insertions(+), 14 deletions(-)

diff --git a/arch/arm64/include/asm/arm-smmu-v3-regs.h b/arch/arm64/include/asm/arm-smmu-v3-regs.h
index 646a734f2554..357e52f4038f 100644
--- a/arch/arm64/include/asm/arm-smmu-v3-regs.h
+++ b/arch/arm64/include/asm/arm-smmu-v3-regs.h
@@ -168,7 +168,6 @@
  * 256 lazy entries per table (each table covers a PCI bus)
  */
 #define STRTAB_L1_SZ_SHIFT		20
-#define STRTAB_SPLIT			8

 #define STRTAB_L1_DESC_DWORDS		1
 #define STRTAB_L1_DESC_SPAN		GENMASK_ULL(4, 0)

diff --git a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.h b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.h
index 87034da361ca..3a4649f43839 100644
--- a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.h
+++ b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.h
@@ -163,6 +163,7 @@ struct arm_smmu_strtab_cfg {

 	u64			strtab_base;
 	u32			strtab_base_cfg;
+	u8			split;
 };

 /* An SMMUv3 instance */

diff --git a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3-common.c b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3-common.c
index 7faf28c5a8b4..c44075015979 100644
--- a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3-common.c
+++ b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3-common.c
@@ -254,11 +254,14 @@ int arm_smmu_device_hw_probe(struct arm_smmu_device *smmu)
 	smmu->sid_bits = FIELD_GET(IDR1_SIDSIZE, reg);
 	smmu->iommu.max_pasids = 1UL << smmu->ssid_bits;

+	/* Use one page per level-2 table */
+	smmu->strtab_cfg.split = PAGE_SHIFT - (ilog2(STRTAB_STE_DWORDS) + 3);
+
 	/*
 	 * If the SMMU supports fewer bits than would fill a single L2 stream
 	 * table, use a linear table instead.
 	 */
-	if (smmu->sid_bits <= STRTAB_SPLIT)
+	if (smmu->sid_bits <= smmu->strtab_cfg.split)
 		smmu->features &= ~ARM_SMMU_FEAT_2_LVL_STRTAB;

 	/* IDR3 */
@@ -470,15 +473,17 @@ int arm_smmu_init_l2_strtab(struct arm_smmu_device *smmu, u32 sid)
 	size_t size;
 	void *strtab;
 	struct arm_smmu_strtab_cfg *cfg = &smmu->strtab_cfg;
-	struct arm_smmu_strtab_l1_desc *desc = &cfg->l1_desc[sid >> STRTAB_SPLIT];
+	struct arm_smmu_strtab_l1_desc *desc =
+		&cfg->l1_desc[sid >> smmu->strtab_cfg.split];

 	if (desc->l2ptr)
 		return 0;

-	size = 1 << (STRTAB_SPLIT + ilog2(STRTAB_STE_DWORDS) + 3);
-	strtab = &cfg->strtab[(sid >> STRTAB_SPLIT) * STRTAB_L1_DESC_DWORDS];
+	size = 1 << (smmu->strtab_cfg.split + ilog2(STRTAB_STE_DWORDS) + 3);
+	strtab = &cfg->strtab[(sid >> smmu->strtab_cfg.split) *
+		 STRTAB_L1_DESC_DWORDS];

-	desc->span = STRTAB_SPLIT + 1;
+	desc->span = smmu->strtab_cfg.split + 1;
 	desc->l2ptr = dmam_alloc_coherent(smmu->dev, size, &desc->l2ptr_dma,
 					  GFP_KERNEL);
 	if (!desc->l2ptr) {
@@ -520,10 +525,10 @@ static int arm_smmu_init_strtab_2lvl(struct arm_smmu_device *smmu)

 	/* Calculate the L1 size, capped to the SIDSIZE. */
 	size = STRTAB_L1_SZ_SHIFT - (ilog2(STRTAB_L1_DESC_DWORDS) + 3);
-	size = min(size, smmu->sid_bits - STRTAB_SPLIT);
+	size = min(size, smmu->sid_bits - smmu->strtab_cfg.split);
 	cfg->num_l1_ents = 1 << size;

-	size += STRTAB_SPLIT;
+	size += smmu->strtab_cfg.split;
 	if (size < smmu->sid_bits)
 		dev_warn(smmu->dev,
 			 "2-level strtab only covers %u/%u bits of SID\n",
@@ -543,7 +548,7 @@ static int arm_smmu_init_strtab_2lvl(struct arm_smmu_device *smmu)
 	/* Configure strtab_base_cfg for 2 levels */
 	reg = FIELD_PREP(STRTAB_BASE_CFG_FMT, STRTAB_BASE_CFG_FMT_2LVL);
 	reg |= FIELD_PREP(STRTAB_BASE_CFG_LOG2SIZE, size);
-	reg |= FIELD_PREP(STRTAB_BASE_CFG_SPLIT, STRTAB_SPLIT);
+	reg |= FIELD_PREP(STRTAB_BASE_CFG_SPLIT, smmu->strtab_cfg.split);
 	cfg->strtab_base_cfg = reg;

 	return arm_smmu_init_l1_strtab(smmu);

diff --git a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c
index a972c00700cc..19f170088268 100644
--- a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c
+++ b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c
@@ -2156,9 +2156,9 @@ static __le64 *arm_smmu_get_step_for_sid(struct arm_smmu_device *smmu, u32 sid)
 		int idx;

 		/* Two-level walk */
-		idx = (sid >> STRTAB_SPLIT) * STRTAB_L1_DESC_DWORDS;
+		idx = (sid >> smmu->strtab_cfg.split) * STRTAB_L1_DESC_DWORDS;
 		l1_desc = &cfg->l1_desc[idx];
-		idx = (sid & ((1 << STRTAB_SPLIT) - 1)) * STRTAB_STE_DWORDS;
+		idx = (sid & ((1 << smmu->strtab_cfg.split) - 1)) * STRTAB_STE_DWORDS;
 		step = &l1_desc->l2ptr[idx];
 	} else {
 		/* Simple linear lookup */
@@ -2439,7 +2439,7 @@ static bool arm_smmu_sid_in_range(struct arm_smmu_device *smmu, u32 sid)
 	unsigned long limit = smmu->strtab_cfg.num_l1_ents;

 	if (smmu->features & ARM_SMMU_FEAT_2_LVL_STRTAB)
-		limit *= 1UL << STRTAB_SPLIT;
+		limit *= 1UL << smmu->strtab_cfg.split;

 	return sid < limit;
 }
@@ -2460,8 +2460,8 @@ static int arm_smmu_init_sid_strtab(struct arm_smmu_device *smmu, u32 sid)
 		if (ret)
 			return ret;

-		desc = &smmu->strtab_cfg.l1_desc[sid >> STRTAB_SPLIT];
-		arm_smmu_init_bypass_stes(desc->l2ptr, 1 << STRTAB_SPLIT,
+		desc = &smmu->strtab_cfg.l1_desc[sid >> smmu->strtab_cfg.split];
+		arm_smmu_init_bypass_stes(desc->l2ptr, 1 << smmu->strtab_cfg.split,
 					  false);
 	}

From patchwork Wed Feb 1 12:53:18 2023
From: Jean-Philippe Brucker
To: maz@kernel.org, catalin.marinas@arm.com, will@kernel.org, joro@8bytes.org
Cc: robin.murphy@arm.com, james.morse@arm.com, suzuki.poulose@arm.com,
 oliver.upton@linux.dev, yuzenghui@huawei.com, smostafa@google.com,
 dbrazdil@google.com, ryan.roberts@arm.com,
 linux-arm-kernel@lists.infradead.org, kvmarm@lists.linux.dev,
 iommu@lists.linux.dev, Jean-Philippe Brucker
Subject: [RFC PATCH 34/45] iommu/arm-smmu-v3: Add host driver for pKVM
Date: Wed, 1 Feb 2023 12:53:18 +0000
Message-Id: <20230201125328.2186498-35-jean-philippe@linaro.org>
In-Reply-To: <20230201125328.2186498-1-jean-philippe@linaro.org>
References: <20230201125328.2186498-1-jean-philippe@linaro.org>
Under protected KVM (pKVM), the host does not have access to guest or
hypervisor memory. This means that devices owned by the host must be
isolated by the SMMU, and the hypervisor is in charge of the SMMU.

Introduce the host component that replaces the normal SMMUv3 driver
when pKVM is enabled, and sends configuration and requests to the
actual driver running in the hypervisor (EL2).

Rather than rely on regular driver probe, pKVM directly calls
kvm_arm_smmu_v3_init(), which synchronously finds all SMMUs and hands
them to the hypervisor. If the regular driver is enabled, it will not
find any free SMMU to drive once it gets probed.

Signed-off-by: Jean-Philippe Brucker
---
 drivers/iommu/arm/arm-smmu-v3/Makefile        |  5 ++
 include/kvm/arm_smmu_v3.h                     | 14 +++++
 arch/arm64/kvm/arm.c                          | 18 +++++-
 .../iommu/arm/arm-smmu-v3/arm-smmu-v3-kvm.c   | 58 +++++++++++++++++++
 4 files changed, 94 insertions(+), 1 deletion(-)
 create mode 100644 drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3-kvm.c

diff --git a/drivers/iommu/arm/arm-smmu-v3/Makefile b/drivers/iommu/arm/arm-smmu-v3/Makefile
index c4fcc796213c..a90b97d8bae3 100644
--- a/drivers/iommu/arm/arm-smmu-v3/Makefile
+++ b/drivers/iommu/arm/arm-smmu-v3/Makefile
@@ -4,3 +4,8 @@ arm_smmu_v3-objs-y += arm-smmu-v3.o
 arm_smmu_v3-objs-y += arm-smmu-v3-common.o
 arm_smmu_v3-objs-$(CONFIG_ARM_SMMU_V3_SVA) += arm-smmu-v3-sva.o
 arm_smmu_v3-objs := $(arm_smmu_v3-objs-y)
+
+obj-$(CONFIG_ARM_SMMU_V3_PKVM) += arm_smmu_v3_kvm.o
+arm_smmu_v3_kvm-objs-y += arm-smmu-v3-kvm.o
+arm_smmu_v3_kvm-objs-y += arm-smmu-v3-common.o
+arm_smmu_v3_kvm-objs := $(arm_smmu_v3_kvm-objs-y)

diff --git a/include/kvm/arm_smmu_v3.h b/include/kvm/arm_smmu_v3.h
index ed139b0e9612..373b915b6661 100644
--- a/include/kvm/arm_smmu_v3.h
+++ b/include/kvm/arm_smmu_v3.h
@@ -40,4 +40,18 @@ extern struct hyp_arm_smmu_v3_device *kvm_nvhe_sym(kvm_hyp_arm_smmu_v3_smmus);

 #endif /* CONFIG_ARM_SMMU_V3_PKVM */

+#ifndef __KVM_NVHE_HYPERVISOR__
+# if IS_ENABLED(CONFIG_ARM_SMMU_V3_PKVM)
+int kvm_arm_smmu_v3_init(unsigned int *count);
+void kvm_arm_smmu_v3_remove(void);
+
+# else /* CONFIG_ARM_SMMU_V3_PKVM */
+static inline int kvm_arm_smmu_v3_init(unsigned int *count)
+{
+	return -ENODEV;
+}
+static void kvm_arm_smmu_v3_remove(void) {}
+# endif /* CONFIG_ARM_SMMU_V3_PKVM */
+#endif /* __KVM_NVHE_HYPERVISOR__ */
+
 #endif /* __KVM_ARM_SMMU_V3_H */

diff --git a/arch/arm64/kvm/arm.c b/arch/arm64/kvm/arm.c
index 31faae76d519..a4cd09fc4abf 100644
--- a/arch/arm64/kvm/arm.c
+++ b/arch/arm64/kvm/arm.c
@@ -44,6 +44,7 @@
 #include
 #include
 #include
+#include

 static enum kvm_mode kvm_mode = KVM_MODE_DEFAULT;
 DEFINE_STATIC_KEY_FALSE(kvm_protected_mode_initialized);
@@ -1901,11 +1902,26 @@ static bool init_psci_relay(void)

 static int init_stage2_iommu(void)
 {
-	return KVM_IOMMU_DRIVER_NONE;
+	int ret;
+	unsigned int smmu_count;
+
+	ret = kvm_arm_smmu_v3_init(&smmu_count);
+	if (ret)
+		return ret;
+	else if (!smmu_count)
+		return KVM_IOMMU_DRIVER_NONE;
+
+	return KVM_IOMMU_DRIVER_SMMUV3;
 }

 static void remove_stage2_iommu(enum kvm_iommu_driver iommu)
 {
+	switch (iommu) {
+	case KVM_IOMMU_DRIVER_SMMUV3:
+		kvm_arm_smmu_v3_remove();
+		break;
+	default:
+		break;
+	}
 }

 static int init_subsystems(void)

diff --git a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3-kvm.c b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3-kvm.c
new file mode 100644
index 000000000000..4092da8050ef
--- /dev/null
+++ b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3-kvm.c
@@ -0,0 +1,58 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * pKVM host driver for the Arm SMMUv3
+ *
+ * Copyright (C) 2022 Linaro Ltd.
+ */
+#include
+
+#include
+
+#include "arm-smmu-v3.h"
+
+static int kvm_arm_smmu_probe(struct platform_device *pdev)
+{
+	return -ENOSYS;
+}
+
+static int kvm_arm_smmu_remove(struct platform_device *pdev)
+{
+	return 0;
+}
+
+static const struct of_device_id arm_smmu_of_match[] = {
+	{ .compatible = "arm,smmu-v3", },
+	{ },
+};
+
+static struct platform_driver kvm_arm_smmu_driver = {
+	.driver = {
+		.name = "kvm-arm-smmu-v3",
+		.of_match_table = arm_smmu_of_match,
+	},
+	.remove = kvm_arm_smmu_remove,
+};
+
+/**
+ * kvm_arm_smmu_v3_init() - Reserve the SMMUv3 for KVM
+ * @count: on success, number of SMMUs successfully initialized
+ *
+ * Return 0 if all present SMMUv3 were probed successfully, or an error.
+ * If no SMMU was found, return 0, with a count of 0.
+ */
+int kvm_arm_smmu_v3_init(unsigned int *count)
+{
+	int ret;
+
+	ret = platform_driver_probe(&kvm_arm_smmu_driver, kvm_arm_smmu_probe);
+	if (ret)
+		return ret;
+
+	*count = 0;
+	return 0;
+}
+
+void kvm_arm_smmu_v3_remove(void)
+{
+	platform_driver_unregister(&kvm_arm_smmu_driver);
+}

From patchwork Wed Feb 1 12:53:19 2023
From: Jean-Philippe Brucker
To: maz@kernel.org, catalin.marinas@arm.com, will@kernel.org, joro@8bytes.org
Cc: robin.murphy@arm.com, james.morse@arm.com, suzuki.poulose@arm.com,
 oliver.upton@linux.dev, yuzenghui@huawei.com, smostafa@google.com,
 dbrazdil@google.com, ryan.roberts@arm.com,
 linux-arm-kernel@lists.infradead.org, kvmarm@lists.linux.dev,
 iommu@lists.linux.dev, Jean-Philippe Brucker
Subject: [RFC PATCH 35/45] iommu/arm-smmu-v3-kvm: Pass a list of SMMU devices
 to the hypervisor
Date: Wed, 1 Feb 2023 12:53:19 +0000
Message-Id: <20230201125328.2186498-36-jean-philippe@linaro.org>
In-Reply-To: <20230201125328.2186498-1-jean-philippe@linaro.org>
References: <20230201125328.2186498-1-jean-philippe@linaro.org>

Build a list of devices and donate the page to the hypervisor. At this
point the host is trusted and this would be a good opportunity to
provide more information about the system. For example, which devices
are owned by the host (perhaps via the VMID and SW bits in the stream
table, although we populate the stream table lazily at the moment).
Signed-off-by: Jean-Philippe Brucker
---
 .../iommu/arm/arm-smmu-v3/arm-smmu-v3-kvm.c   | 123 +++++++++++++++++-
 1 file changed, 120 insertions(+), 3 deletions(-)

diff --git a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3-kvm.c b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3-kvm.c
index 4092da8050ef..1e0daf9ea4ac 100644
--- a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3-kvm.c
+++ b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3-kvm.c
@@ -4,15 +4,78 @@
  *
  * Copyright (C) 2022 Linaro Ltd.
  */
+#include
 #include

 #include

 #include "arm-smmu-v3.h"

+struct host_arm_smmu_device {
+	struct arm_smmu_device		smmu;
+	pkvm_handle_t			id;
+};
+
+#define smmu_to_host(_smmu) \
+	container_of(_smmu, struct host_arm_smmu_device, smmu);
+
+static size_t				kvm_arm_smmu_cur;
+static size_t				kvm_arm_smmu_count;
+static struct hyp_arm_smmu_v3_device	*kvm_arm_smmu_array;
+
 static int kvm_arm_smmu_probe(struct platform_device *pdev)
 {
-	return -ENOSYS;
+	int ret;
+	bool bypass;
+	size_t size;
+	phys_addr_t ioaddr;
+	struct resource *res;
+	struct arm_smmu_device *smmu;
+	struct device *dev = &pdev->dev;
+	struct host_arm_smmu_device *host_smmu;
+	struct hyp_arm_smmu_v3_device *hyp_smmu;
+
+	if (kvm_arm_smmu_cur >= kvm_arm_smmu_count)
+		return -ENOSPC;
+
+	hyp_smmu = &kvm_arm_smmu_array[kvm_arm_smmu_cur];
+
+	host_smmu = devm_kzalloc(dev, sizeof(*host_smmu), GFP_KERNEL);
+	if (!host_smmu)
+		return -ENOMEM;
+
+	smmu = &host_smmu->smmu;
+	smmu->dev = dev;
+
+	ret = arm_smmu_fw_probe(pdev, smmu, &bypass);
+	if (ret || bypass)
+		return ret ?: -EINVAL;
+
+	res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
+	size = resource_size(res);
+	if (size < SZ_128K) {
+		dev_err(dev, "unsupported MMIO region size (%pr)\n", res);
+		return -EINVAL;
+	}
+	ioaddr = res->start;
+	host_smmu->id = kvm_arm_smmu_cur;
+
+	smmu->base = devm_ioremap_resource(dev, res);
+	if (IS_ERR(smmu->base))
+		return PTR_ERR(smmu->base);
+
+	ret = arm_smmu_device_hw_probe(smmu);
+	if (ret)
+		return ret;
+
+	platform_set_drvdata(pdev, host_smmu);
+
+	/* Hypervisor parameters */
+	hyp_smmu->mmio_addr = ioaddr;
+	hyp_smmu->mmio_size = size;
+	kvm_arm_smmu_cur++;
+
+	return 0;
 }

 static int kvm_arm_smmu_remove(struct platform_device *pdev)
@@ -33,6 +96,36 @@ static struct platform_driver kvm_arm_smmu_driver = {
 	.remove = kvm_arm_smmu_remove,
 };

+static int kvm_arm_smmu_array_alloc(void)
+{
+	int smmu_order;
+	struct device_node *np;
+
+	kvm_arm_smmu_count = 0;
+	for_each_compatible_node(np, NULL, "arm,smmu-v3")
+		kvm_arm_smmu_count++;
+
+	if (!kvm_arm_smmu_count)
+		return 0;
+
+	/* Allocate the parameter list shared with the hypervisor */
+	smmu_order = get_order(kvm_arm_smmu_count * sizeof(*kvm_arm_smmu_array));
+	kvm_arm_smmu_array = (void *)__get_free_pages(GFP_KERNEL | __GFP_ZERO,
+						      smmu_order);
+	if (!kvm_arm_smmu_array)
+		return -ENOMEM;
+
+	return 0;
+}
+
+static void kvm_arm_smmu_array_free(void)
+{
+	int order;
+
+	order = get_order(kvm_arm_smmu_count * sizeof(*kvm_arm_smmu_array));
+	free_pages((unsigned long)kvm_arm_smmu_array, order);
+}
+
 /**
  * kvm_arm_smmu_v3_init() - Reserve the SMMUv3 for KVM
  * @count: on success, number of SMMUs successfully initialized
@@ -44,12 +137,36 @@ int kvm_arm_smmu_v3_init(unsigned int *count)
 {
 	int ret;

+	/*
+	 * Check whether any device owned by the host is behind an SMMU.
+	 */
+	ret = kvm_arm_smmu_array_alloc();
+	*count = kvm_arm_smmu_count;
+	if (ret || !kvm_arm_smmu_count)
+		return ret;
+
 	ret = platform_driver_probe(&kvm_arm_smmu_driver, kvm_arm_smmu_probe);
 	if (ret)
-		return ret;
+		goto err_free;

-	*count = 0;
+	if (kvm_arm_smmu_cur != kvm_arm_smmu_count) {
+		/* A device exists but failed to probe */
+		ret = -EUNATCH;
+		goto err_free;
+	}
+
+	/*
+	 * These variables are stored in the nVHE image, and won't be
+	 * accessible after KVM initialization. Ownership of
+	 * kvm_arm_smmu_array will be transferred to the hypervisor as well.
+	 */
+	kvm_hyp_arm_smmu_v3_smmus = kern_hyp_va(kvm_arm_smmu_array);
+	kvm_hyp_arm_smmu_v3_count = kvm_arm_smmu_count;
 	return 0;
+
+err_free:
+	kvm_arm_smmu_array_free();
+	return ret;
 }

 void kvm_arm_smmu_v3_remove(void)

From patchwork Wed Feb 1 12:53:20 2023
From: Jean-Philippe Brucker
To: maz@kernel.org, catalin.marinas@arm.com, will@kernel.org, joro@8bytes.org
Cc: robin.murphy@arm.com, james.morse@arm.com, suzuki.poulose@arm.com,
 oliver.upton@linux.dev, yuzenghui@huawei.com, smostafa@google.com,
 dbrazdil@google.com, ryan.roberts@arm.com,
 linux-arm-kernel@lists.infradead.org, kvmarm@lists.linux.dev,
 iommu@lists.linux.dev, Jean-Philippe Brucker
Subject: [RFC PATCH 36/45] iommu/arm-smmu-v3-kvm: Validate device features
Date: Wed, 1 Feb 2023 12:53:20 +0000
Message-Id: <20230201125328.2186498-37-jean-philippe@linaro.org>
In-Reply-To: <20230201125328.2186498-1-jean-philippe@linaro.org>
References: <20230201125328.2186498-1-jean-philippe@linaro.org>

The KVM hypervisor driver supports a small subset of features. Ensure
the implementation is compatible, and disable some unused features.
Signed-off-by: Jean-Philippe Brucker
---
 .../iommu/arm/arm-smmu-v3/arm-smmu-v3-kvm.c   | 57 +++++++++++++++++++
 1 file changed, 57 insertions(+)

diff --git a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3-kvm.c b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3-kvm.c
index 1e0daf9ea4ac..2cc632f6b256 100644
--- a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3-kvm.c
+++ b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3-kvm.c
@@ -23,6 +23,59 @@ static size_t kvm_arm_smmu_cur;
 static size_t kvm_arm_smmu_count;
 static struct hyp_arm_smmu_v3_device *kvm_arm_smmu_array;

+static bool kvm_arm_smmu_validate_features(struct arm_smmu_device *smmu)
+{
+	unsigned long oas;
+	unsigned int required_features =
+		ARM_SMMU_FEAT_TRANS_S2 |
+		ARM_SMMU_FEAT_TT_LE;
+	unsigned int forbidden_features =
+		ARM_SMMU_FEAT_STALL_FORCE;
+	unsigned int keep_features =
+		ARM_SMMU_FEAT_2_LVL_STRTAB |
+		ARM_SMMU_FEAT_2_LVL_CDTAB |
+		ARM_SMMU_FEAT_TT_LE |
+		ARM_SMMU_FEAT_SEV |
+		ARM_SMMU_FEAT_COHERENCY |
+		ARM_SMMU_FEAT_TRANS_S1 |
+		ARM_SMMU_FEAT_TRANS_S2 |
+		ARM_SMMU_FEAT_VAX |
+		ARM_SMMU_FEAT_RANGE_INV;
+
+	if (smmu->options & ARM_SMMU_OPT_PAGE0_REGS_ONLY) {
+		dev_err(smmu->dev, "unsupported layout\n");
+		return false;
+	}
+
+	if ((smmu->features & required_features) != required_features) {
+		dev_err(smmu->dev, "missing features 0x%x\n",
+			required_features & ~smmu->features);
+		return false;
+	}
+
+	if (smmu->features & forbidden_features) {
+		dev_err(smmu->dev, "features 0x%x forbidden\n",
+			smmu->features & forbidden_features);
+		return false;
+	}
+
+	smmu->features &= keep_features;
+
+	/*
+	 * This can be relaxed (although the spec says that OAS "must match
+	 * the system physical address size."), but requires some changes. All
+	 * table and queue allocations must use GFP_DMA* to ensure the SMMU
+	 * can access them.
+	 */
+	oas = get_kvm_ipa_limit();
+	if (smmu->oas < oas) {
+		dev_err(smmu->dev, "incompatible address size\n");
+		return false;
+	}
+
+	return true;
+}
+
 static int kvm_arm_smmu_probe(struct platform_device *pdev)
 {
 	int ret;
@@ -68,11 +121,15 @@ static int kvm_arm_smmu_probe(struct platform_device *pdev)
 	if (ret)
 		return ret;

+	if (!kvm_arm_smmu_validate_features(smmu))
+		return -ENODEV;
+
 	platform_set_drvdata(pdev, host_smmu);

 	/* Hypervisor parameters */
 	hyp_smmu->mmio_addr = ioaddr;
 	hyp_smmu->mmio_size = size;
+	hyp_smmu->features = smmu->features;
 	kvm_arm_smmu_cur++;

 	return 0;

From patchwork Wed Feb 1 12:53:21 2023
F+oEQfTpiGeFywh4tjWmZWGpkvthGYxn/NyG3q8+EBL1WqJ1qRLxB6k6h+RTa7by3lTKcbFzCM+6H YKkdDCAN5IiOO+U5KSvg==; Received: from localhost ([::1] helo=bombadil.infradead.org) by bombadil.infradead.org with esmtp (Exim 4.94.2 #2 (Red Hat Linux)) id 1pNE9B-00CNOX-DE; Wed, 01 Feb 2023 14:31:39 +0000 Received: from desiato.infradead.org ([2001:8b0:10b:1:d65d:64ff:fe57:4e05]) by bombadil.infradead.org with esmtps (Exim 4.94.2 #2 (Red Hat Linux)) id 1pNDhr-00CBrE-Rn for linux-arm-kernel@bombadil.infradead.org; Wed, 01 Feb 2023 14:03:24 +0000 DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=infradead.org; s=desiato.20200630; h=Content-Transfer-Encoding:MIME-Version :References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Sender:Reply-To: Content-Type:Content-ID:Content-Description; bh=DAR858KRQvKPg9d8btbK56RFwJ6U3P1FOrADuTdQSiU=; b=C/9plR/am4yMi7SpfiLylPyjPq y0yUB2gCWu3W0w5O1X3xNYaO4I/QBfK6c8prb3q9K3xKl872/bFPaP06x4ML5p6c3UcX77zZc4lHM YSNFE5LIbeGwMTQgENUL8s3yqxsis3te+pfC84pNyWdNmCGH9CTsTpslO9E1iurPiFX4Iq7vMRtuE k5j7F/iq5kt0VszXxmHxeR4qC8KPzj7vd1i6h5OHr8ZGC723h59BL+OdoO068TPwrvPX3NFilusxR oASzCfzwoSQEM/BWFRbxXe1Vk0ETIO2OAXjdTlpuQIxtnhlMxM6Mkyh3i/kGsPOsRoNQzHxY8z0nW +vtyQGRw==; Received: from mail-wm1-f49.google.com ([209.85.128.49]) by desiato.infradead.org with esmtps (Exim 4.96 #2 (Red Hat Linux)) id 1pNChn-004m5C-2r for linux-arm-kernel@lists.infradead.org; Wed, 01 Feb 2023 12:59:23 +0000 Received: by mail-wm1-f49.google.com with SMTP id q10-20020a1cf30a000000b003db0edfdb74so2388196wmq.1 for ; Wed, 01 Feb 2023 04:59:49 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=linaro.org; s=google; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:from:to:cc:subject:date :message-id:reply-to; bh=DAR858KRQvKPg9d8btbK56RFwJ6U3P1FOrADuTdQSiU=; b=Ggyb3+Q97jwaq1nLUdEGCN/DmIa9T+TVIlNdQsa9leOlAfSvRNt1uXcyTN1GD+135G R77p3qJrpFcF9f5i8a0xPRilpSW6r6jSZeWNj5JfGc7ve7E9J0dCL9QoWw6rHLjBJhMA 
l+yjZVQR/gIfvs29AwC7zMLP2kmS34HW6UiaOlED2PTNOSPOcLJa65+ozh4XI3tOYbPh ybxXFnOfIvQ2gh3mm+WWi1Y5RYjUZY9QPb3v90eB/lCy2CilYg/1B9N+AZKXfR+eNB76 cmC8MG7IxBkKv/RY7OILPUPSV/AOHj4cLG9vAlzQqkHcuLxlwSO5PajbdTbxsob4IR/y QfLA== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20210112; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc :subject:date:message-id:reply-to; bh=DAR858KRQvKPg9d8btbK56RFwJ6U3P1FOrADuTdQSiU=; b=qgZNgUF2scGFho09SNVLY47Vg6rmmJqb5sjo7KgeK1eEw1FpE+CgLsuT6/g7qViP9S ZsSsQ0vxzi6y/xdTm7NkfiEEMg3+gS2VIo9Iz0pYdzQT3kuW5I/wuEb/CK64TZ+EoadT kcji+o8uLm1sd+ogjb09+G07TBreGYMcnEuVr1dHkUPhmdo8513qX3NeH5esOBJuao7U AbGL/hHh4r9fKN2cG8iIcp/DQGOJg30huq0nWgdCGaRvbmgKB6QblsO32F6+y/A/buUg 9XLfGEiOLqz+Eo0h3J5gI+8uih5Ny5uatP3o8ZFXgCwTrGZ+9KBLO4xx4Uz8YwP3Z9pG aS2Q== X-Gm-Message-State: AO0yUKUJ8oBA9pCgRgoLZiLNUGCu1LjdrPC3v7jlzMdw3UTmFvhc2GbJ rm9AqjD8T3EtfWBXh7/HoO+TfA== X-Google-Smtp-Source: AK7set+iGCoWOT606yNywkv2PHuH9O8/l5rQaiMO8pv3IURPxlu09QCKUnSrv4mv91hqJwJ7knQZYQ== X-Received: by 2002:a05:600c:4f07:b0:3da:fd06:a6f1 with SMTP id l7-20020a05600c4f0700b003dafd06a6f1mr1937037wmq.31.1675256388617; Wed, 01 Feb 2023 04:59:48 -0800 (PST) Received: from localhost.localdomain (054592b0.skybroadband.com. 
[5.69.146.176]) by smtp.gmail.com with ESMTPSA id m15-20020a056000024f00b002bfae16ee2fsm17972811wrz.111.2023.02.01.04.59.47 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Wed, 01 Feb 2023 04:59:48 -0800 (PST) From: Jean-Philippe Brucker To: maz@kernel.org, catalin.marinas@arm.com, will@kernel.org, joro@8bytes.org Cc: robin.murphy@arm.com, james.morse@arm.com, suzuki.poulose@arm.com, oliver.upton@linux.dev, yuzenghui@huawei.com, smostafa@google.com, dbrazdil@google.com, ryan.roberts@arm.com, linux-arm-kernel@lists.infradead.org, kvmarm@lists.linux.dev, iommu@lists.linux.dev, Jean-Philippe Brucker Subject: [RFC PATCH 37/45] iommu/arm-smmu-v3-kvm: Allocate structures and reset device Date: Wed, 1 Feb 2023 12:53:21 +0000 Message-Id: <20230201125328.2186498-38-jean-philippe@linaro.org> X-Mailer: git-send-email 2.39.0 In-Reply-To: <20230201125328.2186498-1-jean-philippe@linaro.org> References: <20230201125328.2186498-1-jean-philippe@linaro.org> MIME-Version: 1.0 X-CRM114-Version: 20100106-BlameMichelson ( TRE 0.8.0 (BSD) ) MR-646709E3 X-CRM114-CacheID: sfid-20230201_125916_278883_4BE583F2 X-CRM114-Status: GOOD ( 14.94 ) X-BeenThere: linux-arm-kernel@lists.infradead.org X-Mailman-Version: 2.1.34 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Sender: "linux-arm-kernel" Errors-To: linux-arm-kernel-bounces+linux-arm-kernel=archiver.kernel.org@lists.infradead.org Allocate the structures that will be shared between hypervisor and SMMU: command queue and stream table. Install them in the MMIO registers, along with some configuration bits. After hyp initialization, the host won't have access to those pages anymore. 
Signed-off-by: Jean-Philippe Brucker
---
 .../iommu/arm/arm-smmu-v3/arm-smmu-v3-kvm.c | 56 +++++++++++++++++++
 1 file changed, 56 insertions(+)

diff --git a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3-kvm.c b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3-kvm.c
index 2cc632f6b256..8808890f4dc0 100644
--- a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3-kvm.c
+++ b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3-kvm.c
@@ -14,6 +14,7 @@ struct host_arm_smmu_device {
 	struct arm_smmu_device smmu;
 	pkvm_handle_t id;
+	u32 boot_gbpa;
 };
 
 #define smmu_to_host(_smmu) \
@@ -76,6 +77,38 @@ static bool kvm_arm_smmu_validate_features(struct arm_smmu_device *smmu)
 	return true;
 }
 
+static int kvm_arm_smmu_device_reset(struct host_arm_smmu_device *host_smmu)
+{
+	int ret;
+	u32 reg;
+	struct arm_smmu_device *smmu = &host_smmu->smmu;
+
+	reg = readl_relaxed(smmu->base + ARM_SMMU_CR0);
+	if (reg & CR0_SMMUEN)
+		dev_warn(smmu->dev, "SMMU currently enabled! Resetting...\n");
+
+	/* Disable bypass */
+	host_smmu->boot_gbpa = readl_relaxed(smmu->base + ARM_SMMU_GBPA);
+	ret = arm_smmu_update_gbpa(smmu, GBPA_ABORT, 0);
+	if (ret)
+		return ret;
+
+	ret = arm_smmu_device_disable(smmu);
+	if (ret)
+		return ret;
+
+	/* Stream table */
+	writeq_relaxed(smmu->strtab_cfg.strtab_base,
+		       smmu->base + ARM_SMMU_STRTAB_BASE);
+	writel_relaxed(smmu->strtab_cfg.strtab_base_cfg,
+		       smmu->base + ARM_SMMU_STRTAB_BASE_CFG);
+
+	/* Command queue */
+	writeq_relaxed(smmu->cmdq.q.q_base, smmu->base + ARM_SMMU_CMDQ_BASE);
+
+	return 0;
+}
+
 static int kvm_arm_smmu_probe(struct platform_device *pdev)
 {
 	int ret;
@@ -124,6 +157,20 @@ static int kvm_arm_smmu_probe(struct platform_device *pdev)
 	if (!kvm_arm_smmu_validate_features(smmu))
 		return -ENODEV;
 
+	ret = arm_smmu_init_one_queue(smmu, &smmu->cmdq.q, smmu->base,
+				      ARM_SMMU_CMDQ_PROD, ARM_SMMU_CMDQ_CONS,
+				      CMDQ_ENT_DWORDS, "cmdq");
+	if (ret)
+		return ret;
+
+	ret = arm_smmu_init_strtab(smmu);
+	if (ret)
+		return ret;
+
+	ret = kvm_arm_smmu_device_reset(host_smmu);
+	if (ret)
+		return ret;
+
 	platform_set_drvdata(pdev, host_smmu);
 
 	/* Hypervisor parameters */
@@ -137,6 +184,15 @@ static int kvm_arm_smmu_probe(struct platform_device *pdev)
 
 static int kvm_arm_smmu_remove(struct platform_device *pdev)
 {
+	struct host_arm_smmu_device *host_smmu = platform_get_drvdata(pdev);
+	struct arm_smmu_device *smmu = &host_smmu->smmu;
+
+	/*
+	 * There was an error during hypervisor setup. The hyp driver may
+	 * have already enabled the device, so disable it.
+	 */
+	arm_smmu_device_disable(smmu);
+	arm_smmu_update_gbpa(smmu, host_smmu->boot_gbpa, GBPA_ABORT);
 	return 0;
 }

From patchwork Wed Feb 1 12:53:22 2023
X-Patchwork-Submitter: Jean-Philippe Brucker
X-Patchwork-Id: 13124281
From: Jean-Philippe Brucker
To: maz@kernel.org, catalin.marinas@arm.com, will@kernel.org, joro@8bytes.org
Cc: robin.murphy@arm.com, james.morse@arm.com, suzuki.poulose@arm.com, oliver.upton@linux.dev, yuzenghui@huawei.com, smostafa@google.com, dbrazdil@google.com, ryan.roberts@arm.com, linux-arm-kernel@lists.infradead.org, kvmarm@lists.linux.dev, iommu@lists.linux.dev, Jean-Philippe Brucker
Subject: [RFC PATCH 38/45] iommu/arm-smmu-v3-kvm: Add per-cpu page queue
Date: Wed, 1 Feb 2023 12:53:22 +0000
Message-Id: <20230201125328.2186498-39-jean-philippe@linaro.org>
In-Reply-To: <20230201125328.2186498-1-jean-philippe@linaro.org>
References: <20230201125328.2186498-1-jean-philippe@linaro.org>

Allocate page queues shared with the hypervisor for page donation and
reclaim. A local_lock ensures that only one thread fills the queue during
a hypercall.
Signed-off-by: Jean-Philippe Brucker
---
 .../iommu/arm/arm-smmu-v3/arm-smmu-v3-kvm.c | 93 ++++++++++++++++++-
 1 file changed, 92 insertions(+), 1 deletion(-)

diff --git a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3-kvm.c b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3-kvm.c
index 8808890f4dc0..755c77bc0417 100644
--- a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3-kvm.c
+++ b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3-kvm.c
@@ -5,6 +5,7 @@
  * Copyright (C) 2022 Linaro Ltd.
  */
 #include
+#include
 #include
 #include
@@ -23,6 +24,81 @@ struct host_arm_smmu_device {
 static size_t kvm_arm_smmu_cur;
 static size_t kvm_arm_smmu_count;
 static struct hyp_arm_smmu_v3_device *kvm_arm_smmu_array;
+static struct kvm_hyp_iommu_memcache *kvm_arm_smmu_memcache;
+
+static DEFINE_PER_CPU(local_lock_t, memcache_lock) =
+	INIT_LOCAL_LOCK(memcache_lock);
+
+static void *kvm_arm_smmu_alloc_page(void *opaque)
+{
+	struct arm_smmu_device *smmu = opaque;
+	struct page *p;
+
+	p = alloc_pages_node(dev_to_node(smmu->dev), GFP_ATOMIC, 0);
+	if (!p)
+		return NULL;
+
+	return page_address(p);
+}
+
+static void kvm_arm_smmu_free_page(void *va, void *opaque)
+{
+	free_page((unsigned long)va);
+}
+
+static phys_addr_t kvm_arm_smmu_host_pa(void *va)
+{
+	return __pa(va);
+}
+
+static void *kvm_arm_smmu_host_va(phys_addr_t pa)
+{
+	return __va(pa);
+}
+
+__maybe_unused
+static int kvm_arm_smmu_topup_memcache(struct arm_smmu_device *smmu)
+{
+	struct kvm_hyp_memcache *mc;
+	int cpu = raw_smp_processor_id();
+
+	lockdep_assert_held(this_cpu_ptr(&memcache_lock));
+	mc = &kvm_arm_smmu_memcache[cpu].pages;
+
+	if (!kvm_arm_smmu_memcache[cpu].needs_page)
+		return -EBADE;
+
+	kvm_arm_smmu_memcache[cpu].needs_page = false;
+	return __topup_hyp_memcache(mc, 1, kvm_arm_smmu_alloc_page,
+				    kvm_arm_smmu_host_pa, smmu);
+}
+
+__maybe_unused
+static void kvm_arm_smmu_reclaim_memcache(void)
+{
+	struct kvm_hyp_memcache *mc;
+	int cpu = raw_smp_processor_id();
+
+	lockdep_assert_held(this_cpu_ptr(&memcache_lock));
+	mc = &kvm_arm_smmu_memcache[cpu].pages;
+
+	__free_hyp_memcache(mc, kvm_arm_smmu_free_page,
+			    kvm_arm_smmu_host_va, NULL);
+}
+
+/*
+ * Issue hypercall, and retry after filling the memcache if necessary.
+ * After the call, reclaim pages pushed in the memcache by the hypervisor.
+ */
+#define kvm_call_hyp_nvhe_mc(smmu, ...)					\
+({									\
+	int __ret;							\
+	do {								\
+		__ret = kvm_call_hyp_nvhe(__VA_ARGS__);			\
+	} while (__ret && !kvm_arm_smmu_topup_memcache(smmu));		\
+	kvm_arm_smmu_reclaim_memcache();				\
+	__ret;								\
+})
 
 static bool kvm_arm_smmu_validate_features(struct arm_smmu_device *smmu)
 {
@@ -211,7 +287,7 @@ static struct platform_driver kvm_arm_smmu_driver = {
 
 static int kvm_arm_smmu_array_alloc(void)
 {
-	int smmu_order;
+	int smmu_order, mc_order;
 	struct device_node *np;
 
 	kvm_arm_smmu_count = 0;
@@ -228,7 +304,17 @@ static int kvm_arm_smmu_array_alloc(void)
 	if (!kvm_arm_smmu_array)
 		return -ENOMEM;
 
+	mc_order = get_order(NR_CPUS * sizeof(*kvm_arm_smmu_memcache));
+	kvm_arm_smmu_memcache = (void *)__get_free_pages(GFP_KERNEL | __GFP_ZERO,
+							 mc_order);
+	if (!kvm_arm_smmu_memcache)
+		goto err_free_array;
+
 	return 0;
+
+err_free_array:
+	free_pages((unsigned long)kvm_arm_smmu_array, smmu_order);
+	return -ENOMEM;
 }
 
 static void kvm_arm_smmu_array_free(void)
@@ -237,6 +323,8 @@ static void kvm_arm_smmu_array_free(void)
 
 	order = get_order(kvm_arm_smmu_count * sizeof(*kvm_arm_smmu_array));
 	free_pages((unsigned long)kvm_arm_smmu_array, order);
+	order = get_order(NR_CPUS * sizeof(*kvm_arm_smmu_memcache));
+	free_pages((unsigned long)kvm_arm_smmu_memcache, order);
 }
 
 /**
@@ -272,9 +360,12 @@ int kvm_arm_smmu_v3_init(unsigned int *count)
 	 * These variables are stored in the nVHE image, and won't be accessible
 	 * after KVM initialization. Ownership of kvm_arm_smmu_array will be
 	 * transferred to the hypervisor as well.
+	 *
+	 * kvm_arm_smmu_memcache is shared between hypervisor and host.
 	 */
 	kvm_hyp_arm_smmu_v3_smmus = kern_hyp_va(kvm_arm_smmu_array);
 	kvm_hyp_arm_smmu_v3_count = kvm_arm_smmu_count;
+	kvm_hyp_iommu_memcaches = kern_hyp_va(kvm_arm_smmu_memcache);
 
 	return 0;
 
 err_free:

From patchwork Wed Feb 1 12:53:23 2023
X-Patchwork-Submitter: Jean-Philippe Brucker
X-Patchwork-Id: 13124400
From: Jean-Philippe Brucker
To: maz@kernel.org, catalin.marinas@arm.com, will@kernel.org, joro@8bytes.org
Cc: robin.murphy@arm.com, james.morse@arm.com, suzuki.poulose@arm.com, oliver.upton@linux.dev, yuzenghui@huawei.com, smostafa@google.com, dbrazdil@google.com, ryan.roberts@arm.com, linux-arm-kernel@lists.infradead.org, kvmarm@lists.linux.dev, iommu@lists.linux.dev, Jean-Philippe Brucker
Subject: [RFC PATCH 39/45] iommu/arm-smmu-v3-kvm: Initialize page table configuration
Date: Wed, 1 Feb 2023 12:53:23 +0000
Message-Id: <20230201125328.2186498-40-jean-philippe@linaro.org>
In-Reply-To: <20230201125328.2186498-1-jean-philippe@linaro.org>
References: <20230201125328.2186498-1-jean-philippe@linaro.org>

Prepare the stage-2 I/O page table configuration that will be used by
the hypervisor driver.
Signed-off-by: Jean-Philippe Brucker
---
 .../iommu/arm/arm-smmu-v3/arm-smmu-v3-kvm.c | 29 +++++++++++++++++++
 1 file changed, 29 insertions(+)

diff --git a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3-kvm.c b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3-kvm.c
index 755c77bc0417..55489d56fb5b 100644
--- a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3-kvm.c
+++ b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3-kvm.c
@@ -16,6 +16,7 @@ struct host_arm_smmu_device {
 	struct arm_smmu_device smmu;
 	pkvm_handle_t id;
 	u32 boot_gbpa;
+	unsigned int pgd_order;
 };
 
 #define smmu_to_host(_smmu) \
@@ -192,6 +193,7 @@ static int kvm_arm_smmu_probe(struct platform_device *pdev)
 	size_t size;
 	phys_addr_t ioaddr;
 	struct resource *res;
+	struct io_pgtable_cfg cfg;
 	struct arm_smmu_device *smmu;
 	struct device *dev = &pdev->dev;
 	struct host_arm_smmu_device *host_smmu;
@@ -233,6 +235,31 @@ static int kvm_arm_smmu_probe(struct platform_device *pdev)
 	if (!kvm_arm_smmu_validate_features(smmu))
 		return -ENODEV;
 
+	/*
+	 * Stage-1 should be easy to support, though we do need to allocate a
+	 * context descriptor table.
+	 */
+	cfg = (struct io_pgtable_cfg) {
+		.fmt = ARM_64_LPAE_S2,
+		.pgsize_bitmap = smmu->pgsize_bitmap,
+		.ias = smmu->ias,
+		.oas = smmu->oas,
+		.coherent_walk = smmu->features & ARM_SMMU_FEAT_COHERENCY,
+	};
+
+	/*
+	 * Choose the page and address size. Compute the PGD size and number of
+	 * levels as well, so we know how much memory to pre-allocate.
+	 */
+	ret = io_pgtable_configure(&cfg, &size);
+	if (ret)
+		return ret;
+
+	host_smmu->pgd_order = get_order(size);
+	smmu->pgsize_bitmap = cfg.pgsize_bitmap;
+	smmu->ias = cfg.ias;
+	smmu->oas = cfg.oas;
+
 	ret = arm_smmu_init_one_queue(smmu, &smmu->cmdq.q, smmu->base,
 				      ARM_SMMU_CMDQ_PROD, ARM_SMMU_CMDQ_CONS,
 				      CMDQ_ENT_DWORDS, "cmdq");
@@ -253,6 +280,8 @@ static int kvm_arm_smmu_probe(struct platform_device *pdev)
 	hyp_smmu->mmio_addr = ioaddr;
 	hyp_smmu->mmio_size = size;
 	hyp_smmu->features = smmu->features;
+	hyp_smmu->iommu.pgtable_cfg = cfg;
+
 	kvm_arm_smmu_cur++;
 
 	return 0;

From patchwork Wed Feb 1 12:53:24 2023
X-Patchwork-Submitter: Jean-Philippe Brucker
X-Patchwork-Id: 13124404
From: Jean-Philippe Brucker
To: maz@kernel.org, catalin.marinas@arm.com, will@kernel.org, joro@8bytes.org
Cc: robin.murphy@arm.com, james.morse@arm.com, suzuki.poulose@arm.com, oliver.upton@linux.dev, yuzenghui@huawei.com, smostafa@google.com, dbrazdil@google.com, ryan.roberts@arm.com, linux-arm-kernel@lists.infradead.org, kvmarm@lists.linux.dev, iommu@lists.linux.dev, Jean-Philippe Brucker
Subject: [RFC PATCH 40/45] iommu/arm-smmu-v3-kvm: Add IOMMU ops
Date: Wed, 1 Feb 2023 12:53:24 +0000
Message-Id: <20230201125328.2186498-41-jean-philippe@linaro.org>
In-Reply-To: <20230201125328.2186498-1-jean-philippe@linaro.org>
References: <20230201125328.2186498-1-jean-philippe@linaro.org>

Forward alloc_domain(), attach_dev(), map_pages(), etc. to the
hypervisor.
Signed-off-by: Jean-Philippe Brucker
---
 .../iommu/arm/arm-smmu-v3/arm-smmu-v3-kvm.c   | 330 +++++++++++++++++-
 1 file changed, 328 insertions(+), 2 deletions(-)

diff --git a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3-kvm.c b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3-kvm.c
index 55489d56fb5b..930d78f6e29f 100644
--- a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3-kvm.c
+++ b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3-kvm.c
@@ -22,10 +22,28 @@ struct host_arm_smmu_device {
 #define smmu_to_host(_smmu) \
 	container_of(_smmu, struct host_arm_smmu_device, smmu);
 
+struct kvm_arm_smmu_master {
+	struct arm_smmu_device		*smmu;
+	struct device			*dev;
+	struct kvm_arm_smmu_domain	*domain;
+};
+
+struct kvm_arm_smmu_domain {
+	struct iommu_domain		domain;
+	struct arm_smmu_device		*smmu;
+	struct mutex			init_mutex;
+	unsigned long			pgd;
+	pkvm_handle_t			id;
+};
+
+#define to_kvm_smmu_domain(_domain) \
+	container_of(_domain, struct kvm_arm_smmu_domain, domain)
+
 static size_t kvm_arm_smmu_cur;
 static size_t kvm_arm_smmu_count;
 static struct hyp_arm_smmu_v3_device *kvm_arm_smmu_array;
 static struct kvm_hyp_iommu_memcache *kvm_arm_smmu_memcache;
+static DEFINE_IDA(kvm_arm_smmu_domain_ida);
 
 static DEFINE_PER_CPU(local_lock_t, memcache_lock) =
 				INIT_LOCAL_LOCK(memcache_lock);
@@ -57,7 +75,6 @@ static void *kvm_arm_smmu_host_va(phys_addr_t pa)
 	return __va(pa);
 }
 
-__maybe_unused
 static int kvm_arm_smmu_topup_memcache(struct arm_smmu_device *smmu)
 {
 	struct kvm_hyp_memcache *mc;
@@ -74,7 +91,6 @@ static int kvm_arm_smmu_topup_memcache(struct arm_smmu_device *smmu)
 				     kvm_arm_smmu_host_pa, smmu);
 }
 
-__maybe_unused
 static void kvm_arm_smmu_reclaim_memcache(void)
 {
 	struct kvm_hyp_memcache *mc;
@@ -101,6 +117,299 @@ static void kvm_arm_smmu_reclaim_memcache(void)
 	__ret;							\
 })
 
+static struct platform_driver kvm_arm_smmu_driver;
+
+static struct arm_smmu_device *
+kvm_arm_smmu_get_by_fwnode(struct fwnode_handle *fwnode)
+{
+	struct device *dev;
+
+	dev = driver_find_device_by_fwnode(&kvm_arm_smmu_driver.driver, fwnode);
+	put_device(dev);
+	return dev ? dev_get_drvdata(dev) : NULL;
+}
+
+static struct iommu_ops kvm_arm_smmu_ops;
+
+static struct iommu_device *kvm_arm_smmu_probe_device(struct device *dev)
+{
+	struct arm_smmu_device *smmu;
+	struct kvm_arm_smmu_master *master;
+	struct iommu_fwspec *fwspec = dev_iommu_fwspec_get(dev);
+
+	if (!fwspec || fwspec->ops != &kvm_arm_smmu_ops)
+		return ERR_PTR(-ENODEV);
+
+	if (WARN_ON_ONCE(dev_iommu_priv_get(dev)))
+		return ERR_PTR(-EBUSY);
+
+	smmu = kvm_arm_smmu_get_by_fwnode(fwspec->iommu_fwnode);
+	if (!smmu)
+		return ERR_PTR(-ENODEV);
+
+	master = kzalloc(sizeof(*master), GFP_KERNEL);
+	if (!master)
+		return ERR_PTR(-ENOMEM);
+
+	master->dev = dev;
+	master->smmu = smmu;
+	dev_iommu_priv_set(dev, master);
+
+	return &smmu->iommu;
+}
+
+static void kvm_arm_smmu_release_device(struct device *dev)
+{
+	struct kvm_arm_smmu_master *master = dev_iommu_priv_get(dev);
+
+	kfree(master);
+	iommu_fwspec_free(dev);
+}
+
+static struct iommu_domain *kvm_arm_smmu_domain_alloc(unsigned type)
+{
+	struct kvm_arm_smmu_domain *kvm_smmu_domain;
+
+	/*
+	 * We don't support
+	 * - IOMMU_DOMAIN_IDENTITY because we rely on the host telling the
+	 *   hypervisor which pages are used for DMA.
+	 * - IOMMU_DOMAIN_DMA_FQ because lazy unmap would clash with memory
+	 *   donation to guests.
+	 */
+	if (type != IOMMU_DOMAIN_DMA &&
+	    type != IOMMU_DOMAIN_UNMANAGED)
+		return NULL;
+
+	kvm_smmu_domain = kzalloc(sizeof(*kvm_smmu_domain), GFP_KERNEL);
+	if (!kvm_smmu_domain)
+		return NULL;
+
+	mutex_init(&kvm_smmu_domain->init_mutex);
+
+	return &kvm_smmu_domain->domain;
+}
+
+static int kvm_arm_smmu_domain_finalize(struct kvm_arm_smmu_domain *kvm_smmu_domain,
+					struct kvm_arm_smmu_master *master)
+{
+	int ret = 0;
+	struct page *p;
+	unsigned long pgd;
+	struct arm_smmu_device *smmu = master->smmu;
+	struct host_arm_smmu_device *host_smmu = smmu_to_host(smmu);
+
+	if (kvm_smmu_domain->smmu) {
+		if (kvm_smmu_domain->smmu != smmu)
+			return -EINVAL;
+		return 0;
+	}
+
+	ret = ida_alloc_range(&kvm_arm_smmu_domain_ida, 0, 1 << smmu->vmid_bits,
+			      GFP_KERNEL);
+	if (ret < 0)
+		return ret;
+	kvm_smmu_domain->id = ret;
+
+	/*
+	 * PGD allocation does not use the memcache because it may be of higher
+	 * order when concatenated.
+	 */
+	p = alloc_pages_node(dev_to_node(smmu->dev), GFP_KERNEL | __GFP_ZERO,
+			     host_smmu->pgd_order);
+	if (!p)
+		return -ENOMEM;
+
+	pgd = (unsigned long)page_to_virt(p);
+
+	local_lock_irq(&memcache_lock);
+	ret = kvm_call_hyp_nvhe_mc(smmu, __pkvm_host_iommu_alloc_domain,
+				   host_smmu->id, kvm_smmu_domain->id, pgd);
+	local_unlock_irq(&memcache_lock);
+	if (ret)
+		goto err_free;
+
+	kvm_smmu_domain->domain.pgsize_bitmap = smmu->pgsize_bitmap;
+	kvm_smmu_domain->domain.geometry.aperture_end = (1UL << smmu->ias) - 1;
+	kvm_smmu_domain->domain.geometry.force_aperture = true;
+	kvm_smmu_domain->smmu = smmu;
+	kvm_smmu_domain->pgd = pgd;
+
+	return 0;
+
+err_free:
+	free_pages(pgd, host_smmu->pgd_order);
+	ida_free(&kvm_arm_smmu_domain_ida, kvm_smmu_domain->id);
+	return ret;
+}
+
+static void kvm_arm_smmu_domain_free(struct iommu_domain *domain)
+{
+	int ret;
+	struct kvm_arm_smmu_domain *kvm_smmu_domain = to_kvm_smmu_domain(domain);
+	struct arm_smmu_device *smmu = kvm_smmu_domain->smmu;
+
+	if (smmu) {
+		struct host_arm_smmu_device *host_smmu = smmu_to_host(smmu);
+
+		ret = kvm_call_hyp_nvhe(__pkvm_host_iommu_free_domain,
+					host_smmu->id, kvm_smmu_domain->id);
+		/*
+		 * On failure, leak the pgd because it probably hasn't been
+		 * reclaimed by the host.
+		 */
+		if (!WARN_ON(ret))
+			free_pages(kvm_smmu_domain->pgd, host_smmu->pgd_order);
+		ida_free(&kvm_arm_smmu_domain_ida, kvm_smmu_domain->id);
+	}
+	kfree(kvm_smmu_domain);
+}
+
+static int kvm_arm_smmu_detach_dev(struct host_arm_smmu_device *host_smmu,
+				   struct kvm_arm_smmu_master *master)
+{
+	int i, ret;
+	struct arm_smmu_device *smmu = &host_smmu->smmu;
+	struct iommu_fwspec *fwspec = dev_iommu_fwspec_get(master->dev);
+
+	if (!master->domain)
+		return 0;
+
+	for (i = 0; i < fwspec->num_ids; i++) {
+		int sid = fwspec->ids[i];
+
+		ret = kvm_call_hyp_nvhe(__pkvm_host_iommu_detach_dev,
+					host_smmu->id, master->domain->id, sid);
+		if (ret) {
+			dev_err(smmu->dev, "cannot detach device %s (0x%x): %d\n",
+				dev_name(master->dev), sid, ret);
+			break;
+		}
+	}
+
+	master->domain = NULL;
+
+	return ret;
+}
+
+static int kvm_arm_smmu_attach_dev(struct iommu_domain *domain,
+				   struct device *dev)
+{
+	int i, ret;
+	struct arm_smmu_device *smmu;
+	struct host_arm_smmu_device *host_smmu;
+	struct iommu_fwspec *fwspec = dev_iommu_fwspec_get(dev);
+	struct kvm_arm_smmu_master *master = dev_iommu_priv_get(dev);
+	struct kvm_arm_smmu_domain *kvm_smmu_domain = to_kvm_smmu_domain(domain);
+
+	if (!master)
+		return -ENODEV;
+
+	smmu = master->smmu;
+	host_smmu = smmu_to_host(smmu);
+
+	ret = kvm_arm_smmu_detach_dev(host_smmu, master);
+	if (ret)
+		return ret;
+
+	mutex_lock(&kvm_smmu_domain->init_mutex);
+	ret = kvm_arm_smmu_domain_finalize(kvm_smmu_domain, master);
+	mutex_unlock(&kvm_smmu_domain->init_mutex);
+	if (ret)
+		return ret;
+
+	local_lock_irq(&memcache_lock);
+	for (i = 0; i < fwspec->num_ids; i++) {
+		int sid = fwspec->ids[i];
+
+		ret = kvm_call_hyp_nvhe_mc(smmu, __pkvm_host_iommu_attach_dev,
+					   host_smmu->id, kvm_smmu_domain->id,
+					   sid);
+		if (ret) {
+			dev_err(smmu->dev, "cannot attach device %s (0x%x): %d\n",
+				dev_name(dev), sid, ret);
+			goto out_unlock;
+		}
+	}
+	master->domain = kvm_smmu_domain;
+
+out_unlock:
+	if (ret)
+		kvm_arm_smmu_detach_dev(host_smmu, master);
+	local_unlock_irq(&memcache_lock);
+	return ret;
+}
+
+static int kvm_arm_smmu_map_pages(struct iommu_domain *domain,
+				  unsigned long iova, phys_addr_t paddr,
+				  size_t pgsize, size_t pgcount, int prot,
+				  gfp_t gfp, size_t *mapped)
+{
+	int ret;
+	unsigned long irqflags;
+	struct kvm_arm_smmu_domain *kvm_smmu_domain = to_kvm_smmu_domain(domain);
+	struct arm_smmu_device *smmu = kvm_smmu_domain->smmu;
+	struct host_arm_smmu_device *host_smmu = smmu_to_host(smmu);
+
+	local_lock_irqsave(&memcache_lock, irqflags);
+	ret = kvm_call_hyp_nvhe_mc(smmu, __pkvm_host_iommu_map_pages,
+				   host_smmu->id, kvm_smmu_domain->id, iova,
+				   paddr, pgsize, pgcount, prot);
+	local_unlock_irqrestore(&memcache_lock, irqflags);
+	if (ret)
+		return ret;
+
+	*mapped = pgsize * pgcount;
+	return 0;
+}
+
+static size_t kvm_arm_smmu_unmap_pages(struct iommu_domain *domain,
+				       unsigned long iova, size_t pgsize,
+				       size_t pgcount,
+				       struct iommu_iotlb_gather *iotlb_gather)
+{
+	int ret;
+	unsigned long irqflags;
+	struct kvm_arm_smmu_domain *kvm_smmu_domain = to_kvm_smmu_domain(domain);
+	struct arm_smmu_device *smmu = kvm_smmu_domain->smmu;
+	struct host_arm_smmu_device *host_smmu = smmu_to_host(smmu);
+
+	local_lock_irqsave(&memcache_lock, irqflags);
+	ret = kvm_call_hyp_nvhe_mc(smmu, __pkvm_host_iommu_unmap_pages,
+				   host_smmu->id, kvm_smmu_domain->id, iova,
+				   pgsize, pgcount);
+	local_unlock_irqrestore(&memcache_lock, irqflags);
+
+	return ret ? 0 : pgsize * pgcount;
+}
+
+static phys_addr_t kvm_arm_smmu_iova_to_phys(struct iommu_domain *domain,
+					     dma_addr_t iova)
+{
+	struct kvm_arm_smmu_domain *kvm_smmu_domain = to_kvm_smmu_domain(domain);
+	struct host_arm_smmu_device *host_smmu = smmu_to_host(kvm_smmu_domain->smmu);
+
+	return kvm_call_hyp_nvhe(__pkvm_host_iommu_iova_to_phys, host_smmu->id,
+				 kvm_smmu_domain->id, iova);
+}
+
+static struct iommu_ops kvm_arm_smmu_ops = {
+	.capable		= arm_smmu_capable,
+	.device_group		= arm_smmu_device_group,
+	.of_xlate		= arm_smmu_of_xlate,
+	.probe_device		= kvm_arm_smmu_probe_device,
+	.release_device		= kvm_arm_smmu_release_device,
+	.domain_alloc		= kvm_arm_smmu_domain_alloc,
+	.owner			= THIS_MODULE,
+	.default_domain_ops = &(const struct iommu_domain_ops) {
+		.attach_dev	= kvm_arm_smmu_attach_dev,
+		.free		= kvm_arm_smmu_domain_free,
+		.map_pages	= kvm_arm_smmu_map_pages,
+		.unmap_pages	= kvm_arm_smmu_unmap_pages,
+		.iova_to_phys	= kvm_arm_smmu_iova_to_phys,
+	}
+};
+
 static bool kvm_arm_smmu_validate_features(struct arm_smmu_device *smmu)
 {
 	unsigned long oas;
@@ -186,6 +495,12 @@ static int kvm_arm_smmu_device_reset(struct host_arm_smmu_device *host_smmu)
 	return 0;
 }
 
+static void *kvm_arm_smmu_alloc_domains(struct arm_smmu_device *smmu)
+{
+	return (void *)devm_get_free_pages(smmu->dev, GFP_KERNEL | __GFP_ZERO,
+					   get_order(KVM_IOMMU_DOMAINS_ROOT_SIZE));
+}
+
 static int kvm_arm_smmu_probe(struct platform_device *pdev)
 {
 	int ret;
@@ -274,6 +589,16 @@ static int kvm_arm_smmu_probe(struct platform_device *pdev)
 	if (ret)
 		return ret;
 
+	hyp_smmu->iommu.domains = kvm_arm_smmu_alloc_domains(smmu);
+	if (!hyp_smmu->iommu.domains)
+		return -ENOMEM;
+
+	hyp_smmu->iommu.nr_domains = 1 << smmu->vmid_bits;
+
+	ret = arm_smmu_register_iommu(smmu, &kvm_arm_smmu_ops, ioaddr);
+	if (ret)
+		return ret;
+
 	platform_set_drvdata(pdev, host_smmu);
 
 	/* Hypervisor parameters */
@@ -296,6 +621,7 @@ static int kvm_arm_smmu_remove(struct platform_device *pdev)
 	 * There was an error during hypervisor setup. The hyp driver may
 	 * have already enabled the device, so disable it.
 	 */
+	arm_smmu_unregister_iommu(smmu);
 	arm_smmu_device_disable(smmu);
 	arm_smmu_update_gbpa(smmu, host_smmu->boot_gbpa, GBPA_ABORT);
 	return 0;

From patchwork Wed Feb 1 12:53:25 2023
X-Patchwork-Submitter: Jean-Philippe Brucker
X-Patchwork-Id: 13124401
From: Jean-Philippe Brucker
To: maz@kernel.org, catalin.marinas@arm.com, will@kernel.org, joro@8bytes.org
Cc: robin.murphy@arm.com, james.morse@arm.com, suzuki.poulose@arm.com, oliver.upton@linux.dev, yuzenghui@huawei.com, smostafa@google.com, dbrazdil@google.com, ryan.roberts@arm.com, linux-arm-kernel@lists.infradead.org, kvmarm@lists.linux.dev, iommu@lists.linux.dev, Jean-Philippe Brucker
Subject: [RFC PATCH 41/45] KVM: arm64: pkvm: Add __pkvm_host_add_remove_page()
Date: Wed, 1 Feb 2023 12:53:25 +0000
Message-Id: <20230201125328.2186498-42-jean-philippe@linaro.org>
In-Reply-To: <20230201125328.2186498-1-jean-philippe@linaro.org>
References: <20230201125328.2186498-1-jean-philippe@linaro.org>

Add a small helper to remove and add back a page from the host stage-2. This will be used to temporarily unmap a piece of shared sram (device memory) from the host while we handle a SCMI request, preventing the host from modifying the request after it is verified.
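The unmap/handle/remap bracket this enables can be sketched with a toy ownership model. This is an illustration only: `host_add_remove_page()` and `handle_verified_request()` are made-up stand-ins modelling the intent of __pkvm_host_add_remove_page(), not the hypervisor code itself.

```c
#include <assert.h>
#include <stdbool.h>

/* Toy model: who currently owns the shared page in the host stage-2. */
enum owner { OWNER_HOST, OWNER_HYP };
static enum owner page_owner = OWNER_HOST;

/* Hypothetical model of __pkvm_host_add_remove_page(): remove the page
 * from the host stage-2 (hand it to the hypervisor) or add it back. */
static int host_add_remove_page(bool remove)
{
	page_owner = remove ? OWNER_HYP : OWNER_HOST;
	return 0;
}

/* Sample request handler: succeeds only if the host has been unmapped,
 * i.e. it cannot rewrite the message under our feet. */
static int sample_request(void)
{
	return page_owner == OWNER_HYP ? 0 : -1;
}

/* Bracket request handling with the helper, as the SCMI path in the
 * next patch does, so the host cannot modify the request in flight. */
static int handle_verified_request(int (*handle)(void))
{
	int ret;

	if (host_add_remove_page(true))
		return -1;
	ret = handle();
	host_add_remove_page(false);
	return ret;
}
```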
Signed-off-by: Jean-Philippe Brucker
---
 arch/arm64/kvm/hyp/include/nvhe/mem_protect.h |  1 +
 arch/arm64/kvm/hyp/nvhe/mem_protect.c         | 17 +++++++++++++++++
 2 files changed, 18 insertions(+)

diff --git a/arch/arm64/kvm/hyp/include/nvhe/mem_protect.h b/arch/arm64/kvm/hyp/include/nvhe/mem_protect.h
index a363d58a998b..a7b28307604d 100644
--- a/arch/arm64/kvm/hyp/include/nvhe/mem_protect.h
+++ b/arch/arm64/kvm/hyp/include/nvhe/mem_protect.h
@@ -75,6 +75,7 @@ int __pkvm_guest_share_host(struct pkvm_hyp_vcpu *hyp_vcpu, u64 ipa);
 int __pkvm_guest_unshare_host(struct pkvm_hyp_vcpu *hyp_vcpu, u64 ipa);
 int __pkvm_host_share_dma(u64 phys_addr, size_t size, bool is_ram);
 int __pkvm_host_unshare_dma(u64 phys_addr, size_t size);
+int __pkvm_host_add_remove_page(u64 pfn, bool remove);
 bool addr_is_memory(phys_addr_t phys);
 int host_stage2_idmap_locked(phys_addr_t addr, u64 size,
 			     enum kvm_pgtable_prot prot);
diff --git a/arch/arm64/kvm/hyp/nvhe/mem_protect.c b/arch/arm64/kvm/hyp/nvhe/mem_protect.c
index dcf08ce03790..6c3eeea4d4f5 100644
--- a/arch/arm64/kvm/hyp/nvhe/mem_protect.c
+++ b/arch/arm64/kvm/hyp/nvhe/mem_protect.c
@@ -1954,3 +1954,20 @@ int __pkvm_host_reclaim_page(u64 pfn)
 
 	return ret;
 }
+
+/*
+ * Temporarily unmap a page from the host stage-2, if @remove is true, or put it
+ * back. After restoring the ownership to host, the page will be lazy-mapped.
+ */
+int __pkvm_host_add_remove_page(u64 pfn, bool remove)
+{
+	int ret;
+	u64 host_addr = hyp_pfn_to_phys(pfn);
+	u8 owner = remove ? PKVM_ID_HYP : PKVM_ID_HOST;
+
+	host_lock_component();
+	ret = host_stage2_set_owner_locked(host_addr, PAGE_SIZE, owner);
+	host_unlock_component();
+
+	return ret;
+}

From patchwork Wed Feb 1 12:53:26 2023
X-Patchwork-Submitter: Jean-Philippe Brucker
X-Patchwork-Id: 13124403
From: Jean-Philippe Brucker
To: maz@kernel.org, catalin.marinas@arm.com, will@kernel.org, joro@8bytes.org
Cc: robin.murphy@arm.com, james.morse@arm.com, suzuki.poulose@arm.com, oliver.upton@linux.dev, yuzenghui@huawei.com, smostafa@google.com, dbrazdil@google.com, ryan.roberts@arm.com, linux-arm-kernel@lists.infradead.org, kvmarm@lists.linux.dev, iommu@lists.linux.dev, Jean-Philippe Brucker
Subject: [RFC PATCH 42/45] KVM: arm64: pkvm: Support SCMI power domain
Date: Wed, 1 Feb 2023 12:53:26 +0000
Message-Id: <20230201125328.2186498-43-jean-philippe@linaro.org>
In-Reply-To: <20230201125328.2186498-1-jean-philippe@linaro.org>
References: <20230201125328.2186498-1-jean-philippe@linaro.org>

The hypervisor needs to catch power domain changes for devices it owns, such as the SMMU. Possible reasons:

* Ensure that software and hardware states are consistent. The driver
  does not attempt to modify the state while the device is off.
* Save and restore the device state.
* Enforce dependency between consumers and suppliers. For example ensure
  that endpoints are off before turning the SMMU off, in case a powered
  off SMMU lets DMA through. However this is normally enforced by
  firmware.
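To catch these requests, the hypervisor inspects the SCMI message header in shared memory and only intercepts power-domain POWER_STATE_SET commands, forwarding everything else to EL3. A rough standalone sketch of that filtering test — `scmi_must_inspect()` is a made-up helper name, and the bit layout mirrors the SCMI_HDR_* masks used in the patch:

```c
#include <assert.h>
#include <stdint.h>

/* SCMI message header fields, matching the patch's SCMI_HDR_* masks:
 * protocol id in bits [17:10], message id in bits [7:0]. */
#define SCMI_HDR_PROTOCOL_ID(h)		(((h) >> 10) & 0xff)
#define SCMI_HDR_MESSAGE_ID(h)		((h) & 0xff)

#define SCMI_PROTOCOL_POWER_DOMAIN	0x11
#define SCMI_PD_STATE_SET		0x4

/* Hypothetical helper: return 1 when the header is a POWER_STATE_SET
 * request the hypervisor must inspect, 0 when the SMC can be forwarded
 * to EL3 untouched. */
static int scmi_must_inspect(uint32_t hdr)
{
	return SCMI_HDR_PROTOCOL_ID(hdr) == SCMI_PROTOCOL_POWER_DOMAIN &&
	       SCMI_HDR_MESSAGE_ID(hdr) == SCMI_PD_STATE_SET;
}
```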
Add a SCMI power domain, as the standard method for device power management on Arm. Other methods can be added to kvm_power_domain later.

Signed-off-by: Jean-Philippe Brucker
---
 arch/arm64/kvm/hyp/nvhe/Makefile              |   1 +
 arch/arm64/include/asm/kvm_hyp.h              |   1 +
 arch/arm64/kvm/hyp/include/nvhe/pkvm.h        |  26 ++
 .../arm64/kvm/hyp/include/nvhe/trap_handler.h |   2 +
 include/kvm/power_domain.h                    |  22 ++
 arch/arm64/kvm/hyp/nvhe/hyp-main.c            |   4 +-
 arch/arm64/kvm/hyp/nvhe/power/scmi.c          | 233 ++++++++++++++++++
 7 files changed, 287 insertions(+), 2 deletions(-)
 create mode 100644 include/kvm/power_domain.h
 create mode 100644 arch/arm64/kvm/hyp/nvhe/power/scmi.c

diff --git a/arch/arm64/kvm/hyp/nvhe/Makefile b/arch/arm64/kvm/hyp/nvhe/Makefile
index 8359909bd796..583a1f920c81 100644
--- a/arch/arm64/kvm/hyp/nvhe/Makefile
+++ b/arch/arm64/kvm/hyp/nvhe/Makefile
@@ -32,6 +32,7 @@ hyp-obj-$(CONFIG_KVM_IOMMU) += iommu/iommu.o
 hyp-obj-$(CONFIG_ARM_SMMU_V3_PKVM) += iommu/arm-smmu-v3.o
 hyp-obj-$(CONFIG_ARM_SMMU_V3_PKVM) += iommu/io-pgtable-arm.o \
 	 ../../../../../drivers/iommu/io-pgtable-arm-common.o
+hyp-obj-y += power/scmi.o
 
 ##
 ## Build rules for compiling nVHE hyp code
diff --git a/arch/arm64/include/asm/kvm_hyp.h b/arch/arm64/include/asm/kvm_hyp.h
index 0226a719e28f..91b792d1c074 100644
--- a/arch/arm64/include/asm/kvm_hyp.h
+++ b/arch/arm64/include/asm/kvm_hyp.h
@@ -104,6 +104,7 @@ void deactivate_traps_vhe_put(struct kvm_vcpu *vcpu);
 u64 __guest_enter(struct kvm_vcpu *vcpu);
 
 bool kvm_host_psci_handler(struct kvm_cpu_context *host_ctxt);
+bool kvm_host_scmi_handler(struct kvm_cpu_context *host_ctxt);
 
 #ifdef __KVM_NVHE_HYPERVISOR__
 void __noreturn __hyp_do_panic(struct kvm_cpu_context *host_ctxt, u64 spsr,
diff --git a/arch/arm64/kvm/hyp/include/nvhe/pkvm.h b/arch/arm64/kvm/hyp/include/nvhe/pkvm.h
index 746dc1c05a8e..1025354b4650 100644
--- a/arch/arm64/kvm/hyp/include/nvhe/pkvm.h
+++ b/arch/arm64/kvm/hyp/include/nvhe/pkvm.h
@@ -8,6 +8,7 @@
 #define __ARM64_KVM_NVHE_PKVM_H__
 
 #include
+#include
 
 #include
 #include
 
@@ -112,4 +113,29 @@ struct pkvm_hyp_vcpu *pkvm_mpidr_to_hyp_vcpu(struct pkvm_hyp_vm *vm, u64 mpidr);
 int pkvm_timer_init(void);
 void pkvm_udelay(unsigned long usecs);
 
+struct kvm_power_domain_ops {
+	int (*power_on)(struct kvm_power_domain *pd);
+	int (*power_off)(struct kvm_power_domain *pd);
+};
+
+int pkvm_init_scmi_pd(struct kvm_power_domain *pd,
+		      const struct kvm_power_domain_ops *ops);
+
+/*
+ * Register a power domain. When the hypervisor catches power requests from the
+ * host for this power domain, it calls the power ops with @pd as argument.
+ */
+static inline int pkvm_init_power_domain(struct kvm_power_domain *pd,
+					 const struct kvm_power_domain_ops *ops)
+{
+	switch (pd->type) {
+	case KVM_POWER_DOMAIN_NONE:
+		return 0;
+	case KVM_POWER_DOMAIN_ARM_SCMI:
+		return pkvm_init_scmi_pd(pd, ops);
+	default:
+		return -EOPNOTSUPP;
+	}
+}
+
 #endif /* __ARM64_KVM_NVHE_PKVM_H__ */
diff --git a/arch/arm64/kvm/hyp/include/nvhe/trap_handler.h b/arch/arm64/kvm/hyp/include/nvhe/trap_handler.h
index 1e6d995968a1..0e6bb92ccdb7 100644
--- a/arch/arm64/kvm/hyp/include/nvhe/trap_handler.h
+++ b/arch/arm64/kvm/hyp/include/nvhe/trap_handler.h
@@ -15,4 +15,6 @@
 #define DECLARE_REG(type, name, ctxt, reg)	\
 		type name = (type)cpu_reg(ctxt, (reg))
 
+void __kvm_hyp_host_forward_smc(struct kvm_cpu_context *host_ctxt);
+
 #endif /* __ARM64_KVM_NVHE_TRAP_HANDLER_H__ */
diff --git a/include/kvm/power_domain.h b/include/kvm/power_domain.h
new file mode 100644
index 000000000000..3dcb40005a04
--- /dev/null
+++ b/include/kvm/power_domain.h
@@ -0,0 +1,22 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+#ifndef __KVM_POWER_DOMAIN_H
+#define __KVM_POWER_DOMAIN_H
+
+enum kvm_power_domain_type {
+	KVM_POWER_DOMAIN_NONE,
+	KVM_POWER_DOMAIN_ARM_SCMI,
+};
+
+struct kvm_power_domain {
+	enum kvm_power_domain_type type;
+	union {
+		struct {
+			u32 smc_id;
+			u32 domain_id;
+			phys_addr_t shmem_base;
+			size_t shmem_size;
+		} arm_scmi;
+	};
+};
+
+#endif /* __KVM_POWER_DOMAIN_H */
diff --git a/arch/arm64/kvm/hyp/nvhe/hyp-main.c b/arch/arm64/kvm/hyp/nvhe/hyp-main.c
index 34ec46b890f0..ad0877e6ea54 100644
--- a/arch/arm64/kvm/hyp/nvhe/hyp-main.c
+++ b/arch/arm64/kvm/hyp/nvhe/hyp-main.c
@@ -37,8 +37,6 @@ DEFINE_PER_CPU(struct kvm_nvhe_init_params, kvm_init_params);
 
 struct kvm_iommu_ops kvm_iommu_ops;
 
-void __kvm_hyp_host_forward_smc(struct kvm_cpu_context *host_ctxt);
-
 typedef void (*hyp_entry_exit_handler_fn)(struct pkvm_hyp_vcpu *);
 
 static void handle_pvm_entry_wfx(struct pkvm_hyp_vcpu *hyp_vcpu)
@@ -1217,6 +1215,8 @@ static void handle_host_smc(struct kvm_cpu_context *host_ctxt)
 	bool handled;
 
 	handled = kvm_host_psci_handler(host_ctxt);
+	if (!handled)
+		handled = kvm_host_scmi_handler(host_ctxt);
 	if (!handled)
 		default_host_smc_handler(host_ctxt);
 
diff --git a/arch/arm64/kvm/hyp/nvhe/power/scmi.c b/arch/arm64/kvm/hyp/nvhe/power/scmi.c
new file mode 100644
index 000000000000..e9ac33f3583c
--- /dev/null
+++ b/arch/arm64/kvm/hyp/nvhe/power/scmi.c
@@ -0,0 +1,233 @@
+// SPDX-License-Identifier: GPL-2.0-only
+/*
+ * Copyright (C) 2022 Linaro Ltd.
+ */
+
+#include
+
+#include
+#include
+#include
+#include
+
+/* SCMI protocol */
+#define SCMI_PROTOCOL_POWER_DOMAIN	0x11
+
+/* shmem registers */
+#define SCMI_SHM_CHANNEL_STATUS		0x4
+#define SCMI_SHM_CHANNEL_FLAGS		0x10
+#define SCMI_SHM_LENGTH			0x14
+#define SCMI_SHM_MESSAGE_HEADER		0x18
+#define SCMI_SHM_MESSAGE_PAYLOAD	0x1c
+
+/* channel status */
+#define SCMI_CHN_FREE			(1U << 0)
+#define SCMI_CHN_ERROR			(1U << 1)
+
+/* channel flags */
+#define SCMI_CHN_IRQ			(1U << 0)
+
+/* message header */
+#define SCMI_HDR_TOKEN			GENMASK(27, 18)
+#define SCMI_HDR_PROTOCOL_ID		GENMASK(17, 10)
+#define SCMI_HDR_MESSAGE_TYPE		GENMASK(9, 8)
+#define SCMI_HDR_MESSAGE_ID		GENMASK(7, 0)
+
+/* power domain */
+#define SCMI_PD_STATE_SET		0x4
+#define SCMI_PD_STATE_SET_FLAGS		0x0
+#define SCMI_PD_STATE_SET_DOMAIN_ID	0x4
+#define SCMI_PD_STATE_SET_POWER_STATE	0x8
+
+#define SCMI_PD_STATE_SET_STATUS	0x0
+
+#define SCMI_PD_STATE_SET_FLAGS_ASYNC	(1U << 0)
+
+#define SCMI_PD_POWER_ON		0
+#define SCMI_PD_POWER_OFF		(1U << 30)
+
+#define SCMI_SUCCESS			0
+
+
+static struct {
+	u32				smc_id;
+	phys_addr_t			shmem_pfn;
+	size_t				shmem_size;
+	void __iomem			*shmem;
+} scmi_channel;
+
+#define MAX_POWER_DOMAINS		16
+
+struct scmi_power_domain {
+	struct kvm_power_domain			*pd;
+	const struct kvm_power_domain_ops	*ops;
+};
+
+static struct scmi_power_domain scmi_power_domains[MAX_POWER_DOMAINS];
+static int scmi_power_domain_count;
+
+#define SCMI_POLL_TIMEOUT_US	1000000 /* 1s! */
+
+/* Forward the command to EL3, and wait for completion */
+static int scmi_run_command(struct kvm_cpu_context *host_ctxt)
+{
+	u32 reg;
+	unsigned long i = 0;
+
+	__kvm_hyp_host_forward_smc(host_ctxt);
+
+	do {
+		reg = readl_relaxed(scmi_channel.shmem + SCMI_SHM_CHANNEL_STATUS);
+		if (reg & SCMI_CHN_FREE)
+			break;
+
+		if (WARN_ON(++i > SCMI_POLL_TIMEOUT_US))
+			return -ETIMEDOUT;
+
+		pkvm_udelay(1);
+	} while (!(reg & (SCMI_CHN_FREE | SCMI_CHN_ERROR)));
+
+	if (reg & SCMI_CHN_ERROR)
+		return -EIO;
+
+	reg = readl_relaxed(scmi_channel.shmem + SCMI_SHM_MESSAGE_PAYLOAD +
+			    SCMI_PD_STATE_SET_STATUS);
+	if (reg != SCMI_SUCCESS)
+		return -EIO;
+
+	return 0;
+}
+
+static void __kvm_host_scmi_handler(struct kvm_cpu_context *host_ctxt)
+{
+	int i;
+	u32 reg;
+	struct scmi_power_domain *scmi_pd = NULL;
+
+	/*
+	 * FIXME: the spec does not really allow for an intermediary filtering
+	 * messages on the channel: as soon as the host clears SCMI_CHN_FREE,
+	 * the server may process the message. It doesn't have to wait for a
+	 * doorbell and could just poll on the shared mem. Unlikely in practice,
+	 * but this code is not correct without a spec change requiring the
+	 * server to observe an SMC before processing the message.
+	 */
+	reg = readl_relaxed(scmi_channel.shmem + SCMI_SHM_CHANNEL_STATUS);
+	if (reg & (SCMI_CHN_FREE | SCMI_CHN_ERROR))
+		return;
+
+	reg = readl_relaxed(scmi_channel.shmem + SCMI_SHM_MESSAGE_HEADER);
+	if (FIELD_GET(SCMI_HDR_PROTOCOL_ID, reg) != SCMI_PROTOCOL_POWER_DOMAIN)
+		goto out_forward_smc;
+
+	if (FIELD_GET(SCMI_HDR_MESSAGE_ID, reg) != SCMI_PD_STATE_SET)
+		goto out_forward_smc;
+
+	reg = readl_relaxed(scmi_channel.shmem + SCMI_SHM_MESSAGE_PAYLOAD +
+			    SCMI_PD_STATE_SET_FLAGS);
+	if (WARN_ON(reg & SCMI_PD_STATE_SET_FLAGS_ASYNC))
+		/* We don't support async requests at the moment */
+		return;
+
+	reg = readl_relaxed(scmi_channel.shmem + SCMI_SHM_MESSAGE_PAYLOAD +
+			    SCMI_PD_STATE_SET_DOMAIN_ID);
+
+	for (i = 0; i < MAX_POWER_DOMAINS; i++) {
+		if (!scmi_power_domains[i].pd)
+			break;
+
+		if (reg == scmi_power_domains[i].pd->arm_scmi.domain_id) {
+			scmi_pd = &scmi_power_domains[i];
+			break;
+		}
+	}
+	if (!scmi_pd)
+		goto out_forward_smc;
+
+	reg = readl_relaxed(scmi_channel.shmem + SCMI_SHM_MESSAGE_PAYLOAD +
+			    SCMI_PD_STATE_SET_POWER_STATE);
+	switch (reg) {
+	case SCMI_PD_POWER_ON:
+		if (scmi_run_command(host_ctxt))
+			break;
+
+		scmi_pd->ops->power_on(scmi_pd->pd);
+		break;
+	case SCMI_PD_POWER_OFF:
+		scmi_pd->ops->power_off(scmi_pd->pd);
+
+		if (scmi_run_command(host_ctxt))
+			scmi_pd->ops->power_on(scmi_pd->pd);
+		break;
+	}
+	return;
+
+out_forward_smc:
+	__kvm_hyp_host_forward_smc(host_ctxt);
+}
+
+bool kvm_host_scmi_handler(struct kvm_cpu_context *host_ctxt)
+{
+	DECLARE_REG(u64, func_id, host_ctxt, 0);
+
+	if (!scmi_channel.shmem || func_id != scmi_channel.smc_id)
+		return false; /* Unhandled */
+
+	/*
+	 * Prevent the host from modifying the request while it is in flight.
+	 * One page is enough, SCMI messages are smaller than that.
+	 *
+	 * FIXME: the host is allowed to poll the shmem while the request is in
+	 * flight, or read shmem when receiving the SCMI interrupt. Although
+	 * it's unlikely with the SMC-based transport, this too requires some
+	 * tightening in the spec.
+	 */
+	if (WARN_ON(__pkvm_host_add_remove_page(scmi_channel.shmem_pfn, true)))
+		return true;
+
+	__kvm_host_scmi_handler(host_ctxt);
+
+	WARN_ON(__pkvm_host_add_remove_page(scmi_channel.shmem_pfn, false));
+	return true; /* Handled */
+}
+
+int pkvm_init_scmi_pd(struct kvm_power_domain *pd,
+		      const struct kvm_power_domain_ops *ops)
+{
+	int ret;
+
+	if (!IS_ALIGNED(pd->arm_scmi.shmem_base, PAGE_SIZE) ||
+	    pd->arm_scmi.shmem_size < PAGE_SIZE) {
+		return -EINVAL;
+	}
+
+	if (!scmi_channel.shmem) {
+		unsigned long shmem;
+
+		/* FIXME: Do we need to mark those pages shared in the host s2? */
+		ret = __pkvm_create_private_mapping(pd->arm_scmi.shmem_base,
+						    pd->arm_scmi.shmem_size,
+						    PAGE_HYP_DEVICE,
+						    &shmem);
+		if (ret)
+			return ret;
+
+		scmi_channel.smc_id = pd->arm_scmi.smc_id;
+		scmi_channel.shmem_pfn = hyp_phys_to_pfn(pd->arm_scmi.shmem_base);
+		scmi_channel.shmem = (void *)shmem;
+
+	} else if (scmi_channel.shmem_pfn !=
+		   hyp_phys_to_pfn(pd->arm_scmi.shmem_base) ||
+		   scmi_channel.smc_id != pd->arm_scmi.smc_id) {
+		/* We support a single channel at the moment */
+		return -ENXIO;
+	}
+
+	if (scmi_power_domain_count == MAX_POWER_DOMAINS)
+		return -ENOSPC;
+
+	scmi_power_domains[scmi_power_domain_count].pd = pd;
+	scmi_power_domains[scmi_power_domain_count].ops = ops;
+	scmi_power_domain_count++;
+	return 0;
+}

From patchwork Wed Feb 1 12:53:27 2023
X-Patchwork-Submitter: Jean-Philippe Brucker
X-Patchwork-Id: 13124282
From: Jean-Philippe Brucker
Subject: [RFC PATCH 43/45] KVM: arm64: smmu-v3: Support power management
Date: Wed, 1 Feb 2023 12:53:27 +0000
Message-Id: <20230201125328.2186498-44-jean-philippe@linaro.org>

Add power domain ops to the hypervisor IOMMU driver. We currently make
these assumptions:

* The register state is retained across power off.
* The TLBs are clean on power on.
* Other privileged software (EL3 or SCP firmware) handles dependencies
  between the SMMU and its endpoints.

So we only need to make sure that the CPU does not touch the SMMU
registers while the SMMU is powered off.
Signed-off-by: Jean-Philippe Brucker
---
 include/kvm/arm_smmu_v3.h                   |  4 +++
 include/kvm/iommu.h                         |  4 +++
 arch/arm64/kvm/hyp/nvhe/iommu/arm-smmu-v3.c | 12 +++++++
 arch/arm64/kvm/hyp/nvhe/iommu/iommu.c       | 36 +++++++++++++++++++
 4 files changed, 56 insertions(+)

diff --git a/include/kvm/arm_smmu_v3.h b/include/kvm/arm_smmu_v3.h
index 373b915b6661..d345cd616407 100644
--- a/include/kvm/arm_smmu_v3.h
+++ b/include/kvm/arm_smmu_v3.h
@@ -12,6 +12,9 @@
  * Parameters from the trusted host:
  * @mmio_addr	base address of the SMMU registers
  * @mmio_size	size of the registers resource
+ * @caches_clean_on_power_on
+ *		is it safe to elide cache and TLB invalidation commands
+ *		while the SMMU is OFF
  *
  * Other members are filled and used at runtime by the SMMU driver.
  */
@@ -20,6 +23,7 @@ struct hyp_arm_smmu_v3_device {
 	phys_addr_t mmio_addr;
 	size_t mmio_size;
 	unsigned long features;
+	bool caches_clean_on_power_on;

 	void __iomem *base;
 	u32 cmdq_prod;
diff --git a/include/kvm/iommu.h b/include/kvm/iommu.h
index 2bbe5f7bf726..ab888da731bc 100644
--- a/include/kvm/iommu.h
+++ b/include/kvm/iommu.h
@@ -3,6 +3,7 @@
 #define __KVM_IOMMU_H

 #include
+#include
 #include

 /*
@@ -10,6 +11,7 @@
  * @pgtable_cfg: page table configuration
  * @domains: root domain table
  * @nr_domains: max number of domains (exclusive)
+ * @power_domain: power domain information
  *
  * Other members are filled and used at runtime by the IOMMU driver.
  */
@@ -17,8 +19,10 @@ struct kvm_hyp_iommu {
 	struct io_pgtable_cfg pgtable_cfg;
 	void **domains;
 	size_t nr_domains;
+	struct kvm_power_domain power_domain;

 	struct io_pgtable_params *pgtable;
+	bool power_is_off;
 };

 struct kvm_hyp_iommu_memcache {
diff --git a/arch/arm64/kvm/hyp/nvhe/iommu/arm-smmu-v3.c b/arch/arm64/kvm/hyp/nvhe/iommu/arm-smmu-v3.c
index 56e313203a16..20610ebf04c2 100644
--- a/arch/arm64/kvm/hyp/nvhe/iommu/arm-smmu-v3.c
+++ b/arch/arm64/kvm/hyp/nvhe/iommu/arm-smmu-v3.c
@@ -83,6 +83,9 @@ static int smmu_add_cmd(struct hyp_arm_smmu_v3_device *smmu,
 	int idx = Q_IDX(smmu, smmu->cmdq_prod);
 	u64 *slot = smmu->cmdq_base + idx * CMDQ_ENT_DWORDS;

+	if (smmu->iommu.power_is_off)
+		return -EPIPE;
+
 	ret = smmu_wait_event(smmu, !smmu_cmdq_full(smmu));
 	if (ret)
 		return ret;
@@ -160,6 +163,9 @@ static int smmu_sync_ste(struct hyp_arm_smmu_v3_device *smmu, u32 sid)
 		.cfgi.leaf = true,
 	};

+	if (smmu->iommu.power_is_off && smmu->caches_clean_on_power_on)
+		return 0;
+
 	return smmu_send_cmd(smmu, &cmd);
 }

@@ -394,6 +400,9 @@ static void smmu_tlb_flush_all(void *cookie)
 		.tlbi.vmid = data->domain_id,
 	};

+	if (smmu->iommu.power_is_off && smmu->caches_clean_on_power_on)
+		return;
+
 	WARN_ON(smmu_send_cmd(smmu, &cmd));
 }

@@ -409,6 +418,9 @@ static void smmu_tlb_inv_range(struct kvm_iommu_tlb_cookie *data,
 		.tlbi.leaf = leaf,
 	};

+	if (smmu->iommu.power_is_off && smmu->caches_clean_on_power_on)
+		return;
+
 	/*
 	 * There are no mappings at high addresses since we don't use TTB1, so
 	 * no overflow possible.
diff --git a/arch/arm64/kvm/hyp/nvhe/iommu/iommu.c b/arch/arm64/kvm/hyp/nvhe/iommu/iommu.c
index 0550e7bdf179..2fb5514ee0ef 100644
--- a/arch/arm64/kvm/hyp/nvhe/iommu/iommu.c
+++ b/arch/arm64/kvm/hyp/nvhe/iommu/iommu.c
@@ -327,10 +327,46 @@ phys_addr_t kvm_iommu_iova_to_phys(pkvm_handle_t iommu_id,
 	return phys;
 }

+static int iommu_power_on(struct kvm_power_domain *pd)
+{
+	struct kvm_hyp_iommu *iommu = container_of(pd, struct kvm_hyp_iommu,
+						   power_domain);
+
+	/*
+	 * We currently assume that the device retains its architectural state
+	 * across power off, hence no save/restore.
+	 */
+	hyp_spin_lock(&iommu_lock);
+	iommu->power_is_off = false;
+	hyp_spin_unlock(&iommu_lock);
+	return 0;
+}
+
+static int iommu_power_off(struct kvm_power_domain *pd)
+{
+	struct kvm_hyp_iommu *iommu = container_of(pd, struct kvm_hyp_iommu,
+						   power_domain);
+
+	hyp_spin_lock(&iommu_lock);
+	iommu->power_is_off = true;
+	hyp_spin_unlock(&iommu_lock);
+	return 0;
+}
+
+static const struct kvm_power_domain_ops iommu_power_ops = {
+	.power_on	= iommu_power_on,
+	.power_off	= iommu_power_off,
+};
+
 int kvm_iommu_init_device(struct kvm_hyp_iommu *iommu)
 {
+	int ret;
 	void *domains;

+	ret = pkvm_init_power_domain(&iommu->power_domain, &iommu_power_ops);
+	if (ret)
+		return ret;
+
 	domains = iommu->domains;
 	iommu->domains = kern_hyp_va(domains);
 	return pkvm_create_mappings(iommu->domains, iommu->domains +

From patchwork Wed Feb 1 12:53:28 2023
From: Jean-Philippe Brucker
Subject: [RFC PATCH 44/45] iommu/arm-smmu-v3-kvm: Support power management
 with SCMI SMC
Date: Wed, 1 Feb 2023 12:53:28 +0000
Message-Id: <20230201125328.2186498-45-jean-philippe@linaro.org>

Discover SCMI parameters for the SMMUv3 power domain, and pass them to
the hypervisor. Power management must use an SMC-based method, so that
the hypervisor driver can catch the power requests and keep the software
state in sync with the hardware.
Signed-off-by: Jean-Philippe Brucker
---
 .../iommu/arm/arm-smmu-v3/arm-smmu-v3-kvm.c | 76 +++++++++++++++++++
 1 file changed, 76 insertions(+)

diff --git a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3-kvm.c b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3-kvm.c
index 930d78f6e29f..198e41d808b0 100644
--- a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3-kvm.c
+++ b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3-kvm.c
@@ -6,6 +6,7 @@
  */
 #include
 #include
+#include
 #include
 #include

@@ -495,6 +496,75 @@ static int kvm_arm_smmu_device_reset(struct host_arm_smmu_device *host_smmu)
 	return 0;
 }

+static int kvm_arm_probe_scmi_pd(struct device_node *scmi_node,
+				 struct kvm_power_domain *pd)
+{
+	int ret;
+	struct resource res;
+	struct of_phandle_args args;
+
+	pd->type = KVM_POWER_DOMAIN_ARM_SCMI;
+
+	ret = of_parse_phandle_with_args(scmi_node, "shmem", NULL, 0, &args);
+	if (ret)
+		return ret;
+
+	ret = of_address_to_resource(args.np, 0, &res);
+	if (ret)
+		goto out_put_nodes;
+
+	ret = of_property_read_u32(scmi_node, "arm,smc-id",
+				   &pd->arm_scmi.smc_id);
+	if (ret)
+		goto out_put_nodes;
+
+	/*
+	 * The shared buffer is unmapped from the host while a request is in
+	 * flight, so it has to be on its own page.
+	 */
+	if (!IS_ALIGNED(res.start, SZ_64K) || resource_size(&res) < SZ_64K) {
+		ret = -EINVAL;
+		goto out_put_nodes;
+	}
+
+	pd->arm_scmi.shmem_base = res.start;
+	pd->arm_scmi.shmem_size = resource_size(&res);
+
+out_put_nodes:
+	of_node_put(args.np);
+	return ret;
+}
+
+/* TODO: Move this. None of it is specific to SMMU */
+static int kvm_arm_probe_power_domain(struct device *dev,
+				      struct kvm_power_domain *pd)
+{
+	int ret;
+	struct device_node *parent;
+	struct of_phandle_args args;
+
+	if (!of_get_property(dev->of_node, "power-domains", NULL))
+		return 0;
+
+	ret = of_parse_phandle_with_args(dev->of_node, "power-domains",
+					 "#power-domain-cells", 0, &args);
+	if (ret)
+		return ret;
+
+	parent = of_get_parent(args.np);
+	if (parent && of_device_is_compatible(parent, "arm,scmi-smc") &&
+	    args.args_count > 0) {
+		pd->arm_scmi.domain_id = args.args[0];
+		ret = kvm_arm_probe_scmi_pd(parent, pd);
+	} else {
+		dev_err(dev, "Unsupported PM method for %pOF\n", args.np);
+		ret = -EINVAL;
+	}
+	of_node_put(parent);
+	of_node_put(args.np);
+	return ret;
+}
+
 static void *kvm_arm_smmu_alloc_domains(struct arm_smmu_device *smmu)
 {
 	return (void *)devm_get_free_pages(smmu->dev, GFP_KERNEL | __GFP_ZERO,
@@ -513,6 +583,7 @@ static int kvm_arm_smmu_probe(struct platform_device *pdev)
 	struct device *dev = &pdev->dev;
 	struct host_arm_smmu_device *host_smmu;
 	struct hyp_arm_smmu_v3_device *hyp_smmu;
+	struct kvm_power_domain power_domain = {};

 	if (kvm_arm_smmu_cur >= kvm_arm_smmu_count)
 		return -ENOSPC;
@@ -530,6 +601,10 @@ static int kvm_arm_smmu_probe(struct platform_device *pdev)
 	if (ret || bypass)
 		return ret ?: -EINVAL;

+	ret = kvm_arm_probe_power_domain(dev, &power_domain);
+	if (ret)
+		return ret;
+
 	res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
 	size = resource_size(res);
 	if (size < SZ_128K) {
@@ -606,6 +681,7 @@ static int kvm_arm_smmu_probe(struct platform_device *pdev)
 	hyp_smmu->mmio_size = size;
 	hyp_smmu->features = smmu->features;
 	hyp_smmu->iommu.pgtable_cfg = cfg;
+	hyp_smmu->iommu.power_domain = power_domain;

 	kvm_arm_smmu_cur++;

From patchwork Wed Feb 1 12:53:29 2023
From: Jean-Philippe Brucker
Subject: [RFC PATCH 45/45] iommu/arm-smmu-v3-kvm: Enable runtime PM
Date: Wed, 1 Feb 2023 12:53:29 +0000
Message-Id: <20230201125328.2186498-46-jean-philippe@linaro.org>

Enable runtime PM for the KVM SMMUv3 driver. The PM link to DMA masters
dictates when the SMMU should be powered on.
Signed-off-by: Jean-Philippe Brucker
---
 .../iommu/arm/arm-smmu-v3/arm-smmu-v3-kvm.c | 54 +++++++++++++++++++
 1 file changed, 54 insertions(+)

diff --git a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3-kvm.c b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3-kvm.c
index 198e41d808b0..cd865049f89a 100644
--- a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3-kvm.c
+++ b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3-kvm.c
@@ -8,6 +8,7 @@
 #include
 #include
 #include
+#include

 #include

@@ -18,6 +19,7 @@ struct host_arm_smmu_device {
 	pkvm_handle_t id;
 	u32 boot_gbpa;
 	unsigned int pgd_order;
+	atomic_t initialized;
 };

 #define smmu_to_host(_smmu) \
@@ -134,8 +136,10 @@ static struct iommu_ops kvm_arm_smmu_ops;

 static struct iommu_device *kvm_arm_smmu_probe_device(struct device *dev)
 {
+	int ret;
 	struct arm_smmu_device *smmu;
 	struct kvm_arm_smmu_master *master;
+	struct host_arm_smmu_device *host_smmu;
 	struct iommu_fwspec *fwspec = dev_iommu_fwspec_get(dev);

 	if (!fwspec || fwspec->ops != &kvm_arm_smmu_ops)
@@ -156,7 +160,28 @@ static struct iommu_device *kvm_arm_smmu_probe_device(struct device *dev)
 	master->smmu = smmu;
 	dev_iommu_priv_set(dev, master);

+	if (!device_link_add(dev, smmu->dev,
+			     DL_FLAG_PM_RUNTIME | DL_FLAG_RPM_ACTIVE |
+			     DL_FLAG_AUTOREMOVE_SUPPLIER)) {
+		ret = -ENOLINK;
+		goto err_free;
+	}
+
+	/*
+	 * If the SMMU has just been initialized by the hypervisor, release the
+	 * extra PM reference taken by kvm_arm_smmu_probe(). Not sure yet how
+	 * to improve this. Maybe have KVM call us back when it finished
+	 * initializing?
+	 */
+	host_smmu = smmu_to_host(smmu);
+	if (atomic_add_unless(&host_smmu->initialized, 1, 1))
+		pm_runtime_put_noidle(smmu->dev);
+
 	return &smmu->iommu;
+
+err_free:
+	kfree(master);
+	return ERR_PTR(ret);
 }

 static void kvm_arm_smmu_release_device(struct device *dev)
@@ -685,6 +710,30 @@ static int kvm_arm_smmu_probe(struct platform_device *pdev)

 	kvm_arm_smmu_cur++;

+	/*
+	 * The state of endpoints dictates when the SMMU is powered off. To turn
+	 * the SMMU on and off, a genpd driver uses SCMI over the SMC transport,
+	 * or some other platform-specific SMC. Those power requests are caught
+	 * by the hypervisor, so that the hyp driver doesn't touch the hardware
+	 * state while it is off.
+	 *
+	 * We are making a big assumption here, that TLBs and caches are invalid
+	 * on power on, and therefore we don't need to wake the SMMU when
+	 * modifying page tables, stream tables and context tables. If this
+	 * assumption does not hold on some systems, then we'll need to grab RPM
+	 * reference in map(), attach(), etc, so the hyp driver can send
+	 * invalidations.
+	 */
+	hyp_smmu->caches_clean_on_power_on = true;
+
+	pm_runtime_set_active(dev);
+	pm_runtime_enable(dev);
+	/*
+	 * Take a reference to keep the SMMU powered on while the hypervisor
+	 * initializes it.
+	 */
+	pm_runtime_resume_and_get(dev);
+
 	return 0;
 }

@@ -697,6 +746,11 @@ static int kvm_arm_smmu_remove(struct platform_device *pdev)
 	 * There was an error during hypervisor setup. The hyp driver may
 	 * have already enabled the device, so disable it.
 	 */
+
+	if (!atomic_read(&host_smmu->initialized))
+		pm_runtime_put_noidle(&pdev->dev);
+	pm_runtime_disable(&pdev->dev);
+	pm_runtime_set_suspended(&pdev->dev);
 	arm_smmu_unregister_iommu(smmu);
 	arm_smmu_device_disable(smmu);
 	arm_smmu_update_gbpa(smmu, host_smmu->boot_gbpa, GBPA_ABORT);