From patchwork Wed Dec 27 16:13:45 2023
X-Patchwork-Submitter: "Liu, Yi L"
X-Patchwork-Id: 13505355
From: Yi Liu
To: joro@8bytes.org, alex.williamson@redhat.com, jgg@nvidia.com,
    kevin.tian@intel.com, robin.murphy@arm.com, baolu.lu@linux.intel.com
Cc: cohuck@redhat.com, eric.auger@redhat.com, nicolinc@nvidia.com,
    kvm@vger.kernel.org, mjrosato@linux.ibm.com, chao.p.peng@linux.intel.com,
    yi.l.liu@intel.com, yi.y.sun@linux.intel.com, peterx@redhat.com,
    jasowang@redhat.com, shameerali.kolothum.thodi@huawei.com,
    lulu@redhat.com, suravee.suthikulpanit@amd.com, iommu@lists.linux.dev,
    linux-kernel@vger.kernel.org, linux-kselftest@vger.kernel.org,
    zhenzhong.duan@intel.com, joao.m.martins@oracle.com, xin.zeng@intel.com,
    yan.y.zhao@intel.com, j.granados@samsung.com
Subject: [PATCH v8 01/10] iommu: Add cache_invalidate_user op
Date: Wed, 27 Dec 2023 08:13:45 -0800
Message-Id: <20231227161354.67701-2-yi.l.liu@intel.com>
In-Reply-To: <20231227161354.67701-1-yi.l.liu@intel.com>
References: <20231227161354.67701-1-yi.l.liu@intel.com>

From: Lu Baolu

Updates to the PTEs in a nested page table must be propagated to the
hardware caches. Add a new domain op, cache_invalidate_user, so that
userspace can flush the hardware caches for a nested domain through
iommufd. No wrapper is added for it, as it is only supposed to be used by
iommufd. Invalidation requests are passed in as a user data array
containing a number of invalidation data entries.
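As an illustration of the op's contract (not part of this patch), a
driver-side implementation could look roughly like the sketch below; the
mydrv_* names and the request entry format are hypothetical:

    /* Hypothetical sketch only; mydrv_* names are assumed, not real APIs */
    static int mydrv_cache_invalidate_user(struct iommu_domain *domain,
                                           struct iommu_user_data_array *array)
    {
            struct mydrv_inv_entry req; /* driver's uAPI entry format (assumed) */
            u32 processed = 0;
            int i, rc = 0;

            for (i = 0; i < array->entry_num; i++) {
                    /* each request is entry_len bytes at index i of uptr */
                    if (copy_from_user(&req, array->uptr + i * array->entry_len,
                                       min_t(size_t, sizeof(req), array->entry_len))) {
                            rc = -EFAULT;
                            break;
                    }
                    mydrv_flush_stage1_caches(domain, &req); /* assumed helper */
                    processed++;
            }
            /* contract: report the number of handled requests back */
            array->entry_num = processed;
            return rc;
    }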
Signed-off-by: Lu Baolu
Reviewed-by: Kevin Tian
Signed-off-by: Nicolin Chen
Signed-off-by: Yi Liu
---
 include/linux/iommu.h | 26 ++++++++++++++++++++++++++
 1 file changed, 26 insertions(+)

diff --git a/include/linux/iommu.h b/include/linux/iommu.h
index 6291aa7b079b..93c0d12dd047 100644
--- a/include/linux/iommu.h
+++ b/include/linux/iommu.h
@@ -284,6 +284,23 @@ struct iommu_user_data {
 	size_t len;
 };
 
+/**
+ * struct iommu_user_data_array - iommu driver specific user space data array
+ * @type: The data type of all the entries in the user buffer array
+ * @uptr: Pointer to the user buffer array
+ * @entry_len: The fixed-width length of an entry in the array, in bytes
+ * @entry_num: The total number of entries in the array
+ *
+ * The user buffer includes an array of requests with format defined in
+ * include/uapi/linux/iommufd.h
+ */
+struct iommu_user_data_array {
+	unsigned int type;
+	void __user *uptr;
+	size_t entry_len;
+	u32 entry_num;
+};
+
 /**
  * __iommu_copy_struct_from_user - Copy iommu driver specific user space data
  * @dst_data: Pointer to an iommu driver specific user data that is defined in
@@ -440,6 +457,13 @@ struct iommu_ops {
  * @iotlb_sync_map: Sync mappings created recently using @map to the hardware
  * @iotlb_sync: Flush all queued ranges from the hardware TLBs and empty flush
  *              queue
+ * @cache_invalidate_user: Flush hardware cache for user space IO page table.
+ *                         The @domain must be IOMMU_DOMAIN_NESTED. The @array
+ *                         passes in the cache invalidation requests, in the
+ *                         form of a driver data structure. The driver must
+ *                         update array->entry_num to report the number of
+ *                         handled invalidation requests. The driver data
+ *                         structure must be defined in include/uapi/linux/iommufd.h
  * @iova_to_phys: translate iova to physical address
  * @enforce_cache_coherency: Prevent any kind of DMA from bypassing IOMMU_CACHE,
  *                           including no-snoop TLPs on PCIe or other platform
@@ -465,6 +489,8 @@ struct iommu_domain_ops {
 				size_t size);
 	void (*iotlb_sync)(struct iommu_domain *domain,
 			   struct iommu_iotlb_gather *iotlb_gather);
+	int (*cache_invalidate_user)(struct iommu_domain *domain,
+				     struct iommu_user_data_array *array);
 	phys_addr_t (*iova_to_phys)(struct iommu_domain *domain,
 				    dma_addr_t iova);

From patchwork Wed Dec 27 16:13:46 2023
X-Patchwork-Submitter: "Liu, Yi L"
X-Patchwork-Id: 13505356
From: Yi Liu
Subject: [PATCH v8 02/10] iommufd: Add IOMMU_HWPT_INVALIDATE
Date: Wed, 27 Dec 2023 08:13:46 -0800
Message-Id: <20231227161354.67701-3-yi.l.liu@intel.com>
In-Reply-To: <20231227161354.67701-1-yi.l.liu@intel.com>
References: <20231227161354.67701-1-yi.l.liu@intel.com>

In nested translation, the stage-1 page table is user-managed but cached
by the IOMMU hardware, so an update to present page table entries in the
stage-1 page table should be followed by a cache invalidation.

Add an IOMMU_HWPT_INVALIDATE ioctl to support such a cache invalidation.
It takes hwpt_id to specify the iommu_domain, and a multi-entry array to
support multiple invalidation requests in one ioctl. enum
iommu_hwpt_invalidate_data_type is defined to tag the data type of the
entries in the multi-entry array.
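For orientation (not part of this patch), userspace is expected to drive
the new ioctl roughly as in the sketch below; the entry struct and the
data type value are hypothetical stand-ins for a real driver format such
as the VT-d one added later in this series:

    /* Hypothetical userspace sketch; error handling trimmed */
    struct mydrv_inv_entry entries[2] = {};  /* driver-defined format (assumed) */
    struct iommu_hwpt_invalidate cmd = {
            .size = sizeof(cmd),
            .hwpt_id = nested_hwpt_id,       /* must be a nested HWPT */
            .data_uptr = (__u64)(uintptr_t)entries,
            .data_type = MYDRV_INVALIDATE_DATA_TYPE, /* assumed enum value */
            .entry_len = sizeof(entries[0]),
            .entry_num = 2,
    };

    /* ... fill entries[] with invalidation requests ... */
    if (ioctl(iommufd, IOMMU_HWPT_INVALIDATE, &cmd))
            /* on failure, cmd.entry_num holds the number of handled entries */
            fprintf(stderr, "invalidated only %u entries\n", cmd.entry_num);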
Co-developed-by: Nicolin Chen Signed-off-by: Nicolin Chen Signed-off-by: Yi Liu Reviewed-by: Kevin Tian --- drivers/iommu/iommufd/hw_pagetable.c | 41 +++++++++++++++++++++++ drivers/iommu/iommufd/iommufd_private.h | 10 ++++++ drivers/iommu/iommufd/main.c | 3 ++ include/uapi/linux/iommufd.h | 43 +++++++++++++++++++++++++ 4 files changed, 97 insertions(+) diff --git a/drivers/iommu/iommufd/hw_pagetable.c b/drivers/iommu/iommufd/hw_pagetable.c index cbb5df0a6c32..4e8711f19f72 100644 --- a/drivers/iommu/iommufd/hw_pagetable.c +++ b/drivers/iommu/iommufd/hw_pagetable.c @@ -371,3 +371,44 @@ int iommufd_hwpt_get_dirty_bitmap(struct iommufd_ucmd *ucmd) iommufd_put_object(ucmd->ictx, &hwpt_paging->common.obj); return rc; } + +int iommufd_hwpt_invalidate(struct iommufd_ucmd *ucmd) +{ + struct iommu_hwpt_invalidate *cmd = ucmd->cmd; + struct iommu_user_data_array data_array = { + .type = cmd->data_type, + .uptr = u64_to_user_ptr(cmd->data_uptr), + .entry_len = cmd->entry_len, + .entry_num = cmd->entry_num, + }; + struct iommufd_hw_pagetable *hwpt; + u32 done_num = 0; + int rc; + + if (cmd->__reserved) { + rc = -EOPNOTSUPP; + goto out; + } + + if (cmd->entry_num && (!cmd->data_uptr || !cmd->entry_len)) { + rc = -EINVAL; + goto out; + } + + hwpt = iommufd_get_hwpt_nested(ucmd, cmd->hwpt_id); + if (IS_ERR(hwpt)) { + rc = PTR_ERR(hwpt); + goto out; + } + + rc = hwpt->domain->ops->cache_invalidate_user(hwpt->domain, + &data_array); + done_num = data_array.entry_num; + + iommufd_put_object(ucmd->ictx, &hwpt->obj); +out: + cmd->entry_num = done_num; + if (iommufd_ucmd_respond(ucmd, sizeof(*cmd))) + return -EFAULT; + return rc; +} diff --git a/drivers/iommu/iommufd/iommufd_private.h b/drivers/iommu/iommufd/iommufd_private.h index abae041e256f..991f864d1f9b 100644 --- a/drivers/iommu/iommufd/iommufd_private.h +++ b/drivers/iommu/iommufd/iommufd_private.h @@ -328,6 +328,15 @@ iommufd_get_hwpt_paging(struct iommufd_ucmd *ucmd, u32 id) IOMMUFD_OBJ_HWPT_PAGING), struct iommufd_hwpt_paging, common.obj); } + +static inline struct iommufd_hw_pagetable * +iommufd_get_hwpt_nested(struct iommufd_ucmd *ucmd, u32 id) +{ + return container_of(iommufd_get_object(ucmd->ictx, id, + IOMMUFD_OBJ_HWPT_NESTED), + struct iommufd_hw_pagetable, obj); +} + int iommufd_hwpt_set_dirty_tracking(struct iommufd_ucmd *ucmd); int iommufd_hwpt_get_dirty_bitmap(struct iommufd_ucmd *ucmd); @@ -345,6 +354,7 @@ void iommufd_hwpt_paging_abort(struct iommufd_object *obj); void iommufd_hwpt_nested_destroy(struct iommufd_object *obj); void iommufd_hwpt_nested_abort(struct iommufd_object *obj); int iommufd_hwpt_alloc(struct iommufd_ucmd *ucmd); +int iommufd_hwpt_invalidate(struct iommufd_ucmd *ucmd); static inline void iommufd_hw_pagetable_put(struct iommufd_ctx *ictx, struct iommufd_hw_pagetable *hwpt) diff --git a/drivers/iommu/iommufd/main.c b/drivers/iommu/iommufd/main.c index c9091e46d208..39b32932c61e 100644 --- a/drivers/iommu/iommufd/main.c +++ b/drivers/iommu/iommufd/main.c @@ -322,6 +322,7 @@ union ucmd_buffer { struct iommu_hw_info info; struct iommu_hwpt_alloc hwpt; struct iommu_hwpt_get_dirty_bitmap get_dirty_bitmap; + struct iommu_hwpt_invalidate cache; struct iommu_hwpt_set_dirty_tracking set_dirty_tracking; struct iommu_ioas_alloc alloc; struct iommu_ioas_allow_iovas allow_iovas; @@ -360,6 +361,8 @@ static const struct iommufd_ioctl_op iommufd_ioctl_ops[] = { __reserved), IOCTL_OP(IOMMU_HWPT_GET_DIRTY_BITMAP, iommufd_hwpt_get_dirty_bitmap, struct iommu_hwpt_get_dirty_bitmap, data), + IOCTL_OP(IOMMU_HWPT_INVALIDATE, 
iommufd_hwpt_invalidate, + struct iommu_hwpt_invalidate, __reserved), IOCTL_OP(IOMMU_HWPT_SET_DIRTY_TRACKING, iommufd_hwpt_set_dirty_tracking, struct iommu_hwpt_set_dirty_tracking, __reserved), IOCTL_OP(IOMMU_IOAS_ALLOC, iommufd_ioas_alloc_ioctl, diff --git a/include/uapi/linux/iommufd.h b/include/uapi/linux/iommufd.h index 0b2bc6252e2c..824560c50ec6 100644 --- a/include/uapi/linux/iommufd.h +++ b/include/uapi/linux/iommufd.h @@ -49,6 +49,7 @@ enum { IOMMUFD_CMD_GET_HW_INFO, IOMMUFD_CMD_HWPT_SET_DIRTY_TRACKING, IOMMUFD_CMD_HWPT_GET_DIRTY_BITMAP, + IOMMUFD_CMD_HWPT_INVALIDATE, }; /** @@ -613,4 +614,46 @@ struct iommu_hwpt_get_dirty_bitmap { #define IOMMU_HWPT_GET_DIRTY_BITMAP _IO(IOMMUFD_TYPE, \ IOMMUFD_CMD_HWPT_GET_DIRTY_BITMAP) +/** + * enum iommu_hwpt_invalidate_data_type - IOMMU HWPT Cache Invalidation + * Data Type + * @IOMMU_HWPT_INVALIDATE_DATA_VTD_S1: Invalidation data for VTD_S1 + */ +enum iommu_hwpt_invalidate_data_type { + IOMMU_HWPT_INVALIDATE_DATA_VTD_S1, +}; + +/** + * struct iommu_hwpt_invalidate - ioctl(IOMMU_HWPT_INVALIDATE) + * @size: sizeof(struct iommu_hwpt_invalidate) + * @hwpt_id: ID of a nested HWPT for cache invalidation + * @data_uptr: User pointer to an array of driver-specific cache invalidation + * data. + * @data_type: One of enum iommu_hwpt_invalidate_data_type, defining the data + * type of all the entries in the invalidation request array. It + * should be a type supported by the hwpt pointed by @hwpt_id. + * @entry_len: Length (in bytes) of a request entry in the request array + * @entry_num: Input the number of cache invalidation requests in the array. + * Output the number of requests successfully handled by kernel. + * @__reserved: Must be 0. + * + * Invalidate the iommu cache for user-managed page table. Modifications on a + * user-managed page table should be followed by this operation to sync cache. + * Each ioctl can support one or more cache invalidation requests in the array + * that has a total size of @entry_len * @entry_num. + * + * An empty invalidation request array by setting @entry_num==0 is allowed, and + * @entry_len and @data_uptr would be ignored in this case. This can be used to + * check if the given @data_type is supported or not by kernel. 
+ */ +struct iommu_hwpt_invalidate { + __u32 size; + __u32 hwpt_id; + __aligned_u64 data_uptr; + __u32 data_type; + __u32 entry_len; + __u32 entry_num; + __u32 __reserved; +}; +#define IOMMU_HWPT_INVALIDATE _IO(IOMMUFD_TYPE, IOMMUFD_CMD_HWPT_INVALIDATE) #endif From patchwork Wed Dec 27 16:13:47 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: "Liu, Yi L" X-Patchwork-Id: 13505357 Received: from mgamail.intel.com (mgamail.intel.com [192.55.52.115]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 787A246522; Wed, 27 Dec 2023 16:14:03 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=intel.com Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=intel.com Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=intel.com header.i=@intel.com header.b="BzvQWq4y" DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1703693643; x=1735229643; h=from:to:cc:subject:date:message-id:in-reply-to: references:mime-version:content-transfer-encoding; bh=a0MtJfDdMrZSQXPjcVw/ypmtgFbBgW6CUZyjsKPTKtk=; b=BzvQWq4ylseL8HXf1RswmyvgF0l8VR7e15wtV2leCVELn/kZejpDrfmi aI10/IRPg4rvNa4frozSM7C+K7KQOavAw8wiPalLbP1J6C+GnuWOhjiSn r3t/8qMVU8MSA0jipIEeudBdr3Ss60fT7lwcy5zmP7XLhvnxQEo1FsKvn lt1RpqgO1KP3EGGc3rJN1bbZyd+1av2wVpZFzSGrUF+9eoyelNFDWS8iv o+IQUKS2ZbhbttovwhxSYbMEhKS89zg6YZomvOi6wdK6pc+tyPbNYjEjF 0aEDtUGEsUvkRI9rfe0MdFF62GS85JC4P2UjDV0mMuXFunFwJ1EIi4ftG g==; X-IronPort-AV: E=McAfee;i="6600,9927,10936"; a="396186218" X-IronPort-AV: E=Sophos;i="6.04,309,1695711600"; d="scan'208";a="396186218" Received: from fmsmga007.fm.intel.com ([10.253.24.52]) by fmsmga103.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 27 Dec 2023 08:14:02 -0800 X-ExtLoop1: 1 X-IronPort-AV: E=McAfee;i="6600,9927,10936"; a="781775196" X-IronPort-AV: E=Sophos;i="6.04,309,1695711600"; d="scan'208";a="781775196" Received: from 984fee00a4c6.jf.intel.com ([10.165.58.231]) by fmsmga007.fm.intel.com with ESMTP; 27 Dec 2023 08:14:01 -0800 From: Yi Liu To: joro@8bytes.org, alex.williamson@redhat.com, jgg@nvidia.com, kevin.tian@intel.com, robin.murphy@arm.com, baolu.lu@linux.intel.com Cc: cohuck@redhat.com, eric.auger@redhat.com, nicolinc@nvidia.com, kvm@vger.kernel.org, mjrosato@linux.ibm.com, chao.p.peng@linux.intel.com, yi.l.liu@intel.com, yi.y.sun@linux.intel.com, peterx@redhat.com, jasowang@redhat.com, shameerali.kolothum.thodi@huawei.com, lulu@redhat.com, suravee.suthikulpanit@amd.com, iommu@lists.linux.dev, linux-kernel@vger.kernel.org, linux-kselftest@vger.kernel.org, zhenzhong.duan@intel.com, joao.m.martins@oracle.com, xin.zeng@intel.com, yan.y.zhao@intel.com, j.granados@samsung.com Subject: [PATCH v8 03/10] iommu: Add iommu_copy_struct_from_user_array helper Date: Wed, 27 Dec 2023 08:13:47 -0800 Message-Id: <20231227161354.67701-4-yi.l.liu@intel.com> X-Mailer: git-send-email 2.34.1 In-Reply-To: <20231227161354.67701-1-yi.l.liu@intel.com> References: <20231227161354.67701-1-yi.l.liu@intel.com> Precedence: bulk X-Mailing-List: linux-kselftest@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 From: Nicolin Chen Wrap up the data pointer/num sanity and __iommu_copy_struct_from_user call for iommu drivers to copy driver specific data at a specific location in the struct iommu_user_data_array, and 
iommu_respond_struct_to_user_array() to copy a response back to a specific
location in the struct iommu_user_data_array. These helpers are expected
to be used by cache_invalidate_user ops, for example.

Reviewed-by: Kevin Tian
Signed-off-by: Nicolin Chen
Co-developed-by: Yi Liu
Signed-off-by: Yi Liu
---
 include/linux/iommu.h | 74 +++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 74 insertions(+)

diff --git a/include/linux/iommu.h b/include/linux/iommu.h
index 93c0d12dd047..c3434c9eaa6d 100644
--- a/include/linux/iommu.h
+++ b/include/linux/iommu.h
@@ -341,6 +341,80 @@ static inline int __iommu_copy_struct_from_user(
 					 sizeof(*kdst),                \
 					 offsetofend(typeof(*kdst), min_last))
 
+/**
+ * __iommu_copy_struct_from_user_array - Copy iommu driver specific user space
+ *                                       data from an iommu_user_data_array
+ * @dst_data: Pointer to an iommu driver specific user data that is defined in
+ *            include/uapi/linux/iommufd.h
+ * @src_array: Pointer to a struct iommu_user_data_array for a user space array
+ * @data_type: The data type of the @dst_data. Must match with @src_array.type
+ * @index: Index to the location in the array to copy user data from
+ * @data_len: Length of current user data structure, i.e. sizeof(struct _dst)
+ * @min_len: Initial length of user data structure for backward compatibility.
+ *           This should be offsetofend using the last member in the user data
+ *           struct that was initially added to include/uapi/linux/iommufd.h
+ */
+static inline int
+__iommu_copy_struct_from_user_array(void *dst_data,
+				    const struct iommu_user_data_array *src_array,
+				    unsigned int data_type, unsigned int index,
+				    size_t data_len, size_t min_len)
+{
+	struct iommu_user_data src_data;
+
+	if (WARN_ON(!src_array || index >= src_array->entry_num))
+		return -EINVAL;
+	if (!src_array->entry_num)
+		return -EINVAL;
+	src_data.uptr = src_array->uptr + src_array->entry_len * index;
+	src_data.len = src_array->entry_len;
+	src_data.type = src_array->type;
+
+	return __iommu_copy_struct_from_user(dst_data, &src_data, data_type,
+					     data_len, min_len);
+}
+
+/**
+ * iommu_copy_struct_from_user_array - Copy iommu driver specific user space
+ *                                     data from an iommu_user_data_array
+ * @kdst: Pointer to an iommu driver specific user data that is defined in
+ *        include/uapi/linux/iommufd.h
+ * @user_array: Pointer to a struct iommu_user_data_array for a user space
+ *              array
+ * @data_type: The data type of the @kdst. Must match with @user_array->type
+ * @index: Index to the location in the array to copy user data from
+ * @min_last: The last member of the data structure @kdst points to in the
+ *            initial version.
+ * Return 0 for success, otherwise -error.
+ */
+#define iommu_copy_struct_from_user_array(kdst, user_array, data_type,        \
+					  index, min_last)                     \
+	__iommu_copy_struct_from_user_array(kdst, user_array, data_type,      \
+					    index, sizeof(*kdst),              \
+					    offsetofend(typeof(*kdst),         \
+							min_last))
+
+/**
+ * iommu_respond_struct_to_user_array - Copy the response in @ksrc back to
+ *                                      a specific entry of user array
+ * @user_array: Pointer to a struct iommu_user_data_array for a user space
+ *              array
+ * @index: Index to the location in the array to copy response
+ * @ksrc: Pointer to kernel structure
+ * @klen: Length of @ksrc struct
+ *
+ * This only copies the response of one entry (@index) in @user_array.
+ */ +static inline int +iommu_respond_struct_to_user_array(const struct iommu_user_data_array *array, + unsigned int index, void *ksrc, size_t klen) +{ + if (copy_to_user(array->uptr + array->entry_len * index, + ksrc, min_t(size_t, array->entry_len, klen))) + return -EFAULT; + return 0; +} + /** * struct iommu_ops - iommu ops and capabilities * @capable: check capability From patchwork Wed Dec 27 16:13:48 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: "Liu, Yi L" X-Patchwork-Id: 13505358 Received: from mgamail.intel.com (mgamail.intel.com [192.55.52.115]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id AA52846B87; Wed, 27 Dec 2023 16:14:04 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=intel.com Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=intel.com Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=intel.com header.i=@intel.com header.b="ZYhWmrF2" DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1703693644; x=1735229644; h=from:to:cc:subject:date:message-id:in-reply-to: references:mime-version:content-transfer-encoding; bh=YaKsYJmpoP1J2+af2wMppPzhvG3pSWEOhFLhzXoh4BU=; b=ZYhWmrF2w1SvPxUy2s+i04RkaPSbjT+0wl7Kt40MmvelMxnrhQ8f9XsO 3ZQeGU5KHzOm6ziAbh68R6gK7kfGmU+HJF2+ZjweEF1MjRKkRNISQOuo2 +0YV33xqDPL15scGmVJqDCl3GgVu+TEKg0fG44aiVZjSPEfU2ADVsCAxT T9zYgHENXmxcpekVqYIYXWF/SiTzlQQeAnVxD2gnPB6YguNOAYIxqYKK7 Dg636FlqGMmjYhCORFgFKzLLbRrSKfHukC7cPORwl3MaKfYAGeNKa5voC GS+ZDWijRp2u+5VlW/rwPsiplEu1grgRUrTT5onYZUQevbva0ba/TBG1I A==; X-IronPort-AV: E=McAfee;i="6600,9927,10936"; a="396186231" X-IronPort-AV: E=Sophos;i="6.04,309,1695711600"; d="scan'208";a="396186231" Received: from fmsmga007.fm.intel.com ([10.253.24.52]) by fmsmga103.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 27 Dec 2023 08:14:04 -0800 X-ExtLoop1: 1 X-IronPort-AV: E=McAfee;i="6600,9927,10936"; a="781775205" X-IronPort-AV: E=Sophos;i="6.04,309,1695711600"; d="scan'208";a="781775205" Received: from 984fee00a4c6.jf.intel.com ([10.165.58.231]) by fmsmga007.fm.intel.com with ESMTP; 27 Dec 2023 08:14:02 -0800 From: Yi Liu To: joro@8bytes.org, alex.williamson@redhat.com, jgg@nvidia.com, kevin.tian@intel.com, robin.murphy@arm.com, baolu.lu@linux.intel.com Cc: cohuck@redhat.com, eric.auger@redhat.com, nicolinc@nvidia.com, kvm@vger.kernel.org, mjrosato@linux.ibm.com, chao.p.peng@linux.intel.com, yi.l.liu@intel.com, yi.y.sun@linux.intel.com, peterx@redhat.com, jasowang@redhat.com, shameerali.kolothum.thodi@huawei.com, lulu@redhat.com, suravee.suthikulpanit@amd.com, iommu@lists.linux.dev, linux-kernel@vger.kernel.org, linux-kselftest@vger.kernel.org, zhenzhong.duan@intel.com, joao.m.martins@oracle.com, xin.zeng@intel.com, yan.y.zhao@intel.com, j.granados@samsung.com Subject: [PATCH v8 04/10] iommufd/selftest: Add mock_domain_cache_invalidate_user support Date: Wed, 27 Dec 2023 08:13:48 -0800 Message-Id: <20231227161354.67701-5-yi.l.liu@intel.com> X-Mailer: git-send-email 2.34.1 In-Reply-To: <20231227161354.67701-1-yi.l.liu@intel.com> References: <20231227161354.67701-1-yi.l.liu@intel.com> Precedence: bulk X-Mailing-List: linux-kselftest@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 From: Nicolin Chen Add mock_domain_cache_invalidate_user() data structure to 
support the user space selftest program in covering the user cache
invalidation pathway.

Reviewed-by: Kevin Tian
Signed-off-by: Nicolin Chen
Co-developed-by: Yi Liu
Signed-off-by: Yi Liu
---
 drivers/iommu/iommufd/iommufd_test.h | 34 ++++++++++++++++
 drivers/iommu/iommufd/selftest.c     | 60 ++++++++++++++++++++++++++++
 2 files changed, 94 insertions(+)

diff --git a/drivers/iommu/iommufd/iommufd_test.h b/drivers/iommu/iommufd/iommufd_test.h
index 7910fbe1962d..2eef5afde711 100644
--- a/drivers/iommu/iommufd/iommufd_test.h
+++ b/drivers/iommu/iommufd/iommufd_test.h
@@ -148,4 +148,38 @@ struct iommu_hwpt_selftest {
 	__u32 iotlb;
 };
 
+/* Should not be equal to any defined value in enum iommu_hwpt_invalidate_data_type */
+#define IOMMU_HWPT_INVALIDATE_DATA_SELFTEST 0xdeadbeef
+#define IOMMU_HWPT_INVALIDATE_DATA_SELFTEST_INVALID 0xdadbeef
+
+/**
+ * enum iommu_hwpt_invalidate_selftest_error - Hardware error of invalidation
+ * @IOMMU_TEST_INVALIDATE_FAKE_ERROR: Fake hw error per test program's request
+ */
+enum iommu_hwpt_invalidate_selftest_error {
+	IOMMU_TEST_INVALIDATE_FAKE_ERROR = (1 << 0)
+};
+
+/**
+ * struct iommu_hwpt_invalidate_selftest - Invalidation data for Mock driver
+ *                                         (IOMMU_HWPT_INVALIDATE_DATA_SELFTEST)
+ * @flags: Invalidate flags
+ * @iotlb_id: Invalidate iotlb entry index
+ * @hw_error: One of enum iommu_hwpt_invalidate_selftest_error
+ * @__reserved: Must be 0
+ *
+ * If IOMMU_TEST_INVALIDATE_FLAG_ALL is set in @flags, @iotlb_id will be
+ * ignored. @hw_error is meaningful only if the request is processed
+ * successfully. If IOMMU_TEST_INVALIDATE_FLAG_TRIGGER_ERROR is set in
+ * @flags, a hw error is reported back and the cache is not invalidated in
+ * this case.
+ */
+struct iommu_hwpt_invalidate_selftest {
+#define IOMMU_TEST_INVALIDATE_FLAG_ALL (1 << 0)
+#define IOMMU_TEST_INVALIDATE_FLAG_TRIGGER_ERROR (1 << 1)
+	__u32 flags;
+	__u32 iotlb_id;
+	__u32 hw_error;
+	__u32 __reserved;
+};
+
 #endif
diff --git a/drivers/iommu/iommufd/selftest.c b/drivers/iommu/iommufd/selftest.c
index 022ef8f55088..ebc6c15abf67 100644
--- a/drivers/iommu/iommufd/selftest.c
+++ b/drivers/iommu/iommufd/selftest.c
@@ -473,9 +473,69 @@ static void mock_domain_free_nested(struct iommu_domain *domain)
 	kfree(mock_nested);
 }
 
+static int
+mock_domain_cache_invalidate_user(struct iommu_domain *domain,
+				  struct iommu_user_data_array *array)
+{
+	struct mock_iommu_domain_nested *mock_nested =
+		container_of(domain, struct mock_iommu_domain_nested, domain);
+	u32 hw_error = 0, processed = 0;
+	struct iommu_hwpt_invalidate_selftest inv;
+	int i = 0, j;
+	int rc = 0;
+
+	if (array->type != IOMMU_HWPT_INVALIDATE_DATA_SELFTEST) {
+		rc = -EINVAL;
+		goto out;
+	}
+
+	for ( ; i < array->entry_num; i++) {
+		rc = iommu_copy_struct_from_user_array(&inv, array,
+						       IOMMU_HWPT_INVALIDATE_DATA_SELFTEST,
+						       i, __reserved);
+		if (rc)
+			break;
+
+		if ((inv.flags & ~(IOMMU_TEST_INVALIDATE_FLAG_ALL |
+				   IOMMU_TEST_INVALIDATE_FLAG_TRIGGER_ERROR)) ||
+		    inv.__reserved) {
+			rc = -EOPNOTSUPP;
+			break;
+		}
+
+		if (inv.iotlb_id > MOCK_NESTED_DOMAIN_IOTLB_ID_MAX) {
+			rc = -EINVAL;
+			break;
+		}
+
+		if (inv.flags & IOMMU_TEST_INVALIDATE_FLAG_TRIGGER_ERROR) {
+			hw_error = IOMMU_TEST_INVALIDATE_FAKE_ERROR;
+		} else if (inv.flags & IOMMU_TEST_INVALIDATE_FLAG_ALL) {
+			/* Invalidate all mock iotlb entries and ignore iotlb_id */
+			for (j = 0; j < MOCK_NESTED_DOMAIN_IOTLB_NUM; j++)
+				mock_nested->iotlb[j] = 0;
+		} else {
+			mock_nested->iotlb[inv.iotlb_id] = 0;
+		}
+
+		inv.hw_error = hw_error;
+		rc = iommu_respond_struct_to_user_array(array, i, (void *)&inv,
+							sizeof(inv));
+		if (rc)
+			break;
+
+		processed++;
+	}
+
+out:
+	array->entry_num = processed;
+	return rc;
+}
+
 static struct iommu_domain_ops domain_nested_ops = {
 	.free = mock_domain_free_nested,
 	.attach_dev = mock_domain_nop_attach,
+	.cache_invalidate_user = mock_domain_cache_invalidate_user,
 };
 
 static inline struct iommufd_hw_pagetable *

From patchwork Wed Dec 27 16:13:49 2023
X-Patchwork-Submitter: "Liu, Yi L"
X-Patchwork-Id: 13505359
From: Yi Liu
Subject: [PATCH v8 05/10] iommufd/selftest: Add IOMMU_TEST_OP_MD_CHECK_IOTLB test op
Date: Wed, 27 Dec 2023 08:13:49 -0800
Message-Id: <20231227161354.67701-6-yi.l.liu@intel.com>
In-Reply-To: <20231227161354.67701-1-yi.l.liu@intel.com>
References: <20231227161354.67701-1-yi.l.liu@intel.com>

From: Nicolin Chen

Allow testing whether the IOTLB has been invalidated or not.
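As a usage illustration (not part of this patch), a test can verify a
single mock IOTLB entry roughly as below, based on the macros added in
the diff that follows; fd and hwpt_id are assumed to exist:

    /* Sketch: expect IOTLB entry 0 of the nested hwpt to be invalidated */
    struct iommu_test_cmd test_cmd = {
            .size = sizeof(test_cmd),
            .op = IOMMU_TEST_OP_MD_CHECK_IOTLB,
            .id = hwpt_id,
            .check_iotlb = { .id = 0, .iotlb = 0 },
    };

    assert(!ioctl(fd, _IOMMU_TEST_CMD(IOMMU_TEST_OP_MD_CHECK_IOTLB),
                  &test_cmd));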
Reviewed-by: Kevin Tian Signed-off-by: Nicolin Chen Signed-off-by: Yi Liu --- drivers/iommu/iommufd/iommufd_test.h | 5 ++++ drivers/iommu/iommufd/selftest.c | 26 +++++++++++++++++++ tools/testing/selftests/iommu/iommufd.c | 4 +++ tools/testing/selftests/iommu/iommufd_utils.h | 24 +++++++++++++++++ 4 files changed, 59 insertions(+) diff --git a/drivers/iommu/iommufd/iommufd_test.h b/drivers/iommu/iommufd/iommufd_test.h index 2eef5afde711..1cedd6b5ba2b 100644 --- a/drivers/iommu/iommufd/iommufd_test.h +++ b/drivers/iommu/iommufd/iommufd_test.h @@ -21,6 +21,7 @@ enum { IOMMU_TEST_OP_ACCESS_REPLACE_IOAS, IOMMU_TEST_OP_MOCK_DOMAIN_FLAGS, IOMMU_TEST_OP_DIRTY, + IOMMU_TEST_OP_MD_CHECK_IOTLB, }; enum { @@ -121,6 +122,10 @@ struct iommu_test_cmd { __aligned_u64 uptr; __aligned_u64 out_nr_dirty; } dirty; + struct { + __u32 id; + __u32 iotlb; + } check_iotlb; }; __u32 last; }; diff --git a/drivers/iommu/iommufd/selftest.c b/drivers/iommu/iommufd/selftest.c index ebc6c15abf67..9528528cab27 100644 --- a/drivers/iommu/iommufd/selftest.c +++ b/drivers/iommu/iommufd/selftest.c @@ -853,6 +853,28 @@ static int iommufd_test_md_check_refs(struct iommufd_ucmd *ucmd, return 0; } +static int iommufd_test_md_check_iotlb(struct iommufd_ucmd *ucmd, + u32 mockpt_id, unsigned int iotlb_id, + u32 iotlb) +{ + struct mock_iommu_domain_nested *mock_nested; + struct iommufd_hw_pagetable *hwpt; + int rc = 0; + + hwpt = get_md_pagetable_nested(ucmd, mockpt_id, &mock_nested); + if (IS_ERR(hwpt)) + return PTR_ERR(hwpt); + + mock_nested = container_of(hwpt->domain, + struct mock_iommu_domain_nested, domain); + + if (iotlb_id > MOCK_NESTED_DOMAIN_IOTLB_ID_MAX || + mock_nested->iotlb[iotlb_id] != iotlb) + rc = -EINVAL; + iommufd_put_object(ucmd->ictx, &hwpt->obj); + return rc; +} + struct selftest_access { struct iommufd_access *access; struct file *file; @@ -1334,6 +1356,10 @@ int iommufd_test(struct iommufd_ucmd *ucmd) return iommufd_test_md_check_refs( ucmd, u64_to_user_ptr(cmd->check_refs.uptr), cmd->check_refs.length, cmd->check_refs.refs); + case IOMMU_TEST_OP_MD_CHECK_IOTLB: + return iommufd_test_md_check_iotlb(ucmd, cmd->id, + cmd->check_iotlb.id, + cmd->check_iotlb.iotlb); case IOMMU_TEST_OP_CREATE_ACCESS: return iommufd_test_create_access(ucmd, cmd->id, cmd->create_access.flags); diff --git a/tools/testing/selftests/iommu/iommufd.c b/tools/testing/selftests/iommu/iommufd.c index 6ed328c863c4..c8763b880a16 100644 --- a/tools/testing/selftests/iommu/iommufd.c +++ b/tools/testing/selftests/iommu/iommufd.c @@ -330,6 +330,10 @@ TEST_F(iommufd_ioas, alloc_hwpt_nested) &nested_hwpt_id[1], IOMMU_HWPT_DATA_SELFTEST, &data, sizeof(data)); + test_cmd_hwpt_check_iotlb_all(nested_hwpt_id[0], + IOMMU_TEST_IOTLB_DEFAULT); + test_cmd_hwpt_check_iotlb_all(nested_hwpt_id[1], + IOMMU_TEST_IOTLB_DEFAULT); /* Negative test: a nested hwpt on top of a nested hwpt */ test_err_hwpt_alloc_nested(EINVAL, self->device_id, diff --git a/tools/testing/selftests/iommu/iommufd_utils.h b/tools/testing/selftests/iommu/iommufd_utils.h index ad9202335656..fe0a0f566b67 100644 --- a/tools/testing/selftests/iommu/iommufd_utils.h +++ b/tools/testing/selftests/iommu/iommufd_utils.h @@ -195,6 +195,30 @@ static int _test_cmd_hwpt_alloc(int fd, __u32 device_id, __u32 pt_id, _test_cmd_hwpt_alloc(self->fd, device_id, pt_id, flags, \ hwpt_id, data_type, data, data_len)) +#define test_cmd_hwpt_check_iotlb(hwpt_id, iotlb_id, expected) \ + ({ \ + struct iommu_test_cmd test_cmd = { \ + .size = sizeof(test_cmd), \ + .op = IOMMU_TEST_OP_MD_CHECK_IOTLB, \ + .id = hwpt_id, 
\ + .check_iotlb = { \ + .id = iotlb_id, \ + .iotlb = expected, \ + }, \ + }; \ + ASSERT_EQ(0, \ + ioctl(self->fd, \ + _IOMMU_TEST_CMD(IOMMU_TEST_OP_MD_CHECK_IOTLB), \ + &test_cmd)); \ + }) + +#define test_cmd_hwpt_check_iotlb_all(hwpt_id, expected) \ + ({ \ + int i; \ + for (i = 0; i < MOCK_NESTED_DOMAIN_IOTLB_NUM; i++) \ + test_cmd_hwpt_check_iotlb(hwpt_id, i, expected); \ + }) + static int _test_cmd_access_replace_ioas(int fd, __u32 access_id, unsigned int ioas_id) { From patchwork Wed Dec 27 16:13:50 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: "Liu, Yi L" X-Patchwork-Id: 13505360 Received: from mgamail.intel.com (mgamail.intel.com [192.55.52.115]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 23AD947F66; Wed, 27 Dec 2023 16:14:07 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=intel.com Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=intel.com Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=intel.com header.i=@intel.com header.b="FNIer9Ax" DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1703693647; x=1735229647; h=from:to:cc:subject:date:message-id:in-reply-to: references:mime-version:content-transfer-encoding; bh=qKSCh/YMvxeo+LBo/v2xOCmef4ysQUSLhTdnqykZs+4=; b=FNIer9Axx7OVofqHqwrHQYsVEx9yoqHx/WOtD/YopINNK2wBeXwtQKYx +45UA7VNhiFxHBucGywcn3T1gWcW5rAJWgrL+05G/OLKJ3TGW0MEoxosL 8hoARVdy5jXISloU5RHdpWqSMZycyr5sAY/v57MXZyWOJFH02rrv3YhlR F9k3ILGo+qHYpdzYPT/aVDRxEFVkXJCA0DVI1hYmQy990veleueARNTuq 6xpjt2GzKmgTeTedl52MViY6G20fBlCuN27/lkuCo4pyBhmzJgZ0cVuRn QXjFt7c+Mi7F9VjwT06vJuqkHcKbtUYEB9aB10Mtc2TQUPEsosvgH+Xhj w==; X-IronPort-AV: E=McAfee;i="6600,9927,10936"; a="396186256" X-IronPort-AV: E=Sophos;i="6.04,309,1695711600"; d="scan'208";a="396186256" Received: from fmsmga007.fm.intel.com ([10.253.24.52]) by fmsmga103.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 27 Dec 2023 08:14:06 -0800 X-ExtLoop1: 1 X-IronPort-AV: E=McAfee;i="6600,9927,10936"; a="781775214" X-IronPort-AV: E=Sophos;i="6.04,309,1695711600"; d="scan'208";a="781775214" Received: from 984fee00a4c6.jf.intel.com ([10.165.58.231]) by fmsmga007.fm.intel.com with ESMTP; 27 Dec 2023 08:14:05 -0800 From: Yi Liu To: joro@8bytes.org, alex.williamson@redhat.com, jgg@nvidia.com, kevin.tian@intel.com, robin.murphy@arm.com, baolu.lu@linux.intel.com Cc: cohuck@redhat.com, eric.auger@redhat.com, nicolinc@nvidia.com, kvm@vger.kernel.org, mjrosato@linux.ibm.com, chao.p.peng@linux.intel.com, yi.l.liu@intel.com, yi.y.sun@linux.intel.com, peterx@redhat.com, jasowang@redhat.com, shameerali.kolothum.thodi@huawei.com, lulu@redhat.com, suravee.suthikulpanit@amd.com, iommu@lists.linux.dev, linux-kernel@vger.kernel.org, linux-kselftest@vger.kernel.org, zhenzhong.duan@intel.com, joao.m.martins@oracle.com, xin.zeng@intel.com, yan.y.zhao@intel.com, j.granados@samsung.com Subject: [PATCH v8 06/10] iommufd/selftest: Add coverage for IOMMU_HWPT_INVALIDATE ioctl Date: Wed, 27 Dec 2023 08:13:50 -0800 Message-Id: <20231227161354.67701-7-yi.l.liu@intel.com> X-Mailer: git-send-email 2.34.1 In-Reply-To: <20231227161354.67701-1-yi.l.liu@intel.com> References: <20231227161354.67701-1-yi.l.liu@intel.com> Precedence: bulk X-Mailing-List: linux-kselftest@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: 
MIME-Version: 1.0 From: Nicolin Chen Add test cases for the IOMMU_HWPT_INVALIDATE ioctl and verify it by using the new IOMMU_TEST_OP_MD_CHECK_IOTLB. Reviewed-by: Kevin Tian Signed-off-by: Nicolin Chen Co-developed-by: Yi Liu Signed-off-by: Yi Liu --- tools/testing/selftests/iommu/iommufd.c | 175 ++++++++++++++++++ tools/testing/selftests/iommu/iommufd_utils.h | 33 ++++ 2 files changed, 208 insertions(+) diff --git a/tools/testing/selftests/iommu/iommufd.c b/tools/testing/selftests/iommu/iommufd.c index c8763b880a16..5c6c1342f371 100644 --- a/tools/testing/selftests/iommu/iommufd.c +++ b/tools/testing/selftests/iommu/iommufd.c @@ -116,6 +116,7 @@ TEST_F(iommufd, cmd_length) TEST_LENGTH(iommu_destroy, IOMMU_DESTROY, id); TEST_LENGTH(iommu_hw_info, IOMMU_GET_HW_INFO, __reserved); TEST_LENGTH(iommu_hwpt_alloc, IOMMU_HWPT_ALLOC, __reserved); + TEST_LENGTH(iommu_hwpt_invalidate, IOMMU_HWPT_INVALIDATE, __reserved); TEST_LENGTH(iommu_ioas_alloc, IOMMU_IOAS_ALLOC, out_ioas_id); TEST_LENGTH(iommu_ioas_iova_ranges, IOMMU_IOAS_IOVA_RANGES, out_iova_alignment); @@ -271,7 +272,9 @@ TEST_F(iommufd_ioas, alloc_hwpt_nested) struct iommu_hwpt_selftest data = { .iotlb = IOMMU_TEST_IOTLB_DEFAULT, }; + struct iommu_hwpt_invalidate_selftest inv_reqs[2] = {}; uint32_t nested_hwpt_id[2] = {}; + uint32_t num_inv; uint32_t parent_hwpt_id = 0; uint32_t parent_hwpt_id_not_work = 0; uint32_t test_hwpt_id = 0; @@ -344,6 +347,178 @@ TEST_F(iommufd_ioas, alloc_hwpt_nested) EXPECT_ERRNO(EBUSY, _test_ioctl_destroy(self->fd, parent_hwpt_id)); + /* hwpt_invalidate only supports a user-managed hwpt (nested) */ + num_inv = 1; + test_err_hwpt_invalidate(ENOENT, parent_hwpt_id, inv_reqs, + IOMMU_HWPT_INVALIDATE_DATA_SELFTEST, + sizeof(*inv_reqs), &num_inv); + assert(!num_inv); + + /* Check data_type by passing zero-length array */ + num_inv = 0; + test_cmd_hwpt_invalidate(nested_hwpt_id[0], inv_reqs, + IOMMU_HWPT_INVALIDATE_DATA_SELFTEST, + sizeof(*inv_reqs), &num_inv); + assert(!num_inv); + + /* Negative test: Invalid data_type */ + num_inv = 1; + test_err_hwpt_invalidate(EINVAL, nested_hwpt_id[0], inv_reqs, + IOMMU_HWPT_INVALIDATE_DATA_SELFTEST_INVALID, + sizeof(*inv_reqs), &num_inv); + assert(!num_inv); + + /* Negative test: structure size sanity */ + num_inv = 1; + test_err_hwpt_invalidate(EINVAL, nested_hwpt_id[0], inv_reqs, + IOMMU_HWPT_INVALIDATE_DATA_SELFTEST, + sizeof(*inv_reqs) + 1, &num_inv); + assert(!num_inv); + + num_inv = 1; + test_err_hwpt_invalidate(EINVAL, nested_hwpt_id[0], inv_reqs, + IOMMU_HWPT_INVALIDATE_DATA_SELFTEST, + 1, &num_inv); + assert(!num_inv); + + /* Negative test: invalid flag is passed */ + num_inv = 1; + inv_reqs[0].flags = 0xffffffff; + test_err_hwpt_invalidate(EOPNOTSUPP, nested_hwpt_id[0], inv_reqs, + IOMMU_HWPT_INVALIDATE_DATA_SELFTEST, + sizeof(*inv_reqs), &num_inv); + assert(!num_inv); + + /* Negative test: non-zero __reserved is passed */ + num_inv = 1; + inv_reqs[0].flags = 0; + inv_reqs[0].__reserved = 0x1234; + test_err_hwpt_invalidate(EOPNOTSUPP, nested_hwpt_id[0], inv_reqs, + IOMMU_HWPT_INVALIDATE_DATA_SELFTEST, + sizeof(*inv_reqs), &num_inv); + assert(!num_inv); + + /* Negative test: invalid data_uptr when array is not empty */ + num_inv = 1; + inv_reqs[0].flags = 0; + test_err_hwpt_invalidate(EINVAL, nested_hwpt_id[0], NULL, + IOMMU_HWPT_INVALIDATE_DATA_SELFTEST, + sizeof(*inv_reqs), &num_inv); + assert(!num_inv); + + /* Negative test: invalid entry_len when array is not empty */ + num_inv = 1; + inv_reqs[0].flags = 0; + test_err_hwpt_invalidate(EINVAL, nested_hwpt_id[0], 
inv_reqs, + IOMMU_HWPT_INVALIDATE_DATA_SELFTEST, + 0, &num_inv); + assert(!num_inv); + + /* Negative test: invalid iotlb_id */ + num_inv = 1; + inv_reqs[0].flags = 0; + inv_reqs[0].__reserved = 0; + inv_reqs[0].iotlb_id = MOCK_NESTED_DOMAIN_IOTLB_ID_MAX + 1; + test_err_hwpt_invalidate(EINVAL, nested_hwpt_id[0], inv_reqs, + IOMMU_HWPT_INVALIDATE_DATA_SELFTEST, + sizeof(*inv_reqs), &num_inv); + assert(!num_inv); + + /* Negative test: trigger error */ + num_inv = 1; + inv_reqs[0].flags = IOMMU_TEST_INVALIDATE_FLAG_TRIGGER_ERROR; + inv_reqs[0].iotlb_id = 0; + test_cmd_hwpt_invalidate(nested_hwpt_id[0], inv_reqs, + IOMMU_HWPT_INVALIDATE_DATA_SELFTEST, + sizeof(*inv_reqs), &num_inv); + assert(num_inv == 1); + assert(inv_reqs[0].hw_error == IOMMU_TEST_INVALIDATE_FAKE_ERROR); + + /* + * Invalidate the 1st iotlb entry but fail the 2nd request + * - mock driver error, the hw_error field is meaningful, + * the ioctl returns 0. + */ + num_inv = 2; + inv_reqs[0].flags = 0; + inv_reqs[0].iotlb_id = 0; + inv_reqs[1].flags = IOMMU_TEST_INVALIDATE_FLAG_TRIGGER_ERROR; + inv_reqs[1].iotlb_id = 1; + test_cmd_hwpt_invalidate(nested_hwpt_id[0], inv_reqs, + IOMMU_HWPT_INVALIDATE_DATA_SELFTEST, + sizeof(*inv_reqs), &num_inv); + assert(num_inv == 2); + assert(!inv_reqs[0].hw_error); + assert(inv_reqs[1].hw_error == IOMMU_TEST_INVALIDATE_FAKE_ERROR); + test_cmd_hwpt_check_iotlb(nested_hwpt_id[0], 0, 0); + test_cmd_hwpt_check_iotlb(nested_hwpt_id[0], 1, + IOMMU_TEST_IOTLB_DEFAULT); + test_cmd_hwpt_check_iotlb(nested_hwpt_id[0], 2, + IOMMU_TEST_IOTLB_DEFAULT); + test_cmd_hwpt_check_iotlb(nested_hwpt_id[0], 3, + IOMMU_TEST_IOTLB_DEFAULT); + + /* + * Invalidate the 1st iotlb entry but fail the 2nd request + * - ioctl error, the hw_error field is meaningless + */ + num_inv = 2; + inv_reqs[0].flags = 0; + inv_reqs[0].iotlb_id = 0; + inv_reqs[1].flags = 0; + inv_reqs[1].iotlb_id = MOCK_NESTED_DOMAIN_IOTLB_ID_MAX + 1; + test_err_hwpt_invalidate(EINVAL, nested_hwpt_id[0], inv_reqs, + IOMMU_HWPT_INVALIDATE_DATA_SELFTEST, + sizeof(*inv_reqs), &num_inv); + assert(num_inv == 1); + assert(!inv_reqs[0].hw_error); + test_cmd_hwpt_check_iotlb(nested_hwpt_id[0], 0, 0); + test_cmd_hwpt_check_iotlb(nested_hwpt_id[0], 1, + IOMMU_TEST_IOTLB_DEFAULT); + test_cmd_hwpt_check_iotlb(nested_hwpt_id[0], 2, + IOMMU_TEST_IOTLB_DEFAULT); + test_cmd_hwpt_check_iotlb(nested_hwpt_id[0], 3, + IOMMU_TEST_IOTLB_DEFAULT); + + /* Invalidate the 2nd iotlb entry and verify */ + num_inv = 1; + inv_reqs[0].flags = 0; + inv_reqs[0].iotlb_id = 1; + test_cmd_hwpt_invalidate(nested_hwpt_id[0], inv_reqs, + IOMMU_HWPT_INVALIDATE_DATA_SELFTEST, + sizeof(*inv_reqs), &num_inv); + assert(!inv_reqs[0].hw_error); + test_cmd_hwpt_check_iotlb(nested_hwpt_id[0], 0, 0); + test_cmd_hwpt_check_iotlb(nested_hwpt_id[0], 1, 0); + test_cmd_hwpt_check_iotlb(nested_hwpt_id[0], 2, + IOMMU_TEST_IOTLB_DEFAULT); + test_cmd_hwpt_check_iotlb(nested_hwpt_id[0], 3, + IOMMU_TEST_IOTLB_DEFAULT); + + /* Invalidate the 3rd and 4th iotlb entries and verify */ + num_inv = 2; + inv_reqs[0].flags = 0; + inv_reqs[0].iotlb_id = 2; + inv_reqs[1].flags = 0; + inv_reqs[1].iotlb_id = 3; + test_cmd_hwpt_invalidate(nested_hwpt_id[0], inv_reqs, + IOMMU_HWPT_INVALIDATE_DATA_SELFTEST, + sizeof(*inv_reqs), &num_inv); + assert(num_inv == 2); + assert(!inv_reqs[0].hw_error); + assert(!inv_reqs[1].hw_error); + test_cmd_hwpt_check_iotlb_all(nested_hwpt_id[0], 0); + + /* Invalidate all iotlb entries for nested_hwpt_id[1] and verify */ + num_inv = 1; + inv_reqs[0].flags = IOMMU_TEST_INVALIDATE_FLAG_ALL; + 
test_cmd_hwpt_invalidate(nested_hwpt_id[1], inv_reqs, + IOMMU_HWPT_INVALIDATE_DATA_SELFTEST, + sizeof(*inv_reqs), &num_inv); + assert(num_inv == 1); + assert(!inv_reqs[0].hw_error); + test_cmd_hwpt_check_iotlb_all(nested_hwpt_id[1], 0); + /* Attach device to nested_hwpt_id[0] that then will be busy */ test_cmd_mock_domain_replace(self->stdev_id, nested_hwpt_id[0]); EXPECT_ERRNO(EBUSY, diff --git a/tools/testing/selftests/iommu/iommufd_utils.h b/tools/testing/selftests/iommu/iommufd_utils.h index fe0a0f566b67..7f41fb796a8a 100644 --- a/tools/testing/selftests/iommu/iommufd_utils.h +++ b/tools/testing/selftests/iommu/iommufd_utils.h @@ -219,6 +219,39 @@ static int _test_cmd_hwpt_alloc(int fd, __u32 device_id, __u32 pt_id, test_cmd_hwpt_check_iotlb(hwpt_id, i, expected); \ }) +static int _test_cmd_hwpt_invalidate(int fd, __u32 hwpt_id, void *reqs, + uint32_t data_type, uint32_t lreq, + uint32_t *nreqs) +{ + struct iommu_hwpt_invalidate cmd = { + .size = sizeof(cmd), + .hwpt_id = hwpt_id, + .data_type = data_type, + .data_uptr = (uint64_t)reqs, + .entry_len = lreq, + .entry_num = *nreqs, + }; + int rc = ioctl(fd, IOMMU_HWPT_INVALIDATE, &cmd); + *nreqs = cmd.entry_num; + return rc; +} + +#define test_cmd_hwpt_invalidate(hwpt_id, reqs, data_type, lreq, nreqs) \ + ({ \ + ASSERT_EQ(0, \ + _test_cmd_hwpt_invalidate(self->fd, hwpt_id, reqs, \ + data_type, \ + lreq, nreqs)); \ + }) +#define test_err_hwpt_invalidate(_errno, hwpt_id, reqs, data_type, lreq, \ + nreqs) \ + ({ \ + EXPECT_ERRNO(_errno, \ + _test_cmd_hwpt_invalidate(self->fd, hwpt_id, \ + reqs, data_type, \ + lreq, nreqs)); \ + }) + static int _test_cmd_access_replace_ioas(int fd, __u32 access_id, unsigned int ioas_id) { From patchwork Wed Dec 27 16:13:51 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: "Liu, Yi L" X-Patchwork-Id: 13505361 Received: from mgamail.intel.com (mgamail.intel.com [192.55.52.115]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 7FDDE481BE; Wed, 27 Dec 2023 16:14:08 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=intel.com Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=intel.com Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=intel.com header.i=@intel.com header.b="WHrmRt5I" DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1703693648; x=1735229648; h=from:to:cc:subject:date:message-id:in-reply-to: references:mime-version:content-transfer-encoding; bh=sEjQ7ZSFTbSHrg6rJ47RrtVWPn+HOd0wma9HUMaCT74=; b=WHrmRt5IhXshKBkxYSAg55yTK8P1LqD1842aabfzYUP7i+lKbrtd9z0K l8UJpcLEpiWGG44uYY1nnf2vVkLrs6mSH/spBEVH6PkvKq7p1gUFVbsSb i/sFEOSCFM4YIseUL8PdrxJqvICzmNgjwkFdRrg8RFVeAr0zMjSQsRQId BmXGSw80yKBN+ZcUHtPC/tW/b2zrapNC2rurFE4gCaiZ1Cpu8GQi5Yl5j qJdRb1+7e8rTMWspmLzi3bkUkY7imy5gNf4nGUvLfbloGTnA0qV1WEcbb lTmBVog0SLMC5QTGGeeBXabCHvSoVTqX3Ae+1+gKcYYujDZChW60bUGvb Q==; X-IronPort-AV: E=McAfee;i="6600,9927,10936"; a="396186274" X-IronPort-AV: E=Sophos;i="6.04,309,1695711600"; d="scan'208";a="396186274" Received: from fmsmga007.fm.intel.com ([10.253.24.52]) by fmsmga103.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 27 Dec 2023 08:14:07 -0800 X-ExtLoop1: 1 X-IronPort-AV: E=McAfee;i="6600,9927,10936"; a="781775217" X-IronPort-AV: E=Sophos;i="6.04,309,1695711600"; d="scan'208";a="781775217" 
From: Yi Liu
Subject: [PATCH v8 07/10] iommu/vt-d: Allow qi_submit_sync() to return the QI faults
Date: Wed, 27 Dec 2023 08:13:51 -0800
Message-Id: <20231227161354.67701-8-yi.l.liu@intel.com>
In-Reply-To: <20231227161354.67701-1-yi.l.liu@intel.com>
References: <20231227161354.67701-1-yi.l.liu@intel.com>

From: Lu Baolu

This allows qi_submit_sync() to return faults to callers.

Signed-off-by: Lu Baolu
Signed-off-by: Yi Liu
---
 drivers/iommu/intel/dmar.c          | 31 +++++++++++++++++----------
 drivers/iommu/intel/iommu.h         |  2 +-
 drivers/iommu/intel/irq_remapping.c |  2 +-
 drivers/iommu/intel/pasid.c         |  2 +-
 drivers/iommu/intel/svm.c           |  6 +++---
 5 files changed, 26 insertions(+), 17 deletions(-)

diff --git a/drivers/iommu/intel/dmar.c b/drivers/iommu/intel/dmar.c
index 23cb80d62a9a..701705b3a8ea 100644
--- a/drivers/iommu/intel/dmar.c
+++ b/drivers/iommu/intel/dmar.c
@@ -1267,7 +1267,8 @@ static void qi_dump_fault(struct intel_iommu *iommu, u32 fault)
 		(unsigned long long)desc->qw1);
 }
 
-static int qi_check_fault(struct intel_iommu *iommu, int index, int wait_index)
+static int qi_check_fault(struct intel_iommu *iommu, int index,
+			  int wait_index, u32 *fsts)
 {
 	u32 fault;
 	int head, tail;
@@ -1278,8 +1279,12 @@ static int qi_check_fault(struct intel_iommu *iommu, int index, int wait_index)
 		return -EAGAIN;
 
 	fault = readl(iommu->reg + DMAR_FSTS_REG);
-	if (fault & (DMA_FSTS_IQE | DMA_FSTS_ITE | DMA_FSTS_ICE))
+	fault &= DMA_FSTS_IQE | DMA_FSTS_ITE | DMA_FSTS_ICE;
+	if (fault) {
+		if (fsts)
+			*fsts |= fault;
 		qi_dump_fault(iommu, fault);
+	}
 
 	/*
 	 * If IQE happens, the head points to the descriptor associated
@@ -1342,9 +1347,11 @@ static int qi_check_fault(struct intel_iommu *iommu, int index, int wait_index)
  * time, a wait descriptor will be appended to each submission to ensure
  * hardware has completed the invalidation before return. Wait descriptors
  * can be part of the submission but it will not be polled for completion.
+ * If callers are interested in the QI faults that occur during the handling
+ * of requests, the QI faults are saved in @fault.
*/ int qi_submit_sync(struct intel_iommu *iommu, struct qi_desc *desc, - unsigned int count, unsigned long options) + unsigned int count, unsigned long options, u32 *fault) { struct q_inval *qi = iommu->qi; s64 devtlb_start_ktime = 0; @@ -1376,6 +1383,8 @@ int qi_submit_sync(struct intel_iommu *iommu, struct qi_desc *desc, restart: rc = 0; + if (fault) + *fault = 0; raw_spin_lock_irqsave(&qi->q_lock, flags); /* @@ -1430,7 +1439,7 @@ int qi_submit_sync(struct intel_iommu *iommu, struct qi_desc *desc, * a deadlock where the interrupt context can wait indefinitely * for free slots in the queue. */ - rc = qi_check_fault(iommu, index, wait_index); + rc = qi_check_fault(iommu, index, wait_index, fault); if (rc) break; @@ -1476,7 +1485,7 @@ void qi_global_iec(struct intel_iommu *iommu) desc.qw3 = 0; /* should never fail */ - qi_submit_sync(iommu, &desc, 1, 0); + qi_submit_sync(iommu, &desc, 1, 0, NULL); } void qi_flush_context(struct intel_iommu *iommu, u16 did, u16 sid, u8 fm, @@ -1490,7 +1499,7 @@ void qi_flush_context(struct intel_iommu *iommu, u16 did, u16 sid, u8 fm, desc.qw2 = 0; desc.qw3 = 0; - qi_submit_sync(iommu, &desc, 1, 0); + qi_submit_sync(iommu, &desc, 1, 0, NULL); } void qi_flush_iotlb(struct intel_iommu *iommu, u16 did, u64 addr, @@ -1514,7 +1523,7 @@ void qi_flush_iotlb(struct intel_iommu *iommu, u16 did, u64 addr, desc.qw2 = 0; desc.qw3 = 0; - qi_submit_sync(iommu, &desc, 1, 0); + qi_submit_sync(iommu, &desc, 1, 0, NULL); } void qi_flush_dev_iotlb(struct intel_iommu *iommu, u16 sid, u16 pfsid, @@ -1545,7 +1554,7 @@ void qi_flush_dev_iotlb(struct intel_iommu *iommu, u16 sid, u16 pfsid, desc.qw2 = 0; desc.qw3 = 0; - qi_submit_sync(iommu, &desc, 1, 0); + qi_submit_sync(iommu, &desc, 1, 0, NULL); } /* PASID-based IOTLB invalidation */ @@ -1586,7 +1595,7 @@ void qi_flush_piotlb(struct intel_iommu *iommu, u16 did, u32 pasid, u64 addr, QI_EIOTLB_AM(mask); } - qi_submit_sync(iommu, &desc, 1, 0); + qi_submit_sync(iommu, &desc, 1, 0, NULL); } /* PASID-based device IOTLB Invalidate */ @@ -1639,7 +1648,7 @@ void qi_flush_dev_iotlb_pasid(struct intel_iommu *iommu, u16 sid, u16 pfsid, desc.qw1 |= QI_DEV_EIOTLB_SIZE; } - qi_submit_sync(iommu, &desc, 1, 0); + qi_submit_sync(iommu, &desc, 1, 0, NULL); } void qi_flush_pasid_cache(struct intel_iommu *iommu, u16 did, @@ -1649,7 +1658,7 @@ void qi_flush_pasid_cache(struct intel_iommu *iommu, u16 did, desc.qw0 = QI_PC_PASID(pasid) | QI_PC_DID(did) | QI_PC_GRAN(granu) | QI_PC_TYPE; - qi_submit_sync(iommu, &desc, 1, 0); + qi_submit_sync(iommu, &desc, 1, 0, NULL); } /* diff --git a/drivers/iommu/intel/iommu.h b/drivers/iommu/intel/iommu.h index ce030c5b5772..c6de958e4f54 100644 --- a/drivers/iommu/intel/iommu.h +++ b/drivers/iommu/intel/iommu.h @@ -881,7 +881,7 @@ void qi_flush_pasid_cache(struct intel_iommu *iommu, u16 did, u64 granu, u32 pasid); int qi_submit_sync(struct intel_iommu *iommu, struct qi_desc *desc, - unsigned int count, unsigned long options); + unsigned int count, unsigned long options, u32 *fault); /* * Options used in qi_submit_sync: * QI_OPT_WAIT_DRAIN - Wait for PRQ drain completion, spec 6.5.2.8. 
diff --git a/drivers/iommu/intel/irq_remapping.c b/drivers/iommu/intel/irq_remapping.c index 29b9e55dcf26..f834afa3672d 100644 --- a/drivers/iommu/intel/irq_remapping.c +++ b/drivers/iommu/intel/irq_remapping.c @@ -153,7 +153,7 @@ static int qi_flush_iec(struct intel_iommu *iommu, int index, int mask) desc.qw2 = 0; desc.qw3 = 0; - return qi_submit_sync(iommu, &desc, 1, 0); + return qi_submit_sync(iommu, &desc, 1, 0, NULL); } static int modify_irte(struct irq_2_iommu *irq_iommu, diff --git a/drivers/iommu/intel/pasid.c b/drivers/iommu/intel/pasid.c index 74e8e4c17e81..67f924760ba8 100644 --- a/drivers/iommu/intel/pasid.c +++ b/drivers/iommu/intel/pasid.c @@ -467,7 +467,7 @@ pasid_cache_invalidation_with_pasid(struct intel_iommu *iommu, desc.qw2 = 0; desc.qw3 = 0; - qi_submit_sync(iommu, &desc, 1, 0); + qi_submit_sync(iommu, &desc, 1, 0, NULL); } static void diff --git a/drivers/iommu/intel/svm.c b/drivers/iommu/intel/svm.c index ac12f76c1212..660d049ad5b6 100644 --- a/drivers/iommu/intel/svm.c +++ b/drivers/iommu/intel/svm.c @@ -543,7 +543,7 @@ void intel_drain_pasid_prq(struct device *dev, u32 pasid) QI_DEV_IOTLB_PFSID(info->pfsid); qi_retry: reinit_completion(&iommu->prq_complete); - qi_submit_sync(iommu, desc, 3, QI_OPT_WAIT_DRAIN); + qi_submit_sync(iommu, desc, 3, QI_OPT_WAIT_DRAIN, NULL); if (readl(iommu->reg + DMAR_PRS_REG) & DMA_PRS_PRO) { wait_for_completion(&iommu->prq_complete); goto qi_retry; @@ -646,7 +646,7 @@ static void handle_bad_prq_event(struct intel_iommu *iommu, desc.qw3 = 0; } - qi_submit_sync(iommu, &desc, 1, 0); + qi_submit_sync(iommu, &desc, 1, 0, NULL); } static irqreturn_t prq_event_thread(int irq, void *d) @@ -811,7 +811,7 @@ int intel_svm_page_response(struct device *dev, ktime_to_ns(ktime_get()) - prm->private_data[0]); } - qi_submit_sync(iommu, &desc, 1, 0); + qi_submit_sync(iommu, &desc, 1, 0, NULL); } out: return ret; From patchwork Wed Dec 27 16:13:52 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: "Liu, Yi L" X-Patchwork-Id: 13505362 Received: from mgamail.intel.com (mgamail.intel.com [192.55.52.115]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id D59A6482FE; Wed, 27 Dec 2023 16:14:09 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=intel.com Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=intel.com Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=intel.com header.i=@intel.com header.b="JYOWIVKU" DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1703693649; x=1735229649; h=from:to:cc:subject:date:message-id:in-reply-to: references:mime-version:content-transfer-encoding; bh=6z+F2MIobYMCwciVcWcITx/eB+oS3TBjMhYvHV4t75M=; b=JYOWIVKURPhzRAJ96fQRwycbM+pd7HaeFBoxrbfE1hHI4xS86hfPMaFq xiOwAbMunbIKBH+PzFIeFfECjNzrjDShCmi4dkAVHKHwoJTb/0P9CymzE sBOyaQ1PrBNROGNfYjma+X+RQ6i54tneRrnmWVj676Ufl7GWdLRbV2O3r sbihlYA08dVjDcwI9GiOpNjkomaoFJgig4o+TvUleymgwAIdgLvyThGQ/ Kg1QasZ3U8tsC9dHBR6YM1xYjKKBg6cHLjEn+DZHXIYx1RGXp9HN3PXbq Ij1YrqU6gYYS0leOqfA6wHcS7zO51h0qjbhH1Yz0sDNd4Tkxsz7Qnfti2 Q==; X-IronPort-AV: E=McAfee;i="6600,9927,10936"; a="396186294" X-IronPort-AV: E=Sophos;i="6.04,309,1695711600"; d="scan'208";a="396186294" Received: from fmsmga007.fm.intel.com ([10.253.24.52]) by fmsmga103.fm.intel.com with 
From patchwork Wed Dec 27 16:13:52 2023
X-Patchwork-Submitter: "Liu, Yi L"
X-Patchwork-Id: 13505362
From: Yi Liu <yi.l.liu@intel.com>
Subject: [PATCH v8 08/10] iommu/vt-d: Convert stage-1 cache invalidation to return QI fault
Date: Wed, 27 Dec 2023 08:13:52 -0800
Message-Id: <20231227161354.67701-9-yi.l.liu@intel.com>
In-Reply-To: <20231227161354.67701-1-yi.l.liu@intel.com>
References: <20231227161354.67701-1-yi.l.liu@intel.com>

From: Lu Baolu <baolu.lu@linux.intel.com>

This makes the PASID-based cache invalidation and device TLB
invalidation return QI faults to callers. This is needed when userspace
invalidates caches after modifying a stage-1 page table used in nested
translation: hardware errors encountered during the invalidation must
be reported to userspace.

Signed-off-by: Lu Baolu <baolu.lu@linux.intel.com>
Signed-off-by: Yi Liu <yi.l.liu@intel.com>
Reviewed-by: Kevin Tian <kevin.tian@intel.com>
---
 drivers/iommu/intel/dmar.c  | 13 +++++++------
 drivers/iommu/intel/iommu.c | 12 ++++++------
 drivers/iommu/intel/iommu.h |  6 +++---
 drivers/iommu/intel/pasid.c | 12 +++++++-----
 drivers/iommu/intel/svm.c   |  8 ++++----
 5 files changed, 27 insertions(+), 24 deletions(-)
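
The conversion that follows is mechanical: each flush helper gains a
"u32 *fault" tail argument and forwards it to qi_submit_sync(), while
existing callers pass NULL. Reduced to its essentials, the pattern is
(a sketch of the shape of the change, not the actual diff):

    /* Sketch of the pass-through pattern applied below. */
    static void example_flush(struct intel_iommu *iommu, u16 did, u32 *fault)
    {
            struct qi_desc desc = {};

            /* ... build the invalidation descriptor for @did ... */
            qi_submit_sync(iommu, &desc, 1, 0, fault);   /* just forward */
    }
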
diff --git a/drivers/iommu/intel/dmar.c b/drivers/iommu/intel/dmar.c
index 701705b3a8ea..91635bd6493d 100644
--- a/drivers/iommu/intel/dmar.c
+++ b/drivers/iommu/intel/dmar.c
@@ -1527,7 +1527,7 @@ void qi_flush_iotlb(struct intel_iommu *iommu, u16 did, u64 addr,
 }
 
 void qi_flush_dev_iotlb(struct intel_iommu *iommu, u16 sid, u16 pfsid,
-			u16 qdep, u64 addr, unsigned mask)
+			u16 qdep, u64 addr, unsigned mask, u32 *fault)
 {
 	struct qi_desc desc;
 
@@ -1554,12 +1554,12 @@ void qi_flush_dev_iotlb(struct intel_iommu *iommu, u16 sid, u16 pfsid,
 	desc.qw2 = 0;
 	desc.qw3 = 0;
 
-	qi_submit_sync(iommu, &desc, 1, 0, NULL);
+	qi_submit_sync(iommu, &desc, 1, 0, fault);
 }
 
 /* PASID-based IOTLB invalidation */
 void qi_flush_piotlb(struct intel_iommu *iommu, u16 did, u32 pasid, u64 addr,
-		     unsigned long npages, bool ih)
+		     unsigned long npages, bool ih, u32 *fault)
 {
 	struct qi_desc desc = {.qw2 = 0, .qw3 = 0};
 
@@ -1595,12 +1595,13 @@ void qi_flush_piotlb(struct intel_iommu *iommu, u16 did, u32 pasid, u64 addr,
 			QI_EIOTLB_AM(mask);
 	}
 
-	qi_submit_sync(iommu, &desc, 1, 0, NULL);
+	qi_submit_sync(iommu, &desc, 1, 0, fault);
 }
 
 /* PASID-based device IOTLB Invalidate */
 void qi_flush_dev_iotlb_pasid(struct intel_iommu *iommu, u16 sid, u16 pfsid,
-			      u32 pasid, u16 qdep, u64 addr, unsigned int size_order)
+			      u32 pasid, u16 qdep, u64 addr,
+			      unsigned int size_order, u32 *fault)
 {
 	unsigned long mask = 1UL << (VTD_PAGE_SHIFT + size_order - 1);
 	struct qi_desc desc = {.qw1 = 0, .qw2 = 0, .qw3 = 0};
 
@@ -1648,7 +1649,7 @@ void qi_flush_dev_iotlb_pasid(struct intel_iommu *iommu, u16 sid, u16 pfsid,
 		desc.qw1 |= QI_DEV_EIOTLB_SIZE;
 	}
 
-	qi_submit_sync(iommu, &desc, 1, 0, NULL);
+	qi_submit_sync(iommu, &desc, 1, 0, fault);
 }
 
 void qi_flush_pasid_cache(struct intel_iommu *iommu, u16 did,
diff --git a/drivers/iommu/intel/iommu.c b/drivers/iommu/intel/iommu.c
index 897159dba47d..68e494f1d03a 100644
--- a/drivers/iommu/intel/iommu.c
+++ b/drivers/iommu/intel/iommu.c
@@ -1462,7 +1462,7 @@ static void __iommu_flush_dev_iotlb(struct device_domain_info *info,
 	sid = info->bus << 8 | info->devfn;
 	qdep = info->ats_qdep;
 	qi_flush_dev_iotlb(info->iommu, sid, info->pfsid,
-			   qdep, addr, mask);
+			   qdep, addr, mask, NULL);
 	quirk_extra_dev_tlb_flush(info, addr, mask, IOMMU_NO_PASID, qdep);
 }
 
@@ -1490,7 +1490,7 @@ static void iommu_flush_dev_iotlb(struct dmar_domain *domain,
 					 PCI_DEVID(info->bus, info->devfn),
 					 info->pfsid, dev_pasid->pasid,
 					 info->ats_qdep, addr,
-					 mask);
+					 mask, NULL);
 	}
 	spin_unlock_irqrestore(&domain->lock, flags);
 }
@@ -1505,10 +1505,10 @@ static void domain_flush_pasid_iotlb(struct intel_iommu *iommu,
 
 	spin_lock_irqsave(&domain->lock, flags);
 	list_for_each_entry(dev_pasid, &domain->dev_pasids, link_domain)
-		qi_flush_piotlb(iommu, did, dev_pasid->pasid, addr, npages, ih);
+		qi_flush_piotlb(iommu, did, dev_pasid->pasid, addr, npages, ih, NULL);
 
 	if (!list_empty(&domain->devices))
-		qi_flush_piotlb(iommu, did, IOMMU_NO_PASID, addr, npages, ih);
+		qi_flush_piotlb(iommu, did, IOMMU_NO_PASID, addr, npages, ih, NULL);
 	spin_unlock_irqrestore(&domain->lock, flags);
 }
 
@@ -5195,10 +5195,10 @@ void quirk_extra_dev_tlb_flush(struct device_domain_info *info,
 	sid = PCI_DEVID(info->bus, info->devfn);
 	if (pasid == IOMMU_NO_PASID) {
 		qi_flush_dev_iotlb(info->iommu, sid, info->pfsid,
-				   qdep, address, mask);
+				   qdep, address, mask, NULL);
 	} else {
 		qi_flush_dev_iotlb_pasid(info->iommu, sid, info->pfsid,
-					 pasid, qdep, address, mask);
+					 pasid, qdep, address, mask, NULL);
 	}
 }
 
diff --git a/drivers/iommu/intel/iommu.h b/drivers/iommu/intel/iommu.h
index c6de958e4f54..ce9bd08dcd05 100644
--- a/drivers/iommu/intel/iommu.h
+++ b/drivers/iommu/intel/iommu.h
@@ -866,14 +866,14 @@ void qi_flush_context(struct intel_iommu *iommu, u16 did,
 void qi_flush_iotlb(struct intel_iommu *iommu, u16 did, u64 addr,
 		    unsigned int size_order, u64 type);
 void qi_flush_dev_iotlb(struct intel_iommu *iommu, u16 sid, u16 pfsid,
-			u16 qdep, u64 addr, unsigned mask);
+			u16 qdep, u64 addr, unsigned mask, u32 *fault);
 
 void qi_flush_piotlb(struct intel_iommu *iommu, u16 did, u32 pasid, u64 addr,
-		     unsigned long npages, bool ih);
+		     unsigned long npages, bool ih, u32 *fault);
 
 void qi_flush_dev_iotlb_pasid(struct intel_iommu *iommu, u16 sid, u16 pfsid,
 			      u32 pasid, u16 qdep, u64 addr,
-			      unsigned int size_order);
+			      unsigned int size_order, u32 *fault);
 void quirk_extra_dev_tlb_flush(struct device_domain_info *info,
 			       unsigned long address, unsigned long pages,
 			       u32 pasid, u16 qdep);
diff --git a/drivers/iommu/intel/pasid.c b/drivers/iommu/intel/pasid.c
index 67f924760ba8..4a7fe551d8a6 100644
--- a/drivers/iommu/intel/pasid.c
+++ b/drivers/iommu/intel/pasid.c
@@ -492,9 +492,11 @@ devtlb_invalidation_with_pasid(struct intel_iommu *iommu,
	 * efficient to flush devTLB specific to the PASID.
	 */
 	if (pasid == IOMMU_NO_PASID)
-		qi_flush_dev_iotlb(iommu, sid, pfsid, qdep, 0, 64 - VTD_PAGE_SHIFT);
+		qi_flush_dev_iotlb(iommu, sid, pfsid, qdep, 0,
+				   64 - VTD_PAGE_SHIFT, NULL);
 	else
-		qi_flush_dev_iotlb_pasid(iommu, sid, pfsid, pasid, qdep, 0, 64 - VTD_PAGE_SHIFT);
+		qi_flush_dev_iotlb_pasid(iommu, sid, pfsid, pasid, qdep, 0,
+					 64 - VTD_PAGE_SHIFT, NULL);
 }
 
 void intel_pasid_tear_down_entry(struct intel_iommu *iommu, struct device *dev,
@@ -521,7 +523,7 @@ void intel_pasid_tear_down_entry(struct intel_iommu *iommu, struct device *dev,
 		pasid_cache_invalidation_with_pasid(iommu, did, pasid);
 
 	if (pgtt == PASID_ENTRY_PGTT_PT || pgtt == PASID_ENTRY_PGTT_FL_ONLY)
-		qi_flush_piotlb(iommu, did, pasid, 0, -1, 0);
+		qi_flush_piotlb(iommu, did, pasid, 0, -1, 0, NULL);
 	else
 		iommu->flush.flush_iotlb(iommu, did, 0, 0, DMA_TLB_DSI_FLUSH);
 
@@ -543,7 +545,7 @@ static void pasid_flush_caches(struct intel_iommu *iommu,
 
 	if (cap_caching_mode(iommu->cap)) {
 		pasid_cache_invalidation_with_pasid(iommu, did, pasid);
-		qi_flush_piotlb(iommu, did, pasid, 0, -1, 0);
+		qi_flush_piotlb(iommu, did, pasid, 0, -1, 0, NULL);
 	} else {
 		iommu_flush_write_buffer(iommu);
 	}
@@ -834,7 +836,7 @@ void intel_pasid_setup_page_snoop_control(struct intel_iommu *iommu,
 	 * Addr[63:12]=0x7FFFFFFF_FFFFF) to affected functions
 	 */
 	pasid_cache_invalidation_with_pasid(iommu, did, pasid);
-	qi_flush_piotlb(iommu, did, pasid, 0, -1, 0);
+	qi_flush_piotlb(iommu, did, pasid, 0, -1, 0, NULL);
 
 	/* Device IOTLB doesn't need to be flushed in caching mode. */
 	if (!cap_caching_mode(iommu->cap))
diff --git a/drivers/iommu/intel/svm.c b/drivers/iommu/intel/svm.c
index 660d049ad5b6..bf7b4c5c21f4 100644
--- a/drivers/iommu/intel/svm.c
+++ b/drivers/iommu/intel/svm.c
@@ -179,11 +179,11 @@ static void __flush_svm_range_dev(struct intel_svm *svm,
 	if (WARN_ON(!pages))
 		return;
 
-	qi_flush_piotlb(sdev->iommu, sdev->did, svm->pasid, address, pages, ih);
+	qi_flush_piotlb(sdev->iommu, sdev->did, svm->pasid, address, pages, ih, NULL);
 	if (info->ats_enabled) {
 		qi_flush_dev_iotlb_pasid(sdev->iommu, sdev->sid, info->pfsid,
 					 svm->pasid, sdev->qdep, address,
-					 order_base_2(pages));
+					 order_base_2(pages), NULL);
 		quirk_extra_dev_tlb_flush(info, address, order_base_2(pages),
 					  svm->pasid, sdev->qdep);
 	}
@@ -225,11 +225,11 @@ static void intel_flush_svm_all(struct intel_svm *svm)
 	list_for_each_entry_rcu(sdev, &svm->devs, list) {
 		info = dev_iommu_priv_get(sdev->dev);
 
-		qi_flush_piotlb(sdev->iommu, sdev->did, svm->pasid, 0, -1UL, 0);
+		qi_flush_piotlb(sdev->iommu, sdev->did, svm->pasid, 0, -1UL, 0, NULL);
 		if (info->ats_enabled) {
 			qi_flush_dev_iotlb_pasid(sdev->iommu, sdev->sid, info->pfsid,
 						 svm->pasid, sdev->qdep,
-						 0, 64 - VTD_PAGE_SHIFT);
+						 0, 64 - VTD_PAGE_SHIFT, NULL);
 			quirk_extra_dev_tlb_flush(info, 0, 64 - VTD_PAGE_SHIFT,
 						  svm->pasid, sdev->qdep);
 		}
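
Note that after this patch every converted caller still passes NULL;
only the nested-domain invalidation path added in patch 10 supplies a
real pointer, since kernel-initiated flushes have no userspace requester
to report to. There, the raw fault word is folded into the uAPI error
field that the next patch defines, essentially (a sketch; the
IOMMU_HWPT_INVALIDATE_VTD_S1_* names are introduced in patch 09):

    u32 hw_error = 0;

    if (fault & DMA_FSTS_ICE)
            hw_error |= IOMMU_HWPT_INVALIDATE_VTD_S1_ICE;
    if (fault & DMA_FSTS_ITE)
            hw_error |= IOMMU_HWPT_INVALIDATE_VTD_S1_ITE;
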
From patchwork Wed Dec 27 16:13:53 2023
X-Patchwork-Submitter: "Liu, Yi L"
X-Patchwork-Id: 13505363
From: Yi Liu <yi.l.liu@intel.com>
Subject: [PATCH v8 09/10] iommufd: Add data structure for Intel VT-d stage-1 cache invalidation
Date: Wed, 27 Dec 2023 08:13:53 -0800
Message-Id: <20231227161354.67701-10-yi.l.liu@intel.com>
In-Reply-To: <20231227161354.67701-1-yi.l.liu@intel.com>
References: <20231227161354.67701-1-yi.l.liu@intel.com>

This adds the data structure for invalidating caches of a nested domain
allocated with the IOMMU_HWPT_DATA_VTD_S1 type.

Signed-off-by: Lu Baolu <baolu.lu@linux.intel.com>
Signed-off-by: Yi Liu <yi.l.liu@intel.com>
Reviewed-by: Kevin Tian <kevin.tian@intel.com>
---
 include/uapi/linux/iommufd.h | 55 ++++++++++++++++++++++++++++++++++++
 1 file changed, 55 insertions(+)
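
To show how the new data type is meant to be consumed, here is a
userspace sketch. It assumes the iommu_hwpt_invalidate ioctl structure
introduced earlier in this series (size, hwpt_id, data_uptr, data_type,
entry_len, entry_num); iommufd and hwpt_id are placeholders for an open
/dev/iommu fd and a nested HWPT object id:

    /* Userspace sketch: invalidate sixteen 4KiB pages on a nested HWPT. */
    struct iommu_hwpt_vtd_s1_invalidate inv = {
            .addr = 0x100000,                   /* must be 4KiB aligned */
            .npages = 16,
            .flags = IOMMU_VTD_INV_FLAGS_LEAF,  /* only leaf PTEs changed */
    };
    struct iommu_hwpt_invalidate cmd = {
            .size = sizeof(cmd),
            .hwpt_id = hwpt_id,
            .data_uptr = (__u64)(uintptr_t)&inv,
            .data_type = IOMMU_HWPT_INVALIDATE_DATA_VTD_S1,
            .entry_len = sizeof(inv),
            .entry_num = 1,
    };

    if (ioctl(iommufd, IOMMU_HWPT_INVALIDATE, &cmd) == -1)
            /* cmd.entry_num now reports how many entries were handled */
            fprintf(stderr, "invalidate failed: %m, hw_error=0x%x\n",
                    inv.hw_error);
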
diff --git a/include/uapi/linux/iommufd.h b/include/uapi/linux/iommufd.h
index 824560c50ec6..2067aa00d2a3 100644
--- a/include/uapi/linux/iommufd.h
+++ b/include/uapi/linux/iommufd.h
@@ -623,6 +623,61 @@ enum iommu_hwpt_invalidate_data_type {
 	IOMMU_HWPT_INVALIDATE_DATA_VTD_S1,
 };
 
+/**
+ * enum iommu_hwpt_vtd_s1_invalidate_flags - Flags for Intel VT-d
+ *                                           stage-1 cache invalidation
+ * @IOMMU_VTD_INV_FLAGS_LEAF: Indicates whether the invalidation applies
+ *                            to all levels of the page-structure cache
+ *                            or just the leaf PTE cache.
+ */
+enum iommu_hwpt_vtd_s1_invalidate_flags {
+	IOMMU_VTD_INV_FLAGS_LEAF = 1 << 0,
+};
+
+/**
+ * enum iommu_hwpt_vtd_s1_invalidate_error - Hardware error of invalidation
+ * @IOMMU_HWPT_INVALIDATE_VTD_S1_ICE: Invalidation Completion Error; for
+ *                                    details refer to 11.4.7.1 Fault Status
+ *                                    Register of the VT-d specification.
+ * @IOMMU_HWPT_INVALIDATE_VTD_S1_ITE: Invalidation Time-out Error; for
+ *                                    details refer to 11.4.7.1 Fault Status
+ *                                    Register of the VT-d specification.
+ */
+enum iommu_hwpt_vtd_s1_invalidate_error {
+	IOMMU_HWPT_INVALIDATE_VTD_S1_ICE = 1 << 0,
+	IOMMU_HWPT_INVALIDATE_VTD_S1_ITE = 1 << 1,
+};
+
+/**
+ * struct iommu_hwpt_vtd_s1_invalidate - Intel VT-d cache invalidation
+ *                                       (IOMMU_HWPT_INVALIDATE_DATA_VTD_S1)
+ * @addr: The start address of the range to be invalidated. It needs to
+ *        be 4KB aligned.
+ * @npages: Number of contiguous 4K pages to be invalidated.
+ * @flags: Combination of enum iommu_hwpt_vtd_s1_invalidate_flags
+ * @hw_error: One of enum iommu_hwpt_vtd_s1_invalidate_error
+ *
+ * The Intel VT-d specific invalidation data for user-managed stage-1 cache
+ * invalidation in nested translation. Userspace uses this structure to
+ * tell the impacted cache scope after modifying the stage-1 page table.
+ *
+ * To invalidate all the caches related to the page table, set @addr to 0
+ * and @npages to U64_MAX.
+ *
+ * The device TLB will be invalidated automatically if ATS is enabled.
+ *
+ * @hw_error is meaningful only when the entry was handled by the kernel.
+ * Check the entry_num output of the IOMMU_HWPT_INVALIDATE ioctl to know
+ * which entries were handled. @hw_error only covers errors detected by
+ * hardware; software-detected errors are reported through the normal
+ * ioctl errno.
+ */
+struct iommu_hwpt_vtd_s1_invalidate {
+	__aligned_u64 addr;
+	__aligned_u64 npages;
+	__u32 flags;
+	__u32 hw_error;
+};
+
 /**
  * struct iommu_hwpt_invalidate - ioctl(IOMMU_HWPT_INVALIDATE)
  * @size: sizeof(struct iommu_hwpt_invalidate)
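
Because entry_num is updated on return, userspace can distinguish the
two error channels: a hardware fault (ICE/ITE) is reported per entry in
@hw_error and does not by itself fail the ioctl, while software errors
fail the ioctl with entries [0, entry_num) already handled. A sketch of
the resulting error-handling pattern, where entries is a hypothetical
array of requests pointed to by cmd.data_uptr:

    int rc = ioctl(iommufd, IOMMU_HWPT_INVALIDATE, &cmd);

    /* Hardware-detected errors land in the handled entries. */
    for (unsigned int i = 0; i < cmd.entry_num; i++)
            if (entries[i].hw_error)
                    fprintf(stderr, "entry %u: hw_error=0x%x\n",
                            i, entries[i].hw_error);

    if (rc == -1)
            /* Software error: the first unhandled entry is entry_num. */
            fprintf(stderr, "failed at entry %u: %m\n", cmd.entry_num);
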
From patchwork Wed Dec 27 16:13:54 2023
X-Patchwork-Submitter: "Liu, Yi L"
X-Patchwork-Id: 13505364
From: Yi Liu <yi.l.liu@intel.com>
Subject: [PATCH v8 10/10] iommu/vt-d: Add iotlb flush for nested domain
Date: Wed, 27 Dec 2023 08:13:54 -0800
Message-Id: <20231227161354.67701-11-yi.l.liu@intel.com>
In-Reply-To: <20231227161354.67701-1-yi.l.liu@intel.com>
References: <20231227161354.67701-1-yi.l.liu@intel.com>

From: Lu Baolu <baolu.lu@linux.intel.com>

This implements the .cache_invalidate_user() callback to support IOTLB
flush for nested domains.

Signed-off-by: Lu Baolu <baolu.lu@linux.intel.com>
Co-developed-by: Yi Liu <yi.l.liu@intel.com>
Signed-off-by: Yi Liu <yi.l.liu@intel.com>
---
 drivers/iommu/intel/nested.c | 118 +++++++++++++++++++++++++++++++++++
 1 file changed, 118 insertions(+)
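
One detail worth calling out in the implementation below: for a range
invalidation, the device-TLB address mask is derived by rounding the
page count up to a power of two. A worked example of that computation
(ilog2 and __roundup_pow_of_two come from linux/log2.h):

    /* npages = 12: __roundup_pow_of_two(12) = 16, ilog2(16) = 4,
     * so the device TLB invalidation covers 2^4 * 4KiB = 64KiB. */
    unsigned int mask = ilog2(__roundup_pow_of_two(12));   /* == 4 */
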
diff --git a/drivers/iommu/intel/nested.c b/drivers/iommu/intel/nested.c
index b5a5563ab32c..cc9887a68318 100644
--- a/drivers/iommu/intel/nested.c
+++ b/drivers/iommu/intel/nested.c
@@ -73,9 +73,127 @@ static void intel_nested_domain_free(struct iommu_domain *domain)
 	kfree(to_dmar_domain(domain));
 }
 
+static void nested_flush_pasid_iotlb(struct intel_iommu *iommu,
+				     struct dmar_domain *domain, u64 addr,
+				     unsigned long npages, bool ih)
+{
+	u16 did = domain_id_iommu(domain, iommu);
+	unsigned long flags;
+
+	spin_lock_irqsave(&domain->lock, flags);
+	if (!list_empty(&domain->devices))
+		qi_flush_piotlb(iommu, did, IOMMU_NO_PASID, addr,
+				npages, ih, NULL);
+	spin_unlock_irqrestore(&domain->lock, flags);
+}
+
+static void nested_flush_dev_iotlb(struct dmar_domain *domain, u64 addr,
+				   unsigned mask, u32 *fault)
+{
+	struct device_domain_info *info;
+	unsigned long flags;
+	u16 sid, qdep;
+
+	spin_lock_irqsave(&domain->lock, flags);
+	list_for_each_entry(info, &domain->devices, link) {
+		if (!info->ats_enabled)
+			continue;
+
+		sid = info->bus << 8 | info->devfn;
+		qdep = info->ats_qdep;
+		qi_flush_dev_iotlb(info->iommu, sid, info->pfsid,
+				   qdep, addr, mask, fault);
+		quirk_extra_dev_tlb_flush(info, addr, mask,
+					  IOMMU_NO_PASID, qdep);
+	}
+	spin_unlock_irqrestore(&domain->lock, flags);
+}
+
+static void intel_nested_flush_cache(struct dmar_domain *domain, u64 addr,
+				     unsigned long npages, bool ih, u32 *error)
+{
+	struct iommu_domain_info *info;
+	unsigned long i;
+	unsigned mask;
+	u32 fault = 0;	/* initialized: no ATS-enabled device may write it */
+
+	xa_for_each(&domain->iommu_array, i, info)
+		nested_flush_pasid_iotlb(info->iommu, domain, addr, npages, ih);
+
+	if (!domain->has_iotlb_device)
+		return;
+
+	if (npages == U64_MAX)
+		mask = 64 - VTD_PAGE_SHIFT;
+	else
+		mask = ilog2(__roundup_pow_of_two(npages));
+
+	nested_flush_dev_iotlb(domain, addr, mask, &fault);
+
+	/*
+	 * Invalidation queue error (i.e. IQE) will not be reported to user
+	 * as it's caused only by a driver-internal bug.
+	 */
+	if (fault & DMA_FSTS_ICE)
+		*error |= IOMMU_HWPT_INVALIDATE_VTD_S1_ICE;
+	if (fault & DMA_FSTS_ITE)
+		*error |= IOMMU_HWPT_INVALIDATE_VTD_S1_ITE;
+}
+
+static int intel_nested_cache_invalidate_user(struct iommu_domain *domain,
+					      struct iommu_user_data_array *array)
+{
+	struct dmar_domain *dmar_domain = to_dmar_domain(domain);
+	struct iommu_hwpt_vtd_s1_invalidate inv_entry;
+	u32 processed = 0;
+	int ret = 0;
+	u32 index;
+
+	if (array->type != IOMMU_HWPT_INVALIDATE_DATA_VTD_S1) {
+		ret = -EINVAL;
+		goto out;
+	}
+
+	for (index = 0; index < array->entry_num; index++) {
+		ret = iommu_copy_struct_from_user_array(&inv_entry, array,
+							IOMMU_HWPT_INVALIDATE_DATA_VTD_S1,
+							index, hw_error);
+		if (ret)
+			break;
+
+		if (inv_entry.flags & ~IOMMU_VTD_INV_FLAGS_LEAF) {
+			ret = -EOPNOTSUPP;
+			break;
+		}
+
+		if (!IS_ALIGNED(inv_entry.addr, VTD_PAGE_SIZE) ||
+		    ((inv_entry.npages == U64_MAX) && inv_entry.addr)) {
+			ret = -EINVAL;
+			break;
+		}
+
+		intel_nested_flush_cache(dmar_domain, inv_entry.addr,
+					 inv_entry.npages,
+					 inv_entry.flags & IOMMU_VTD_INV_FLAGS_LEAF,
+					 &inv_entry.hw_error);
+
+		ret = iommu_respond_struct_to_user_array(array, index,
+							 (void *)&inv_entry,
+							 sizeof(inv_entry));
+		if (ret)
+			break;
+
+		processed++;
+	}
+
+out:
+	array->entry_num = processed;
+	return ret;
+}
+
 static const struct iommu_domain_ops intel_nested_domain_ops = {
 	.attach_dev		= intel_nested_attach_dev,
 	.free			= intel_nested_domain_free,
+	.cache_invalidate_user	= intel_nested_cache_invalidate_user,
 };
 
 struct iommu_domain *intel_nested_domain_alloc(struct iommu_domain *parent,