From patchwork Fri Oct 20 09:24:21 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Yi Liu X-Patchwork-Id: 13430367 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 2A90AC19F5C for ; Fri, 20 Oct 2023 09:24:35 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1376623AbjJTJYe (ORCPT ); Fri, 20 Oct 2023 05:24:34 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:58992 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1376612AbjJTJYc (ORCPT ); Fri, 20 Oct 2023 05:24:32 -0400 Received: from mgamail.intel.com (mgamail.intel.com [192.55.52.43]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id DE979D5A; Fri, 20 Oct 2023 02:24:30 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1697793870; x=1729329870; h=from:to:cc:subject:date:message-id:in-reply-to: references:mime-version:content-transfer-encoding; bh=0njwcX5/NU7naKi/ROVrRqxboo7jMq2jUYAkQziXQWU=; b=J8gfwDbyQTiuY/K2M5/w0CpSDnjcz0yiqW0l8EP2g1IYy+u9y1P3/BzX j0/xONOSh3QLXGpMowV1QFpUJZxGEWqzv3A585t7loG+cALqTETnnqhES yqOFI/OpeRxE/Mh4SXejcnJebFeRtzA3lSGKQXxwF+95Q58Dd225rzjzJ fwDT3noapCkopNSULJsFGWljtj+tYCEfLBd5gQfPqYLmlbDnmlnmHLlrA yHZlUJNIgplJ4RpLsQO/MUbRb06JOv5Kt6PEs3Snl+X5r0jBBsvghFcWS e2s+CP/h/I8mSTaY+DKN96470chootSSm0s3/dfNIYc0qrW5LnPZR2ALS A==; X-IronPort-AV: E=McAfee;i="6600,9927,10868"; a="472685456" X-IronPort-AV: E=Sophos;i="6.03,238,1694761200"; d="scan'208";a="472685456" Received: from orsmga007.jf.intel.com ([10.7.209.58]) by fmsmga105.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 20 Oct 2023 02:24:28 -0700 X-ExtLoop1: 1 X-IronPort-AV: E=McAfee;i="6600,9927,10868"; a="750859668" X-IronPort-AV: E=Sophos;i="6.03,238,1694761200"; d="scan'208";a="750859668" Received: from 984fee00a4c6.jf.intel.com ([10.165.58.231]) by orsmga007.jf.intel.com with ESMTP; 20 Oct 2023 02:24:28 -0700 From: Yi Liu To: joro@8bytes.org, alex.williamson@redhat.com, jgg@nvidia.com, kevin.tian@intel.com, robin.murphy@arm.com, baolu.lu@linux.intel.com Cc: cohuck@redhat.com, eric.auger@redhat.com, nicolinc@nvidia.com, kvm@vger.kernel.org, mjrosato@linux.ibm.com, chao.p.peng@linux.intel.com, yi.l.liu@intel.com, yi.y.sun@linux.intel.com, peterx@redhat.com, jasowang@redhat.com, shameerali.kolothum.thodi@huawei.com, lulu@redhat.com, suravee.suthikulpanit@amd.com, iommu@lists.linux.dev, linux-kernel@vger.kernel.org, linux-kselftest@vger.kernel.org, zhenzhong.duan@intel.com, joao.m.martins@oracle.com, xin.zeng@intel.com Subject: [PATCH v5 1/6] iommu: Add cache_invalidate_user op Date: Fri, 20 Oct 2023 02:24:21 -0700 Message-Id: <20231020092426.13907-2-yi.l.liu@intel.com> X-Mailer: git-send-email 2.34.1 In-Reply-To: <20231020092426.13907-1-yi.l.liu@intel.com> References: <20231020092426.13907-1-yi.l.liu@intel.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: kvm@vger.kernel.org From: Lu Baolu The updates of the PTEs in the nested page table will be propagated to the hardware caches on both IOMMU (IOTLB) and devices (DevTLB/ATC). Add a new domain op cache_invalidate_user for the userspace to flush the hardware caches for a nested domain through iommufd. No wrapper for it, as it's only supposed to be used by iommufd. 
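As an illustration only (not part of this patch), a driver-side implementation of the new op could look roughly like the sketch below. The hypothetical_* names, the request layout and the error handling are invented for the example; only the op signature and struct iommu_user_data_array come from this series, and a real driver would first validate array->type and array->entry_len against its own uAPI structure (the copy helper added later in this series does exactly that):

#include <linux/iommu.h>
#include <linux/uaccess.h>

/* Hypothetical fixed-width invalidation request, defined by the driver uAPI */
struct hypothetical_inv_request {
	__u64 addr;
	__u64 npages;
};

static int hypothetical_cache_invalidate_user(struct iommu_domain *domain,
					      struct iommu_user_data_array *array,
					      u32 *error_code)
{
	struct hypothetical_inv_request req;
	int i, rc = 0;

	for (i = 0; i != array->entry_num; i++) {
		/* Copy one fixed-width entry out of the user array */
		if (copy_from_user(&req, array->uptr + i * array->entry_len,
				   sizeof(req))) {
			rc = -EFAULT;
			break;
		}
		/*
		 * Flush the IOTLB/DevTLB for req.addr .. req.addr + npages
		 * here; a hardware error could be reported via *error_code.
		 */
	}
	/* Report back how many entries were actually handled */
	array->entry_num = i;
	return rc;
}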
Then, pass in invalidation requests in the form of a user data array containing a number of invalidation data entries.

Signed-off-by: Lu Baolu
Reviewed-by: Kevin Tian
Signed-off-by: Nicolin Chen
Signed-off-by: Yi Liu
---
 include/linux/iommu.h | 30 ++++++++++++++++++++++++++++++
 1 file changed, 30 insertions(+)

diff --git a/include/linux/iommu.h b/include/linux/iommu.h
index 48b8a9a03ae7..de52835446f4 100644
--- a/include/linux/iommu.h
+++ b/include/linux/iommu.h
@@ -246,6 +246,24 @@ struct iommu_user_data {
 	size_t len;
 };
 
+/**
+ * struct iommu_user_data_array - iommu driver specific user space data array
+ * @type: The data type of all the entries in the user buffer array
+ * @uptr: Pointer to the user buffer array for copy_from_user()
+ * @entry_len: The fixed-width length of an entry in the array, in bytes
+ * @entry_num: The number of total entries in the array
+ *
+ * An array of @entry_num entries, each @entry_len bytes of user space data,
+ * i.e. a uAPI structure defined in include/uapi/linux/iommufd.h, where @type
+ * is also defined as enum iommu_xyz_data_type.
+ */
+struct iommu_user_data_array {
+	unsigned int type;
+	void __user *uptr;
+	size_t entry_len;
+	int entry_num;
+};
+
 /**
  * __iommu_copy_struct_from_user - Copy iommu driver specific user space data
  * @dst_data: Pointer to an iommu driver specific user data that is defined in
@@ -396,6 +414,15 @@ struct iommu_ops {
  * @iotlb_sync_map: Sync mappings created recently using @map to the hardware
  * @iotlb_sync: Flush all queued ranges from the hardware TLBs and empty flush
  *              queue
+ * @cache_invalidate_user: Flush hardware cache for user space IO page table.
+ *                         The @domain must be IOMMU_DOMAIN_NESTED. The @array
+ *                         passes in the cache invalidation requests, in the
+ *                         form of a driver data structure. The driver must
+ *                         update array->entry_num to report the number of
+ *                         handled invalidation requests. The 32-bit
+ *                         @error_code can forward a driver specific error
+ *                         code to user space.
+ * Both the driver data structure and the error code + * must be defined in include/uapi/linux/iommufd.h * @iova_to_phys: translate iova to physical address * @enforce_cache_coherency: Prevent any kind of DMA from bypassing IOMMU_CACHE, * including no-snoop TLPs on PCIe or other platform @@ -425,6 +452,9 @@ struct iommu_domain_ops { size_t size); void (*iotlb_sync)(struct iommu_domain *domain, struct iommu_iotlb_gather *iotlb_gather); + int (*cache_invalidate_user)(struct iommu_domain *domain, + struct iommu_user_data_array *array, + u32 *error_code); phys_addr_t (*iova_to_phys)(struct iommu_domain *domain, dma_addr_t iova); From patchwork Fri Oct 20 09:24:22 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Yi Liu X-Patchwork-Id: 13430368 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 2C444CDB47E for ; Fri, 20 Oct 2023 09:24:43 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1376688AbjJTJYk (ORCPT ); Fri, 20 Oct 2023 05:24:40 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:59006 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1376656AbjJTJYe (ORCPT ); Fri, 20 Oct 2023 05:24:34 -0400 Received: from mgamail.intel.com (mgamail.intel.com [192.55.52.43]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 07FD0106; Fri, 20 Oct 2023 02:24:31 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1697793872; x=1729329872; h=from:to:cc:subject:date:message-id:in-reply-to: references:mime-version:content-transfer-encoding; bh=2PM7o5xYDJmlHz5TKehP6+0yc9LVqHG0PRNTYo4xOBg=; b=fq/1VI6XpsOhLLNmLf5+YnjRcUCYmRoFWDAlr5xfaHprjM3de/ybrSHp 2uWsqEWXHrFptgZj4xejTFq69hC7WJDrtQdl6kODYC33lonxRtg4QRCIl WxjbpDavMinwmzUo1Vz2f1wzmcr1LRnhrqIuOX+EZabwqdfB3ZrHz0PnD EDAzy3wdsEF8/fNMQhTguB6K8Ur1cgsAwo7BrjWhHos7ezJH486d4INrx xIKDPNMFJ4voOoot3AqjW3zUFbK6OLGJlQQKjeZm7bpg5IaVmSXn7ieQp lIv5BwefLL4KdaT9V3+H/BaW8UZaWzGRQWvF/gFELEfohp6T3hjaDCXPx A==; X-IronPort-AV: E=McAfee;i="6600,9927,10868"; a="472685474" X-IronPort-AV: E=Sophos;i="6.03,238,1694761200"; d="scan'208";a="472685474" Received: from orsmga007.jf.intel.com ([10.7.209.58]) by fmsmga105.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 20 Oct 2023 02:24:29 -0700 X-ExtLoop1: 1 X-IronPort-AV: E=McAfee;i="6600,9927,10868"; a="750859673" X-IronPort-AV: E=Sophos;i="6.03,238,1694761200"; d="scan'208";a="750859673" Received: from 984fee00a4c6.jf.intel.com ([10.165.58.231]) by orsmga007.jf.intel.com with ESMTP; 20 Oct 2023 02:24:29 -0700 From: Yi Liu To: joro@8bytes.org, alex.williamson@redhat.com, jgg@nvidia.com, kevin.tian@intel.com, robin.murphy@arm.com, baolu.lu@linux.intel.com Cc: cohuck@redhat.com, eric.auger@redhat.com, nicolinc@nvidia.com, kvm@vger.kernel.org, mjrosato@linux.ibm.com, chao.p.peng@linux.intel.com, yi.l.liu@intel.com, yi.y.sun@linux.intel.com, peterx@redhat.com, jasowang@redhat.com, shameerali.kolothum.thodi@huawei.com, lulu@redhat.com, suravee.suthikulpanit@amd.com, iommu@lists.linux.dev, linux-kernel@vger.kernel.org, linux-kselftest@vger.kernel.org, zhenzhong.duan@intel.com, joao.m.martins@oracle.com, xin.zeng@intel.com Subject: [PATCH v5 2/6] iommufd: Add IOMMU_HWPT_INVALIDATE Date: Fri, 20 Oct 2023 02:24:22 -0700 Message-Id: 
<20231020092426.13907-3-yi.l.liu@intel.com>
X-Mailer: git-send-email 2.34.1
In-Reply-To: <20231020092426.13907-1-yi.l.liu@intel.com>
References: <20231020092426.13907-1-yi.l.liu@intel.com>
MIME-Version: 1.0
Precedence: bulk
List-ID: X-Mailing-List: kvm@vger.kernel.org

In nested translation, the stage-1 page table is user-managed but cached by
the IOMMU hardware, so an update on present page table entries in the stage-1
page table should be followed by a cache invalidation.

Add an IOMMU_HWPT_INVALIDATE ioctl to support such a cache invalidation. It
takes hwpt_id to specify the iommu_domain, and a multi-entry array to support
multiple invalidation requests in one ioctl.

Check the cache_invalidate_user op in iommufd_hwpt_nested_alloc(), since all
nested domains need it.

Co-developed-by: Nicolin Chen
Signed-off-by: Nicolin Chen
Signed-off-by: Yi Liu
---
 drivers/iommu/iommufd/hw_pagetable.c    | 35 ++++++++++++++++++++++++
 drivers/iommu/iommufd/iommufd_private.h |  9 +++++++
 drivers/iommu/iommufd/main.c            |  3 +++
 include/uapi/linux/iommufd.h            | 36 +++++++++++++++++++++++++
 4 files changed, 83 insertions(+)

diff --git a/drivers/iommu/iommufd/hw_pagetable.c b/drivers/iommu/iommufd/hw_pagetable.c
index 362c694aac9b..6090d59c4982 100644
--- a/drivers/iommu/iommufd/hw_pagetable.c
+++ b/drivers/iommu/iommufd/hw_pagetable.c
@@ -238,6 +238,11 @@ iommufd_hwpt_nested_alloc(struct iommufd_ctx *ictx,
 		rc = -EINVAL;
 		goto out_abort;
 	}
+	/* Driver is buggy by missing cache_invalidate_user in domain_ops */
+	if (WARN_ON_ONCE(!hwpt->domain->ops->cache_invalidate_user)) {
+		rc = -EINVAL;
+		goto out_abort;
+	}
 
 	return hwpt_nested;
 
 out_abort:
@@ -323,3 +328,33 @@ int iommufd_hwpt_alloc(struct iommufd_ucmd *ucmd)
 	iommufd_put_object(&idev->obj);
 	return rc;
 }
+
+int iommufd_hwpt_invalidate(struct iommufd_ucmd *ucmd)
+{
+	struct iommu_hwpt_invalidate *cmd = ucmd->cmd;
+	struct iommu_user_data_array data_array = {
+		.type = cmd->req_type,
+		.uptr = u64_to_user_ptr(cmd->reqs_uptr),
+		.entry_len = cmd->req_len,
+		.entry_num = cmd->req_num,
+	};
+	struct iommufd_hw_pagetable *hwpt;
+	int rc = 0;
+
+	if (cmd->req_type == IOMMU_HWPT_DATA_NONE)
+		return -EINVAL;
+	if (!cmd->reqs_uptr || !cmd->req_len || !cmd->req_num)
+		return -EINVAL;
+
+	hwpt = iommufd_hw_pagetable_get_nested(ucmd, cmd->hwpt_id);
+	if (IS_ERR(hwpt))
+		return PTR_ERR(hwpt);
+
+	rc = hwpt->domain->ops->cache_invalidate_user(hwpt->domain, &data_array,
+						      &cmd->out_driver_error_code);
+	cmd->req_num = data_array.entry_num;
+	if (iommufd_ucmd_respond(ucmd, sizeof(*cmd)))
+		rc = -EFAULT;
+	iommufd_put_object(&hwpt->obj);
+	return rc;
+}
diff --git a/drivers/iommu/iommufd/iommufd_private.h b/drivers/iommu/iommufd/iommufd_private.h
index a829458fe762..7ec629599844 100644
--- a/drivers/iommu/iommufd/iommufd_private.h
+++ b/drivers/iommu/iommufd/iommufd_private.h
@@ -279,6 +279,7 @@ void iommufd_hwpt_paging_abort(struct iommufd_object *obj);
 void iommufd_hwpt_nested_destroy(struct iommufd_object *obj);
 void iommufd_hwpt_nested_abort(struct iommufd_object *obj);
 int iommufd_hwpt_alloc(struct iommufd_ucmd *ucmd);
+int iommufd_hwpt_invalidate(struct iommufd_ucmd *ucmd);
 
 static inline void iommufd_hw_pagetable_put(struct iommufd_ctx *ictx,
 					    struct iommufd_hw_pagetable *hwpt)
@@ -300,6 +301,14 @@ static inline void iommufd_hw_pagetable_put(struct iommufd_ctx *ictx,
 	refcount_dec(&hwpt->obj.users);
 }
 
+static inline struct iommufd_hw_pagetable *
+iommufd_hw_pagetable_get_nested(struct iommufd_ucmd *ucmd, u32 id)
+{
+	return container_of(iommufd_get_object(ucmd->ictx, id,
+					       IOMMUFD_OBJ_HWPT_NESTED),
+			    struct iommufd_hw_pagetable, obj);
+}
+
 struct iommufd_group {
 	struct kref ref;
 	struct mutex lock;
diff --git a/drivers/iommu/iommufd/main.c b/drivers/iommu/iommufd/main.c
index f94c14f20918..1f2d2d262e12 100644
--- a/drivers/iommu/iommufd/main.c
+++ b/drivers/iommu/iommufd/main.c
@@ -307,6 +307,7 @@ union ucmd_buffer {
 	struct iommu_destroy destroy;
 	struct iommu_hw_info info;
 	struct iommu_hwpt_alloc hwpt;
+	struct iommu_hwpt_invalidate cache;
 	struct iommu_ioas_alloc alloc;
 	struct iommu_ioas_allow_iovas allow_iovas;
 	struct iommu_ioas_copy ioas_copy;
@@ -342,6 +343,8 @@ static const struct iommufd_ioctl_op iommufd_ioctl_ops[] = {
 		 __reserved),
 	IOCTL_OP(IOMMU_HWPT_ALLOC, iommufd_hwpt_alloc, struct iommu_hwpt_alloc,
 		 __reserved),
+	IOCTL_OP(IOMMU_HWPT_INVALIDATE, iommufd_hwpt_invalidate,
+		 struct iommu_hwpt_invalidate, out_driver_error_code),
 	IOCTL_OP(IOMMU_IOAS_ALLOC, iommufd_ioas_alloc_ioctl,
 		 struct iommu_ioas_alloc, out_ioas_id),
 	IOCTL_OP(IOMMU_IOAS_ALLOW_IOVAS, iommufd_ioas_allow_iovas,
diff --git a/include/uapi/linux/iommufd.h b/include/uapi/linux/iommufd.h
index d82bc663c5cd..fc305a48ab81 100644
--- a/include/uapi/linux/iommufd.h
+++ b/include/uapi/linux/iommufd.h
@@ -47,6 +47,7 @@ enum {
 	IOMMUFD_CMD_VFIO_IOAS,
 	IOMMUFD_CMD_HWPT_ALLOC,
 	IOMMUFD_CMD_GET_HW_INFO,
+	IOMMUFD_CMD_HWPT_INVALIDATE,
 };
 
 /**
@@ -477,4 +478,39 @@ struct iommu_hw_info {
 	__u32 __reserved;
 };
 #define IOMMU_GET_HW_INFO _IO(IOMMUFD_TYPE, IOMMUFD_CMD_GET_HW_INFO)
+
+/**
+ * struct iommu_hwpt_invalidate - ioctl(IOMMU_HWPT_INVALIDATE)
+ * @size: sizeof(struct iommu_hwpt_invalidate)
+ * @hwpt_id: HWPT ID of a nested HWPT for cache invalidation
+ * @reqs_uptr: User pointer to an array having @req_num of cache invalidation
+ *             requests. The request entries in the array are of fixed width
+ *             @req_len, and contain a user data structure for an invalidation
+ *             request specific to the given hardware page table.
+ * @req_type: One of enum iommu_hwpt_data_type, defining the data type of all
+ *            the entries in the invalidation request array. It should match
+ *            the data_type passed in when allocating the hwpt pointed to by
+ *            @hwpt_id.
+ * @req_len: Length (in bytes) of a request entry in the request array
+ * @req_num: Input the number of cache invalidation requests in the array.
+ *           Output the number of requests successfully handled by the kernel.
+ * @out_driver_error_code: Report a driver specific error code upon failure.
+ *                         It's optional; the driver may choose not to fill it.
+ *
+ * Invalidate the IOMMU cache for a user-managed page table. Modifications to
+ * a user-managed page table should be followed by this operation to sync the
+ * cache.
+ * Each ioctl can support one or more cache invalidation requests in the array
+ * that has a total size of @req_len * @req_num.
+ */ +struct iommu_hwpt_invalidate { + __u32 size; + __u32 hwpt_id; + __aligned_u64 reqs_uptr; + __u32 req_type; + __u32 req_len; + __u32 req_num; + __u32 out_driver_error_code; +}; +#define IOMMU_HWPT_INVALIDATE _IO(IOMMUFD_TYPE, IOMMUFD_CMD_HWPT_INVALIDATE) #endif From patchwork Fri Oct 20 09:24:23 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Yi Liu X-Patchwork-Id: 13430369 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 3FABDCDB474 for ; Fri, 20 Oct 2023 09:24:50 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1376684AbjJTJYt (ORCPT ); Fri, 20 Oct 2023 05:24:49 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:59084 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1376614AbjJTJYf (ORCPT ); Fri, 20 Oct 2023 05:24:35 -0400 Received: from mgamail.intel.com (mgamail.intel.com [192.55.52.43]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 7A6D2D51; Fri, 20 Oct 2023 02:24:33 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1697793873; x=1729329873; h=from:to:cc:subject:date:message-id:in-reply-to: references:mime-version:content-transfer-encoding; bh=HTvHwI4EhRpQB7I9M65i/pRD2X6B+RdjiZDfOoc2Ot8=; b=Cd1p0UEp+PhCMy/gagfpL74wmuW8eejukpF2TfHLR4fLnC/K5xmVqd+C zu/q9hp4RoU+9KZ2gJXicMG8Vw5t18JuvgECVSkKHirS8URYg2cuse+f0 PthW1zZihS3xY1s6jA05q/DWUVQGVdQIdB3GDqT5kW9+1EmFzG94Y7PA3 /G8V4yx/8SgD69iOBDYyTTBp3MhbaoL2R8FDxYrrv76gJZ0g40Y6i8IAG 5vCtwQ0p0BPPIaKJjz7sGwEeB44c8k/T/ZvtXPVk3G8heZwfQJiRMko/G Zrik4PYri6zGjZR8ha2akjI2njMKTM9t+UrhFpVhByDpGScpgtUtYgyFY A==; X-IronPort-AV: E=McAfee;i="6600,9927,10868"; a="472685490" X-IronPort-AV: E=Sophos;i="6.03,238,1694761200"; d="scan'208";a="472685490" Received: from orsmga007.jf.intel.com ([10.7.209.58]) by fmsmga105.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 20 Oct 2023 02:24:30 -0700 X-ExtLoop1: 1 X-IronPort-AV: E=McAfee;i="6600,9927,10868"; a="750859678" X-IronPort-AV: E=Sophos;i="6.03,238,1694761200"; d="scan'208";a="750859678" Received: from 984fee00a4c6.jf.intel.com ([10.165.58.231]) by orsmga007.jf.intel.com with ESMTP; 20 Oct 2023 02:24:29 -0700 From: Yi Liu To: joro@8bytes.org, alex.williamson@redhat.com, jgg@nvidia.com, kevin.tian@intel.com, robin.murphy@arm.com, baolu.lu@linux.intel.com Cc: cohuck@redhat.com, eric.auger@redhat.com, nicolinc@nvidia.com, kvm@vger.kernel.org, mjrosato@linux.ibm.com, chao.p.peng@linux.intel.com, yi.l.liu@intel.com, yi.y.sun@linux.intel.com, peterx@redhat.com, jasowang@redhat.com, shameerali.kolothum.thodi@huawei.com, lulu@redhat.com, suravee.suthikulpanit@amd.com, iommu@lists.linux.dev, linux-kernel@vger.kernel.org, linux-kselftest@vger.kernel.org, zhenzhong.duan@intel.com, joao.m.martins@oracle.com, xin.zeng@intel.com Subject: [PATCH v5 3/6] iommu: Add iommu_copy_struct_from_user_array helper Date: Fri, 20 Oct 2023 02:24:23 -0700 Message-Id: <20231020092426.13907-4-yi.l.liu@intel.com> X-Mailer: git-send-email 2.34.1 In-Reply-To: <20231020092426.13907-1-yi.l.liu@intel.com> References: <20231020092426.13907-1-yi.l.liu@intel.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: kvm@vger.kernel.org From: Nicolin Chen Wrap up the data type/pointer/num sanity and __iommu_copy_struct_from_user 
call for iommu drivers to copy driver specific data at a specific location in
the struct iommu_user_data_array. It is expected to be used in
cache_invalidate_user ops, for example.

Signed-off-by: Nicolin Chen
Co-developed-by: Yi Liu
Signed-off-by: Yi Liu
---
 include/linux/iommu.h | 54 +++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 54 insertions(+)

diff --git a/include/linux/iommu.h b/include/linux/iommu.h
index de52835446f4..c0ee1d5d9447 100644
--- a/include/linux/iommu.h
+++ b/include/linux/iommu.h
@@ -305,6 +305,60 @@ __iommu_copy_struct_from_user(void *dst_data,
 	__iommu_copy_struct_from_user(kdst, user_data, data_type, sizeof(*kdst), \
 				      offsetofend(typeof(*kdst), min_last))
 
+/**
+ * __iommu_copy_struct_from_user_array - Copy iommu driver specific user space
+ *                                       data from an iommu_user_data_array
+ * @dst_data: Pointer to an iommu driver specific user data that is defined in
+ *            include/uapi/linux/iommufd.h
+ * @src_array: Pointer to a struct iommu_user_data_array for a user space array
+ * @data_type: The data type of the @dst_data. Must match with @src_array.type
+ * @index: Index to offset the location in the array to copy user data from
+ * @data_len: Length of current user data structure, i.e. sizeof(struct _dst)
+ * @min_len: Initial length of user data structure for backward compatibility.
+ *           This should be offsetofend using the last member in the user data
+ *           struct that was initially added to include/uapi/linux/iommufd.h
+ */
+static inline int
+__iommu_copy_struct_from_user_array(void *dst_data,
+				    const struct iommu_user_data_array *src_array,
+				    unsigned int data_type, unsigned int index,
+				    size_t data_len, size_t min_len)
+{
+	struct iommu_user_data src_data;
+
+	if (WARN_ON(!src_array || index >= src_array->entry_num))
+		return -EINVAL;
+	if (!src_array->entry_num)
+		return -EINVAL;
+	if (src_array->type != data_type)
+		return -EINVAL;
+	src_data.uptr = src_array->uptr + src_array->entry_len * index;
+	src_data.len = src_array->entry_len;
+	src_data.type = src_array->type;
+
+	return __iommu_copy_struct_from_user(dst_data, &src_data, data_type,
+					     data_len, min_len);
+}
+
+/**
+ * iommu_copy_struct_from_user_array - Copy iommu driver specific user space
+ *                                     data from an iommu_user_data_array
+ * @kdst: Pointer to an iommu driver specific user data that is defined in
+ *        include/uapi/linux/iommufd.h
+ * @user_array: Pointer to a struct iommu_user_data_array for a user space array
+ * @data_type: The data type of the @kdst. Must match with @user_array->type
+ * @index: Index to offset the location in the array to copy user data from
+ * @min_last: The last member of the data structure @kdst points to in the
+ *            initial version.
+ * Return 0 for success, otherwise -error.
+ */ +#define iommu_copy_struct_from_user_array(kdst, user_array, data_type, \ + index, min_last) \ + __iommu_copy_struct_from_user_array(kdst, user_array, data_type, \ + index, sizeof(*kdst), \ + offsetofend(typeof(*kdst), \ + min_last)) + /** * struct iommu_ops - iommu ops and capabilities * @capable: check capability From patchwork Fri Oct 20 09:24:24 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Yi Liu X-Patchwork-Id: 13430370 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id C417EC19F5C for ; Fri, 20 Oct 2023 09:24:55 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1376720AbjJTJYz (ORCPT ); Fri, 20 Oct 2023 05:24:55 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:53268 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1376682AbjJTJYg (ORCPT ); Fri, 20 Oct 2023 05:24:36 -0400 Received: from mgamail.intel.com (mgamail.intel.com [192.55.52.43]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id B1CB0D55; Fri, 20 Oct 2023 02:24:33 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1697793873; x=1729329873; h=from:to:cc:subject:date:message-id:in-reply-to: references:mime-version:content-transfer-encoding; bh=hmwgRc80fGvIZ+07OlU/ados8MLJWNBOFdMd2h2m8n4=; b=GZjN6cWdNVTdSIqfizbYRDjKtePQztrD9i3oZGmnwLqJJpoD/qSg4XW+ AAEhfbuhHDZkfn/yHf7jU39kkzLCkbBHOK+UKmMI6byXSH3hZRVf2PFqG Cm7//hlyCZjKBMN2L+lQHjzV5pD81l+Zi7jGHdpr0elLAeEzbckCse/Fc R8grp0DAgsp9C20DSmGE9zpbZvsNJok0o1IlKVhAIYIGiGkRDhN0FR0kf IeYq9HRvM8HB8k1Mt7s7T90oX2eC+0feqkY7hmHKTxkOZVpz2bo40iVWy VaGZp+q0LcEonvJhifS8LfeOHPbaOSnUkszn09ohywnjRDTmnF+/fSd0M A==; X-IronPort-AV: E=McAfee;i="6600,9927,10868"; a="472685508" X-IronPort-AV: E=Sophos;i="6.03,238,1694761200"; d="scan'208";a="472685508" Received: from orsmga007.jf.intel.com ([10.7.209.58]) by fmsmga105.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 20 Oct 2023 02:24:30 -0700 X-ExtLoop1: 1 X-IronPort-AV: E=McAfee;i="6600,9927,10868"; a="750859682" X-IronPort-AV: E=Sophos;i="6.03,238,1694761200"; d="scan'208";a="750859682" Received: from 984fee00a4c6.jf.intel.com ([10.165.58.231]) by orsmga007.jf.intel.com with ESMTP; 20 Oct 2023 02:24:30 -0700 From: Yi Liu To: joro@8bytes.org, alex.williamson@redhat.com, jgg@nvidia.com, kevin.tian@intel.com, robin.murphy@arm.com, baolu.lu@linux.intel.com Cc: cohuck@redhat.com, eric.auger@redhat.com, nicolinc@nvidia.com, kvm@vger.kernel.org, mjrosato@linux.ibm.com, chao.p.peng@linux.intel.com, yi.l.liu@intel.com, yi.y.sun@linux.intel.com, peterx@redhat.com, jasowang@redhat.com, shameerali.kolothum.thodi@huawei.com, lulu@redhat.com, suravee.suthikulpanit@amd.com, iommu@lists.linux.dev, linux-kernel@vger.kernel.org, linux-kselftest@vger.kernel.org, zhenzhong.duan@intel.com, joao.m.martins@oracle.com, xin.zeng@intel.com Subject: [PATCH v5 4/6] iommufd/selftest: Add mock_domain_cache_invalidate_user support Date: Fri, 20 Oct 2023 02:24:24 -0700 Message-Id: <20231020092426.13907-5-yi.l.liu@intel.com> X-Mailer: git-send-email 2.34.1 In-Reply-To: <20231020092426.13907-1-yi.l.liu@intel.com> References: <20231020092426.13907-1-yi.l.liu@intel.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: kvm@vger.kernel.org From: Nicolin Chen Add 
mock_domain_cache_invalidate_user() data structure to support user space selftest program to cover user cache invalidation pathway. Signed-off-by: Nicolin Chen Signed-off-by: Yi Liu --- drivers/iommu/iommufd/iommufd_test.h | 17 +++++++++++ drivers/iommu/iommufd/selftest.c | 43 ++++++++++++++++++++++++++++ 2 files changed, 60 insertions(+) diff --git a/drivers/iommu/iommufd/iommufd_test.h b/drivers/iommu/iommufd/iommufd_test.h index 440ec3d64099..a96838516db9 100644 --- a/drivers/iommu/iommufd/iommufd_test.h +++ b/drivers/iommu/iommufd/iommufd_test.h @@ -127,4 +127,21 @@ struct iommu_hwpt_selftest { __u32 iotlb; }; +/** + * struct iommu_hwpt_invalidate_selftest + * + * @flags: invalidate flags + * @iotlb_id: invalidate iotlb entry index + * + * If IOMMU_TEST_INVALIDATE_ALL is set in @flags, @iotlb_id will be ignored + */ +struct iommu_hwpt_invalidate_selftest { +#define IOMMU_TEST_INVALIDATE_ALL (1ULL << 0) + __u32 flags; + __u32 iotlb_id; +}; + +#define IOMMU_TEST_INVALIDATE_ERR_FETCH 0xdeadbeee +#define IOMMU_TEST_INVALIDATE_ERR_REQ 0xdeadbeef + #endif diff --git a/drivers/iommu/iommufd/selftest.c b/drivers/iommu/iommufd/selftest.c index 5f513c5d6876..abd942cf3a79 100644 --- a/drivers/iommu/iommufd/selftest.c +++ b/drivers/iommu/iommufd/selftest.c @@ -404,9 +404,52 @@ static void mock_domain_free_nested(struct iommu_domain *domain) kfree(mock_nested); } +static int +mock_domain_cache_invalidate_user(struct iommu_domain *domain, + struct iommu_user_data_array *array, + u32 *error_code) +{ + struct mock_iommu_domain_nested *mock_nested = + container_of(domain, struct mock_iommu_domain_nested, domain); + struct iommu_hwpt_invalidate_selftest inv; + int rc = 0; + int i, j; + + if (domain->type != IOMMU_DOMAIN_NESTED) + return -EINVAL; + + for (i = 0; i < array->entry_num; i++) { + rc = iommu_copy_struct_from_user_array(&inv, array, + IOMMU_HWPT_DATA_SELFTEST, + i, iotlb_id); + if (rc) { + *error_code = IOMMU_TEST_INVALIDATE_ERR_FETCH; + goto err; + } + /* Invalidate all mock iotlb entries and ignore iotlb_id */ + if (inv.flags & IOMMU_TEST_INVALIDATE_ALL) { + for (j = 0; j < MOCK_NESTED_DOMAIN_IOTLB_NUM; j++) + mock_nested->iotlb[j] = 0; + continue; + } + /* Treat out-of-boundry iotlb_id as a request error */ + if (inv.iotlb_id > MOCK_NESTED_DOMAIN_IOTLB_ID_MAX) { + *error_code = IOMMU_TEST_INVALIDATE_ERR_REQ; + rc = -EINVAL; + goto err; + } + mock_nested->iotlb[inv.iotlb_id] = 0; + } + +err: + array->entry_num = i; + return rc; +} + static struct iommu_domain_ops domain_nested_ops = { .free = mock_domain_free_nested, .attach_dev = mock_domain_nop_attach, + .cache_invalidate_user = mock_domain_cache_invalidate_user, }; static inline struct iommufd_hw_pagetable * From patchwork Fri Oct 20 09:24:25 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Yi Liu X-Patchwork-Id: 13430371 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 8A376CDB47E for ; Fri, 20 Oct 2023 09:25:02 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1376682AbjJTJZB (ORCPT ); Fri, 20 Oct 2023 05:25:01 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:53284 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1376694AbjJTJYp (ORCPT ); Fri, 20 Oct 2023 05:24:45 -0400 Received: from 
mgamail.intel.com (mgamail.intel.com [192.55.52.43]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 9D1C8D5A; Fri, 20 Oct 2023 02:24:34 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1697793874; x=1729329874; h=from:to:cc:subject:date:message-id:in-reply-to: references:mime-version:content-transfer-encoding; bh=Mesj9VsQ2q8HSWMcHXIgOJYVRv65WYH80rYMTqkoSts=; b=lpKKZ50kwxMbxmWEBX38AdiYa6a5F1NGAXKjeYkYpxjHANH6DevFq0VJ gI1i/sHOAozLt/gf6yxzHv1jWyrpzCWPe8orZ6RfAOddQgdK8DB0EgSDJ jGFJT1siXKJlH8zT/6PpyK3KmAAEHDidl3cLvK435BHKgt7T+35tlKLYE OhfZWpvWRa8IrLp+74VRU4IYmNC+MBdhQwwsgkwSVf4rvNH4P3EEUzjb0 /7rnExTQJZKXYU0mQRWj3+1F+pnMEsJhUfd27YRRv4cXQc0kcDwOpqcye y+Nalb12Y3DJtUQoFI2+UaHWLzSgMQkKtxRzb5SDYm0BpDXIMiBrjdCPE Q==; X-IronPort-AV: E=McAfee;i="6600,9927,10868"; a="472685531" X-IronPort-AV: E=Sophos;i="6.03,238,1694761200"; d="scan'208";a="472685531" Received: from orsmga007.jf.intel.com ([10.7.209.58]) by fmsmga105.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 20 Oct 2023 02:24:31 -0700 X-ExtLoop1: 1 X-IronPort-AV: E=McAfee;i="6600,9927,10868"; a="750859685" X-IronPort-AV: E=Sophos;i="6.03,238,1694761200"; d="scan'208";a="750859685" Received: from 984fee00a4c6.jf.intel.com ([10.165.58.231]) by orsmga007.jf.intel.com with ESMTP; 20 Oct 2023 02:24:31 -0700 From: Yi Liu To: joro@8bytes.org, alex.williamson@redhat.com, jgg@nvidia.com, kevin.tian@intel.com, robin.murphy@arm.com, baolu.lu@linux.intel.com Cc: cohuck@redhat.com, eric.auger@redhat.com, nicolinc@nvidia.com, kvm@vger.kernel.org, mjrosato@linux.ibm.com, chao.p.peng@linux.intel.com, yi.l.liu@intel.com, yi.y.sun@linux.intel.com, peterx@redhat.com, jasowang@redhat.com, shameerali.kolothum.thodi@huawei.com, lulu@redhat.com, suravee.suthikulpanit@amd.com, iommu@lists.linux.dev, linux-kernel@vger.kernel.org, linux-kselftest@vger.kernel.org, zhenzhong.duan@intel.com, joao.m.martins@oracle.com, xin.zeng@intel.com Subject: [PATCH v5 5/6] iommufd/selftest: Add IOMMU_TEST_OP_MD_CHECK_IOTLB test op Date: Fri, 20 Oct 2023 02:24:25 -0700 Message-Id: <20231020092426.13907-6-yi.l.liu@intel.com> X-Mailer: git-send-email 2.34.1 In-Reply-To: <20231020092426.13907-1-yi.l.liu@intel.com> References: <20231020092426.13907-1-yi.l.liu@intel.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: kvm@vger.kernel.org From: Nicolin Chen Allow to test whether IOTLB has been invalidated or not. 
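For reference, the raw ioctl that the test_cmd_hwpt_check_iotlb() helpers added below in iommufd_utils.h wrap looks roughly like the sketch below. It is illustrative only: check_mock_iotlb() is a hypothetical name, while struct iommu_test_cmd, IOMMU_TEST_OP_MD_CHECK_IOTLB and _IOMMU_TEST_CMD() come from the iommufd selftest headers in this series:

/* struct iommu_test_cmd and _IOMMU_TEST_CMD() come from the iommufd
 * selftest headers; <sys/ioctl.h> provides ioctl().
 */
static int check_mock_iotlb(int fd, __u32 hwpt_id, __u32 iotlb_id,
			    __u32 expected)
{
	struct iommu_test_cmd cmd = {
		.size = sizeof(cmd),
		.op = IOMMU_TEST_OP_MD_CHECK_IOTLB,
		.id = hwpt_id,
		.check_iotlb = {
			.id = iotlb_id,
			.iotlb = expected,
		},
	};

	/* Returns 0 only if the mock IOTLB entry holds the expected value */
	return ioctl(fd, _IOMMU_TEST_CMD(IOMMU_TEST_OP_MD_CHECK_IOTLB), &cmd);
}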
Signed-off-by: Nicolin Chen Signed-off-by: Yi Liu --- drivers/iommu/iommufd/iommufd_test.h | 5 ++++ drivers/iommu/iommufd/selftest.c | 26 +++++++++++++++++++ tools/testing/selftests/iommu/iommufd.c | 4 +++ tools/testing/selftests/iommu/iommufd_utils.h | 24 +++++++++++++++++ 4 files changed, 59 insertions(+) diff --git a/drivers/iommu/iommufd/iommufd_test.h b/drivers/iommu/iommufd/iommufd_test.h index a96838516db9..9cf82d242aa6 100644 --- a/drivers/iommu/iommufd/iommufd_test.h +++ b/drivers/iommu/iommufd/iommufd_test.h @@ -19,6 +19,7 @@ enum { IOMMU_TEST_OP_SET_TEMP_MEMORY_LIMIT, IOMMU_TEST_OP_MOCK_DOMAIN_REPLACE, IOMMU_TEST_OP_ACCESS_REPLACE_IOAS, + IOMMU_TEST_OP_MD_CHECK_IOTLB, }; enum { @@ -100,6 +101,10 @@ struct iommu_test_cmd { struct { __u32 ioas_id; } access_replace_ioas; + struct { + __u32 id; + __u32 iotlb; + } check_iotlb; }; __u32 last; }; diff --git a/drivers/iommu/iommufd/selftest.c b/drivers/iommu/iommufd/selftest.c index abd942cf3a79..112a934cbaca 100644 --- a/drivers/iommu/iommufd/selftest.c +++ b/drivers/iommu/iommufd/selftest.c @@ -759,6 +759,28 @@ static int iommufd_test_md_check_refs(struct iommufd_ucmd *ucmd, return 0; } +static int iommufd_test_md_check_iotlb(struct iommufd_ucmd *ucmd, + u32 mockpt_id, unsigned int iotlb_id, + u32 iotlb) +{ + struct mock_iommu_domain_nested *mock_nested; + struct iommufd_hw_pagetable *hwpt; + int rc = 0; + + hwpt = get_md_pagetable_nested(ucmd, mockpt_id, &mock_nested); + if (IS_ERR(hwpt)) + return PTR_ERR(hwpt); + + mock_nested = container_of(hwpt->domain, + struct mock_iommu_domain_nested, domain); + + if (iotlb_id > MOCK_NESTED_DOMAIN_IOTLB_ID_MAX || + mock_nested->iotlb[iotlb_id] != iotlb) + rc = -EINVAL; + iommufd_put_object(&hwpt->obj); + return rc; +} + struct selftest_access { struct iommufd_access *access; struct file *file; @@ -1172,6 +1194,10 @@ int iommufd_test(struct iommufd_ucmd *ucmd) return iommufd_test_md_check_refs( ucmd, u64_to_user_ptr(cmd->check_refs.uptr), cmd->check_refs.length, cmd->check_refs.refs); + case IOMMU_TEST_OP_MD_CHECK_IOTLB: + return iommufd_test_md_check_iotlb(ucmd, cmd->id, + cmd->check_iotlb.id, + cmd->check_iotlb.iotlb); case IOMMU_TEST_OP_CREATE_ACCESS: return iommufd_test_create_access(ucmd, cmd->id, cmd->create_access.flags); diff --git a/tools/testing/selftests/iommu/iommufd.c b/tools/testing/selftests/iommu/iommufd.c index 7ab9ca06460d..e0c66e70d978 100644 --- a/tools/testing/selftests/iommu/iommufd.c +++ b/tools/testing/selftests/iommu/iommufd.c @@ -332,6 +332,10 @@ TEST_F(iommufd_ioas, alloc_hwpt_nested) 0, &nested_hwpt_id[1], IOMMU_HWPT_DATA_SELFTEST, &data, sizeof(data)); + test_cmd_hwpt_check_iotlb_all(nested_hwpt_id[0], + IOMMU_TEST_IOTLB_DEFAULT); + test_cmd_hwpt_check_iotlb_all(nested_hwpt_id[1], + IOMMU_TEST_IOTLB_DEFAULT); /* Negative test: a nested hwpt on top of a nested hwpt */ test_err_hwpt_alloc_nested(EINVAL, diff --git a/tools/testing/selftests/iommu/iommufd_utils.h b/tools/testing/selftests/iommu/iommufd_utils.h index 1a5fa44b1627..ef7a346dba3a 100644 --- a/tools/testing/selftests/iommu/iommufd_utils.h +++ b/tools/testing/selftests/iommu/iommufd_utils.h @@ -145,6 +145,30 @@ static int _test_cmd_hwpt_alloc(int fd, __u32 device_id, __u32 pt_id, _test_cmd_hwpt_alloc(self->fd, device_id, pt_id, flags, \ hwpt_id, data_type, data, data_len)) +#define test_cmd_hwpt_check_iotlb(hwpt_id, iotlb_id, expected) \ + ({ \ + struct iommu_test_cmd test_cmd = { \ + .size = sizeof(test_cmd), \ + .op = IOMMU_TEST_OP_MD_CHECK_IOTLB, \ + .id = hwpt_id, \ + .check_iotlb = { \ + .id = iotlb_id, \ 
+ .iotlb = expected, \ + }, \ + }; \ + ASSERT_EQ(0, \ + ioctl(self->fd, \ + _IOMMU_TEST_CMD(IOMMU_TEST_OP_MD_CHECK_IOTLB), \ + &test_cmd)); \ + }) + +#define test_cmd_hwpt_check_iotlb_all(hwpt_id, expected) \ + ({ \ + int i; \ + for (i = 0; i < MOCK_NESTED_DOMAIN_IOTLB_NUM; i++) \ + test_cmd_hwpt_check_iotlb(hwpt_id, i, expected); \ + }) + static int _test_cmd_access_replace_ioas(int fd, __u32 access_id, unsigned int ioas_id) { From patchwork Fri Oct 20 09:24:26 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Yi Liu X-Patchwork-Id: 13430372 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id A5158C001DF for ; Fri, 20 Oct 2023 09:25:11 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1376750AbjJTJZL (ORCPT ); Fri, 20 Oct 2023 05:25:11 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:51382 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1376708AbjJTJYv (ORCPT ); Fri, 20 Oct 2023 05:24:51 -0400 Received: from mgamail.intel.com (mgamail.intel.com [192.55.52.43]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id CF282D5E; Fri, 20 Oct 2023 02:24:34 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1697793874; x=1729329874; h=from:to:cc:subject:date:message-id:in-reply-to: references:mime-version:content-transfer-encoding; bh=CDuEodg+jMw+x43UiLiPo7Jd4ccwMA7AswFl0BfqSXI=; b=WP2EtZGyb2oFKMK8cybG6oSI+Nwv+r8bJgmqzzkbfpo7Z/TapTR5I+W5 SfBuGKUbPz/IMYB9ew0Wu0sx8s0Ye9y+7eHSawqVxh+fbvIRAdkael17j h6D1LOrJfe7OmPE9OLba9B6K8tmiLq9/lqXEoTbG0TMgKmeKQUNFWmhiE TILq9WQUSdK2MvthXFvr0LsOBc/ElZbpS8loy6TzVKiv/XG6yk5w4Qmsk rHLN7TprL453btE4TvqjG1TmOacwcLXKLkb56sbjEeEA0t2jDcPJt7M+J ttei3OiRf24B0gp5UR20zGg0vgsYg1FKYBG0jzh4sBqXg17J/c/zEGNHz A==; X-IronPort-AV: E=McAfee;i="6600,9927,10868"; a="472685547" X-IronPort-AV: E=Sophos;i="6.03,238,1694761200"; d="scan'208";a="472685547" Received: from orsmga007.jf.intel.com ([10.7.209.58]) by fmsmga105.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 20 Oct 2023 02:24:32 -0700 X-ExtLoop1: 1 X-IronPort-AV: E=McAfee;i="6600,9927,10868"; a="750859688" X-IronPort-AV: E=Sophos;i="6.03,238,1694761200"; d="scan'208";a="750859688" Received: from 984fee00a4c6.jf.intel.com ([10.165.58.231]) by orsmga007.jf.intel.com with ESMTP; 20 Oct 2023 02:24:31 -0700 From: Yi Liu To: joro@8bytes.org, alex.williamson@redhat.com, jgg@nvidia.com, kevin.tian@intel.com, robin.murphy@arm.com, baolu.lu@linux.intel.com Cc: cohuck@redhat.com, eric.auger@redhat.com, nicolinc@nvidia.com, kvm@vger.kernel.org, mjrosato@linux.ibm.com, chao.p.peng@linux.intel.com, yi.l.liu@intel.com, yi.y.sun@linux.intel.com, peterx@redhat.com, jasowang@redhat.com, shameerali.kolothum.thodi@huawei.com, lulu@redhat.com, suravee.suthikulpanit@amd.com, iommu@lists.linux.dev, linux-kernel@vger.kernel.org, linux-kselftest@vger.kernel.org, zhenzhong.duan@intel.com, joao.m.martins@oracle.com, xin.zeng@intel.com Subject: [PATCH v5 6/6] iommufd/selftest: Add coverage for IOMMU_HWPT_INVALIDATE ioctl Date: Fri, 20 Oct 2023 02:24:26 -0700 Message-Id: <20231020092426.13907-7-yi.l.liu@intel.com> X-Mailer: git-send-email 2.34.1 In-Reply-To: <20231020092426.13907-1-yi.l.liu@intel.com> References: 
<20231020092426.13907-1-yi.l.liu@intel.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: kvm@vger.kernel.org From: Nicolin Chen Add test cases for the IOMMU_HWPT_INVALIDATE ioctl and verify it by using the new IOMMU_TEST_OP_MD_CHECK_IOTLB. Signed-off-by: Nicolin Chen Signed-off-by: Yi Liu --- tools/testing/selftests/iommu/iommufd.c | 71 +++++++++++++++++++ tools/testing/selftests/iommu/iommufd_utils.h | 39 ++++++++++ 2 files changed, 110 insertions(+) diff --git a/tools/testing/selftests/iommu/iommufd.c b/tools/testing/selftests/iommu/iommufd.c index e0c66e70d978..456c89eb8bd1 100644 --- a/tools/testing/selftests/iommu/iommufd.c +++ b/tools/testing/selftests/iommu/iommufd.c @@ -116,6 +116,7 @@ TEST_F(iommufd, cmd_length) TEST_LENGTH(iommu_destroy, IOMMU_DESTROY, id); TEST_LENGTH(iommu_hw_info, IOMMU_GET_HW_INFO, __reserved); TEST_LENGTH(iommu_hwpt_alloc, IOMMU_HWPT_ALLOC, __reserved); + TEST_LENGTH(iommu_hwpt_invalidate, IOMMU_HWPT_INVALIDATE, out_driver_error_code); TEST_LENGTH(iommu_ioas_alloc, IOMMU_IOAS_ALLOC, out_ioas_id); TEST_LENGTH(iommu_ioas_iova_ranges, IOMMU_IOAS_IOVA_RANGES, out_iova_alignment); @@ -271,7 +272,9 @@ TEST_F(iommufd_ioas, alloc_hwpt_nested) struct iommu_hwpt_selftest data = { .iotlb = IOMMU_TEST_IOTLB_DEFAULT, }; + struct iommu_hwpt_invalidate_selftest inv_reqs[2] = {0}; uint32_t nested_hwpt_id[2] = {}; + uint32_t num_inv, driver_error; uint32_t parent_hwpt_id = 0; uint32_t parent_hwpt_id_not_work = 0; uint32_t test_hwpt_id = 0; @@ -347,6 +350,74 @@ TEST_F(iommufd_ioas, alloc_hwpt_nested) EXPECT_ERRNO(EBUSY, _test_ioctl_destroy(self->fd, parent_hwpt_id)); + /* hwpt_invalidate only supports a user-managed hwpt (nested) */ + num_inv = 1; + test_err_hwpt_invalidate(ENOENT, parent_hwpt_id, inv_reqs, + IOMMU_HWPT_DATA_SELFTEST, + sizeof(*inv_reqs), &num_inv, NULL); + /* Negative test: wrong data type */ + num_inv = 1; + test_err_hwpt_invalidate(EINVAL, nested_hwpt_id[0], inv_reqs, + IOMMU_HWPT_DATA_NONE, + sizeof(*inv_reqs), &num_inv, + &driver_error); + /* Negative test: structure size sanity */ + num_inv = 1; + test_err_hwpt_invalidate(EINVAL, nested_hwpt_id[0], inv_reqs, + IOMMU_HWPT_DATA_SELFTEST, + sizeof(*inv_reqs) + 1, &num_inv, + &driver_error); + assert(driver_error == IOMMU_TEST_INVALIDATE_ERR_FETCH); + + num_inv = 1; + test_err_hwpt_invalidate(EINVAL, nested_hwpt_id[0], inv_reqs, + IOMMU_HWPT_DATA_SELFTEST, + 1, &num_inv, &driver_error); + assert(driver_error == IOMMU_TEST_INVALIDATE_ERR_FETCH); + + /* Invalidate the 1st iotlb entry but fail the 2nd request */ + num_inv = 2; + inv_reqs[0].iotlb_id = 0; + inv_reqs[1].iotlb_id = MOCK_NESTED_DOMAIN_IOTLB_ID_MAX + 1; + test_err_hwpt_invalidate(EINVAL, nested_hwpt_id[0], inv_reqs, + IOMMU_HWPT_DATA_SELFTEST, + sizeof(*inv_reqs), &num_inv, + &driver_error); + assert(num_inv == 1); + assert(driver_error == IOMMU_TEST_INVALIDATE_ERR_REQ); + test_cmd_hwpt_check_iotlb(nested_hwpt_id[0], 0, 0); + test_cmd_hwpt_check_iotlb(nested_hwpt_id[0], 1, + IOMMU_TEST_IOTLB_DEFAULT); + test_cmd_hwpt_check_iotlb(nested_hwpt_id[0], 2, + IOMMU_TEST_IOTLB_DEFAULT); + test_cmd_hwpt_check_iotlb(nested_hwpt_id[0], 3, + IOMMU_TEST_IOTLB_DEFAULT); + + /* Invalidate the 2nd iotlb entry and verify */ + num_inv = 1; + inv_reqs[0].iotlb_id = 1; + test_cmd_hwpt_invalidate(nested_hwpt_id[0], inv_reqs, + sizeof(*inv_reqs), &num_inv); + test_cmd_hwpt_check_iotlb(nested_hwpt_id[0], 0, 0); + test_cmd_hwpt_check_iotlb(nested_hwpt_id[0], 1, 0); + test_cmd_hwpt_check_iotlb(nested_hwpt_id[0], 2, + IOMMU_TEST_IOTLB_DEFAULT); + 
test_cmd_hwpt_check_iotlb(nested_hwpt_id[0], 3, + IOMMU_TEST_IOTLB_DEFAULT); + /* Invalidate the 3rd and 4th iotlb entries and verify */ + num_inv = 2; + inv_reqs[0].iotlb_id = 2; + inv_reqs[1].iotlb_id = 3; + test_cmd_hwpt_invalidate(nested_hwpt_id[0], inv_reqs, + sizeof(*inv_reqs), &num_inv); + test_cmd_hwpt_check_iotlb_all(nested_hwpt_id[0], 0); + /* Invalidate all iotlb entries for nested_hwpt_id[1] and verify */ + num_inv = 1; + inv_reqs[0].flags = IOMMU_TEST_INVALIDATE_ALL; + test_cmd_hwpt_invalidate(nested_hwpt_id[1], inv_reqs, + sizeof(*inv_reqs), &num_inv); + test_cmd_hwpt_check_iotlb_all(nested_hwpt_id[1], 0); + /* Attach device to nested_hwpt_id[0] that then will be busy */ test_cmd_mock_domain_replace(self->stdev_id, nested_hwpt_id[0]); diff --git a/tools/testing/selftests/iommu/iommufd_utils.h b/tools/testing/selftests/iommu/iommufd_utils.h index ef7a346dba3a..2e9eb53a1924 100644 --- a/tools/testing/selftests/iommu/iommufd_utils.h +++ b/tools/testing/selftests/iommu/iommufd_utils.h @@ -169,6 +169,45 @@ static int _test_cmd_hwpt_alloc(int fd, __u32 device_id, __u32 pt_id, test_cmd_hwpt_check_iotlb(hwpt_id, i, expected); \ }) +static int _test_cmd_hwpt_invalidate(int fd, __u32 hwpt_id, void *reqs, + uint32_t req_type, uint32_t lreq, + uint32_t *nreqs, uint32_t *driver_error) +{ + struct iommu_hwpt_invalidate cmd = { + .size = sizeof(cmd), + .hwpt_id = hwpt_id, + .req_type = req_type, + .reqs_uptr = (uint64_t)reqs, + .req_len = lreq, + .req_num = *nreqs, + }; + int rc = ioctl(fd, IOMMU_HWPT_INVALIDATE, &cmd); + *nreqs = cmd.req_num; + if (driver_error) + *driver_error = cmd.out_driver_error_code; + return rc; +} + +#define test_cmd_hwpt_invalidate(hwpt_id, reqs, lreq, nreqs) \ + ({ \ + uint32_t error, num = *nreqs; \ + ASSERT_EQ(0, \ + _test_cmd_hwpt_invalidate(self->fd, hwpt_id, reqs, \ + IOMMU_HWPT_DATA_SELFTEST, \ + lreq, nreqs, &error)); \ + assert(num == *nreqs); \ + assert(error == 0); \ + }) +#define test_err_hwpt_invalidate(_errno, hwpt_id, reqs, req_type, lreq, \ + nreqs, driver_error) \ + ({ \ + EXPECT_ERRNO(_errno, \ + _test_cmd_hwpt_invalidate(self->fd, hwpt_id, \ + reqs, req_type, \ + lreq, nreqs, \ + driver_error)); \ + }) + static int _test_cmd_access_replace_ioas(int fd, __u32 access_id, unsigned int ioas_id) {