From patchwork Wed Jun 24 08:55:24 2020
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Yi Liu
X-Patchwork-Id: 11622673
From: Liu Yi L
To: alex.williamson@redhat.com, eric.auger@redhat.com,
    baolu.lu@linux.intel.com, joro@8bytes.org
Cc: kevin.tian@intel.com, jacob.jun.pan@linux.intel.com, ashok.raj@intel.com,
    yi.l.liu@intel.com, jun.j.tian@intel.com, yi.y.sun@intel.com,
    jean-philippe@linaro.org, peterx@redhat.com, hao.wu@intel.com,
    iommu@lists.linux-foundation.org, kvm@vger.kernel.org,
    linux-kernel@vger.kernel.org
Subject: [PATCH v3 11/14] vfio/type1: Add vSVA support for IOMMU-backed mdevs
Date: Wed, 24 Jun 2020 01:55:24 -0700
Message-Id: <1592988927-48009-12-git-send-email-yi.l.liu@intel.com>
X-Mailer: git-send-email 2.7.4
In-Reply-To: <1592988927-48009-1-git-send-email-yi.l.liu@intel.com>
References: <1592988927-48009-1-git-send-email-yi.l.liu@intel.com>
X-Mailing-List: kvm@vger.kernel.org

In recent years, the mediated device pass-through framework (e.g. vfio-mdev)
has been used to achieve flexible device sharing across domains (e.g. VMs).
There are also hardware-assisted mediated pass-through solutions from
platform vendors, e.g. Intel VT-d scalable mode, which supports the Intel
Scalable I/O Virtualization technology. Such mdevs are called IOMMU-backed
mdevs, as DMA isolation for them is enforced by the IOMMU.

In the kernel, IOMMU-backed mdevs are exposed to the IOMMU layer through the
aux-domain concept: an mdev is protected by an iommu domain that is auxiliary
to the domain the kernel driver primarily uses for the DMA API. Details can
be found in the KVM Forum presentation below:

https://events19.linuxfoundation.org/wp-content/uploads/2017/12/\
Hardware-Assisted-Mediated-Pass-Through-with-VFIO-Kevin-Tian-Intel.pdf

This patch extends the NESTING_IOMMU ops to IOMMU-backed mdev devices. The
main requirement is to use the auxiliary domain associated with the mdev.
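
For readers less familiar with the aux-domain model, a rough sketch of the
attach path this relies on is below. It is illustrative only and not part of
the patch: example_attach_aux_domain() is a made-up wrapper for this note,
while mdev_get_iommu_device() and iommu_aux_attach_device() are the helpers
assumed to be provided by the aux-domain support.

/*
 * Illustrative sketch only, not part of this patch.  An IOMMU-backed
 * mdev has no IOMMU behind its own struct device; the vendor driver
 * registers the parent physical device, and the auxiliary domain is
 * attached to that parent with PASID-granular translation.
 */
static int example_attach_aux_domain(struct iommu_domain *domain,
				     struct device *mdev_dev)
{
	/* Resolve the physical device backing the mdev (assumed helper). */
	struct device *parent = mdev_get_iommu_device(mdev_dev);

	if (!parent)
		return -EINVAL;

	/* Attach the aux domain to the parent, not to the mdev itself. */
	return iommu_aux_attach_device(domain, parent);
}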

Cc: Kevin Tian
Cc: Jacob Pan
Cc: Jun Tian
Cc: Alex Williamson
Cc: Eric Auger
Cc: Jean-Philippe Brucker
Cc: Joerg Roedel
Cc: Lu Baolu
Signed-off-by: Liu Yi L
---
v1 -> v2:
*) check the iommu_device to ensure the handling mdev is IOMMU-backed
---
 drivers/vfio/vfio_iommu_type1.c | 40 ++++++++++++++++++++++++++++++++++++----
 1 file changed, 36 insertions(+), 4 deletions(-)

diff --git a/drivers/vfio/vfio_iommu_type1.c b/drivers/vfio/vfio_iommu_type1.c
index 4c21300..e1a794c 100644
--- a/drivers/vfio/vfio_iommu_type1.c
+++ b/drivers/vfio/vfio_iommu_type1.c
@@ -2378,20 +2378,41 @@ static int vfio_iommu_resv_refresh(struct vfio_iommu *iommu,
 	return ret;
 }
 
+static struct device *vfio_get_iommu_device(struct vfio_group *group,
+					    struct device *dev)
+{
+	if (group->mdev_group)
+		return vfio_mdev_get_iommu_device(dev);
+	else
+		return dev;
+}
+
 static int vfio_dev_bind_gpasid_fn(struct device *dev, void *data)
 {
 	struct domain_capsule *dc = (struct domain_capsule *)data;
 	unsigned long arg = *(unsigned long *) dc->data;
+	struct device *iommu_device;
+
+	iommu_device = vfio_get_iommu_device(dc->group, dev);
+	if (!iommu_device)
+		return -EINVAL;
 
-	return iommu_sva_bind_gpasid(dc->domain, dev, (void __user *) arg);
+	return iommu_sva_bind_gpasid(dc->domain, iommu_device,
+				     (void __user *) arg);
 }
 
 static int vfio_dev_unbind_gpasid_fn(struct device *dev, void *data)
 {
 	struct domain_capsule *dc = (struct domain_capsule *)data;
 	unsigned long arg = *(unsigned long *) dc->data;
+	struct device *iommu_device;
 
-	iommu_sva_unbind_gpasid(dc->domain, dev, (void __user *) arg);
+	iommu_device = vfio_get_iommu_device(dc->group, dev);
+	if (!iommu_device)
+		return -EINVAL;
+
+	iommu_sva_unbind_gpasid(dc->domain, iommu_device,
+				(void __user *) arg);
 	return 0;
 }
 
@@ -2400,8 +2421,13 @@ static int __vfio_dev_unbind_gpasid_fn(struct device *dev, void *data)
 {
 	struct domain_capsule *dc = (struct domain_capsule *)data;
 	struct iommu_gpasid_bind_data *unbind_data =
 				(struct iommu_gpasid_bind_data *) dc->data;
+	struct device *iommu_device;
+
+	iommu_device = vfio_get_iommu_device(dc->group, dev);
+	if (!iommu_device)
+		return -EINVAL;
 
-	__iommu_sva_unbind_gpasid(dc->domain, dev, unbind_data);
+	__iommu_sva_unbind_gpasid(dc->domain, iommu_device, unbind_data);
 	return 0;
 }
 
@@ -3084,8 +3110,14 @@ static int vfio_dev_cache_invalidate_fn(struct device *dev, void *data)
 {
 	struct domain_capsule *dc = (struct domain_capsule *)data;
 	unsigned long arg = *(unsigned long *) dc->data;
+	struct device *iommu_device;
+
+	iommu_device = vfio_get_iommu_device(dc->group, dev);
+	if (!iommu_device)
+		return -EINVAL;
 
-	iommu_cache_invalidate(dc->domain, dev, (void __user *) arg);
+	iommu_cache_invalidate(dc->domain, iommu_device,
+			       (void __user *) arg);
 	return 0;
 }
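
As a usage note (illustrative only, not part of the patch): the per-device
callbacks touched above are driven by walking the VFIO group with
iommu_group_for_each_dev(). example_bind_group_gpasid() below is a made-up
wrapper showing how vfio_dev_bind_gpasid_fn() ends up operating on either the
device itself or, for an IOMMU-backed mdev, on the physical device returned
by vfio_get_iommu_device().

/*
 * Illustrative sketch only.  dc->group, dc->domain and dc->data are
 * assumed to be filled in by the caller as elsewhere in
 * vfio_iommu_type1.c; the callback decides per device whether to use
 * the device itself or the physical device backing an mdev.
 */
static int example_bind_group_gpasid(struct vfio_group *group,
				     struct domain_capsule *dc)
{
	return iommu_group_for_each_dev(group->iommu_group, dc,
					vfio_dev_bind_gpasid_fn);
}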