From patchwork Fri Sep 24 15:56:55 2021
X-Patchwork-Submitter: Christoph Hellwig
X-Patchwork-Id: 12515881
From: Christoph Hellwig
To: Alex Williamson
Cc: Diana Craciun, Cornelia Huck, Kirti Wankhede, Eric Auger,
    Jason Gunthorpe, Terrence Xu, kvm@vger.kernel.org, Kevin Tian
Subject: [PATCH 05/15] vfio: refactor noiommu group creation
Date: Fri, 24 Sep 2021 17:56:55 +0200
Message-Id: <20210924155705.4258-6-hch@lst.de>
In-Reply-To: <20210924155705.4258-1-hch@lst.de>
References: <20210924155705.4258-1-hch@lst.de>
X-Mailing-List: kvm@vger.kernel.org

Split the actual noiommu group creation from vfio_iommu_group_get into a
new helper, and open code the rest of vfio_iommu_group_get in its only
caller.  This creates an entirely separate and clear code path for the
noiommu group creation.
Signed-off-by: Christoph Hellwig
Reviewed-by: Jason Gunthorpe
Reviewed-by: Kevin Tian
---
 drivers/vfio/vfio.c | 104 ++++++++++++++++++++++----------------------
 1 file changed, 52 insertions(+), 52 deletions(-)

diff --git a/drivers/vfio/vfio.c b/drivers/vfio/vfio.c
index 2b2679c7126fdf..b1ed156adccd04 100644
--- a/drivers/vfio/vfio.c
+++ b/drivers/vfio/vfio.c
@@ -169,50 +169,6 @@ static void vfio_release_device_set(struct vfio_device *device)
 	xa_unlock(&vfio_device_set_xa);
 }
 
-static struct iommu_group *vfio_iommu_group_get(struct device *dev)
-{
-	struct iommu_group *group;
-	int __maybe_unused ret;
-
-	group = iommu_group_get(dev);
-
-#ifdef CONFIG_VFIO_NOIOMMU
-	/*
-	 * With noiommu enabled, an IOMMU group will be created for a device
-	 * that doesn't already have one and doesn't have an iommu_ops on their
-	 * bus.  We set iommudata simply to be able to identify these groups
-	 * as special use and for reclamation later.
-	 */
-	if (group || !noiommu || iommu_present(dev->bus))
-		return group;
-
-	group = iommu_group_alloc();
-	if (IS_ERR(group))
-		return NULL;
-
-	iommu_group_set_name(group, "vfio-noiommu");
-	iommu_group_set_iommudata(group, &noiommu, NULL);
-	ret = iommu_group_add_device(group, dev);
-	if (ret) {
-		iommu_group_put(group);
-		return NULL;
-	}
-
-	/*
-	 * Where to taint?  At this point we've added an IOMMU group for a
-	 * device that is not backed by iommu_ops, therefore any iommu_
-	 * callback using iommu_ops can legitimately Oops.  So, while we may
-	 * be about to give a DMA capable device to a user without IOMMU
-	 * protection, which is clearly taint-worthy, let's go ahead and do
-	 * it here.
-	 */
-	add_taint(TAINT_USER, LOCKDEP_STILL_OK);
-	dev_warn(dev, "Adding kernel taint for vfio-noiommu group on device\n");
-#endif
-
-	return group;
-}
-
 #ifdef CONFIG_VFIO_NOIOMMU
 static void *vfio_noiommu_open(unsigned long arg)
 {
@@ -823,12 +779,61 @@ void vfio_uninit_group_dev(struct vfio_device *device)
 }
 EXPORT_SYMBOL_GPL(vfio_uninit_group_dev);
 
-struct vfio_group *vfio_group_find_or_alloc(struct device *dev)
+#ifdef CONFIG_VFIO_NOIOMMU
+static struct vfio_group *vfio_noiommu_group_alloc(struct device *dev)
 {
 	struct iommu_group *iommu_group;
 	struct vfio_group *group;
+	int ret;
+
+	iommu_group = iommu_group_alloc();
+	if (IS_ERR(iommu_group))
+		return ERR_CAST(iommu_group);
 
-	iommu_group = vfio_iommu_group_get(dev);
+	iommu_group_set_name(iommu_group, "vfio-noiommu");
+	iommu_group_set_iommudata(iommu_group, &noiommu, NULL);
+	ret = iommu_group_add_device(iommu_group, dev);
+	if (ret)
+		goto out_put_group;
+
+	group = vfio_create_group(iommu_group);
+	if (IS_ERR(group)) {
+		ret = PTR_ERR(group);
+		goto out_remove_device;
+	}
+
+	return group;
+
+out_remove_device:
+	iommu_group_remove_device(dev);
+out_put_group:
+	iommu_group_put(iommu_group);
+	return ERR_PTR(ret);
+}
+#endif
+
+static struct vfio_group *vfio_group_find_or_alloc(struct device *dev)
+{
+	struct iommu_group *iommu_group;
+	struct vfio_group *group;
+
+	iommu_group = iommu_group_get(dev);
+#ifdef CONFIG_VFIO_NOIOMMU
+	if (!iommu_group && noiommu && !iommu_present(dev->bus)) {
+		/*
+		 * With noiommu enabled, create an IOMMU group for devices that
+		 * don't already have one and don't have an iommu_ops on their
+		 * bus.  Taint the kernel because we're about to give a DMA
+		 * capable device to a user without IOMMU protection.
+		 */
+		group = vfio_noiommu_group_alloc(dev);
+		if (!IS_ERR(group)) {
+			add_taint(TAINT_USER, LOCKDEP_STILL_OK);
+			dev_warn(dev, "Adding kernel taint for vfio-noiommu group on device\n");
+		}
+		return group;
+	}
+#endif
 	if (!iommu_group)
 		return ERR_PTR(-EINVAL);
 
@@ -840,14 +845,9 @@ struct vfio_group *vfio_group_find_or_alloc(struct device *dev)
 	/* a newly created vfio_group keeps the reference. */
 	group = vfio_create_group(iommu_group);
 	if (IS_ERR(group))
-		goto out_remove;
+		goto out_put;
 	return group;
 
-out_remove:
-#ifdef CONFIG_VFIO_NOIOMMU
-	if (iommu_group_get_iommudata(iommu_group) == &noiommu)
-		iommu_group_remove_device(dev);
-#endif
 out_put:
 	iommu_group_put(iommu_group);
 	return group;
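
As an aside for readers less familiar with the goto-based unwind idiom
that vfio_noiommu_group_alloc() uses above, here is a minimal,
self-contained sketch of the same error-handling shape.  Everything in
it (res_a, acquire_a(), attach_device(), and so on) is a hypothetical
placeholder, not a kernel API; the point is only that each failure
jumps to a label that releases exactly the resources acquired so far,
in reverse order of acquisition.

#include <stdio.h>
#include <stdlib.h>

/* Hypothetical resources standing in for iommu_group / vfio_group. */
struct res_a { int id; };
struct res_b { int id; };

static struct res_a *acquire_a(void) { return malloc(sizeof(struct res_a)); }
static void release_a(struct res_a *a) { free(a); }
static int attach_device(struct res_a *a) { return a ? 0 : -1; }
static void detach_device(struct res_a *a) { (void)a; }
static struct res_b *acquire_b(struct res_a *a)
{
	return a ? malloc(sizeof(struct res_b)) : NULL;
}

/*
 * Mirrors the shape of vfio_noiommu_group_alloc(): each step that can
 * fail jumps to a label that undoes only the steps already completed.
 */
static struct res_b *alloc_group_like(void)
{
	struct res_a *a;
	struct res_b *b;
	int ret;

	a = acquire_a();		/* like iommu_group_alloc() */
	if (!a)
		return NULL;

	ret = attach_device(a);		/* like iommu_group_add_device() */
	if (ret)
		goto out_release_a;

	b = acquire_b(a);		/* like vfio_create_group() */
	if (!b)
		goto out_detach;

	return b;

out_detach:
	detach_device(a);		/* like iommu_group_remove_device() */
out_release_a:
	release_a(a);			/* like iommu_group_put() */
	return NULL;
}

int main(void)
{
	struct res_b *b = alloc_group_like();

	printf("alloc %s\n", b ? "succeeded" : "failed");
	free(b);
	return 0;
}

This shape is also why the patch can drop the old out_remove path from
vfio_group_find_or_alloc(): once the noiommu unwind lives entirely
inside vfio_noiommu_group_alloc(), the caller no longer needs to check
iommudata to know whether the iommu_group it holds was a noiommu one.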