From patchwork Tue Mar 9 06:22:07 2021
X-Patchwork-Submitter: Shenming Lu
X-Patchwork-Id: 12124217
From: Shenming Lu
To: Alex Williamson, Cornelia Huck, Will Deacon, Robin Murphy, Joerg Roedel,
 Jean-Philippe Brucker, Eric Auger
CC: Kevin Tian, Christoph Hellwig, Lu Baolu, Jonathan Cameron, Barry Song
Subject: [RFC PATCH v2 6/6] vfio: Add nested IOPF support
Date: Tue, 9 Mar 2021 14:22:07 +0800
Message-ID: <20210309062207.505-7-lushenming@huawei.com>
In-Reply-To: <20210309062207.505-1-lushenming@huawei.com>
References: <20210309062207.505-1-lushenming@huawei.com>
MIME-Version: 1.0
List-Id: <linux-arm-kernel@lists.infradead.org>
To set up nested mode, drivers such as vfio_pci need to register a handler
to receive stage/level 1 faults from the IOMMU. However, each device can
currently have only one iommu dev fault handler, so if stage 2 IOPF is
already enabled (VFIO_IOMMU_ENABLE_IOPF), we instead update the registered
(combined) handler via flags (by setting IOPF_REPORT_NESTED_L1_CONCERNED),
and the handler then delivers the received stage 1 faults to the guest
through a newly added vfio_device_ops callback.

Signed-off-by: Shenming Lu
---
 drivers/vfio/vfio.c             | 83 +++++++++++++++++++++++++++++++++
 drivers/vfio/vfio_iommu_type1.c | 37 +++++++++++++++
 include/linux/vfio.h            |  9 ++++
 3 files changed, 129 insertions(+)

diff --git a/drivers/vfio/vfio.c b/drivers/vfio/vfio.c
index 77b29bbd3027..c6a01d947d0d 100644
--- a/drivers/vfio/vfio.c
+++ b/drivers/vfio/vfio.c
@@ -2389,6 +2389,89 @@ int vfio_iommu_dev_fault_handler(struct iommu_fault *fault, void *data)
 }
 EXPORT_SYMBOL_GPL(vfio_iommu_dev_fault_handler);
 
+int vfio_iommu_dev_fault_handler_unregister_nested(struct device *dev)
+{
+	struct vfio_container *container;
+	struct vfio_group *group;
+	struct vfio_iommu_driver *driver;
+	int ret;
+
+	if (!dev)
+		return -EINVAL;
+
+	group = vfio_group_get_from_dev(dev);
+	if (!group)
+		return -ENODEV;
+
+	ret = vfio_group_add_container_user(group);
+	if (ret)
+		goto out;
+
+	container = group->container;
+	driver = container->iommu_driver;
+	if (likely(driver && driver->ops->unregister_hdlr_nested))
+		ret = driver->ops->unregister_hdlr_nested(container->iommu_data,
+							  dev);
+	else
+		ret = -ENOTTY;
+
+	vfio_group_try_dissolve_container(group);
+
+out:
+	vfio_group_put(group);
+	return ret;
+}
+EXPORT_SYMBOL_GPL(vfio_iommu_dev_fault_handler_unregister_nested);
+
+/*
+ * Register/Update the VFIO page fault handler
+ * to receive nested stage/level 1 faults.
+ */
+int vfio_iommu_dev_fault_handler_register_nested(struct device *dev)
+{
+	struct vfio_container *container;
+	struct vfio_group *group;
+	struct vfio_iommu_driver *driver;
+	int ret;
+
+	if (!dev)
+		return -EINVAL;
+
+	group = vfio_group_get_from_dev(dev);
+	if (!group)
+		return -ENODEV;
+
+	ret = vfio_group_add_container_user(group);
+	if (ret)
+		goto out;
+
+	container = group->container;
+	driver = container->iommu_driver;
+	if (likely(driver && driver->ops->register_hdlr_nested))
+		ret = driver->ops->register_hdlr_nested(container->iommu_data,
+							dev);
+	else
+		ret = -ENOTTY;
+
+	vfio_group_try_dissolve_container(group);
+
+out:
+	vfio_group_put(group);
+	return ret;
+}
+EXPORT_SYMBOL_GPL(vfio_iommu_dev_fault_handler_register_nested);
+
+int vfio_transfer_dev_fault(struct device *dev, struct iommu_fault *fault)
+{
+	struct vfio_device *device = dev_get_drvdata(dev);
+
+	if (unlikely(!device->ops->transfer))
+		return -EOPNOTSUPP;
+
+	return device->ops->transfer(device->device_data, fault);
+}
+EXPORT_SYMBOL_GPL(vfio_transfer_dev_fault);
+
 /**
  * Module/class support
  */
diff --git a/drivers/vfio/vfio_iommu_type1.c b/drivers/vfio/vfio_iommu_type1.c
index 8d14ced649a6..62ad4a47de4a 100644
--- a/drivers/vfio/vfio_iommu_type1.c
+++ b/drivers/vfio/vfio_iommu_type1.c
@@ -3581,6 +3581,13 @@ static int vfio_iommu_type1_dma_map_iopf(void *iommu_data,
 	enum iommu_page_response_code status = IOMMU_PAGE_RESP_INVALID;
 	struct iommu_page_response resp = {0};
 
+	/*
+	 * When configured in nested mode, further deliver
+	 * stage/level 1 faults to the guest.
+	 */
+	if (iommu->nesting && !(fault->prm.flags & IOMMU_FAULT_PAGE_REQUEST_L2))
+		return vfio_transfer_dev_fault(dev, fault);
+
 	mutex_lock(&iommu->lock);
 
 	dma = vfio_find_dma(iommu, iova, PAGE_SIZE);
@@ -3654,6 +3661,34 @@ static int vfio_iommu_type1_dma_map_iopf(void *iommu_data,
 	return 0;
 }
 
+static int vfio_iommu_type1_register_hdlr_nested(void *iommu_data,
+						 struct device *dev)
+{
+	struct vfio_iommu *iommu = iommu_data;
+
+	if (iommu->iopf_enabled)
+		return iommu_update_device_fault_handler(dev, ~0,
+				IOPF_REPORT_NESTED_L1_CONCERNED);
+	else
+		return iommu_register_device_fault_handler(dev,
+				vfio_iommu_dev_fault_handler,
+				IOPF_REPORT_NESTED |
+				IOPF_REPORT_NESTED_L1_CONCERNED,
+				dev);
+}
+
+static int vfio_iommu_type1_unregister_hdlr_nested(void *iommu_data,
+						   struct device *dev)
+{
+	struct vfio_iommu *iommu = iommu_data;
+
+	if (iommu->iopf_enabled)
+		return iommu_update_device_fault_handler(dev,
+				~IOPF_REPORT_NESTED_L1_CONCERNED, 0);
+	else
+		return iommu_unregister_device_fault_handler(dev);
+}
+
 static const struct vfio_iommu_driver_ops vfio_iommu_driver_ops_type1 = {
 	.name			= "vfio-iommu-type1",
 	.owner			= THIS_MODULE,
@@ -3670,6 +3705,8 @@ static const struct vfio_iommu_driver_ops vfio_iommu_driver_ops_type1 = {
 	.group_iommu_domain	= vfio_iommu_type1_group_iommu_domain,
 	.notify			= vfio_iommu_type1_notify,
 	.dma_map_iopf		= vfio_iommu_type1_dma_map_iopf,
+	.register_hdlr_nested	= vfio_iommu_type1_register_hdlr_nested,
+	.unregister_hdlr_nested	= vfio_iommu_type1_unregister_hdlr_nested,
 };
 
 static int __init vfio_iommu_type1_init(void)
diff --git a/include/linux/vfio.h b/include/linux/vfio.h
index 73af317a4343..60e935e4851b 100644
--- a/include/linux/vfio.h
+++ b/include/linux/vfio.h
@@ -29,6 +29,7 @@
  * @match: Optional device name match callback (return: 0 for no-match, >0 for
  *         match, -errno for abort (ex. match with insufficient or incorrect
  *         additional args)
+ * @transfer: Optional. Transfer the received faults to the guest for nested mode.
  */
 struct vfio_device_ops {
 	char	*name;
@@ -43,6 +44,7 @@ struct vfio_device_ops {
 	int	(*mmap)(void *device_data, struct vm_area_struct *vma);
 	void	(*request)(void *device_data, unsigned int count);
 	int	(*match)(void *device_data, char *buf);
+	int	(*transfer)(void *device_data, struct iommu_fault *fault);
 };
 
 extern struct iommu_group *vfio_iommu_group_get(struct device *dev);
@@ -102,6 +104,10 @@ struct vfio_iommu_driver_ops {
 	int		(*dma_map_iopf)(void *iommu_data,
 					struct iommu_fault *fault,
 					struct device *dev);
+	int		(*register_hdlr_nested)(void *iommu_data,
+						struct device *dev);
+	int		(*unregister_hdlr_nested)(void *iommu_data,
+						  struct device *dev);
 };
 
 extern int vfio_register_iommu_driver(const struct vfio_iommu_driver_ops *ops);
@@ -164,6 +170,9 @@ struct kvm;
 extern void vfio_group_set_kvm(struct vfio_group *group, struct kvm *kvm);
 extern int vfio_iommu_dev_fault_handler(struct iommu_fault *fault, void *data);
+extern int vfio_iommu_dev_fault_handler_unregister_nested(struct device *dev);
+extern int vfio_iommu_dev_fault_handler_register_nested(struct device *dev);
+extern int vfio_transfer_dev_fault(struct device *dev, struct iommu_fault *fault);
 
 /*
  * Sub-module helpers