From patchwork Tue Mar 9 06:22:06 2021
X-Patchwork-Submitter: Shenming Lu
X-Patchwork-Id: 12124229
From: Shenming Lu <lushenming@huawei.com>
To: Alex Williamson, Cornelia Huck, Will Deacon, Robin Murphy,
	Joerg Roedel, Jean-Philippe Brucker, Eric Auger
Cc: Kevin Tian, Christoph Hellwig, Lu Baolu, Jonathan Cameron,
	Barry Song
Subject: [RFC PATCH v2 5/6] vfio: No need to statically pin and map if IOPF enabled
Date: Tue, 9 Mar 2021 14:22:06 +0800
Message-ID: <20210309062207.505-6-lushenming@huawei.com>
In-Reply-To: <20210309062207.505-1-lushenming@huawei.com>
References: <20210309062207.505-1-lushenming@huawei.com>
X-Mailing-List: kvm@vger.kernel.org

If IOPF is enabled for the VFIO container, there is no need to
statically pin and map the entire DMA range up front; pinning and
mapping can be done on demand from the I/O page fault handler. When a
vfio_dma is removed, unmap only the ranges recorded in the IOPF mapped
bitmap.

Signed-off-by: Shenming Lu <lushenming@huawei.com>
---
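A minimal user-space sketch of the on-demand flow this patch enables,
for illustration only: VFIO_IOMMU_MAP_DMA just records the range and
allocates iopf_mapped_bitmap, and each page is pinned and mapped the
first time the device faults on it. The names iopf_handle_fault(),
mapped_bitmap_get()/mapped_bitmap_set() and the 4 KiB PAGE_SHIFT are
stand-ins for this sketch, not kernel API; the real fault handler is
introduced elsewhere in this series.

/*
 * Illustrative user-space model only -- iopf_handle_fault() and the
 * pin/map step are hypothetical stand-ins, not kernel API.
 */
#include <limits.h>
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

#define PAGE_SHIFT      12      /* assume 4 KiB pages for the sketch */
#define BITS_PER_LONG   (sizeof(unsigned long) * CHAR_BIT)

struct dma_range {
        uint64_t iova;                          /* start of registered range */
        uint64_t size;                          /* bytes; nothing mapped up front */
        unsigned long *iopf_mapped_bitmap;      /* one bit per page */
};

/* Mirrors IOPF_MAPPED_BITMAP_GET() in the patch. */
static int mapped_bitmap_get(struct dma_range *dma, uint64_t i)
{
        return (dma->iopf_mapped_bitmap[i / BITS_PER_LONG] >>
                (i % BITS_PER_LONG)) & 0x1;
}

static void mapped_bitmap_set(struct dma_range *dma, uint64_t i)
{
        dma->iopf_mapped_bitmap[i / BITS_PER_LONG] |= 1UL << (i % BITS_PER_LONG);
}

/* On a device fault, map just the faulting page and record it. */
static void iopf_handle_fault(struct dma_range *dma, uint64_t fault_iova)
{
        uint64_t bit = (fault_iova - dma->iova) >> PAGE_SHIFT;

        if (mapped_bitmap_get(dma, bit))
                return;         /* already mapped by an earlier fault */

        /* ...pin the backing page and install the IOMMU mapping here... */
        mapped_bitmap_set(dma, bit);
        printf("mapped page %llu on demand\n", (unsigned long long)bit);
}

int main(void)
{
        struct dma_range dma = { .iova = 0x100000, .size = 1 << 20 };
        uint64_t npages = dma.size >> PAGE_SHIFT;

        /* Like the patch: allocate the bitmap at VFIO_IOMMU_MAP_DMA time. */
        dma.iopf_mapped_bitmap = calloc((npages + BITS_PER_LONG - 1) /
                                        BITS_PER_LONG, sizeof(unsigned long));
        if (!dma.iopf_mapped_bitmap)
                return 1;

        iopf_handle_fault(&dma, 0x100000);      /* first access: maps page 0 */
        iopf_handle_fault(&dma, 0x100000);      /* repeated fault: no-op */
        iopf_handle_fault(&dma, 0x105000);      /* another page: maps page 5 */

        free(dma.iopf_mapped_bitmap);
        return 0;
}

The same bookkeeping is what makes teardown selective below: only pages
whose bit is set ever need to be unmapped when the vfio_dma goes away.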
 drivers/vfio/vfio_iommu_type1.c | 35 ++++++++++++++++++++++++++++-----
 1 file changed, 30 insertions(+), 5 deletions(-)

diff --git a/drivers/vfio/vfio_iommu_type1.c b/drivers/vfio/vfio_iommu_type1.c
index 3997473be4a7..8d14ced649a6 100644
--- a/drivers/vfio/vfio_iommu_type1.c
+++ b/drivers/vfio/vfio_iommu_type1.c
@@ -165,6 +165,7 @@ struct vfio_regions {
 #define IOPF_MAPPED_BITMAP_GET(dma, i)	\
 	((dma->iopf_mapped_bitmap[(i) / BITS_PER_LONG]	\
 	  >> ((i) % BITS_PER_LONG)) & 0x1)
+#define IOPF_MAPPED_BITMAP_BYTES(n)	DIRTY_BITMAP_BYTES(n)
 
 #define WAITED 1
 
@@ -877,7 +878,8 @@ static int vfio_iommu_type1_pin_pages(void *iommu_data,
 	 * already pinned and accounted. Accouting should be done if there is no
 	 * iommu capable domain in the container.
 	 */
-	do_accounting = !IS_IOMMU_CAP_DOMAIN_IN_CONTAINER(iommu);
+	do_accounting = !IS_IOMMU_CAP_DOMAIN_IN_CONTAINER(iommu) ||
+			iommu->iopf_enabled;
 
 	for (i = 0; i < npage; i++) {
 		struct vfio_pfn *vpfn;
@@ -966,7 +968,8 @@ static int vfio_iommu_type1_unpin_pages(void *iommu_data,
 
 	mutex_lock(&iommu->lock);
 
-	do_accounting = !IS_IOMMU_CAP_DOMAIN_IN_CONTAINER(iommu);
+	do_accounting = !IS_IOMMU_CAP_DOMAIN_IN_CONTAINER(iommu) ||
+			iommu->iopf_enabled;
 	for (i = 0; i < npage; i++) {
 		struct vfio_dma *dma;
 		dma_addr_t iova;
@@ -1087,7 +1090,7 @@ static long vfio_unmap_unpin(struct vfio_iommu *iommu, struct vfio_dma *dma,
 	if (!dma->size)
 		return 0;
 
-	if (!IS_IOMMU_CAP_DOMAIN_IN_CONTAINER(iommu))
+	if (!IS_IOMMU_CAP_DOMAIN_IN_CONTAINER(iommu) || iommu->iopf_enabled)
 		return 0;
 
 	/*
@@ -1187,11 +1190,20 @@ static void vfio_unmap_partial_iopf(struct vfio_iommu *iommu,
 	}
 }
 
+static void vfio_dma_clean_iopf(struct vfio_iommu *iommu, struct vfio_dma *dma)
+{
+	vfio_unmap_partial_iopf(iommu, dma, dma->iova, dma->iova + dma->size);
+
+	kvfree(dma->iopf_mapped_bitmap);
+}
+
 static void vfio_remove_dma(struct vfio_iommu *iommu, struct vfio_dma *dma)
 {
 	WARN_ON(!RB_EMPTY_ROOT(&dma->pfn_list));
 	vfio_unmap_unpin(iommu, dma, true);
 	vfio_unlink_dma(iommu, dma);
+	if (iommu->iopf_enabled)
+		vfio_dma_clean_iopf(iommu, dma);
 	put_task_struct(dma->task);
 	vfio_dma_bitmap_free(dma);
 	if (dma->vaddr_invalid) {
@@ -1655,6 +1667,16 @@ static int vfio_dma_do_map(struct vfio_iommu *iommu,
 		goto out_unlock;
 	}
 
+	if (iommu->iopf_enabled) {
+		dma->iopf_mapped_bitmap = kvzalloc(IOPF_MAPPED_BITMAP_BYTES(
+						size >> PAGE_SHIFT), GFP_KERNEL);
+		if (!dma->iopf_mapped_bitmap) {
+			ret = -ENOMEM;
+			kfree(dma);
+			goto out_unlock;
+		}
+	}
+
 	iommu->dma_avail--;
 	dma->iova = iova;
 	dma->vaddr = vaddr;
@@ -1694,8 +1716,11 @@ static int vfio_dma_do_map(struct vfio_iommu *iommu,
 	/* Insert zero-sized and grow as we map chunks of it */
 	vfio_link_dma(iommu, dma);
 
-	/* Don't pin and map if container doesn't contain IOMMU capable domain*/
-	if (!IS_IOMMU_CAP_DOMAIN_IN_CONTAINER(iommu))
+	/*
+	 * Don't pin and map if the container doesn't contain an IOMMU capable
+	 * domain, or IOPF is enabled for the container.
+	 */
+	if (!IS_IOMMU_CAP_DOMAIN_IN_CONTAINER(iommu) || iommu->iopf_enabled)
 		dma->size = size;
 	else
 		ret = vfio_pin_map_dma(iommu, dma, size);
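
A note on bitmap overhead, assuming DIRTY_BITMAP_BYTES() keeps its
current definition in vfio_iommu_type1.c (ALIGN(n, BITS_PER_TYPE(u64)) / 8):
the cost of IOPF_MAPPED_BITMAP_BYTES() is one bit per page, rounded up
to a multiple of 64 bits. A worked example of the arithmetic, in the
same illustrative spirit as the sketch above:

/*
 * Worked example of the bitmap size, assuming IOPF_MAPPED_BITMAP_BYTES()
 * resolves to ALIGN(n, 64) / 8, as DIRTY_BITMAP_BYTES() does today.
 */
#include <stdint.h>
#include <stdio.h>

static uint64_t iopf_mapped_bitmap_bytes(uint64_t npages)
{
        return ((npages + 63) & ~63ULL) / 8;    /* ALIGN(npages, 64) / 8 */
}

int main(void)
{
        uint64_t npages = (1ULL << 30) >> 12;   /* 1 GiB of 4 KiB pages */

        printf("%llu pages -> %llu bitmap bytes\n",
               (unsigned long long)npages,
               (unsigned long long)iopf_mapped_bitmap_bytes(npages));
        return 0;
}

So a 1 GiB mapping with 4 KiB pages costs a 32 KiB bitmap per vfio_dma.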