From patchwork Thu Jan 7 09:28:57 2021
X-Patchwork-Submitter: zhukeqian
X-Patchwork-Id: 12003329
From: Keqian Zhu
To: Alex Williamson, Kirti Wankhede, Cornelia Huck, Will Deacon,
 Marc Zyngier, Catalin Marinas
Cc: Mark Rutland, James Morse, Robin Murphy, Joerg Roedel,
 Daniel Lezcano, Thomas Gleixner, Suzuki K Poulose, Julien Thierry,
 Andrew Morton, Alexios Zavras
Subject: [PATCH 1/5] vfio/iommu_type1: Fix vfio_dma_populate_bitmap to avoid
 dirty log loss
Date: Thu, 7 Jan 2021 17:28:57 +0800
Message-ID: <20210107092901.19712-2-zhukeqian1@huawei.com>
In-Reply-To: <20210107092901.19712-1-zhukeqian1@huawei.com>
References: <20210107092901.19712-1-zhukeqian1@huawei.com>
X-Mailing-List: kvm@vger.kernel.org

Deferring the check of whether a vfio_dma is fully dirty until
update_user_bitmap() makes it easy to lose the dirty log. For example,
after the pinned scope of a vfio_iommu is promoted, the vfio_dma is no
longer considered fully dirty, so we may lose the dirty log generated
before the vfio_iommu was promoted.

The key point is that pinned-dirty is not a real dirty tracking
mechanism: it cannot continuously track dirty pages, it only restricts
the dirty scope. It is essentially the same as fully-dirty; fully-dirty
covers the full scope while pinned-dirty covers the pinned scope. So we
must mark pages pinned-dirty or fully-dirty right after we start dirty
tracking or clear the dirty bitmap, to ensure that the dirty log is
recorded immediately.
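For context, here is a minimal userspace sketch (not part of this series) of
the dirty-tracking flow that consumes this dirty log, roughly what a VMM
drives during live migration via the VFIO_IOMMU_DIRTY_PAGES ioctl. The
container descriptor, the IOVA range and the lack of error handling are
simplifying assumptions:

#include <stdlib.h>
#include <sys/ioctl.h>
#include <linux/vfio.h>

static void start_dirty_tracking(int container_fd)
{
	struct vfio_iommu_type1_dirty_bitmap dirty = {
		.argsz = sizeof(dirty),
		.flags = VFIO_IOMMU_DIRTY_PAGES_FLAG_START,
	};

	/* From now on the kernel must record every page dirtied by DMA. */
	ioctl(container_fd, VFIO_IOMMU_DIRTY_PAGES, &dirty);
}

static __u64 *get_dirty_bitmap(int container_fd, __u64 iova, __u64 size,
			       __u64 pgsize)
{
	/* One bit per page, rounded up to 64-bit words. */
	__u64 bitmap_bytes = ((size / pgsize + 63) / 64) * 8;
	__u64 *bitmap = calloc(1, bitmap_bytes);
	struct {
		struct vfio_iommu_type1_dirty_bitmap header;
		struct vfio_iommu_type1_dirty_bitmap_get range;
	} req = {
		.header = {
			.argsz = sizeof(req),
			.flags = VFIO_IOMMU_DIRTY_PAGES_FLAG_GET_BITMAP,
		},
		.range = {
			.iova = iova,
			.size = size,
			.bitmap = {
				.pgsize = pgsize,
				.size = bitmap_bytes,
				.data = bitmap,
			},
		},
	};

	/* Any page the kernel failed to mark dirty is silently missed here. */
	ioctl(container_fd, VFIO_IOMMU_DIRTY_PAGES, &req);
	return bitmap;
}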
Fixes: d6a4c185660c ("vfio iommu: Implementation of ioctl for dirty pages tracking")
Signed-off-by: Keqian Zhu
---
 drivers/vfio/vfio_iommu_type1.c | 33 ++++++++++++++++++++++-----------
 1 file changed, 22 insertions(+), 11 deletions(-)

diff --git a/drivers/vfio/vfio_iommu_type1.c b/drivers/vfio/vfio_iommu_type1.c
index bceda5e8baaa..b0a26e8e0adf 100644
--- a/drivers/vfio/vfio_iommu_type1.c
+++ b/drivers/vfio/vfio_iommu_type1.c
@@ -224,7 +224,7 @@ static void vfio_dma_bitmap_free(struct vfio_dma *dma)
 	dma->bitmap = NULL;
 }
 
-static void vfio_dma_populate_bitmap(struct vfio_dma *dma, size_t pgsize)
+static void vfio_dma_populate_bitmap_pinned(struct vfio_dma *dma, size_t pgsize)
 {
 	struct rb_node *p;
 	unsigned long pgshift = __ffs(pgsize);
@@ -236,6 +236,25 @@ static void vfio_dma_populate_bitmap(struct vfio_dma *dma, size_t pgsize)
 	}
 }
 
+static void vfio_dma_populate_bitmap_full(struct vfio_dma *dma, size_t pgsize)
+{
+	unsigned long pgshift = __ffs(pgsize);
+	unsigned long nbits = dma->size >> pgshift;
+
+	bitmap_set(dma->bitmap, 0, nbits);
+}
+
+static void vfio_dma_populate_bitmap(struct vfio_iommu *iommu,
+				     struct vfio_dma *dma)
+{
+	size_t pgsize = (size_t)1 << __ffs(iommu->pgsize_bitmap);
+
+	if (iommu->pinned_page_dirty_scope)
+		vfio_dma_populate_bitmap_pinned(dma, pgsize);
+	else if (dma->iommu_mapped)
+		vfio_dma_populate_bitmap_full(dma, pgsize);
+}
+
 static int vfio_dma_bitmap_alloc_all(struct vfio_iommu *iommu)
 {
 	struct rb_node *n;
@@ -257,7 +276,7 @@ static int vfio_dma_bitmap_alloc_all(struct vfio_iommu *iommu)
 			}
 			return ret;
 		}
-		vfio_dma_populate_bitmap(dma, pgsize);
+		vfio_dma_populate_bitmap(iommu, dma);
 	}
 	return 0;
 }
@@ -987,13 +1006,6 @@ static int update_user_bitmap(u64 __user *bitmap, struct vfio_iommu *iommu,
 	unsigned long shift = bit_offset % BITS_PER_LONG;
 	unsigned long leftover;
 
-	/*
-	 * mark all pages dirty if any IOMMU capable device is not able
-	 * to report dirty pages and all pages are pinned and mapped.
-	 */
-	if (!iommu->pinned_page_dirty_scope && dma->iommu_mapped)
-		bitmap_set(dma->bitmap, 0, nbits);
-
 	if (shift) {
 		bitmap_shift_left(dma->bitmap, dma->bitmap, shift,
 				  nbits + shift);
@@ -1019,7 +1031,6 @@ static int vfio_iova_dirty_bitmap(u64 __user *bitmap, struct vfio_iommu *iommu,
 	struct vfio_dma *dma;
 	struct rb_node *n;
 	unsigned long pgshift = __ffs(iommu->pgsize_bitmap);
-	size_t pgsize = (size_t)1 << pgshift;
 	int ret;
 
 	/*
@@ -1055,7 +1066,7 @@ static int vfio_iova_dirty_bitmap(u64 __user *bitmap, struct vfio_iommu *iommu,
 		 * pages which are marked dirty by vfio_dma_rw()
 		 */
 		bitmap_clear(dma->bitmap, 0, dma->size >> pgshift);
-		vfio_dma_populate_bitmap(dma, pgsize);
+		vfio_dma_populate_bitmap(iommu, dma);
 	}
 	return 0;
 }

From patchwork Thu Jan 7 09:28:58 2021
X-Patchwork-Submitter: zhukeqian
X-Patchwork-Id: 12003333
From: Keqian Zhu
To: Alex Williamson, Kirti Wankhede, Cornelia Huck, Will Deacon,
 Marc Zyngier, Catalin Marinas
Cc: Mark Rutland, James Morse, Robin Murphy, Joerg Roedel,
 Daniel Lezcano, Thomas Gleixner, Suzuki K Poulose, Julien Thierry,
 Andrew Morton, Alexios Zavras
Subject: [PATCH 2/5] vfio/iommu_type1: Populate dirty bitmap for new vfio_dma
Date: Thu, 7 Jan 2021 17:28:58 +0800
Message-ID: <20210107092901.19712-3-zhukeqian1@huawei.com>
In-Reply-To: <20210107092901.19712-1-zhukeqian1@huawei.com>
References: <20210107092901.19712-1-zhukeqian1@huawei.com>
X-Mailing-List: kvm@vger.kernel.org

We should populate the dirty bitmap for a newly added vfio_dma, as it
can be accessed arbitrarily if the vfio_iommu has not been promoted to
pinned scope. vfio_dma_populate_bitmap() handles this properly.
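To illustrate the case this patch covers, here is a hedged userspace sketch
of a new mapping being created while dirty tracking is already running (for
example, guest memory hot-added during migration). The descriptor and range
values are assumptions and error handling is omitted:

#include <sys/ioctl.h>
#include <linux/vfio.h>

static void map_new_range(int container_fd, void *vaddr, __u64 iova, __u64 size)
{
	struct vfio_iommu_type1_dma_map map = {
		.argsz = sizeof(map),
		.flags = VFIO_DMA_MAP_FLAG_READ | VFIO_DMA_MAP_FLAG_WRITE,
		.vaddr = (__u64)(unsigned long)vaddr,
		.iova  = iova,
		.size  = size,
	};

	/*
	 * This creates a new vfio_dma in the kernel. Its dirty bitmap must
	 * start out fully set, otherwise the range never shows up in the
	 * next GET_BITMAP even though the device may already DMA into it.
	 */
	ioctl(container_fd, VFIO_IOMMU_MAP_DMA, &map);
}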
Fixes: d6a4c185660c ("vfio iommu: Implementation of ioctl for dirty pages tracking")
Signed-off-by: Keqian Zhu
---
 drivers/vfio/vfio_iommu_type1.c | 4 +++-
 1 file changed, 3 insertions(+), 1 deletion(-)

diff --git a/drivers/vfio/vfio_iommu_type1.c b/drivers/vfio/vfio_iommu_type1.c
index b0a26e8e0adf..29c8702c3b6e 100644
--- a/drivers/vfio/vfio_iommu_type1.c
+++ b/drivers/vfio/vfio_iommu_type1.c
@@ -1413,7 +1413,9 @@ static int vfio_dma_do_map(struct vfio_iommu *iommu,
 	if (!ret && iommu->dirty_page_tracking) {
 		ret = vfio_dma_bitmap_alloc(dma, pgsize);
-		if (ret)
+		if (!ret)
+			vfio_dma_populate_bitmap(iommu, dma);
+		else
 			vfio_remove_dma(iommu, dma);
 	}

From patchwork Thu Jan 7 09:28:59 2021
X-Patchwork-Submitter: zhukeqian
X-Patchwork-Id: 12003335
From: Keqian Zhu
To: Alex Williamson, Kirti Wankhede, Cornelia Huck, Will Deacon,
 Marc Zyngier, Catalin Marinas
Cc: Mark Rutland, James Morse, Robin Murphy, Joerg Roedel,
 Daniel Lezcano, Thomas Gleixner, Suzuki K Poulose, Julien Thierry,
 Andrew Morton, Alexios Zavras
Subject: [PATCH 3/5] vfio/iommu_type1: Populate dirty bitmap when attaching a
 group
Date: Thu, 7 Jan 2021 17:28:59 +0800
Message-ID: <20210107092901.19712-4-zhukeqian1@huawei.com>
In-Reply-To: <20210107092901.19712-1-zhukeqian1@huawei.com>
References: <20210107092901.19712-1-zhukeqian1@huawei.com>
X-Mailing-List: kvm@vger.kernel.org

Attaching an iommu-backed group potentially gives it access to all DMA
ranges, so we should traverse all DMA ranges and mark them dirty.
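As background, here is a hedged sketch of the hot-add case this patch
handles: a group is attached to a container while dirty tracking is active.
The file descriptors are assumed to be already open and error handling is
omitted:

#include <sys/ioctl.h>
#include <linux/vfio.h>

static void hot_add_group(int group_fd, int container_fd)
{
	/* This lands in vfio_iommu_type1_attach_group() in the kernel. */
	ioctl(group_fd, VFIO_GROUP_SET_CONTAINER, &container_fd);

	/*
	 * The newly attached, IOMMU-backed device can now DMA into any
	 * existing mapping, so every vfio_dma must be marked dirty if
	 * tracking is already running.
	 */
}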
Fixes: d6a4c185660c ("vfio iommu: Implementation of ioctl for dirty pages tracking")
Signed-off-by: Keqian Zhu
---
 drivers/vfio/vfio_iommu_type1.c | 18 +++++++++++++++++-
 1 file changed, 17 insertions(+), 1 deletion(-)

diff --git a/drivers/vfio/vfio_iommu_type1.c b/drivers/vfio/vfio_iommu_type1.c
index 29c8702c3b6e..26b7eb2a5cfc 100644
--- a/drivers/vfio/vfio_iommu_type1.c
+++ b/drivers/vfio/vfio_iommu_type1.c
@@ -255,6 +255,17 @@ static void vfio_dma_populate_bitmap(struct vfio_iommu *iommu,
 		vfio_dma_populate_bitmap_full(dma, pgsize);
 }
 
+static void vfio_iommu_populate_bitmap(struct vfio_iommu *iommu)
+{
+	struct rb_node *n;
+	struct vfio_dma *dma;
+
+	for (n = rb_first(&iommu->dma_list); n; n = rb_next(n)) {
+		dma = rb_entry(n, struct vfio_dma, node);
+		vfio_dma_populate_bitmap(iommu, dma);
+	}
+}
+
 static int vfio_dma_bitmap_alloc_all(struct vfio_iommu *iommu)
 {
 	struct rb_node *n;
@@ -2190,7 +2201,12 @@ static int vfio_iommu_type1_attach_group(void *iommu_data,
 	 * demotes the iommu scope until it declares itself dirty tracking
 	 * capable via the page pinning interface.
 	 */
-	iommu->pinned_page_dirty_scope = false;
+	if (iommu->pinned_page_dirty_scope) {
+		iommu->pinned_page_dirty_scope = false;
+		if (iommu->dirty_page_tracking)
+			vfio_iommu_populate_bitmap(iommu);
+	}
+
 	mutex_unlock(&iommu->lock);
 	vfio_iommu_resv_free(&group_resv_regions);

From patchwork Thu Jan 7 09:29:00 2021
X-Patchwork-Submitter: zhukeqian
X-Patchwork-Id: 12003331
From: Keqian Zhu
To: Alex Williamson, Kirti Wankhede, Cornelia Huck, Will Deacon,
 Marc Zyngier, Catalin Marinas
Cc: Mark Rutland, James Morse, Robin Murphy, Joerg Roedel,
 Daniel Lezcano, Thomas Gleixner, Suzuki K Poulose, Julien Thierry,
 Andrew Morton, Alexios Zavras
Subject: [PATCH 4/5] vfio/iommu_type1: Carefully use unmap_unpin_all during
 dirty tracking
Date: Thu, 7 Jan 2021 17:29:00 +0800
Message-ID: <20210107092901.19712-5-zhukeqian1@huawei.com>
In-Reply-To: <20210107092901.19712-1-zhukeqian1@huawei.com>
References: <20210107092901.19712-1-zhukeqian1@huawei.com>
X-Mailing-List: kvm@vger.kernel.org

If we detach a group during dirty page tracking, we shouldn't remove
the vfio_dma, because the dirty log would be lost. However, we don't
prevent unmap_unpin_all in vfio_iommu_release, because under the normal
procedure dirty tracking has already been stopped by then.

Fixes: d6a4c185660c ("vfio iommu: Implementation of ioctl for dirty pages tracking")
Signed-off-by: Keqian Zhu
---
 drivers/vfio/vfio_iommu_type1.c | 14 ++++++++++++--
 1 file changed, 12 insertions(+), 2 deletions(-)

diff --git a/drivers/vfio/vfio_iommu_type1.c b/drivers/vfio/vfio_iommu_type1.c
index 26b7eb2a5cfc..9776a059904d 100644
--- a/drivers/vfio/vfio_iommu_type1.c
+++ b/drivers/vfio/vfio_iommu_type1.c
@@ -2373,7 +2373,12 @@ static void vfio_iommu_type1_detach_group(void *iommu_data,
 		if (list_empty(&iommu->external_domain->group_list)) {
 			vfio_sanity_check_pfn_list(iommu);
 
-			if (!IS_IOMMU_CAP_DOMAIN_IN_CONTAINER(iommu))
+			/*
+			 * During dirty page tracking, we can't remove
+			 * vfio_dma because the dirty log would be lost.
+			 */
+			if (!IS_IOMMU_CAP_DOMAIN_IN_CONTAINER(iommu) &&
+			    !iommu->dirty_page_tracking)
 				vfio_iommu_unmap_unpin_all(iommu);
 
 			kfree(iommu->external_domain);
@@ -2406,10 +2411,15 @@ static void vfio_iommu_type1_detach_group(void *iommu_data,
 	 * iommu and external domain doesn't exist, then all the
 	 * mappings go away too. If it's the last domain with iommu and
 	 * external domain exist, update accounting
+	 *
+	 * Note: During dirty page tracking, we can't remove vfio_dma
+	 * because the dirty log would be lost. Just updating the
+	 * accounting is a good choice.
 	 */
 	if (list_empty(&domain->group_list)) {
 		if (list_is_singular(&iommu->domain_list)) {
-			if (!iommu->external_domain)
+			if (!iommu->external_domain &&
+			    !iommu->dirty_page_tracking)
 				vfio_iommu_unmap_unpin_all(iommu);
 			else
 				vfio_iommu_unmap_unpin_reaccount(iommu);

From patchwork Thu Jan 7 09:29:01 2021
X-Patchwork-Submitter: zhukeqian
X-Patchwork-Id: 12003327
From: Keqian Zhu
To: Alex Williamson, Kirti Wankhede, Cornelia Huck, Will Deacon,
 Marc Zyngier, Catalin Marinas
Cc: Mark Rutland, James Morse, Robin Murphy, Joerg Roedel,
 Daniel Lezcano, Thomas Gleixner, Suzuki K Poulose, Julien Thierry,
 Andrew Morton, Alexios Zavras
Subject: [PATCH 5/5] vfio/iommu_type1: Move sanity_check_pfn_list to
 unmap_unpin_all
Date: Thu, 7 Jan 2021 17:29:01 +0800
Message-ID: <20210107092901.19712-6-zhukeqian1@huawei.com>
In-Reply-To: <20210107092901.19712-1-zhukeqian1@huawei.com>
References: <20210107092901.19712-1-zhukeqian1@huawei.com>
X-Mailing-List: kvm@vger.kernel.org

Both the external domain and the iommu backend domain can use the
pinning interface, so both can add pfns to a dma's pfn_list. Moving
sanity_check_pfn_list into unmap_unpin_all applies it to all types of
domain.

Fixes: a54eb55045ae ("vfio iommu type1: Add support for mediated devices")
Signed-off-by: Keqian Zhu
---
 drivers/vfio/vfio_iommu_type1.c | 38 ++++++++++++++++-----------------
 1 file changed, 18 insertions(+), 20 deletions(-)

diff --git a/drivers/vfio/vfio_iommu_type1.c b/drivers/vfio/vfio_iommu_type1.c
index 9776a059904d..d796be8bcbc5 100644
--- a/drivers/vfio/vfio_iommu_type1.c
+++ b/drivers/vfio/vfio_iommu_type1.c
@@ -2225,10 +2225,28 @@ static int vfio_iommu_type1_attach_group(void *iommu_data,
 	return ret;
 }
 
+static void vfio_sanity_check_pfn_list(struct vfio_iommu *iommu)
+{
+	struct rb_node *n;
+
+	n = rb_first(&iommu->dma_list);
+	for (; n; n = rb_next(n)) {
+		struct vfio_dma *dma;
+
+		dma = rb_entry(n, struct vfio_dma, node);
+
+		if (WARN_ON(!RB_EMPTY_ROOT(&dma->pfn_list)))
+			break;
+	}
+	/* mdev vendor driver must unregister notifier */
+	WARN_ON(iommu->notifier.head);
+}
+
 static void vfio_iommu_unmap_unpin_all(struct vfio_iommu *iommu)
 {
 	struct rb_node *node;
 
+	vfio_sanity_check_pfn_list(iommu);
 	while ((node = rb_first(&iommu->dma_list)))
 		vfio_remove_dma(iommu, rb_entry(node, struct vfio_dma, node));
 }
@@ -2256,23 +2274,6 @@ static void vfio_iommu_unmap_unpin_reaccount(struct vfio_iommu *iommu)
 	}
 }
 
-static void vfio_sanity_check_pfn_list(struct vfio_iommu *iommu)
-{
-	struct rb_node *n;
-
-	n = rb_first(&iommu->dma_list);
-	for (; n; n = rb_next(n)) {
-		struct vfio_dma *dma;
-
-		dma = rb_entry(n, struct vfio_dma, node);
-
-		if (WARN_ON(!RB_EMPTY_ROOT(&dma->pfn_list)))
-			break;
-	}
-	/* mdev vendor driver must unregister notifier */
-	WARN_ON(iommu->notifier.head);
-}
-
 /*
  * Called when a domain is removed in detach. It is possible that
  * the removed domain decided the iova aperture window. Modify the
@@ -2371,8 +2372,6 @@ static void vfio_iommu_type1_detach_group(void *iommu_data,
 		kfree(group);
 
 		if (list_empty(&iommu->external_domain->group_list)) {
-			vfio_sanity_check_pfn_list(iommu);
-
 			/*
 			 * During dirty page tracking, we can't remove
 			 * vfio_dma because the dirty log would be lost.
@@ -2503,7 +2502,6 @@ static void vfio_iommu_type1_release(void *iommu_data)
 
 	if (iommu->external_domain) {
 		vfio_release_domain(iommu->external_domain, true);
-		vfio_sanity_check_pfn_list(iommu);
 		kfree(iommu->external_domain);
 	}
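To illustrate why the pfn_list sanity check belongs in the common teardown
path, here is a hedged sketch of the mdev vendor-driver pinning calls that
populate a vfio_dma's pfn_list. The function names follow the in-kernel API
of roughly this era as far as I know; the mdev device and pfn values are
placeholders and error handling is omitted:

#include <linux/iommu.h>
#include <linux/mdev.h>
#include <linux/vfio.h>

/* Pin one guest page; vfio_iommu_type1 adds a vfio_pfn to the dma's pfn_list. */
static int vendor_pin_one_page(struct mdev_device *mdev, unsigned long user_pfn)
{
	unsigned long phys_pfn;

	return vfio_pin_pages(mdev_dev(mdev), &user_pfn, 1,
			      IOMMU_READ | IOMMU_WRITE, &phys_pfn);
}

/* Unpin it again; by container teardown every pfn_list should be empty. */
static void vendor_unpin_one_page(struct mdev_device *mdev, unsigned long user_pfn)
{
	vfio_unpin_pages(mdev_dev(mdev), &user_pfn, 1);
}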