From patchwork Thu Dec 10 07:34:19 2020
From: Keqian Zhu
To: Alex Williamson, Cornelia Huck, Marc Zyngier, Will Deacon, Robin Murphy
Cc: Joerg Roedel, Catalin Marinas, James Morse, Suzuki K Poulose,
    Sean Christopherson, Julien Thierry, Mark Brown, Thomas Gleixner,
    Andrew Morton, Alexios Zavras, Keqian Zhu
Subject: [PATCH 1/7] vfio: iommu_type1: Clear added dirty bit when unwind pin
Date: Thu, 10 Dec 2020 15:34:19 +0800
Message-ID: <20201210073425.25960-2-zhukeqian1@huawei.com>
In-Reply-To: <20201210073425.25960-1-zhukeqian1@huawei.com>
X-Mailing-List: kvm@vger.kernel.org

Currently we do not clear the dirty bits added to the bitmap when we
unwind a pin operation, so if pinning fails partway through, unnecessary
dirty bits are left set in the bitmap. If we clear the newly added dirty
bits during unwind, userspace sees fewer dirty pages, which saves it
considerable time handling them.

Note that we must distinguish the bits added by this pin operation from
the bits that were already set before it, so introduce bitmap_added to
record the former.
Signed-off-by: Keqian Zhu
Reported-by: kernel test robot
Reported-by: Dan Carpenter
---
 drivers/vfio/vfio_iommu_type1.c | 33 ++++++++++++++++++++++-----------
 1 file changed, 22 insertions(+), 11 deletions(-)

diff --git a/drivers/vfio/vfio_iommu_type1.c b/drivers/vfio/vfio_iommu_type1.c
index 67e827638995..f129d24a6ec3 100644
--- a/drivers/vfio/vfio_iommu_type1.c
+++ b/drivers/vfio/vfio_iommu_type1.c
@@ -637,7 +637,11 @@ static int vfio_iommu_type1_pin_pages(void *iommu_data,
 	struct vfio_iommu *iommu = iommu_data;
 	struct vfio_group *group;
 	int i, j, ret;
+	unsigned long pgshift = __ffs(iommu->pgsize_bitmap);
 	unsigned long remote_vaddr;
+	unsigned long bitmap_offset;
+	unsigned long *bitmap_added;
+	dma_addr_t iova;
 	struct vfio_dma *dma;
 	bool do_accounting;
 
@@ -650,6 +654,12 @@ static int vfio_iommu_type1_pin_pages(void *iommu_data,
 
 	mutex_lock(&iommu->lock);
 
+	bitmap_added = bitmap_zalloc(npage, GFP_KERNEL);
+	if (!bitmap_added) {
+		ret = -ENOMEM;
+		goto pin_done;
+	}
+
 	/* Fail if notifier list is empty */
 	if (!iommu->notifier.head) {
 		ret = -EINVAL;
@@ -664,7 +674,6 @@ static int vfio_iommu_type1_pin_pages(void *iommu_data,
 	do_accounting = !IS_IOMMU_CAP_DOMAIN_IN_CONTAINER(iommu);
 
 	for (i = 0; i < npage; i++) {
-		dma_addr_t iova;
 		struct vfio_pfn *vpfn;
 
 		iova = user_pfn[i] << PAGE_SHIFT;
@@ -699,14 +708,10 @@ static int vfio_iommu_type1_pin_pages(void *iommu_data,
 		}
 
 		if (iommu->dirty_page_tracking) {
-			unsigned long pgshift = __ffs(iommu->pgsize_bitmap);
-
-			/*
-			 * Bitmap populated with the smallest supported page
-			 * size
-			 */
-			bitmap_set(dma->bitmap,
-				   (iova - dma->iova) >> pgshift, 1);
+			/* Populated with the smallest supported page size */
+			bitmap_offset = (iova - dma->iova) >> pgshift;
+			if (!test_and_set_bit(bitmap_offset, dma->bitmap))
+				set_bit(i, bitmap_added);
 		}
 	}
 	ret = i;
@@ -722,14 +727,20 @@ static int vfio_iommu_type1_pin_pages(void *iommu_data,
 pin_unwind:
 	phys_pfn[i] = 0;
 	for (j = 0; j < i; j++) {
-		dma_addr_t iova;
-
 		iova = user_pfn[j] << PAGE_SHIFT;
 		dma = vfio_find_dma(iommu, iova, PAGE_SIZE);
 		vfio_unpin_page_external(dma, iova, do_accounting);
 		phys_pfn[j] = 0;
+
+		if (test_bit(j, bitmap_added)) {
+			bitmap_offset = (iova - dma->iova) >> pgshift;
+			clear_bit(bitmap_offset, dma->bitmap);
+		}
 	}
 pin_done:
+	if (bitmap_added)
+		bitmap_free(bitmap_added);
+
 	mutex_unlock(&iommu->lock);
 	return ret;
 }
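The pattern the patch adopts can be seen in isolation: a scratch bitmap
records only the bits that this pin call newly set, so a failure rolls
back exactly those bits and leaves pre-existing dirty bits alone. Below
is a minimal userspace C sketch of the idea; the single-word helpers and
the simulated failure are illustrative stand-ins, not the kernel API.

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

/* Toy single-word stand-ins for the kernel's bitmap helpers. */
static bool test_and_set_bit64(uint64_t *map, unsigned int bit)
{
	bool was_set = *map & (1ULL << bit);

	*map |= 1ULL << bit;
	return was_set;
}

static void clear_bit64(uint64_t *map, unsigned int bit)
{
	*map &= ~(1ULL << bit);
}

static bool test_bit64(uint64_t map, unsigned int bit)
{
	return map & (1ULL << bit);
}

int main(void)
{
	uint64_t dirty = 0x5;	/* pages 0 and 2 were dirty before the pin */
	uint64_t added = 0;	/* scratch: bit i = "pin i newly dirtied a page" */
	unsigned int page_of[] = { 0, 1, 2, 3 };
	unsigned int npage = 4, i, j;

	for (i = 0; i < npage; i++) {
		if (i == 3)	/* simulate a pin failure on the 4th page */
			goto unwind;
		if (!test_and_set_bit64(&dirty, page_of[i]))
			added |= 1ULL << i;	/* we set it, so we must undo it */
	}
	return 0;

unwind:
	for (j = 0; j < i; j++)	/* clear only the bits this call added */
		if (test_bit64(added, j))
			clear_bit64(&dirty, page_of[j]);
	printf("dirty after unwind: %#llx\n", (unsigned long long)dirty); /* 0x5 */
	return 1;
}

The unwind leaves dirty at its pre-pin value of 0x5; without the added
bitmap, page 1 would stay marked dirty even though its pin was rolled
back, which is exactly the unnecessary dirty bit the patch removes.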
From patchwork Thu Dec 10 07:34:20 2020
From: Keqian Zhu
Subject: [PATCH 2/7] vfio: iommu_type1: Initially set the pinned_page_dirty_scope
Date: Thu, 10 Dec 2020 15:34:20 +0800
Message-ID: <20201210073425.25960-3-zhukeqian1@huawei.com>
In-Reply-To: <20201210073425.25960-1-zhukeqian1@huawei.com>

Currently there are three ways to promote the pinned_page_dirty_scope
status of a vfio_iommu:

1. Through the pin interface.
2. Detaching a group without dirty tracking.
3. Attaching a group with dirty tracking.

For point 3, the only case in which the attach can change the pinned
status is when the vfio_iommu is newly created. Since we can safely set
the pinned status when creating a new vfio_iommu, doing so lets us drop
point 3 and reduce the number of operations.

Signed-off-by: Keqian Zhu
---
 drivers/vfio/vfio_iommu_type1.c | 5 +----
 1 file changed, 1 insertion(+), 4 deletions(-)

diff --git a/drivers/vfio/vfio_iommu_type1.c b/drivers/vfio/vfio_iommu_type1.c
index f129d24a6ec3..c52bcefba96b 100644
--- a/drivers/vfio/vfio_iommu_type1.c
+++ b/drivers/vfio/vfio_iommu_type1.c
@@ -2064,12 +2064,8 @@ static int vfio_iommu_type1_attach_group(void *iommu_data,
 			 * Non-iommu backed group cannot dirty memory directly,
 			 * it can only use interfaces that provide dirty
 			 * tracking.
-			 * The iommu scope can only be promoted with the
-			 * addition of a dirty tracking group.
 			 */
 			group->pinned_page_dirty_scope = true;
-			if (!iommu->pinned_page_dirty_scope)
-				update_pinned_page_dirty_scope(iommu);
 			mutex_unlock(&iommu->lock);
 
 			return 0;
@@ -2457,6 +2453,7 @@ static void *vfio_iommu_type1_open(unsigned long arg)
 	INIT_LIST_HEAD(&iommu->iova_list);
 	iommu->dma_list = RB_ROOT;
 	iommu->dma_avail = dma_entry_limit;
+	iommu->pinned_page_dirty_scope = true;
 	mutex_init(&iommu->lock);
 	BLOCKING_INIT_NOTIFIER_HEAD(&iommu->notifier);
From patchwork Thu Dec 10 07:34:21 2020
From: Keqian Zhu
Subject: [PATCH 3/7] vfio: iommu_type1: Make an explicit "promote" semantic
Date: Thu, 10 Dec 2020 15:34:21 +0800
Message-ID: <20201210073425.25960-4-zhukeqian1@huawei.com>
In-Reply-To: <20201210073425.25960-1-zhukeqian1@huawei.com>

When we want to promote the pinned_page_dirty_scope of a vfio_iommu, we
have to call the "update" function to visit all vfio_groups, but when we
want to downgrade it, we can update the flag directly. Given that, we
can give the function an explicit "promote" semantic. As a bonus, if the
vfio_iommu has already been promoted, the function can return early.
Signed-off-by: Keqian Zhu
---
 drivers/vfio/vfio_iommu_type1.c | 27 +++++++++++++--------------
 1 file changed, 13 insertions(+), 14 deletions(-)

diff --git a/drivers/vfio/vfio_iommu_type1.c b/drivers/vfio/vfio_iommu_type1.c
index c52bcefba96b..bd9a94590ebc 100644
--- a/drivers/vfio/vfio_iommu_type1.c
+++ b/drivers/vfio/vfio_iommu_type1.c
@@ -148,7 +148,7 @@ static int put_pfn(unsigned long pfn, int prot);
 static struct vfio_group *vfio_iommu_find_iommu_group(struct vfio_iommu *iommu,
 					       struct iommu_group *iommu_group);
 
-static void update_pinned_page_dirty_scope(struct vfio_iommu *iommu);
+static void promote_pinned_page_dirty_scope(struct vfio_iommu *iommu);
 /*
  * This code handles mapping and unmapping of user data buffers
  * into DMA'ble space using the IOMMU
@@ -719,7 +719,7 @@ static int vfio_iommu_type1_pin_pages(void *iommu_data,
 	group = vfio_iommu_find_iommu_group(iommu, iommu_group);
 	if (!group->pinned_page_dirty_scope) {
 		group->pinned_page_dirty_scope = true;
-		update_pinned_page_dirty_scope(iommu);
+		promote_pinned_page_dirty_scope(iommu);
 	}
 
 	goto pin_done;
@@ -1633,27 +1633,26 @@ static struct vfio_group *vfio_iommu_find_iommu_group(struct vfio_iommu *iommu,
 	return group;
 }
 
-static void update_pinned_page_dirty_scope(struct vfio_iommu *iommu)
+static void promote_pinned_page_dirty_scope(struct vfio_iommu *iommu)
 {
 	struct vfio_domain *domain;
 	struct vfio_group *group;
 
+	if (iommu->pinned_page_dirty_scope)
+		return;
+
 	list_for_each_entry(domain, &iommu->domain_list, next) {
 		list_for_each_entry(group, &domain->group_list, next) {
-			if (!group->pinned_page_dirty_scope) {
-				iommu->pinned_page_dirty_scope = false;
+			if (!group->pinned_page_dirty_scope)
 				return;
-			}
 		}
 	}
 
 	if (iommu->external_domain) {
 		domain = iommu->external_domain;
 		list_for_each_entry(group, &domain->group_list, next) {
-			if (!group->pinned_page_dirty_scope) {
-				iommu->pinned_page_dirty_scope = false;
+			if (!group->pinned_page_dirty_scope)
 				return;
-			}
 		}
 	}
 
@@ -2348,7 +2347,7 @@ static void vfio_iommu_type1_detach_group(void *iommu_data,
 	struct vfio_iommu *iommu = iommu_data;
 	struct vfio_domain *domain;
 	struct vfio_group *group;
-	bool update_dirty_scope = false;
+	bool promote_dirty_scope = false;
 	LIST_HEAD(iova_copy);
 
 	mutex_lock(&iommu->lock);
@@ -2356,7 +2355,7 @@ static void vfio_iommu_type1_detach_group(void *iommu_data,
 
 	if (iommu->external_domain) {
 		group = find_iommu_group(iommu->external_domain, iommu_group);
 		if (group) {
-			update_dirty_scope = !group->pinned_page_dirty_scope;
+			promote_dirty_scope = !group->pinned_page_dirty_scope;
 			list_del(&group->next);
 			kfree(group);
@@ -2386,7 +2385,7 @@ static void vfio_iommu_type1_detach_group(void *iommu_data,
 			continue;
 
 		vfio_iommu_detach_group(domain, group);
-		update_dirty_scope = !group->pinned_page_dirty_scope;
+		promote_dirty_scope = !group->pinned_page_dirty_scope;
 		list_del(&group->next);
 		kfree(group);
 		/*
@@ -2422,8 +2421,8 @@ static void vfio_iommu_type1_detach_group(void *iommu_data,
 	 * Removal of a group without dirty tracking may allow the iommu scope
 	 * to be promoted.
 	 */
-	if (update_dirty_scope)
-		update_pinned_page_dirty_scope(iommu);
+	if (promote_dirty_scope)
+		promote_pinned_page_dirty_scope(iommu);
 	mutex_unlock(&iommu->lock);
 }
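Why the rename fits: the two directions are asymmetric. Downgrading is a
direct flag write by whichever path attaches a non-tracking group, while
promotion has to prove that every group confines dirtying to pinned
pages, and can short-circuit once the iommu is already promoted. A
compact sketch of that shape, using hypothetical flattened types rather
than the real vfio structures:

#include <stdbool.h>
#include <stddef.h>
#include <stdio.h>

/* Hypothetical stand-ins, flattened to one group list per iommu. */
struct group {
	bool pinned_page_dirty_scope;
	struct group *next;
};

struct iommu {
	bool pinned_page_dirty_scope;
	struct group *groups;
};

/* Promotion scans all groups; one holdout blocks it. */
static void promote_pinned_page_dirty_scope(struct iommu *iommu)
{
	struct group *g;

	if (iommu->pinned_page_dirty_scope)
		return;				/* already promoted */

	for (g = iommu->groups; g; g = g->next)
		if (!g->pinned_page_dirty_scope)
			return;			/* holdout: stay demoted */

	iommu->pinned_page_dirty_scope = true;
}

/* Downgrade needs no scan: one non-tracking group suffices. */
static void downgrade(struct iommu *iommu)
{
	iommu->pinned_page_dirty_scope = false;
}

int main(void)
{
	struct group g1 = { true, NULL }, g0 = { false, &g1 };
	struct iommu iommu = { false, &g0 };

	promote_pinned_page_dirty_scope(&iommu);	/* blocked by g0 */
	printf("%d\n", iommu.pinned_page_dirty_scope);	/* 0 */

	g0.pinned_page_dirty_scope = true;
	promote_pinned_page_dirty_scope(&iommu);	/* all groups agree */
	printf("%d\n", iommu.pinned_page_dirty_scope);	/* 1 */

	downgrade(&iommu);
	return 0;
}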
From patchwork Thu Dec 10 07:34:22 2020
From: Keqian Zhu
Subject: [PATCH 4/7] vfio: iommu_type1: Fix missing dirty page when promote pinned_scope
Date: Thu, 10 Dec 2020 15:34:22 +0800
Message-ID: <20201210073425.25960-5-zhukeqian1@huawei.com>
In-Reply-To: <20201210073425.25960-1-zhukeqian1@huawei.com>

When we pin or detach a group that is not capable of dirty tracking, we
try to promote the pinned_scope of the vfio_iommu. If we succeed, vfio
thereafter reports only pinned pages as dirty to userspace, so memory
written before the pin or detach is missed.

The fix is to populate all DMA ranges as dirty before promoting the
pinned_scope of the vfio_iommu.
Signed-off-by: Keqian Zhu
---
 drivers/vfio/vfio_iommu_type1.c | 18 ++++++++++++++++++
 1 file changed, 18 insertions(+)

diff --git a/drivers/vfio/vfio_iommu_type1.c b/drivers/vfio/vfio_iommu_type1.c
index bd9a94590ebc..00684597b098 100644
--- a/drivers/vfio/vfio_iommu_type1.c
+++ b/drivers/vfio/vfio_iommu_type1.c
@@ -1633,6 +1633,20 @@ static struct vfio_group *vfio_iommu_find_iommu_group(struct vfio_iommu *iommu,
 	return group;
 }
 
+static void vfio_populate_bitmap_all(struct vfio_iommu *iommu)
+{
+	struct rb_node *n;
+	unsigned long pgshift = __ffs(iommu->pgsize_bitmap);
+
+	for (n = rb_first(&iommu->dma_list); n; n = rb_next(n)) {
+		struct vfio_dma *dma = rb_entry(n, struct vfio_dma, node);
+		unsigned long nbits = dma->size >> pgshift;
+
+		if (dma->iommu_mapped)
+			bitmap_set(dma->bitmap, 0, nbits);
+	}
+}
+
 static void promote_pinned_page_dirty_scope(struct vfio_iommu *iommu)
 {
 	struct vfio_domain *domain;
@@ -1657,6 +1671,10 @@ static void promote_pinned_page_dirty_scope(struct vfio_iommu *iommu)
 	}
 
 	iommu->pinned_page_dirty_scope = true;
+
+	/* Set all bitmap to avoid missing dirty page */
+	if (iommu->dirty_page_tracking)
+		vfio_populate_bitmap_all(iommu);
 }
 
 static bool vfio_iommu_has_sw_msi(struct list_head *group_resv_regions,
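The conservative step is easy to picture outside the kernel: writes that
happened before promotion were never tracked, so every IOMMU-mapped
range must be assumed dirty at the smallest-page granularity, i.e.
nbits = size >> pgshift bits set per range. A small userspace sketch,
with a hypothetical stand-in for the vfio_dma fields involved:

#include <stdint.h>
#include <stdio.h>

/* Hypothetical stand-in for the vfio_dma fields the patch uses. */
struct dma_range {
	uint64_t iova;
	uint64_t size;
	int iommu_mapped;
};

/* Mark a whole range dirty at the smallest supported page size,
 * mirroring the shape of vfio_populate_bitmap_all(). */
static void mark_range_dirty(uint8_t *bitmap, const struct dma_range *dma,
			     unsigned int pgshift)
{
	uint64_t nbits = dma->size >> pgshift;
	uint64_t i;

	if (!dma->iommu_mapped)
		return;		/* only IOMMU-mapped ranges can be written */
	for (i = 0; i < nbits; i++)
		bitmap[i / 8] |= 1u << (i % 8);
}

int main(void)
{
	uint8_t bitmap[2] = { 0 };
	struct dma_range dma = { .iova = 0x100000, .size = 0x10000,
				 .iommu_mapped = 1 };

	mark_range_dirty(bitmap, &dma, 12);	/* 4 KiB pages -> 16 bits */
	printf("%#x %#x\n", bitmap[0], bitmap[1]);	/* 0xff 0xff */
	return 0;
}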
From patchwork Thu Dec 10 07:34:23 2020
From: Keqian Zhu
Subject: [PATCH 5/7] vfio: iommu_type1: Drop parameter "pgsize" of vfio_dma_bitmap_alloc_all
Date: Thu, 10 Dec 2020 15:34:23 +0800
Message-ID: <20201210073425.25960-6-zhukeqian1@huawei.com>
In-Reply-To: <20201210073425.25960-1-zhukeqian1@huawei.com>

We always use the smallest supported page size of the vfio_iommu as
pgsize, so there is no need to pass it in. Drop the "pgsize" parameter
of vfio_dma_bitmap_alloc_all.

Signed-off-by: Keqian Zhu
---
 drivers/vfio/vfio_iommu_type1.c | 8 +++-----
 1 file changed, 3 insertions(+), 5 deletions(-)

diff --git a/drivers/vfio/vfio_iommu_type1.c b/drivers/vfio/vfio_iommu_type1.c
index 00684597b098..32ab889c8193 100644
--- a/drivers/vfio/vfio_iommu_type1.c
+++ b/drivers/vfio/vfio_iommu_type1.c
@@ -236,9 +236,10 @@ static void vfio_dma_populate_bitmap(struct vfio_dma *dma, size_t pgsize)
 	}
 }
 
-static int vfio_dma_bitmap_alloc_all(struct vfio_iommu *iommu, size_t pgsize)
+static int vfio_dma_bitmap_alloc_all(struct vfio_iommu *iommu)
 {
 	struct rb_node *n;
+	size_t pgsize = (size_t)1 << __ffs(iommu->pgsize_bitmap);
 
 	for (n = rb_first(&iommu->dma_list); n; n = rb_next(n)) {
 		struct vfio_dma *dma = rb_entry(n, struct vfio_dma, node);
@@ -2798,12 +2799,9 @@ static int vfio_iommu_type1_dirty_pages(struct vfio_iommu *iommu,
 		return -EINVAL;
 
 	if (dirty.flags & VFIO_IOMMU_DIRTY_PAGES_FLAG_START) {
-		size_t pgsize;
-
 		mutex_lock(&iommu->lock);
-		pgsize = 1 << __ffs(iommu->pgsize_bitmap);
 		if (!iommu->dirty_page_tracking) {
-			ret = vfio_dma_bitmap_alloc_all(iommu, pgsize);
+			ret = vfio_dma_bitmap_alloc_all(iommu);
 			if (!ret)
 				iommu->dirty_page_tracking = true;
 		}
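Patches 5 through 7 all lean on the same idiom: the smallest supported
page size is the lowest set bit of iommu->pgsize_bitmap, so pgshift =
__ffs(bitmap) and pgsize = 1 << pgshift can be derived wherever they are
needed instead of being threaded through as parameters. A userspace
illustration, approximating the kernel's __ffs() with the GCC/Clang
count-trailing-zeros builtin:

#include <stddef.h>
#include <stdio.h>

/* Userspace stand-in for the kernel's __ffs(): index of the lowest
 * set bit. Like __ffs(), undefined if word == 0. */
static unsigned long my_ffs(unsigned long word)
{
	return (unsigned long)__builtin_ctzl(word);
}

int main(void)
{
	/* e.g. an IOMMU supporting 4 KiB, 2 MiB and 1 GiB pages */
	unsigned long pgsize_bitmap = (1UL << 12) | (1UL << 21) | (1UL << 30);
	unsigned long pgshift = my_ffs(pgsize_bitmap);
	size_t pgsize = (size_t)1 << pgshift;

	/* prints: pgshift=12 pgsize=4096 */
	printf("pgshift=%lu pgsize=%zu\n", pgshift, pgsize);
	return 0;
}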
From patchwork Thu Dec 10 07:34:24 2020
From: Keqian Zhu
Subject: [PATCH 6/7] vfio: iommu_type1: Drop parameter "pgsize" of vfio_iova_dirty_bitmap
Date: Thu, 10 Dec 2020 15:34:24 +0800
Message-ID: <20201210073425.25960-7-zhukeqian1@huawei.com>
In-Reply-To: <20201210073425.25960-1-zhukeqian1@huawei.com>

We always use the smallest supported page size of the vfio_iommu as
pgsize, so there is no need to pass it in. Drop the "pgsize" parameter
of vfio_iova_dirty_bitmap.

Signed-off-by: Keqian Zhu
---
 drivers/vfio/vfio_iommu_type1.c | 8 ++++----
 1 file changed, 4 insertions(+), 4 deletions(-)

diff --git a/drivers/vfio/vfio_iommu_type1.c b/drivers/vfio/vfio_iommu_type1.c
index 32ab889c8193..2d7a5cd9b916 100644
--- a/drivers/vfio/vfio_iommu_type1.c
+++ b/drivers/vfio/vfio_iommu_type1.c
@@ -1026,11 +1026,12 @@ static int update_user_bitmap(u64 __user *bitmap, struct vfio_iommu *iommu,
 }
 
 static int vfio_iova_dirty_bitmap(u64 __user *bitmap, struct vfio_iommu *iommu,
-				  dma_addr_t iova, size_t size, size_t pgsize)
+				  dma_addr_t iova, size_t size)
 {
 	struct vfio_dma *dma;
 	struct rb_node *n;
-	unsigned long pgshift = __ffs(pgsize);
+	unsigned long pgshift = __ffs(iommu->pgsize_bitmap);
+	size_t pgsize = (size_t)1 << pgshift;
 	int ret;
 
 	/*
@@ -2861,8 +2862,7 @@ static int vfio_iommu_type1_dirty_pages(struct vfio_iommu *iommu,
 		if (iommu->dirty_page_tracking)
 			ret = vfio_iova_dirty_bitmap(range.bitmap.data,
 						     iommu, range.iova,
-						     range.size,
-						     range.bitmap.pgsize);
+						     range.size);
 		else
 			ret = -EINVAL;
 out_unlock:
From patchwork Thu Dec 10 07:34:25 2020
From: Keqian Zhu
Subject: [PATCH 7/7] vfio: iommu_type1: Drop parameter "pgsize" of update_user_bitmap
Date: Thu, 10 Dec 2020 15:34:25 +0800
Message-ID: <20201210073425.25960-8-zhukeqian1@huawei.com>
In-Reply-To: <20201210073425.25960-1-zhukeqian1@huawei.com>

We always use the smallest supported page size of the vfio_iommu as
pgsize, so there is no need to pass it in. Drop the "pgsize" parameter
of update_user_bitmap.

Signed-off-by: Keqian Zhu
---
 drivers/vfio/vfio_iommu_type1.c | 9 ++++-----
 1 file changed, 4 insertions(+), 5 deletions(-)

diff --git a/drivers/vfio/vfio_iommu_type1.c b/drivers/vfio/vfio_iommu_type1.c
index 2d7a5cd9b916..edb0a6468e8d 100644
--- a/drivers/vfio/vfio_iommu_type1.c
+++ b/drivers/vfio/vfio_iommu_type1.c
@@ -989,10 +989,9 @@ static void vfio_update_pgsize_bitmap(struct vfio_iommu *iommu)
 }
 
 static int update_user_bitmap(u64 __user *bitmap, struct vfio_iommu *iommu,
-			      struct vfio_dma *dma, dma_addr_t base_iova,
-			      size_t pgsize)
+			      struct vfio_dma *dma, dma_addr_t base_iova)
 {
-	unsigned long pgshift = __ffs(pgsize);
+	unsigned long pgshift = __ffs(iommu->pgsize_bitmap);
 	unsigned long nbits = dma->size >> pgshift;
 	unsigned long bit_offset = (dma->iova - base_iova) >> pgshift;
 	unsigned long copy_offset = bit_offset / BITS_PER_LONG;
@@ -1057,7 +1056,7 @@ static int vfio_iova_dirty_bitmap(u64 __user *bitmap, struct vfio_iommu *iommu,
 		if (dma->iova > iova + size - 1)
 			break;
 
-		ret = update_user_bitmap(bitmap, iommu, dma, iova, pgsize);
+		ret = update_user_bitmap(bitmap, iommu, dma, iova);
 		if (ret)
 			return ret;
 
@@ -1203,7 +1202,7 @@ static int vfio_dma_do_unmap(struct vfio_iommu *iommu,
 
 		if (unmap->flags & VFIO_DMA_UNMAP_FLAG_GET_DIRTY_BITMAP) {
 			ret = update_user_bitmap(bitmap->data, iommu, dma,
-						 unmap->iova, pgsize);
+						 unmap->iova);
 			if (ret)
 				break;
 		}