From patchwork Thu Jan 7 04:43:56 2021
X-Patchwork-Submitter: zhukeqian
X-Patchwork-Id: 12002881
From: Keqian Zhu
To: Alex Williamson, Cornelia Huck, Will Deacon, Marc Zyngier, Catalin Marinas
Cc: Mark Rutland, James Morse, Robin Murphy, Joerg Roedel, Daniel Lezcano,
    Thomas Gleixner, Suzuki K Poulose, Julien Thierry, Andrew Morton,
    Alexios Zavras
Subject: [PATCH 1/6] vfio/iommu_type1: Make an explicit "promote" semantic
Date: Thu, 7 Jan 2021 12:43:56 +0800
Message-ID: <20210107044401.19828-2-zhukeqian1@huawei.com>
In-Reply-To: <20210107044401.19828-1-zhukeqian1@huawei.com>
References: <20210107044401.19828-1-zhukeqian1@huawei.com>
X-Mailing-List: kvm@vger.kernel.org

When we want to promote the pinned_page_dirty_scope of a vfio_iommu, we
call the "update" function, which has to walk every vfio_group; when we
want to downgrade it, we simply set the flag to false. Give the "update"
function an explicit "promote" name to match this asymmetry.

While at it, return early if the vfio_iommu has already been promoted.
Signed-off-by: Keqian Zhu
---
 drivers/vfio/vfio_iommu_type1.c | 30 ++++++++++++++----------------
 1 file changed, 14 insertions(+), 16 deletions(-)

diff --git a/drivers/vfio/vfio_iommu_type1.c b/drivers/vfio/vfio_iommu_type1.c
index 0b4dedaa9128..334a8240e1da 100644
--- a/drivers/vfio/vfio_iommu_type1.c
+++ b/drivers/vfio/vfio_iommu_type1.c
@@ -148,7 +148,7 @@ static int put_pfn(unsigned long pfn, int prot);
 static struct vfio_group *vfio_iommu_find_iommu_group(struct vfio_iommu *iommu,
 					       struct iommu_group *iommu_group);
 
-static void update_pinned_page_dirty_scope(struct vfio_iommu *iommu);
+static void promote_pinned_page_dirty_scope(struct vfio_iommu *iommu);
 /*
  * This code handles mapping and unmapping of user data buffers
  * into DMA'ble space using the IOMMU
@@ -714,7 +714,7 @@ static int vfio_iommu_type1_pin_pages(void *iommu_data,
 	group = vfio_iommu_find_iommu_group(iommu, iommu_group);
 	if (!group->pinned_page_dirty_scope) {
 		group->pinned_page_dirty_scope = true;
-		update_pinned_page_dirty_scope(iommu);
+		promote_pinned_page_dirty_scope(iommu);
 	}
 
 	goto pin_done;
@@ -1622,27 +1622,26 @@ static struct vfio_group *vfio_iommu_find_iommu_group(struct vfio_iommu *iommu,
 	return group;
 }
 
-static void update_pinned_page_dirty_scope(struct vfio_iommu *iommu)
+static void promote_pinned_page_dirty_scope(struct vfio_iommu *iommu)
 {
 	struct vfio_domain *domain;
 	struct vfio_group *group;
 
+	if (iommu->pinned_page_dirty_scope)
+		return;
+
 	list_for_each_entry(domain, &iommu->domain_list, next) {
 		list_for_each_entry(group, &domain->group_list, next) {
-			if (!group->pinned_page_dirty_scope) {
-				iommu->pinned_page_dirty_scope = false;
+			if (!group->pinned_page_dirty_scope)
 				return;
-			}
 		}
 	}
 
 	if (iommu->external_domain) {
 		domain = iommu->external_domain;
 		list_for_each_entry(group, &domain->group_list, next) {
-			if (!group->pinned_page_dirty_scope) {
-				iommu->pinned_page_dirty_scope = false;
+			if (!group->pinned_page_dirty_scope)
 				return;
-			}
 		}
 	}
 
@@ -2057,8 +2056,7 @@ static int vfio_iommu_type1_attach_group(void *iommu_data,
 			 * addition of a dirty tracking group.
 			 */
 			group->pinned_page_dirty_scope = true;
-			if (!iommu->pinned_page_dirty_scope)
-				update_pinned_page_dirty_scope(iommu);
+			promote_pinned_page_dirty_scope(iommu);
 			mutex_unlock(&iommu->lock);
 
 			return 0;
@@ -2341,7 +2339,7 @@ static void vfio_iommu_type1_detach_group(void *iommu_data,
 	struct vfio_iommu *iommu = iommu_data;
 	struct vfio_domain *domain;
 	struct vfio_group *group;
-	bool update_dirty_scope = false;
+	bool promote_dirty_scope = false;
 	LIST_HEAD(iova_copy);
 
 	mutex_lock(&iommu->lock);
@@ -2349,7 +2347,7 @@ static void vfio_iommu_type1_detach_group(void *iommu_data,
 	if (iommu->external_domain) {
 		group = find_iommu_group(iommu->external_domain, iommu_group);
 		if (group) {
-			update_dirty_scope = !group->pinned_page_dirty_scope;
+			promote_dirty_scope = !group->pinned_page_dirty_scope;
 			list_del(&group->next);
 			kfree(group);
 
@@ -2379,7 +2377,7 @@ static void vfio_iommu_type1_detach_group(void *iommu_data,
 			continue;
 
 		vfio_iommu_detach_group(domain, group);
-		update_dirty_scope = !group->pinned_page_dirty_scope;
+		promote_dirty_scope = !group->pinned_page_dirty_scope;
 		list_del(&group->next);
 		kfree(group);
 		/*
@@ -2415,8 +2413,8 @@ static void vfio_iommu_type1_detach_group(void *iommu_data,
 	 * Removal of a group without dirty tracking may allow the iommu scope
 	 * to be promoted.
 	 */
-	if (update_dirty_scope)
-		update_pinned_page_dirty_scope(iommu);
+	if (promote_dirty_scope)
+		promote_pinned_page_dirty_scope(iommu);
 	mutex_unlock(&iommu->lock);
 }
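A standalone sketch of the idea behind the rename may help: "promote" has to
prove something about every group, while "downgrade" is a single store. The
types and function bodies below are simplified stand-ins for the kernel's
vfio_iommu/vfio_group relationship, not the kernel code itself; the snippet
compiles on its own.

#include <stdbool.h>
#include <stddef.h>
#include <stdio.h>

struct group { bool pinned_page_dirty_scope; };

struct iommu {
	bool pinned_page_dirty_scope;
	struct group *groups;
	size_t nr_groups;
};

/* Promotion only succeeds if every group is limited to pinned pages. */
static void promote_pinned_page_dirty_scope(struct iommu *iommu)
{
	if (iommu->pinned_page_dirty_scope)
		return;				/* already promoted */

	for (size_t i = 0; i < iommu->nr_groups; i++)
		if (!iommu->groups[i].pinned_page_dirty_scope)
			return;			/* one full-scope group blocks promotion */

	iommu->pinned_page_dirty_scope = true;
}

/* Downgrade needs no walk: one non-pinned group already decides it. */
static void downgrade_pinned_page_dirty_scope(struct iommu *iommu)
{
	iommu->pinned_page_dirty_scope = false;
}

int main(void)
{
	struct group g[2] = { { true }, { false } };
	struct iommu i = { false, g, 2 };

	promote_pinned_page_dirty_scope(&i);	/* blocked by g[1] */
	printf("after first promote: %d\n", i.pinned_page_dirty_scope);

	g[1].pinned_page_dirty_scope = true;
	promote_pinned_page_dirty_scope(&i);	/* all groups pinned-scope now */
	printf("after second promote: %d\n", i.pinned_page_dirty_scope);

	downgrade_pinned_page_dirty_scope(&i);
	printf("after downgrade: %d\n", i.pinned_page_dirty_scope);
	return 0;
}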
From patchwork Thu Jan 7 04:43:57 2021
X-Patchwork-Submitter: zhukeqian
X-Patchwork-Id: 12002885
From: Keqian Zhu
To: Alex Williamson, Cornelia Huck, Will Deacon, Marc Zyngier, Catalin Marinas
Cc: Mark Rutland, James Morse, Robin Murphy, Joerg Roedel, Daniel Lezcano,
    Thomas Gleixner, Suzuki K Poulose, Julien Thierry, Andrew Morton,
    Alexios Zavras
Subject: [PATCH 2/6] vfio/iommu_type1: Ignore external domain when promoting pinned_scope
Date: Thu, 7 Jan 2021 12:43:57 +0800
Message-ID: <20210107044401.19828-3-zhukeqian1@huawei.com>
In-Reply-To: <20210107044401.19828-1-zhukeqian1@huawei.com>
References: <20210107044401.19828-1-zhukeqian1@huawei.com>
X-Mailing-List: kvm@vger.kernel.org

The pinned_page_dirty_scope of the external domain's groups is always
true, so we can safely skip the external domain when promoting the
pinned_scope status of a vfio_iommu.
Signed-off-by: Keqian Zhu
---
 drivers/vfio/vfio_iommu_type1.c | 14 +++-----------
 1 file changed, 3 insertions(+), 11 deletions(-)

diff --git a/drivers/vfio/vfio_iommu_type1.c b/drivers/vfio/vfio_iommu_type1.c
index 334a8240e1da..110ada24ee91 100644
--- a/drivers/vfio/vfio_iommu_type1.c
+++ b/drivers/vfio/vfio_iommu_type1.c
@@ -1637,14 +1637,7 @@ static void promote_pinned_page_dirty_scope(struct vfio_iommu *iommu)
 		}
 	}
 
-	if (iommu->external_domain) {
-		domain = iommu->external_domain;
-		list_for_each_entry(group, &domain->group_list, next) {
-			if (!group->pinned_page_dirty_scope)
-				return;
-		}
-	}
-
+	/* The external domain always passes check */
 	iommu->pinned_page_dirty_scope = true;
 }
 
@@ -2347,7 +2340,6 @@ static void vfio_iommu_type1_detach_group(void *iommu_data,
 	if (iommu->external_domain) {
 		group = find_iommu_group(iommu->external_domain, iommu_group);
 		if (group) {
-			promote_dirty_scope = !group->pinned_page_dirty_scope;
 			list_del(&group->next);
 			kfree(group);
 
@@ -2360,7 +2352,8 @@ static void vfio_iommu_type1_detach_group(void *iommu_data,
 				kfree(iommu->external_domain);
 				iommu->external_domain = NULL;
 			}
-			goto detach_group_done;
+			mutex_unlock(&iommu->lock);
+			return;
 		}
 	}
 
@@ -2408,7 +2401,6 @@ static void vfio_iommu_type1_detach_group(void *iommu_data,
 	else
 		vfio_iommu_iova_free(&iova_copy);
 
-detach_group_done:
 	/*
 	 * Removal of a group without dirty tracking may allow the iommu scope
 	 * to be promoted.

From patchwork Thu Jan 7 04:43:58 2021
X-Patchwork-Submitter: zhukeqian
X-Patchwork-Id: 12002877
From: Keqian Zhu
To: Alex Williamson, Cornelia Huck, Will Deacon, Marc Zyngier, Catalin Marinas
Cc: Mark Rutland, James Morse, Robin Murphy, Joerg Roedel, Daniel Lezcano,
    Thomas Gleixner, Suzuki K Poulose, Julien Thierry, Andrew Morton,
    Alexios Zavras
Subject: [PATCH 3/6] vfio/iommu_type1: Initially set the pinned_page_dirty_scope
Date: Thu, 7 Jan 2021 12:43:58 +0800
Message-ID: <20210107044401.19828-4-zhukeqian1@huawei.com>
In-Reply-To: <20210107044401.19828-1-zhukeqian1@huawei.com>
References: <20210107044401.19828-1-zhukeqian1@huawei.com>
X-Mailing-List: kvm@vger.kernel.org
There are currently three ways to promote the pinned_page_dirty_scope
status of a vfio_iommu:

1. Through the vfio pin interface.
2. By detaching a group that does not have pinned_dirty_scope.
3. By attaching a group that has pinned_dirty_scope.

For case 3, the only chance to promote the pinned_page_dirty_scope
status is when the vfio_iommu is newly created. Since we can safely
start an empty vfio_iommu in the pinned-scope state, case 3 can be
dropped to save work.

Signed-off-by: Keqian Zhu
---
 drivers/vfio/vfio_iommu_type1.c | 4 +---
 1 file changed, 1 insertion(+), 3 deletions(-)

diff --git a/drivers/vfio/vfio_iommu_type1.c b/drivers/vfio/vfio_iommu_type1.c
index 110ada24ee91..b596c482487b 100644
--- a/drivers/vfio/vfio_iommu_type1.c
+++ b/drivers/vfio/vfio_iommu_type1.c
@@ -2045,11 +2045,8 @@ static int vfio_iommu_type1_attach_group(void *iommu_data,
 			 * Non-iommu backed group cannot dirty memory directly,
 			 * it can only use interfaces that provide dirty
 			 * tracking.
-			 * The iommu scope can only be promoted with the
-			 * addition of a dirty tracking group.
 			 */
 			group->pinned_page_dirty_scope = true;
-			promote_pinned_page_dirty_scope(iommu);
 			mutex_unlock(&iommu->lock);
 
 			return 0;
@@ -2436,6 +2433,7 @@ static void *vfio_iommu_type1_open(unsigned long arg)
 	INIT_LIST_HEAD(&iommu->iova_list);
 	iommu->dma_list = RB_ROOT;
 	iommu->dma_avail = dma_entry_limit;
+	iommu->pinned_page_dirty_scope = true;
 	mutex_init(&iommu->lock);
 	BLOCKING_INIT_NOTIFIER_HEAD(&iommu->notifier);
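A short standalone illustration of why the new initial value is safe: with no
groups attached, the promotion condition "every group has pinned scope" holds
vacuously, so a freshly opened iommu can start out promoted. As before, the
types here are simplified stand-ins for the kernel's structures, not the
kernel code.

#include <stdbool.h>
#include <stddef.h>
#include <stdio.h>

struct group { bool pinned_page_dirty_scope; };

struct iommu {
	bool pinned_page_dirty_scope;
	const struct group *groups;
	size_t nr_groups;
};

/* True iff every attached group is limited to pinned pages. */
static bool all_groups_pinned_scope(const struct iommu *iommu)
{
	for (size_t i = 0; i < iommu->nr_groups; i++)
		if (!iommu->groups[i].pinned_page_dirty_scope)
			return false;
	return true;	/* vacuously true when nr_groups == 0 */
}

int main(void)
{
	/* A just-created iommu: no groups attached yet. */
	struct iommu fresh = { .pinned_page_dirty_scope = true,
			       .groups = NULL, .nr_groups = 0 };

	/* The initial value matches what a promotion walk would compute. */
	printf("consistent: %d\n",
	       fresh.pinned_page_dirty_scope == all_groups_pinned_scope(&fresh));
	return 0;
}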
From patchwork Thu Jan 7 04:43:59 2021
X-Patchwork-Submitter: zhukeqian
X-Patchwork-Id: 12002889
From: Keqian Zhu
To: Alex Williamson, Cornelia Huck, Will Deacon, Marc Zyngier, Catalin Marinas
Cc: Mark Rutland, James Morse, Robin Murphy, Joerg Roedel, Daniel Lezcano,
    Thomas Gleixner, Suzuki K Poulose, Julien Thierry, Andrew Morton,
    Alexios Zavras
Subject: [PATCH 4/6] vfio/iommu_type1: Drop parameter "pgsize" of vfio_dma_bitmap_alloc_all
Date: Thu, 7 Jan 2021 12:43:59 +0800
Message-ID: <20210107044401.19828-5-zhukeqian1@huawei.com>
In-Reply-To: <20210107044401.19828-1-zhukeqian1@huawei.com>
References: <20210107044401.19828-1-zhukeqian1@huawei.com>
X-Mailing-List: kvm@vger.kernel.org

We always use the smallest supported page size of the vfio_iommu as
pgsize. Remove the "pgsize" parameter of vfio_dma_bitmap_alloc_all.

Signed-off-by: Keqian Zhu
---
 drivers/vfio/vfio_iommu_type1.c | 8 +++-----
 1 file changed, 3 insertions(+), 5 deletions(-)

diff --git a/drivers/vfio/vfio_iommu_type1.c b/drivers/vfio/vfio_iommu_type1.c
index b596c482487b..080c05b129ee 100644
--- a/drivers/vfio/vfio_iommu_type1.c
+++ b/drivers/vfio/vfio_iommu_type1.c
@@ -236,9 +236,10 @@ static void vfio_dma_populate_bitmap(struct vfio_dma *dma, size_t pgsize)
 	}
 }
 
-static int vfio_dma_bitmap_alloc_all(struct vfio_iommu *iommu, size_t pgsize)
+static int vfio_dma_bitmap_alloc_all(struct vfio_iommu *iommu)
 {
 	struct rb_node *n;
+	size_t pgsize = (size_t)1 << __ffs(iommu->pgsize_bitmap);
 
 	for (n = rb_first(&iommu->dma_list); n; n = rb_next(n)) {
 		struct vfio_dma *dma = rb_entry(n, struct vfio_dma, node);
@@ -2761,12 +2762,9 @@ static int vfio_iommu_type1_dirty_pages(struct vfio_iommu *iommu,
 		return -EINVAL;
 
 	if (dirty.flags & VFIO_IOMMU_DIRTY_PAGES_FLAG_START) {
-		size_t pgsize;
-
 		mutex_lock(&iommu->lock);
-		pgsize = 1 << __ffs(iommu->pgsize_bitmap);
 		if (!iommu->dirty_page_tracking) {
-			ret = vfio_dma_bitmap_alloc_all(iommu, pgsize);
+			ret = vfio_dma_bitmap_alloc_all(iommu);
 			if (!ret)
 				iommu->dirty_page_tracking = true;
 		}
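The "smallest supported page size" that the kernel derives with __ffs() can be
reproduced in ordinary userspace C. The sketch below uses __builtin_ctzl() in
the same role (lowest set bit); the pgsize_bitmap value is invented for the
example and the bitmap is assumed non-empty.

#include <stdio.h>

int main(void)
{
	/* Hypothetical bitmap of supported page sizes: 4K, 2M and 1G. */
	unsigned long pgsize_bitmap = (1UL << 12) | (1UL << 21) | (1UL << 30);

	/* Lowest set bit gives the smallest supported page size. */
	unsigned long pgsize = 1UL << __builtin_ctzl(pgsize_bitmap);

	printf("smallest supported page size: %lu bytes\n", pgsize); /* 4096 */
	return 0;
}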
From patchwork Thu Jan 7 04:44:00 2021
X-Patchwork-Submitter: zhukeqian
X-Patchwork-Id: 12002887
From: Keqian Zhu
To: Alex Williamson, Cornelia Huck, Will Deacon, Marc Zyngier, Catalin Marinas
Cc: Mark Rutland, James Morse, Robin Murphy, Joerg Roedel, Daniel Lezcano,
    Thomas Gleixner, Suzuki K Poulose, Julien Thierry, Andrew Morton,
    Alexios Zavras
Subject: [PATCH 5/6] vfio/iommu_type1: Drop parameter "pgsize" of vfio_iova_dirty_bitmap
Date: Thu, 7 Jan 2021 12:44:00 +0800
Message-ID: <20210107044401.19828-6-zhukeqian1@huawei.com>
In-Reply-To: <20210107044401.19828-1-zhukeqian1@huawei.com>
References: <20210107044401.19828-1-zhukeqian1@huawei.com>
X-Mailing-List: kvm@vger.kernel.org

We always use the smallest supported page size of the vfio_iommu as
pgsize. Remove the "pgsize" parameter of vfio_iova_dirty_bitmap.

Signed-off-by: Keqian Zhu
---
 drivers/vfio/vfio_iommu_type1.c | 8 ++++----
 1 file changed, 4 insertions(+), 4 deletions(-)

diff --git a/drivers/vfio/vfio_iommu_type1.c b/drivers/vfio/vfio_iommu_type1.c
index 080c05b129ee..82649a040148 100644
--- a/drivers/vfio/vfio_iommu_type1.c
+++ b/drivers/vfio/vfio_iommu_type1.c
@@ -1015,11 +1015,12 @@ static int update_user_bitmap(u64 __user *bitmap, struct vfio_iommu *iommu,
 }
 
 static int vfio_iova_dirty_bitmap(u64 __user *bitmap, struct vfio_iommu *iommu,
-				  dma_addr_t iova, size_t size, size_t pgsize)
+				  dma_addr_t iova, size_t size)
 {
 	struct vfio_dma *dma;
 	struct rb_node *n;
-	unsigned long pgshift = __ffs(pgsize);
+	unsigned long pgshift = __ffs(iommu->pgsize_bitmap);
+	size_t pgsize = (size_t)1 << pgshift;
 	int ret;
 
 	/*
@@ -2824,8 +2825,7 @@ static int vfio_iommu_type1_dirty_pages(struct vfio_iommu *iommu,
 		if (iommu->dirty_page_tracking)
 			ret = vfio_iova_dirty_bitmap(range.bitmap.data,
 						     iommu, range.iova,
-						     range.size,
-						     range.bitmap.pgsize);
+						     range.size);
 		else
 			ret = -EINVAL;
 out_unlock:
From patchwork Thu Jan 7 04:44:01 2021
X-Patchwork-Submitter: zhukeqian
X-Patchwork-Id: 12002879
From: Keqian Zhu
To: Alex Williamson, Cornelia Huck, Will Deacon, Marc Zyngier, Catalin Marinas
Cc: Mark Rutland, James Morse, Robin Murphy, Joerg Roedel, Daniel Lezcano,
    Thomas Gleixner, Suzuki K Poulose, Julien Thierry, Andrew Morton,
    Alexios Zavras
Subject: [PATCH 6/6] vfio/iommu_type1: Drop parameter "pgsize" of update_user_bitmap
Date: Thu, 7 Jan 2021 12:44:01 +0800
Message-ID: <20210107044401.19828-7-zhukeqian1@huawei.com>
In-Reply-To: <20210107044401.19828-1-zhukeqian1@huawei.com>
References: <20210107044401.19828-1-zhukeqian1@huawei.com>
X-Mailing-List: kvm@vger.kernel.org

We always use the smallest supported page size of the vfio_iommu as
pgsize. Drop the "pgsize" parameter of update_user_bitmap.

Signed-off-by: Keqian Zhu
---
 drivers/vfio/vfio_iommu_type1.c | 9 ++++-----
 1 file changed, 4 insertions(+), 5 deletions(-)

diff --git a/drivers/vfio/vfio_iommu_type1.c b/drivers/vfio/vfio_iommu_type1.c
index 82649a040148..bceda5e8baaa 100644
--- a/drivers/vfio/vfio_iommu_type1.c
+++ b/drivers/vfio/vfio_iommu_type1.c
@@ -978,10 +978,9 @@ static void vfio_update_pgsize_bitmap(struct vfio_iommu *iommu)
 }
 
 static int update_user_bitmap(u64 __user *bitmap, struct vfio_iommu *iommu,
-			      struct vfio_dma *dma, dma_addr_t base_iova,
-			      size_t pgsize)
+			      struct vfio_dma *dma, dma_addr_t base_iova)
 {
-	unsigned long pgshift = __ffs(pgsize);
+	unsigned long pgshift = __ffs(iommu->pgsize_bitmap);
 	unsigned long nbits = dma->size >> pgshift;
 	unsigned long bit_offset = (dma->iova - base_iova) >> pgshift;
 	unsigned long copy_offset = bit_offset / BITS_PER_LONG;
@@ -1046,7 +1045,7 @@ static int vfio_iova_dirty_bitmap(u64 __user *bitmap, struct vfio_iommu *iommu,
 		if (dma->iova > iova + size - 1)
 			break;
 
-		ret = update_user_bitmap(bitmap, iommu, dma, iova, pgsize);
+		ret = update_user_bitmap(bitmap, iommu, dma, iova);
 		if (ret)
 			return ret;
 
@@ -1192,7 +1191,7 @@ static int vfio_dma_do_unmap(struct vfio_iommu *iommu,
 		if (unmap->flags & VFIO_DMA_UNMAP_FLAG_GET_DIRTY_BITMAP) {
 			ret = update_user_bitmap(bitmap->data, iommu, dma,
-						 unmap->iova, pgsize);
+						 unmap->iova);
 			if (ret)
 				break;
 		}
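For context, the interface these last three patches sit behind is the VFIO
dirty-page tracking ioctl. The sketch below shows the userspace side under
stated assumptions: the container fd is already set up as a type1 IOMMU,
error handling is omitted, and the iova/size/bitmap sizing values would come
from the real mappings. The structure and flag names follow the uapi types the
patches reference (struct vfio_iommu_type1_dirty_bitmap and
vfio_iommu_type1_dirty_bitmap_get), but treat it as an illustration rather
than a verified test program.

#include <stdlib.h>
#include <sys/ioctl.h>
#include <linux/vfio.h>

/* Tell the type1 IOMMU driver to start tracking dirty pages. */
static int start_dirty_tracking(int container_fd)
{
	struct vfio_iommu_type1_dirty_bitmap dirty = {
		.argsz = sizeof(dirty),
		.flags = VFIO_IOMMU_DIRTY_PAGES_FLAG_START,
	};

	return ioctl(container_fd, VFIO_IOMMU_DIRTY_PAGES, &dirty);
}

/* Fetch the dirty bitmap for one mapped range into caller-provided storage. */
static int get_dirty_bitmap(int container_fd, __u64 iova, __u64 size,
			    __u64 pgsize, __u64 *data, __u64 data_bytes)
{
	size_t argsz = sizeof(struct vfio_iommu_type1_dirty_bitmap) +
		       sizeof(struct vfio_iommu_type1_dirty_bitmap_get);
	struct vfio_iommu_type1_dirty_bitmap *dbitmap = calloc(1, argsz);
	struct vfio_iommu_type1_dirty_bitmap_get *range;
	int ret;

	if (!dbitmap)
		return -1;

	range = (struct vfio_iommu_type1_dirty_bitmap_get *)dbitmap->data;
	dbitmap->argsz = argsz;
	dbitmap->flags = VFIO_IOMMU_DIRTY_PAGES_FLAG_GET_BITMAP;
	range->iova = iova;
	range->size = size;
	/* One bit per page of the smallest page size the container supports. */
	range->bitmap.pgsize = pgsize;
	range->bitmap.size = data_bytes;
	range->bitmap.data = data;

	ret = ioctl(container_fd, VFIO_IOMMU_DIRTY_PAGES, dbitmap);
	free(dbitmap);
	return ret;
}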