From patchwork Tue May 11 02:08:13 2021
X-Patchwork-Submitter: Kunkun Jiang
X-Patchwork-Id: 12249543
From: Kunkun Jiang <jiangkunkun@huawei.com>
To: Eric Auger, Peter Maydell, Alex Williamson, "open list:ARM SMMU",
    "open list:All patches CC here"
Subject: [RFC PATCH v3 1/4] vfio: Introduce helpers to mark dirty pages of
    a RAM section
Date: Tue, 11 May 2021 10:08:13 +0800
Message-ID: <20210511020816.2905-2-jiangkunkun@huawei.com>
In-Reply-To: <20210511020816.2905-1-jiangkunkun@huawei.com>
References: <20210511020816.2905-1-jiangkunkun@huawei.com>
Cc: kevin.tian@intel.com, Kunkun Jiang, Kirti Wankhede, Zenghui Yu,
    wanghaibin.wang@huawei.com, Keqian Zhu

Extract part of vfio_sync_dirty_bitmap() into a new helper that marks the
dirty pages of a RAM section. This helper will also be called in the
nested-stage case.

Signed-off-by: Kunkun Jiang <jiangkunkun@huawei.com>
---
 hw/vfio/common.c | 22 ++++++++++++++--------
 1 file changed, 14 insertions(+), 8 deletions(-)

diff --git a/hw/vfio/common.c b/hw/vfio/common.c
index dc8372c772..9fb8d44a6d 100644
--- a/hw/vfio/common.c
+++ b/hw/vfio/common.c
@@ -1271,6 +1271,19 @@ err_out:
     return ret;
 }
 
+static int vfio_dma_sync_ram_section_dirty_bitmap(VFIOContainer *container,
+                                                  MemoryRegionSection *section)
+{
+    ram_addr_t ram_addr;
+
+    ram_addr = memory_region_get_ram_addr(section->mr) +
+               section->offset_within_region;
+
+    return vfio_get_dirty_bitmap(container,
+                   REAL_HOST_PAGE_ALIGN(section->offset_within_address_space),
+                   int128_get64(section->size), ram_addr);
+}
+
 typedef struct {
     IOMMUNotifier n;
     VFIOGuestIOMMU *giommu;
@@ -1312,8 +1325,6 @@ static void vfio_iommu_map_dirty_notify(IOMMUNotifier *n, IOMMUTLBEntry *iotlb)
 static int vfio_sync_dirty_bitmap(VFIOContainer *container,
                                   MemoryRegionSection *section)
 {
-    ram_addr_t ram_addr;
-
     if (memory_region_is_iommu(section->mr)) {
         VFIOGuestIOMMU *giommu;
 
@@ -1342,12 +1353,7 @@ static int vfio_sync_dirty_bitmap(VFIOContainer *container,
         return 0;
     }
 
-    ram_addr = memory_region_get_ram_addr(section->mr) +
-               section->offset_within_region;
-
-    return vfio_get_dirty_bitmap(container,
-                   REAL_HOST_PAGE_ALIGN(section->offset_within_address_space),
-                   int128_get64(section->size), ram_addr);
+    return vfio_dma_sync_ram_section_dirty_bitmap(container, section);
 }
 
 static void vfio_listener_log_sync(MemoryListener *listener,
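The address math that feeds vfio_get_dirty_bitmap() is easy to get wrong, so
here is a minimal, self-contained sketch of what the new helper computes. It
is a toy model, not QEMU code: ToySection stands in for MemoryRegionSection,
and the fixed 4 KiB constant stands in for the real host page size used by
REAL_HOST_PAGE_ALIGN().

/* Toy model of the argument computation in
 * vfio_dma_sync_ram_section_dirty_bitmap(); all names here are
 * simplified stand-ins, not the QEMU API. */
#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

#define HOST_PAGE_SIZE 4096ULL
#define HOST_PAGE_ALIGN(x) (((x) + HOST_PAGE_SIZE - 1) & ~(HOST_PAGE_SIZE - 1))

typedef struct {
    uint64_t region_ram_addr;              /* memory_region_get_ram_addr(mr) */
    uint64_t offset_within_region;
    uint64_t offset_within_address_space;
    uint64_t size;
} ToySection;

int main(void)
{
    ToySection sec = {
        .region_ram_addr             = 0x40000000,
        .offset_within_region        = 0x1000,
        .offset_within_address_space = 0x80001234,
        .size                        = 0x200000,
    };

    /* RAM address used to set bits in QEMU's dirty bitmap */
    uint64_t ram_addr = sec.region_ram_addr + sec.offset_within_region;
    /* IOVA handed to the kernel, rounded up to the host page size */
    uint64_t iova = HOST_PAGE_ALIGN(sec.offset_within_address_space);

    printf("get_dirty_bitmap(iova=0x%" PRIx64 ", size=0x%" PRIx64
           ", ram_addr=0x%" PRIx64 ")\n", iova, sec.size, ram_addr);
    return 0;
}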
From patchwork Tue May 11 02:08:14 2021
X-Patchwork-Submitter: Kunkun Jiang
X-Patchwork-Id: 12249541
From: Kunkun Jiang <jiangkunkun@huawei.com>
To: Eric Auger, Peter Maydell, Alex Williamson, "open list:ARM SMMU",
    "open list:All patches CC here"
Cc: kevin.tian@intel.com, Kunkun Jiang, Kirti Wankhede, Zenghui Yu,
    wanghaibin.wang@huawei.com, Keqian Zhu
Subject: [RFC PATCH v3 2/4] vfio: Add vfio_prereg_listener_log_sync in
    nested stage
Date: Tue, 11 May 2021 10:08:14 +0800
Message-ID: <20210511020816.2905-3-jiangkunkun@huawei.com>
In-Reply-To: <20210511020816.2905-1-jiangkunkun@huawei.com>
References: <20210511020816.2905-1-jiangkunkun@huawei.com>

In nested mode, the stage 2 (gpa->hpa) and stage 1 (giova->gpa) mappings are
set up separately by vfio_prereg_listener_region_add() and
vfio_listener_region_add(), so when marking dirty pages we only need to look
at the stage 2 mappings. The legacy vfio_listener_log_sync cannot be used in
the nested stage. This patch adds vfio_prereg_listener_log_sync to mark dirty
pages in nested mode.
Signed-off-by: Kunkun Jiang <jiangkunkun@huawei.com>
---
 hw/vfio/common.c | 27 +++++++++++++++++++++++++++
 1 file changed, 27 insertions(+)

diff --git a/hw/vfio/common.c b/hw/vfio/common.c
index 9fb8d44a6d..149e535a75 100644
--- a/hw/vfio/common.c
+++ b/hw/vfio/common.c
@@ -1284,6 +1284,22 @@ static int vfio_dma_sync_ram_section_dirty_bitmap(VFIOContainer *container,
                    int128_get64(section->size), ram_addr);
 }
 
+static void vfio_prereg_listener_log_sync(MemoryListener *listener,
+                                          MemoryRegionSection *section)
+{
+    VFIOContainer *container =
+        container_of(listener, VFIOContainer, prereg_listener);
+
+    if (!memory_region_is_ram(section->mr) ||
+        !container->dirty_pages_supported) {
+        return;
+    }
+
+    if (vfio_devices_all_dirty_tracking(container)) {
+        vfio_dma_sync_ram_section_dirty_bitmap(container, section);
+    }
+}
+
 typedef struct {
     IOMMUNotifier n;
     VFIOGuestIOMMU *giommu;
@@ -1328,6 +1344,16 @@ static int vfio_sync_dirty_bitmap(VFIOContainer *container,
     if (memory_region_is_iommu(section->mr)) {
         VFIOGuestIOMMU *giommu;
 
+        /*
+         * In nested mode, stage 2 (gpa->hpa) and stage 1 (giova->gpa) are
+         * set up separately. It is inappropriate to pass 'giova' to kernel
+         * to get dirty pages. We only need to focus on stage 2 mapping when
+         * marking dirty pages.
+         */
+        if (container->iommu_type == VFIO_TYPE1_NESTING_IOMMU) {
+            return 0;
+        }
+
         QLIST_FOREACH(giommu, &container->giommu_list, giommu_next) {
             if (MEMORY_REGION(giommu->iommu) == section->mr &&
                 giommu->n.start == section->offset_within_region) {
@@ -1382,6 +1408,7 @@ static const MemoryListener vfio_memory_listener = {
 static MemoryListener vfio_memory_prereg_listener = {
     .region_add = vfio_prereg_listener_region_add,
     .region_del = vfio_prereg_listener_region_del,
+    .log_sync = vfio_prereg_listener_log_sync,
 };
 
 static void vfio_listener_release(VFIOContainer *container)
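To make the listener split concrete, the toy below models how dirty-page
syncing is routed after this patch: the prereg (stage 2) listener syncs RAM
sections, while the legacy path bails out for IOMMU sections on a nested
container. The enum and structs are simplified stand-ins for VFIOContainer
and MemoryRegionSection, not the real QEMU types.

#include <stdbool.h>
#include <stdio.h>

typedef enum { TYPE1_IOMMU, TYPE1_NESTING_IOMMU } ToyIommuType;

typedef struct {
    ToyIommuType iommu_type;
    bool dirty_pages_supported;
} ToyContainer;

typedef struct {
    bool is_ram;     /* memory_region_is_ram(section->mr)   */
    bool is_iommu;   /* memory_region_is_iommu(section->mr) */
} ToySection;

/* Mirrors vfio_prereg_listener_log_sync(): stage 2 (gpa->hpa) sections */
static void prereg_log_sync(ToyContainer *c, ToySection *s)
{
    if (!s->is_ram || !c->dirty_pages_supported) {
        return;
    }
    printf("sync dirty bitmap via stage 2 mapping\n");
}

/* Mirrors the early-out added to vfio_sync_dirty_bitmap() */
static int sync_dirty_bitmap(ToyContainer *c, ToySection *s)
{
    if (s->is_iommu && c->iommu_type == TYPE1_NESTING_IOMMU) {
        return 0; /* giova->gpa is not what the kernel tracks */
    }
    printf("legacy (non-nested) sync path\n");
    return 0;
}

int main(void)
{
    ToyContainer c = { TYPE1_NESTING_IOMMU, true };
    ToySection ram = { .is_ram = true }, giova = { .is_iommu = true };

    prereg_log_sync(&c, &ram);            /* stage 2: actually syncs   */
    return sync_dirty_bitmap(&c, &giova); /* stage 1: returns early    */
}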
From patchwork Tue May 11 02:08:15 2021
X-Patchwork-Submitter: Kunkun Jiang
X-Patchwork-Id: 12249539
From: Kunkun Jiang <jiangkunkun@huawei.com>
To: Eric Auger, Peter Maydell, Alex Williamson, "open list:ARM SMMU",
    "open list:All patches CC here"
Cc: kevin.tian@intel.com, Kunkun Jiang, Kirti Wankhede, Zenghui Yu,
    wanghaibin.wang@huawei.com, Keqian Zhu
Subject: [RFC PATCH v3 3/4] vfio: Add vfio_prereg_listener_global_log_start/stop
    in nested stage
Date: Tue, 11 May 2021 10:08:15 +0800
Message-ID: <20210511020816.2905-4-jiangkunkun@huawei.com>
In-Reply-To: <20210511020816.2905-1-jiangkunkun@huawei.com>
References: <20210511020816.2905-1-jiangkunkun@huawei.com>

In nested mode, stage 2 and stage 1 are set up separately:
vfio_memory_prereg_listener handles stage 2 and vfio_memory_listener handles
stage 1. It is therefore odd to use the global_log_start/stop callbacks of
vfio_memory_listener to switch dirty tracking, even though doing so causes no
errors. Adding global_log_start/stop callbacks to vfio_memory_prereg_listener
keeps stage 2 separate from stage 1.
Signed-off-by: Kunkun Jiang <jiangkunkun@huawei.com>
---
 hw/vfio/common.c | 24 ++++++++++++++++++++++++
 1 file changed, 24 insertions(+)

diff --git a/hw/vfio/common.c b/hw/vfio/common.c
index 149e535a75..6e004fc00f 100644
--- a/hw/vfio/common.c
+++ b/hw/vfio/common.c
@@ -1209,6 +1209,17 @@ static void vfio_listener_log_global_start(MemoryListener *listener)
 {
     VFIOContainer *container = container_of(listener, VFIOContainer, listener);
 
+    /* For nested mode, vfio_prereg_listener is used to start dirty tracking */
+    if (container->iommu_type != VFIO_TYPE1_NESTING_IOMMU) {
+        vfio_set_dirty_page_tracking(container, true);
+    }
+}
+
+static void vfio_prereg_listener_log_global_start(MemoryListener *listener)
+{
+    VFIOContainer *container =
+        container_of(listener, VFIOContainer, prereg_listener);
+
     vfio_set_dirty_page_tracking(container, true);
 }
 
@@ -1216,6 +1227,17 @@ static void vfio_listener_log_global_stop(MemoryListener *listener)
 {
     VFIOContainer *container = container_of(listener, VFIOContainer, listener);
 
+    /* For nested mode, vfio_prereg_listener is used to stop dirty tracking */
+    if (container->iommu_type != VFIO_TYPE1_NESTING_IOMMU) {
+        vfio_set_dirty_page_tracking(container, false);
+    }
+}
+
+static void vfio_prereg_listener_log_global_stop(MemoryListener *listener)
+{
+    VFIOContainer *container =
+        container_of(listener, VFIOContainer, prereg_listener);
+
     vfio_set_dirty_page_tracking(container, false);
 }
 
@@ -1408,6 +1430,8 @@ static const MemoryListener vfio_memory_listener = {
 static MemoryListener vfio_memory_prereg_listener = {
     .region_add = vfio_prereg_listener_region_add,
     .region_del = vfio_prereg_listener_region_del,
+    .log_global_start = vfio_prereg_listener_log_global_start,
+    .log_global_stop = vfio_prereg_listener_log_global_stop,
    .log_sync = vfio_prereg_listener_log_sync,
 };
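The sketch below models the resulting dispatch: on a nested container only
the prereg listener toggles dirty tracking, otherwise the regular listener
does. Types and names are simplified stand-ins for the QEMU/VFIO ones, kept
only to show the control flow.

#include <stdbool.h>
#include <stdio.h>

typedef enum { TYPE1_IOMMU, TYPE1_NESTING_IOMMU } ToyIommuType;
typedef struct { ToyIommuType iommu_type; } ToyContainer;

static void set_dirty_page_tracking(ToyContainer *c, bool on)
{
    (void)c;
    printf("dirty tracking %s\n", on ? "started" : "stopped");
}

/* vfio_memory_listener (stage 1 in nested mode): no-op when nested */
static void listener_log_global_start(ToyContainer *c)
{
    if (c->iommu_type != TYPE1_NESTING_IOMMU) {
        set_dirty_page_tracking(c, true);
    }
}

/* vfio_memory_prereg_listener (stage 2): always toggles tracking */
static void prereg_listener_log_global_start(ToyContainer *c)
{
    set_dirty_page_tracking(c, true);
}

int main(void)
{
    ToyContainer nested = { TYPE1_NESTING_IOMMU };

    listener_log_global_start(&nested);        /* does nothing        */
    prereg_listener_log_global_start(&nested); /* starts tracking     */
    return 0;
}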
From patchwork Tue May 11 02:08:16 2021
X-Patchwork-Submitter: Kunkun Jiang
X-Patchwork-Id: 12249545
From: Kunkun Jiang <jiangkunkun@huawei.com>
To: Eric Auger, Peter Maydell, Alex Williamson, "open list:ARM SMMU",
    "open list:All patches CC here"
Cc: kevin.tian@intel.com, Kunkun Jiang, Kirti Wankhede, Zenghui Yu,
    wanghaibin.wang@huawei.com, Keqian Zhu
Subject: [RFC PATCH v3 4/4] hw/arm/smmuv3: Post-load stage 1 configurations
    to the host
Date: Tue, 11 May 2021 10:08:16 +0800
Message-ID: <20210511020816.2905-5-jiangkunkun@huawei.com>
In-Reply-To: <20210511020816.2905-1-jiangkunkun@huawei.com>
References: <20210511020816.2905-1-jiangkunkun@huawei.com>

In nested mode, we call the set_pasid_table() callback on each STE update to
pass the guest stage 1 configuration to the host and apply it at the physical
level.

In the case of live migration, we need to manually call set_pasid_table()
after the SMMUv3 state has been loaded, so that the guest stage 1
configurations are pushed to the host again. If this operation fails, the
migration fails.
Signed-off-by: Kunkun Jiang <jiangkunkun@huawei.com>
---
 hw/arm/smmuv3.c | 33 ++++++++++++++++++++++++++++-----
 1 file changed, 28 insertions(+), 5 deletions(-)

diff --git a/hw/arm/smmuv3.c b/hw/arm/smmuv3.c
index ca690513e6..ac1de572f3 100644
--- a/hw/arm/smmuv3.c
+++ b/hw/arm/smmuv3.c
@@ -929,7 +929,7 @@ static void smmuv3_s1_range_inval(SMMUState *s, Cmd *cmd)
     }
 }
 
-static void smmuv3_notify_config_change(SMMUState *bs, uint32_t sid)
+static int smmuv3_notify_config_change(SMMUState *bs, uint32_t sid)
 {
 #ifdef __linux__
     IOMMUMemoryRegion *mr = smmu_iommu_mr(bs, sid);
@@ -938,9 +938,10 @@ static void smmuv3_notify_config_change(SMMUState *bs, uint32_t sid)
     IOMMUConfig iommu_config = {};
     SMMUTransCfg *cfg;
     SMMUDevice *sdev;
+    int ret;
 
     if (!mr) {
-        return;
+        return 0;
     }
 
     sdev = container_of(mr, SMMUDevice, iommu);
@@ -949,13 +950,13 @@
 
     smmuv3_flush_config(sdev);
 
     if (!pci_device_is_pasid_ops_set(sdev->bus, sdev->devfn)) {
-        return;
+        return 0;
     }
 
     cfg = smmuv3_get_config(sdev, &event);
     if (!cfg) {
-        return;
+        return 0;
     }
 
     iommu_config.pasid_cfg.argsz = sizeof(struct iommu_pasid_table_config);
@@ -977,10 +978,13 @@
                                iommu_config.pasid_cfg.config,
                                iommu_config.pasid_cfg.base_ptr);
 
-    if (pci_device_set_pasid_table(sdev->bus, sdev->devfn, &iommu_config)) {
+    ret = pci_device_set_pasid_table(sdev->bus, sdev->devfn, &iommu_config);
+    if (ret) {
         error_report("Failed to pass PASID table to host for iommu mr %s (%m)",
                      mr->parent_obj.name);
     }
+
+    return ret;
 #endif
 }
 
@@ -1545,6 +1549,24 @@ static void smmu_realize(DeviceState *d, Error **errp)
     smmu_init_irq(s, dev);
 }
 
+static int smmuv3_post_load(void *opaque, int version_id)
+{
+    SMMUv3State *s3 = opaque;
+    SMMUState *s = &(s3->smmu_state);
+    SMMUDevice *sdev;
+    int ret = 0;
+
+    QLIST_FOREACH(sdev, &s->devices_with_notifiers, next) {
+        uint32_t sid = smmu_get_sid(sdev);
+        ret = smmuv3_notify_config_change(s, sid);
+        if (ret) {
+            break;
+        }
+    }
+
+    return ret;
+}
+
 static const VMStateDescription vmstate_smmuv3_queue = {
     .name = "smmuv3_queue",
     .version_id = 1,
@@ -1563,6 +1585,7 @@ static const VMStateDescription vmstate_smmuv3 = {
     .version_id = 1,
     .minimum_version_id = 1,
     .priority = MIG_PRI_IOMMU,
+    .post_load = smmuv3_post_load,
     .fields = (VMStateField[]) {
         VMSTATE_UINT32(features, SMMUv3State),
         VMSTATE_UINT8(sid_size, SMMUv3State),
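A toy model of the replay loop that the new post_load hook performs: walk
every SMMU device that has notifiers, re-send its stage 1 configuration, and
fail the (simulated) incoming migration on the first error. The helper names
here are hypothetical stand-ins for smmuv3_notify_config_change() and
pci_device_set_pasid_table(), not the QEMU API.

#include <stdio.h>

#define NUM_DEVS 3

/* stands in for replaying one device's stage 1 config to the host */
static int notify_config_change(int sid)
{
    printf("replaying stage 1 config for sid 0x%x\n", sid);
    return 0; /* a non-zero value would fail the migration */
}

/* stands in for the smmuv3_post_load() loop over devices_with_notifiers */
static int post_load_model(const int *sids, int n)
{
    for (int i = 0; i < n; i++) {
        int ret = notify_config_change(sids[i]);
        if (ret) {
            return ret; /* propagated => incoming migration fails */
        }
    }
    return 0;
}

int main(void)
{
    int sids[NUM_DEVS] = { 0x10, 0x11, 0x20 };

    return post_load_model(sids, NUM_DEVS);
}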