From patchwork Tue Mar 10 09:17:03 2020
X-Patchwork-Submitter: zhukeqian
X-Patchwork-Id: 11428897
From: Keqian Zhu
Subject: [PATCH v2 1/2] memory: Introduce start_global variable in dirty bitmap sync
Date: Tue, 10 Mar 2020 17:17:03 +0800
Message-ID: <20200310091704.42340-2-zhukeqian1@huawei.com>
In-Reply-To: <20200310091704.42340-1-zhukeqian1@huawei.com>
References: <20200310091704.42340-1-zhukeqian1@huawei.com>
Cc: Paolo Bonzini, qemu-arm@nongnu.org, Keqian Zhu, "Dr. David Alan Gilbert", wanghaibin.wang@huawei.com

In the cpu_physical_memory_sync_dirty_bitmap() function, use a
start_global variable to make the code clearer. The addr variable is
only used in the slow path, so move its declaration into the slow path.
Signed-off-by: Keqian Zhu
---
 include/exec/ram_addr.h | 11 +++++------
 1 file changed, 5 insertions(+), 6 deletions(-)

diff --git a/include/exec/ram_addr.h b/include/exec/ram_addr.h
index 5e59a3d8d7..8311efb7bc 100644
--- a/include/exec/ram_addr.h
+++ b/include/exec/ram_addr.h
@@ -445,14 +445,13 @@ uint64_t cpu_physical_memory_sync_dirty_bitmap(RAMBlock *rb,
                                                ram_addr_t length,
                                                uint64_t *real_dirty_pages)
 {
-    ram_addr_t addr;
-    unsigned long word = BIT_WORD((start + rb->offset) >> TARGET_PAGE_BITS);
+    ram_addr_t start_global = start + rb->offset;
+    unsigned long word = BIT_WORD(start_global >> TARGET_PAGE_BITS);
     uint64_t num_dirty = 0;
     unsigned long *dest = rb->bmap;
 
     /* start address and length is aligned at the start of a word? */
-    if (((word * BITS_PER_LONG) << TARGET_PAGE_BITS) ==
-         (start + rb->offset) &&
+    if (((word * BITS_PER_LONG) << TARGET_PAGE_BITS) == start_global &&
         !(length & ((BITS_PER_LONG << TARGET_PAGE_BITS) - 1))) {
         int k;
         int nr = BITS_TO_LONGS(length >> TARGET_PAGE_BITS);
@@ -495,11 +494,11 @@ uint64_t cpu_physical_memory_sync_dirty_bitmap(RAMBlock *rb,
             memory_region_clear_dirty_bitmap(rb->mr, start, length);
         }
     } else {
-        ram_addr_t offset = rb->offset;
+        ram_addr_t addr;
 
         for (addr = 0; addr < length; addr += TARGET_PAGE_SIZE) {
             if (cpu_physical_memory_test_and_clear_dirty(
-                        start + addr + offset,
+                        start_global + addr,
                         TARGET_PAGE_SIZE,
                         DIRTY_MEMORY_MIGRATION)) {
                 *real_dirty_pages += 1;
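The heart of both patches in this series is the word-alignment predicate
seen in the diff above. As a quick illustration, here is a minimal
standalone sketch of that predicate with start + rb->offset folded into
a single start_global variable, as this patch does. It is not QEMU code:
TARGET_PAGE_BITS, the BIT_WORD/BITS_PER_LONG definitions and the sample
values are assumptions chosen to mirror qemu/bitops.h on a 64-bit LP64
host.

/* sketch.c -- illustrative only, not part of QEMU */
#include <stdio.h>
#include <limits.h>

#define TARGET_PAGE_BITS 12                      /* assume 4 KiB target pages */
#define BITS_PER_LONG    (sizeof(unsigned long) * CHAR_BIT)
#define BIT_WORD(nr)     ((nr) / BITS_PER_LONG)

int main(void)
{
    /* start_global = start + rb->offset, computed once as in the patch */
    unsigned long start_global = 64UL << TARGET_PAGE_BITS;  /* page 64 = word 1 */
    unsigned long length       = 3UL << TARGET_PAGE_BITS;   /* 3 pages, unaligned */

    unsigned long word = BIT_WORD(start_global >> TARGET_PAGE_BITS);

    /* fast-path test: first page index falls on a bitmap word boundary */
    int start_ok  = ((word * BITS_PER_LONG) << TARGET_PAGE_BITS) == start_global;
    /* the extra length test that patch 2/2 removes */
    int length_ok = !(length & ((BITS_PER_LONG << TARGET_PAGE_BITS) - 1));

    printf("start aligned: %d, length aligned: %d\n", start_ok, length_ok);
    /* prints: start aligned: 1, length aligned: 0 */
    return 0;
}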
From patchwork Tue Mar 10 09:17:04 2020
X-Patchwork-Submitter: zhukeqian
X-Patchwork-Id: 11428903
From: Keqian Zhu
Subject: [PATCH v2 2/2] migration: don't require length alignment when choosing the fast dirty sync path
Date: Tue, 10 Mar 2020 17:17:04 +0800
Message-ID: <20200310091704.42340-3-zhukeqian1@huawei.com>
In-Reply-To: <20200310091704.42340-1-zhukeqian1@huawei.com>
References: <20200310091704.42340-1-zhukeqian1@huawei.com>
Cc: Paolo Bonzini, qemu-arm@nongnu.org, Keqian Zhu, "Dr. David Alan Gilbert", wanghaibin.wang@huawei.com

In commit aa777e297c84 ("cpu_physical_memory_sync_dirty_bitmap: Another
alignment fix"), the RAMBlock length was required to be aligned to word
pages before the fast dirty sync path could be chosen. The reason given
was: "If the Ramblock is less than 64 pages in length that long can
contain bits representing two different RAMBlocks, but the code will
update the bmap belonging to the 1st RAMBlock only while having updated
the total dirty page count for both."

That fix predates commit 801110ab22be ("find_ram_offset: Align
ram_addr_t allocation on long boundaries"), which aligns ram_addr_t
allocation on long boundaries, so nowadays we can no longer "update the
total dirty page count for both".

By removing the length alignment constraint from the fast path, we can
always take the fast dirty sync path as long as start_global is aligned
to a word page.

Signed-off-by: Keqian Zhu
---
 include/exec/ram_addr.h | 5 ++---
 1 file changed, 2 insertions(+), 3 deletions(-)

diff --git a/include/exec/ram_addr.h b/include/exec/ram_addr.h
index 8311efb7bc..57b3edf376 100644
--- a/include/exec/ram_addr.h
+++ b/include/exec/ram_addr.h
@@ -450,9 +450,8 @@ uint64_t cpu_physical_memory_sync_dirty_bitmap(RAMBlock *rb,
     uint64_t num_dirty = 0;
     unsigned long *dest = rb->bmap;
 
-    /* start address and length is aligned at the start of a word? */
-    if (((word * BITS_PER_LONG) << TARGET_PAGE_BITS) == start_global &&
-        !(length & ((BITS_PER_LONG << TARGET_PAGE_BITS) - 1))) {
+    /* start address is aligned at the start of a word? */
+    if (((word * BITS_PER_LONG) << TARGET_PAGE_BITS) == start_global) {
         int k;
         int nr = BITS_TO_LONGS(length >> TARGET_PAGE_BITS);
         unsigned long * const *src;
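To make the reasoning in the commit message concrete, here is a small
standalone sketch (again not QEMU code; the 100-page sample region and
64-bit longs are assumptions) of what the fast path now does with an
unaligned length: BITS_TO_LONGS rounds the word count up, and once
RAMBlock offsets are long-aligned, the tail bits of the last word cannot
belong to a neighbouring block, so walking whole words is harmless.

/* sketch2.c -- illustrative only, not part of QEMU */
#include <stdio.h>
#include <limits.h>

#define BITS_PER_LONG     (sizeof(unsigned long) * CHAR_BIT)
#define BITS_TO_LONGS(nr) (((nr) + BITS_PER_LONG - 1) / BITS_PER_LONG)

int main(void)
{
    /* a 100-page region: not a multiple of 64 pages, so the old test
     * would have forced this sync onto the slow per-page path */
    unsigned long pages = 100;
    unsigned long nr    = BITS_TO_LONGS(pages);        /* 2 whole words  */
    unsigned long pad   = nr * BITS_PER_LONG - pages;  /* 28 tail bits   */

    /* Before commit 801110ab22be those 28 bits could hold dirty state of
     * the next RAMBlock, corrupting real_dirty_pages. With long-aligned
     * allocation the next block starts at the next word boundary, so the
     * tail bits stay zero and the whole-word walk is safe. */
    printf("pages=%lu, words walked=%lu, tail padding bits=%lu\n",
           pages, nr, pad);
    return 0;
}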