From patchwork Thu May  4 03:07:50 2017
X-Patchwork-Submitter: Eric Blake <eblake@redhat.com>
X-Patchwork-Id: 9710741
From: Eric Blake <eblake@redhat.com>
To: qemu-devel@nongnu.org
Date: Wed, 3 May 2017 22:07:50 -0500
Message-Id: <20170504030755.1001-6-eblake@redhat.com>
In-Reply-To: <20170504030755.1001-1-eblake@redhat.com>
References: <20170504030755.1001-1-eblake@redhat.com>
Subject: [Qemu-devel] [PATCH v12 05/10] qcow2: Optimize zero_single_l2() to
 minimize L2 churn
Cc: kwolf@redhat.com, qemu-block@nongnu.org, mreitz@redhat.com

Similar to discard_single_l2(), we should try to avoid dirtying the L2
cache when the cluster we are changing already has the right
characteristics.

Note that by the time we get to zero_single_l2(), BDRV_REQ_MAY_UNMAP is
a requirement to unallocate a cluster (this is because the block layer
clears that flag if discard.* flags during open requested that we never
punch holes - see the conversation around commit 170f4b2e,
https://lists.gnu.org/archive/html/qemu-devel/2016-09/msg07306.html).
Therefore, this patch can only reuse a zero cluster as-is if either
unmapping is not requested, or if the zero cluster was not associated
with an allocation.

Technically, there are some cases where an unallocated cluster already
reads as all zeroes (namely, when there is no backing file [easy: check
bs->backing], or when the backing file also reads as zeroes [harder: we
can't check bdrv_get_block_status since we are already holding the
lock]), where the guest would not immediately see a difference if we
left that cluster unallocated.  But if the user did not request
unmapping, leaving an unallocated cluster is wrong; and even if the
user DID request unmapping, keeping a cluster unallocated risks a
subtle semantic change of guest-visible contents if a backing file is
later added, and it is not worth auditing whether all internal uses
such as mirror properly avoid an unmap request.  Thus, this patch is
intentionally limited to just clusters that are already marked as zero.

Signed-off-by: Eric Blake <eblake@redhat.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
---
v12: store cluster type in temporary
v11: reserved for blkdebug half of v10
v10: new patch, replacing earlier attempt to use unallocated clusters,
    and ditching any optimization of v2 files
---
 block/qcow2-cluster.c | 15 +++++++++++++--
 1 file changed, 13 insertions(+), 2 deletions(-)

diff --git a/block/qcow2-cluster.c b/block/qcow2-cluster.c
index 14e2086..78fbe34 100644
--- a/block/qcow2-cluster.c
+++ b/block/qcow2-cluster.c
@@ -1599,6 +1599,7 @@ static int zero_single_l2(BlockDriverState *bs, uint64_t offset,
     int l2_index;
     int ret;
     int i;
+    bool unmap = !!(flags & BDRV_REQ_MAY_UNMAP);
 
     ret = get_cluster_table(bs, offset, &l2_table, &l2_index);
     if (ret < 0) {
@@ -1611,12 +1612,22 @@ static int zero_single_l2(BlockDriverState *bs, uint64_t offset,
 
     for (i = 0; i < nb_clusters; i++) {
         uint64_t old_offset;
+        int cluster_type;
 
         old_offset = be64_to_cpu(l2_table[l2_index + i]);
 
-        /* Update L2 entries */
+        /*
+         * Minimize L2 changes if the cluster already reads back as
+         * zeroes with correct allocation.
+         */
+        cluster_type = qcow2_get_cluster_type(old_offset);
+        if (cluster_type == QCOW2_CLUSTER_ZERO_PLAIN ||
+            (cluster_type == QCOW2_CLUSTER_ZERO_ALLOC && !unmap)) {
+            continue;
+        }
+
         qcow2_cache_entry_mark_dirty(bs, s->l2_table_cache, l2_table);
-        if (old_offset & QCOW_OFLAG_COMPRESSED || flags & BDRV_REQ_MAY_UNMAP) {
+        if (cluster_type == QCOW2_CLUSTER_COMPRESSED || unmap) {
             l2_table[l2_index + i] = cpu_to_be64(QCOW_OFLAG_ZERO);
             qcow2_free_any_clusters(bs, old_offset, 1, QCOW2_DISCARD_REQUEST);
         } else {
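
For reference, the new skip condition in the loop above boils down to the
small predicate below.  This is only a minimal sketch, not part of the
patch: can_skip_zero() is a hypothetical helper name used for
illustration, while qcow2_get_cluster_type() and the QCOW2_CLUSTER_*
values are the existing qcow2 names used in the diff.

static bool can_skip_zero(uint64_t l2_entry, bool unmap)
{
    int type = qcow2_get_cluster_type(l2_entry);

    /* A plain zero cluster already reads as zeroes and owns no data,
     * so there is nothing to update or free. */
    if (type == QCOW2_CLUSTER_ZERO_PLAIN) {
        return true;
    }

    /* An allocated zero cluster also reads as zeroes, but it can only
     * be left untouched when the caller did not request unmapping. */
    return type == QCOW2_CLUSTER_ZERO_ALLOC && !unmap;
}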