From patchwork Mon Aug 13 02:19:54 2018
X-Patchwork-Submitter: Max Reitz <mreitz@redhat.com>
X-Patchwork-Id: 10563833
From: Max Reitz <mreitz@redhat.com>
To: qemu-block@nongnu.org
Cc: Kevin Wolf, Jeff Cody, Fam Zheng, qemu-devel@nongnu.org, Max Reitz
Date: Mon, 13 Aug 2018 04:19:54 +0200
Message-Id: <20180813022006.7216-6-mreitz@redhat.com>
In-Reply-To: <20180813022006.7216-1-mreitz@redhat.com>
References: <20180813022006.7216-1-mreitz@redhat.com>
Subject: [Qemu-devel] [PATCH 05/17] mirror: Remove bytes_handled, part 2

Remove mirror_perform()'s return value, instead aligning the @offset
and @bytes it receives.

We needed the return value for two reasons:
(1) So that "offset += io_bytes" would result in an aligned offset, and
(2) so that io_bytes_acct is kind of aligned (the tail is aligned, but
    the head is not).

(2) does not make sense.  If we want to account for the bytes we have to
copy, we should not have used the returned io_bytes as the basis, but its
original value before the mirror_perform() call.  If we want to account
for the bytes we have actually copied, we should not disregard the
potential alignment head; bytes_handled only includes the tail, so that
was wrong, too.  Let's go for the fully aligned value (the number of
bytes actually copied), because apparently we do want some alignment (or
we would have just added io_bytes before this patch instead of
bytes_handled).

(1) can be achieved by simply letting mirror_perform() return the aligned
values to its callers, which is what this patch does.

Signed-off-by: Max Reitz <mreitz@redhat.com>
---
 block/mirror.c | 56 +++++++++++++++-----------------------------------
 1 file changed, 17 insertions(+), 39 deletions(-)

diff --git a/block/mirror.c b/block/mirror.c
index 3234b8b687..f05404e557 100644
--- a/block/mirror.c
+++ b/block/mirror.c
@@ -89,7 +89,7 @@ struct MirrorOp {
     MirrorBlockJob *s;
     QEMUIOVector qiov;
     int64_t offset;
-    uint64_t bytes;
+    int64_t bytes;
 
     bool is_pseudo_op;
     bool is_active_write;
@@ -239,13 +239,10 @@ static inline int64_t mirror_clip_bytes(MirrorBlockJob *s,
     return MIN(bytes, s->bdev_length - offset);
 }
 
-/* Round offset and/or bytes to target cluster if COW is needed, and
- * return the offset of the adjusted tail against original. */
-static int mirror_cow_align(MirrorBlockJob *s, int64_t *offset,
-                            uint64_t *bytes)
+/* Round offset and/or bytes to target cluster if COW is needed */
+static void mirror_cow_align(MirrorBlockJob *s, int64_t *offset, int64_t *bytes)
 {
     bool need_cow;
-    int ret = 0;
     int64_t align_offset = *offset;
     int64_t align_bytes = *bytes;
     int max_bytes = s->granularity * s->max_iov;
@@ -268,11 +265,8 @@ static int mirror_cow_align(MirrorBlockJob *s, int64_t *offset,
      * that doesn't matter because it's already the end of source image. */
     align_bytes = mirror_clip_bytes(s, align_offset, align_bytes);
 
-    ret = align_offset + align_bytes - (*offset + *bytes);
     *offset = align_offset;
     *bytes = align_bytes;
-    assert(ret >= 0);
-    return ret;
 }
 
 static inline void coroutine_fn
@@ -304,16 +298,9 @@ static inline void coroutine_fn
 /*
  * Restrict *bytes to how much we can actually handle, and align the
  * [*offset, *bytes] range to clusters if COW is needed.
- *
- * *bytes_handled is set to the number of bytes copied after and
- * including offset, excluding any bytes copied prior to offset due
- * to alignment.  This will be *bytes if no alignment is necessary, or
- * (new_end - *offset) if the tail is rounded up or down due to
- * alignment or buffer limit.
  */
 static void mirror_align_for_copy(MirrorBlockJob *s,
-                                  int64_t *offset, uint64_t *bytes,
-                                  int64_t *bytes_handled)
+                                  int64_t *offset, int64_t *bytes)
 {
     uint64_t max_bytes;
 
@@ -323,13 +310,11 @@ static void mirror_align_for_copy(MirrorBlockJob *s,
     *bytes = MIN(s->buf_size, MIN(max_bytes, *bytes));
     assert(*bytes);
     assert(*bytes < BDRV_REQUEST_MAX_BYTES);
-    *bytes_handled = *bytes;
 
     if (s->cow_bitmap) {
-        *bytes_handled += mirror_cow_align(s, offset, bytes);
+        mirror_cow_align(s, offset, bytes);
     }
-    /* Cannot exceed BDRV_REQUEST_MAX_BYTES + INT_MAX */
-    assert(*bytes_handled <= UINT_MAX);
+    assert(*bytes <= s->buf_size);
 
     /* The offset is granularity-aligned because:
      * 1) Caller passes in aligned values;
@@ -402,28 +387,23 @@ static void coroutine_fn mirror_co_discard(void *opaque)
     mirror_write_complete(op, ret);
 }
 
-static unsigned mirror_perform(MirrorBlockJob *s, int64_t offset,
-                               uint64_t bytes, MirrorMethod mirror_method)
+/* If mirror_method == MIRROR_METHOD_COPY, *offset and *bytes will be
+ * aligned as necessary */
+static void mirror_perform(MirrorBlockJob *s, int64_t *offset, int64_t *bytes,
+                           MirrorMethod mirror_method)
 {
     MirrorOp *op;
     Coroutine *co;
-    int64_t bytes_handled;
-
-    /* FIXME: Drop this assertion */
-    assert(bytes <= UINT_MAX);
 
     if (mirror_method == MIRROR_METHOD_COPY) {
-        mirror_align_for_copy(s, &offset, &bytes, &bytes_handled);
-        assert(bytes_handled <= UINT_MAX);
-    } else {
-        bytes_handled = bytes;
+        mirror_align_for_copy(s, offset, bytes);
     }
 
     op = g_new(MirrorOp, 1);
     *op = (MirrorOp){
         .s      = s,
-        .offset = offset,
-        .bytes  = bytes,
+        .offset = *offset,
+        .bytes  = *bytes,
     };
     qemu_co_queue_init(&op->waiting_requests);
 
@@ -443,8 +423,6 @@ static unsigned mirror_perform(MirrorBlockJob *s, int64_t offset,
 
     QTAILQ_INSERT_TAIL(&s->ops_in_flight, op, next);
     qemu_coroutine_enter(co);
-
-    return bytes_handled;
 }
 
 static uint64_t coroutine_fn mirror_iteration(MirrorBlockJob *s)
@@ -564,7 +542,7 @@ static uint64_t coroutine_fn mirror_iteration(MirrorBlockJob *s)
         }
 
         io_bytes = mirror_clip_bytes(s, offset, io_bytes);
-        io_bytes = mirror_perform(s, offset, io_bytes, mirror_method);
+        mirror_perform(s, &offset, &io_bytes, mirror_method);
         if (mirror_method != MIRROR_METHOD_COPY && write_zeroes_ok) {
             io_bytes_acct = 0;
         } else {
@@ -755,8 +733,8 @@ static int coroutine_fn mirror_dirty_init(MirrorBlockJob *s)
 
         s->initial_zeroing_ongoing = true;
         for (offset = 0; offset < s->bdev_length; ) {
-            int bytes = MIN(s->bdev_length - offset,
-                            QEMU_ALIGN_DOWN(INT_MAX, s->granularity));
+            int64_t bytes = MIN(s->bdev_length - offset,
+                                QEMU_ALIGN_DOWN(INT_MAX, s->granularity));
 
             mirror_throttle(s);
 
@@ -772,7 +750,7 @@ static int coroutine_fn mirror_dirty_init(MirrorBlockJob *s)
             continue;
         }
 
-            mirror_perform(s, &offset, &bytes, MIRROR_METHOD_ZERO);
            offset += bytes;
        }
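
As an aside for readers unfamiliar with the in/out-pointer convention the
patch switches to, below is a minimal standalone sketch (not QEMU code;
align_range() and the fixed 64 KiB "cluster" are invented for
illustration) of a callee widening an [offset, bytes) range in place so
that the caller can afterwards simply advance by the aligned length,
which is what "offset += io_bytes" in mirror_iteration() relies on:

/* Standalone sketch of the in/out parameter convention, assuming a
 * made-up CLUSTER_SIZE; compile with any C99 compiler. */
#include <assert.h>
#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

#define CLUSTER_SIZE 65536  /* stand-in for the target cluster size */

/* Align [*offset, *offset + *bytes) outwards to CLUSTER_SIZE boundaries
 * in place, instead of reporting the adjustment via a return value. */
static void align_range(int64_t *offset, int64_t *bytes)
{
    int64_t start = *offset - (*offset % CLUSTER_SIZE);
    int64_t end   = *offset + *bytes;

    end = ((end + CLUSTER_SIZE - 1) / CLUSTER_SIZE) * CLUSTER_SIZE;

    *offset = start;
    *bytes  = end - start;
    assert(*bytes > 0);
}

int main(void)
{
    int64_t offset = 70000;   /* unaligned head */
    int64_t bytes  = 100000;  /* unaligned tail */

    align_range(&offset, &bytes);

    /* The caller advances by the aligned values it got back, so no
     * separate "bytes handled" bookkeeping is needed. */
    printf("aligned offset=%" PRId64 " bytes=%" PRId64 "\n", offset, bytes);
    return 0;
}

For the example input this prints offset=65536 bytes=131072, i.e. both
the head and the tail rounded out to cluster boundaries.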