From patchwork Tue Jul 3 21:21:23 2018
X-Patchwork-Submitter: John Snow <jsnow@redhat.com>
X-Patchwork-Id: 10505247
From: John Snow <jsnow@redhat.com>
To: Jeff Cody, qemu-block@nongnu.org
References: <20180703034655.792039-1-jcody@redhat.com>
 <20180703034655.792039-4-jcody@redhat.com>
Date: Tue, 3 Jul 2018 17:21:23 -0400
Subject: Re: [Qemu-devel] [Qemu-block] [PULL 3/3] backup: Use copy offloading
Cc: Kevin Wolf, peter.maydell@linaro.org, Vladimir Sementsov-Ogievskiy,
 Fam Zheng, qemu-devel@nongnu.org, Stefan Hajnoczi

On 07/03/2018 12:53 PM, John Snow wrote:
> 
> 
> On 07/02/2018 11:46 PM, Jeff Cody wrote:
>> From: Fam Zheng
>>
>> The implementation is similar to the 'qemu-img convert'.
>> In the beginning of the job, offloaded copy is attempted. If it fails,
>> further I/O will go through the existing bounce buffer code path.
>>
>> Then, as Kevin pointed out, both this and qemu-img convert can benefit
>> from a local check if one request fails because of, for example, the
>> offset is beyond EOF, but another may well be accepted by the protocol
>> layer. This will be implemented separately.
>>
>> Reviewed-by: Stefan Hajnoczi
>> Signed-off-by: Fam Zheng
>> Message-id: 20180703023758.14422-4-famz@redhat.com
>> Signed-off-by: Jeff Cody
>> ---
>>  block/backup.c     | 150 ++++++++++++++++++++++++++++++++-------------
>>  block/trace-events |   1 +
>>  2 files changed, 110 insertions(+), 41 deletions(-)
>>
>> diff --git a/block/backup.c b/block/backup.c
>> index d18be40caf..81895ddbe2 100644
>> --- a/block/backup.c
>> +++ b/block/backup.c
>> @@ -45,6 +45,8 @@ typedef struct BackupBlockJob {
>>      QLIST_HEAD(, CowRequest) inflight_reqs;
>>
>>      HBitmap *copy_bitmap;
>> +    bool use_copy_range;
>> +    int64_t copy_range_size;
>>  } BackupBlockJob;
>>
>>  static const BlockJobDriver backup_job_driver;
>> @@ -86,19 +88,101 @@ static void cow_request_end(CowRequest *req)
>>      qemu_co_queue_restart_all(&req->wait_queue);
>>  }
>>
>> +/* Copy range to target with a bounce buffer and return the bytes copied. If
>> + * error occured, return a negative error number */
>> +static int coroutine_fn backup_cow_with_bounce_buffer(BackupBlockJob *job,
>> +                                                      int64_t start,
>> +                                                      int64_t end,
>> +                                                      bool is_write_notifier,
>> +                                                      bool *error_is_read,
>> +                                                      void **bounce_buffer)
>> +{
>> +    int ret;
>> +    struct iovec iov;
>> +    QEMUIOVector qiov;
>> +    BlockBackend *blk = job->common.blk;
>> +    int nbytes;
>> +
>> +    hbitmap_reset(job->copy_bitmap, start / job->cluster_size, 1);
>> +    nbytes = MIN(job->cluster_size, job->len - start);
>> +    if (!*bounce_buffer) {
>> +        *bounce_buffer = blk_blockalign(blk, job->cluster_size);
>> +    }
>> +    iov.iov_base = *bounce_buffer;
>> +    iov.iov_len = nbytes;
>> +    qemu_iovec_init_external(&qiov, &iov, 1);
>> +
>> +    ret = blk_co_preadv(blk, start, qiov.size, &qiov,
>> +                        is_write_notifier ? BDRV_REQ_NO_SERIALISING : 0);
>> +    if (ret < 0) {
>> +        trace_backup_do_cow_read_fail(job, start, ret);
>> +        if (error_is_read) {
>> +            *error_is_read = true;
>> +        }
>> +        goto fail;
>> +    }
>> +
>> +    if (qemu_iovec_is_zero(&qiov)) {
>> +        ret = blk_co_pwrite_zeroes(job->target, start,
>> +                                   qiov.size, BDRV_REQ_MAY_UNMAP);
>> +    } else {
>> +        ret = blk_co_pwritev(job->target, start,
>> +                             qiov.size, &qiov,
>> +                             job->compress ? BDRV_REQ_WRITE_COMPRESSED : 0);
>> +    }
>> +    if (ret < 0) {
>> +        trace_backup_do_cow_write_fail(job, start, ret);
>> +        if (error_is_read) {
>> +            *error_is_read = false;
>> +        }
>> +        goto fail;
>> +    }
>> +
>> +    return nbytes;
>> +fail:
>> +    hbitmap_set(job->copy_bitmap, start / job->cluster_size, 1);
>> +    return ret;
>> +
>> +}
>> +
>> +/* Copy range to target and return the bytes copied. If error occured, return a
>> + * negative error number. */
>> +static int coroutine_fn backup_cow_with_offload(BackupBlockJob *job,
>> +                                                int64_t start,
>> +                                                int64_t end,
>> +                                                bool is_write_notifier)
>> +{
>> +    int ret;
>> +    int nr_clusters;
>> +    BlockBackend *blk = job->common.blk;
>> +    int nbytes;
>> +
>> +    assert(QEMU_IS_ALIGNED(job->copy_range_size, job->cluster_size));
>> +    nbytes = MIN(job->copy_range_size, end - start);
>> +    nr_clusters = DIV_ROUND_UP(nbytes, job->cluster_size);
>> +    hbitmap_reset(job->copy_bitmap, start / job->cluster_size,
>> +                  nr_clusters);
>> +    ret = blk_co_copy_range(blk, start, job->target, start, nbytes,
>> +                            is_write_notifier ? BDRV_REQ_NO_SERIALISING : 0);
>> +    if (ret < 0) {
>> +        trace_backup_do_cow_copy_range_fail(job, start, ret);
>> +        hbitmap_set(job->copy_bitmap, start / job->cluster_size,
>> +                    nr_clusters);
>> +        return ret;
>> +    }
>> +
>> +    return nbytes;
>> +}
>> +
>>  static int coroutine_fn backup_do_cow(BackupBlockJob *job,
>>                                        int64_t offset, uint64_t bytes,
>>                                        bool *error_is_read,
>>                                        bool is_write_notifier)
>>  {
>> -    BlockBackend *blk = job->common.blk;
>>      CowRequest cow_request;
>> -    struct iovec iov;
>> -    QEMUIOVector bounce_qiov;
>> -    void *bounce_buffer = NULL;
>>      int ret = 0;
>>      int64_t start, end; /* bytes */
>> -    int n; /* bytes */
>> +    void *bounce_buffer = NULL;
>>
>>      qemu_co_rwlock_rdlock(&job->flush_rwlock);
>>
>> @@ -110,60 +194,38 @@ static int coroutine_fn backup_do_cow(BackupBlockJob *job,
>>      wait_for_overlapping_requests(job, start, end);
>>      cow_request_begin(&cow_request, job, start, end);
>>
>> -    for (; start < end; start += job->cluster_size) {
>> +    while (start < end) {
>>          if (!hbitmap_get(job->copy_bitmap, start / job->cluster_size)) {
>>              trace_backup_do_cow_skip(job, start);
>> +            start += job->cluster_size;
>>              continue; /* already copied */
>>          }
>> -        hbitmap_reset(job->copy_bitmap, start / job->cluster_size, 1);
>>
>>          trace_backup_do_cow_process(job, start);
>>
>> -        n = MIN(job->cluster_size, job->len - start);
>> -
>> -        if (!bounce_buffer) {
>> -            bounce_buffer = blk_blockalign(blk, job->cluster_size);
>> -        }
>> -        iov.iov_base = bounce_buffer;
>> -        iov.iov_len = n;
>> -        qemu_iovec_init_external(&bounce_qiov, &iov, 1);
>> -
>> -        ret = blk_co_preadv(blk, start, bounce_qiov.size, &bounce_qiov,
>> -                            is_write_notifier ? BDRV_REQ_NO_SERIALISING : 0);
>> -        if (ret < 0) {
>> -            trace_backup_do_cow_read_fail(job, start, ret);
>> -            if (error_is_read) {
>> -                *error_is_read = true;
>> +        if (job->use_copy_range) {
>> +            ret = backup_cow_with_offload(job, start, end, is_write_notifier);
>> +            if (ret < 0) {
>> +                job->use_copy_range = false;
>>              }
>> -            hbitmap_set(job->copy_bitmap, start / job->cluster_size, 1);
>> -            goto out;
>>          }
>> -
>> -        if (buffer_is_zero(iov.iov_base, iov.iov_len)) {
>> -            ret = blk_co_pwrite_zeroes(job->target, start,
>> -                                       bounce_qiov.size, BDRV_REQ_MAY_UNMAP);
>> -        } else {
>> -            ret = blk_co_pwritev(job->target, start,
>> -                                 bounce_qiov.size, &bounce_qiov,
>> -                                 job->compress ? BDRV_REQ_WRITE_COMPRESSED : 0);
>> +        if (!job->use_copy_range) {
>> +            ret = backup_cow_with_bounce_buffer(job, start, end, is_write_notifier,
>> +                                                error_is_read, &bounce_buffer);
>>          }
>>          if (ret < 0) {
>> -            trace_backup_do_cow_write_fail(job, start, ret);
>> -            if (error_is_read) {
>> -                *error_is_read = false;
>> -            }
>> -            hbitmap_set(job->copy_bitmap, start / job->cluster_size, 1);
>> -            goto out;
>> +            break;
>>          }
>>
>>          /* Publish progress, guest I/O counts as progress too. Note that the
>>           * offset field is an opaque progress value, it is not a disk offset.
>>           */
>> -        job->bytes_read += n;
>> -        job_progress_update(&job->common.job, n);
>> +        start += ret;
>> +        job->bytes_read += ret;
>> +        job_progress_update(&job->common.job, ret);
>> +        ret = 0;
>>      }
>>
>> -out:
>>      if (bounce_buffer) {
>>          qemu_vfree(bounce_buffer);
>>      }
>> @@ -665,6 +727,12 @@ BlockJob *backup_job_create(const char *job_id, BlockDriverState *bs,
>>      } else {
>>          job->cluster_size = MAX(BACKUP_CLUSTER_SIZE_DEFAULT, bdi.cluster_size);
>>      }
>> +    job->use_copy_range = true;
>> +    job->copy_range_size = MIN_NON_ZERO(blk_get_max_transfer(job->common.blk),
>> +                                        blk_get_max_transfer(job->target));
>> +    job->copy_range_size = MAX(job->cluster_size,
>> +                               QEMU_ALIGN_UP(job->copy_range_size,
>> +                                             job->cluster_size));
>>
>>      /* Required permissions are already taken with target's blk_new() */
>>      block_job_add_bdrv(&job->common, "target", target, 0, BLK_PERM_ALL,
>> diff --git a/block/trace-events b/block/trace-events
>> index 2d59b53fd3..c35287b48a 100644
>> --- a/block/trace-events
>> +++ b/block/trace-events
>> @@ -42,6 +42,7 @@ backup_do_cow_skip(void *job, int64_t start) "job %p start %"PRId64
>>  backup_do_cow_process(void *job, int64_t start) "job %p start %"PRId64
>>  backup_do_cow_read_fail(void *job, int64_t start, int ret) "job %p start %"PRId64" ret %d"
>>  backup_do_cow_write_fail(void *job, int64_t start, int ret) "job %p start %"PRId64" ret %d"
>> +backup_do_cow_copy_range_fail(void *job, int64_t start, int ret) "job %p start %"PRId64" ret %d"
>>
>>  # blockdev.c
>>  qmp_block_job_cancel(void *job) "job %p"
>>
> 
> As a head's up, this breaks fleecing test 222. Not sure why just yet.
> 

The idiom is "heads up", not "head's up" ... as a heads up.
This appears to break fleecing test 222 in a fun way; when we go to verify
the reads:

```
log('')
log('--- Verifying Data ---')
log('')

for p in (patterns + zeroes):
    cmd = "read -P%s %s %s" % p
    log(cmd)
    qemu_io_log('-r', '-f', 'raw', '-c', cmd, nbd_uri)
```

it actually reads zeroes on any region that was overwritten fully or
partially, so these three regions:

    patterns = [("0x5d", "0", "64k"),
                ("0xd5", "1M", "64k"),
                ("0xdc", "32M", "64k"),
                ...

all read solid zeroes. Interestingly enough, the files on disk -- the
fleecing node and the base image -- are bit identical to each other.

Reverting this patch fixes the fleecing case, but it can also be fixed by
simply:

```
MIN_NON_ZERO(blk_get_max_transfer(job->common.blk),
             blk_get_max_transfer(job->target));
job->copy_range_size = MAX(job->cluster_size,
```

I haven't gotten any deeper on this just yet, sorry. Will look tonight, but
otherwise I'll see you Thursday after the American holiday.

--js

diff --git a/block/backup.c b/block/backup.c
index 81895ddbe2..85bc3762c5 100644
--- a/block/backup.c
+++ b/block/backup.c
@@ -727,7 +727,7 @@ BlockJob *backup_job_create(const char *job_id, BlockDriverState *bs,
     } else {
         job->cluster_size = MAX(BACKUP_CLUSTER_SIZE_DEFAULT, bdi.cluster_size);
     }
-    job->use_copy_range = true;
+    job->use_copy_range = false;
     job->copy_range_size =
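For anyone else poking at this, the control flow backup_do_cow() has after the
patch is essentially the loop below -- a Python sketch with stand-in callables
(names are mine, not QEMU code), omitting the copy_bitmap skip logic:

```python
def backup_do_cow_sketch(copy_offload, copy_bounce, start, end,
                         use_copy_range=True):
    """Model of the post-patch backup_do_cow() loop: try the offloaded
    copy first; any offload failure permanently switches the job to the
    bounce-buffer path, whose own failure aborts with the error code.
    Both callables take (start, end) and return bytes copied (> 0) or a
    negative errno.
    """
    ret = 0
    while start < end:
        if use_copy_range:
            ret = copy_offload(start, end)
            if ret < 0:
                # Offload failed: fall back for the rest of the job.
                use_copy_range = False
        if not use_copy_range:
            ret = copy_bounce(start, end)
        if ret < 0:
            break  # a real I/O error from the fallback path
        start += ret  # advance by the bytes actually copied
        ret = 0
    return ret
```

Note the subtlety the patch relies on: the first offload failure is not
surfaced as an I/O error, because the bounce-buffer path retries the same
range within the same iteration.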