From patchwork Wed Jun 6 18:37:38 2018
X-Patchwork-Submitter: Max Reitz
X-Patchwork-Id: 10450823
From: Max Reitz <mreitz@redhat.com>
To: qemu-block@nongnu.org
Cc: Kevin Wolf, qemu-devel@nongnu.org, Max Reitz
Date: Wed, 6 Jun 2018 20:37:38 +0200
Message-Id: <20180606183738.31688-1-mreitz@redhat.com>
Subject: [Qemu-devel] [PATCH] iotests: Fix 219's timing

219 has two issues that may
lead to sporadic failure, both of which are the result of issuing
query-jobs too early after a job has been modified. This can then lead
to different results based on whether the modification has taken effect
already or not.

First, query-jobs is issued right after the job has been created.
Besides its current progress possibly being in any random state (which
has already been taken care of), its total progress too is basically
arbitrary, because the job may not yet have been able to determine it.
This patch addresses this by just filtering the total progress, like
what has been done for the current progress already. However, for more
clarity, the filtering is changed to replace the values by a string
'FILTERED' instead of deleting them.

Secondly, query-jobs is issued right after a job has been resumed. The
job may or may not yet have had the time to actually perform any I/O,
and thus its current progress may or may not have advanced. To make
sure it has indeed advanced (which is what the reference output already
assumes), insert a sleep of 100 ms before query-jobs is invoked. With a
slice time of 100 ms, a buffer size of 64 kB and a speed of 256 kB/s,
this should be the right amount of time to let the job advance by
exactly 64 kB.

Signed-off-by: Max Reitz <mreitz@redhat.com>
---
v2: Changed the query-jobs progress filtering [Eric]
---
 tests/qemu-iotests/219     | 12 ++++++++----
 tests/qemu-iotests/219.out | 10 +++++-----
 2 files changed, 13 insertions(+), 9 deletions(-)
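
As a rough sanity check of the numbers above (illustrative only, not
part of the patch below, and assuming the job copies one buffer-sized
chunk per rate-limit grant), a few lines of Python show why a 100 ms
sleep should observe exactly one 64 kB step of progress:

    # Illustrative values, taken from the commit message above
    buf_size   = 64 * 1024     # bytes copied per request
    speed      = 256 * 1024    # rate limit, in bytes per second
    slice_time = 0.1           # rate-limit slice length, in seconds
    sleep_time = 0.1           # delay inserted before query-jobs

    # One 64 kB request uses up 0.25 s of rate-limit budget, so a second
    # request cannot complete within the 0.1 s sleep ...
    assert sleep_time < buf_size / float(speed)

    # ... while the sleep still spans a whole slice, which should leave
    # the first request after the resume enough time to finish.
    assert sleep_time >= slice_time

Any value between one slice (100 ms) and one request's worth of budget
(250 ms) would satisfy both conditions; the patch's 100 ms sits at the
lower end of that window.
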
diff --git a/tests/qemu-iotests/219 b/tests/qemu-iotests/219
index 898a26eef0..6931c5e449 100755
--- a/tests/qemu-iotests/219
+++ b/tests/qemu-iotests/219
@@ -20,6 +20,7 @@
 # Check using the job-* QMP commands with block jobs
 
 import iotests
+import time
 
 iotests.verify_image_format(supported_fmts=['qcow2'])
 
@@ -46,6 +47,8 @@ def test_pause_resume(vm):
             iotests.log(vm.qmp(resume_cmd, **{resume_arg: 'job0'}))
             iotests.log(iotests.filter_qmp_event(vm.event_wait('JOB_STATUS_CHANGE')))
 
+            # Let the job proceed before querying its progress
+            time.sleep(0.1)
             iotests.log(vm.qmp('query-jobs'))
 
 def test_job_lifecycle(vm, job, job_args, has_ready=False):
@@ -58,12 +61,13 @@ def test_job_lifecycle(vm, job, job_args, has_ready=False):
     iotests.log(vm.qmp(job, job_id='job0', **job_args))
 
     # Depending on the storage, the first request may or may not have completed
-    # yet, so filter out the progress. Later query-job calls don't need the
-    # filtering because the progress is made deterministic by the block job
-    # speed
+    # yet (and the total progress may not have been fully determined yet), so
+    # filter out the progress. Later query-job calls don't need the filtering
+    # because the progress is made deterministic by the block job speed
     result = vm.qmp('query-jobs')
     for j in result['return']:
-        del j['current-progress']
+        j['current-progress'] = 'FILTERED'
+        j['total-progress'] = 'FILTERED'
     iotests.log(result)
 
     # undefined -> created -> running
diff --git a/tests/qemu-iotests/219.out b/tests/qemu-iotests/219.out
index 346801b655..6dc07bc41e 100644
--- a/tests/qemu-iotests/219.out
+++ b/tests/qemu-iotests/219.out
@@ -3,7 +3,7 @@ Launching VM...
 
 Starting block job: drive-mirror (auto-finalize: True; auto-dismiss: True)
 {u'return': {}}
-{u'return': [{u'status': u'running', u'total-progress': 4194304, u'id': u'job0', u'type': u'mirror'}]}
+{u'return': [{u'status': u'running', u'current-progress': 'FILTERED', u'total-progress': 'FILTERED', u'id': u'job0', u'type': u'mirror'}]}
 {u'timestamp': {u'seconds': 'SECS', u'microseconds': 'USECS'}, u'data': {u'status': u'created', u'id': u'job0'}, u'event': u'JOB_STATUS_CHANGE'}
 {u'timestamp': {u'seconds': 'SECS', u'microseconds': 'USECS'}, u'data': {u'status': u'running', u'id': u'job0'}, u'event': u'JOB_STATUS_CHANGE'}
 
@@ -93,7 +93,7 @@ Waiting for PENDING state...
 
 Starting block job: drive-backup (auto-finalize: True; auto-dismiss: True)
 {u'return': {}}
-{u'return': [{u'status': u'running', u'total-progress': 4194304, u'id': u'job0', u'type': u'backup'}]}
+{u'return': [{u'status': u'running', u'current-progress': 'FILTERED', u'total-progress': 'FILTERED', u'id': u'job0', u'type': u'backup'}]}
 {u'timestamp': {u'seconds': 'SECS', u'microseconds': 'USECS'}, u'data': {u'status': u'created', u'id': u'job0'}, u'event': u'JOB_STATUS_CHANGE'}
 {u'timestamp': {u'seconds': 'SECS', u'microseconds': 'USECS'}, u'data': {u'status': u'running', u'id': u'job0'}, u'event': u'JOB_STATUS_CHANGE'}
 
@@ -144,7 +144,7 @@ Waiting for PENDING state...
 
 Starting block job: drive-backup (auto-finalize: True; auto-dismiss: False)
 {u'return': {}}
-{u'return': [{u'status': u'running', u'total-progress': 4194304, u'id': u'job0', u'type': u'backup'}]}
+{u'return': [{u'status': u'running', u'current-progress': 'FILTERED', u'total-progress': 'FILTERED', u'id': u'job0', u'type': u'backup'}]}
 {u'timestamp': {u'seconds': 'SECS', u'microseconds': 'USECS'}, u'data': {u'status': u'created', u'id': u'job0'}, u'event': u'JOB_STATUS_CHANGE'}
 {u'timestamp': {u'seconds': 'SECS', u'microseconds': 'USECS'}, u'data': {u'status': u'running', u'id': u'job0'}, u'event': u'JOB_STATUS_CHANGE'}
 
@@ -203,7 +203,7 @@ Waiting for PENDING state...
 
 Starting block job: drive-backup (auto-finalize: False; auto-dismiss: True)
 {u'return': {}}
-{u'return': [{u'status': u'running', u'total-progress': 4194304, u'id': u'job0', u'type': u'backup'}]}
+{u'return': [{u'status': u'running', u'current-progress': 'FILTERED', u'total-progress': 'FILTERED', u'id': u'job0', u'type': u'backup'}]}
 {u'timestamp': {u'seconds': 'SECS', u'microseconds': 'USECS'}, u'data': {u'status': u'created', u'id': u'job0'}, u'event': u'JOB_STATUS_CHANGE'}
 {u'timestamp': {u'seconds': 'SECS', u'microseconds': 'USECS'}, u'data': {u'status': u'running', u'id': u'job0'}, u'event': u'JOB_STATUS_CHANGE'}
 
@@ -262,7 +262,7 @@ Waiting for PENDING state...
 
 Starting block job: drive-backup (auto-finalize: False; auto-dismiss: False)
 {u'return': {}}
-{u'return': [{u'status': u'running', u'total-progress': 4194304, u'id': u'job0', u'type': u'backup'}]}
+{u'return': [{u'status': u'running', u'current-progress': 'FILTERED', u'total-progress': 'FILTERED', u'id': u'job0', u'type': u'backup'}]}
 {u'timestamp': {u'seconds': 'SECS', u'microseconds': 'USECS'}, u'data': {u'status': u'created', u'id': u'job0'}, u'event': u'JOB_STATUS_CHANGE'}
 {u'timestamp': {u'seconds': 'SECS', u'microseconds': 'USECS'}, u'data': {u'status': u'running', u'id': u'job0'}, u'event': u'JOB_STATUS_CHANGE'}