From patchwork Fri Nov 29 21:35:05 2019
X-Patchwork-Submitter: Peter Xu <peterx@redhat.com>
X-Patchwork-Id: 11267633
From: Peter Xu <peterx@redhat.com>
To: linux-kernel@vger.kernel.org, kvm@vger.kernel.org
Cc: Sean Christopherson, Paolo Bonzini, "Dr . David Alan Gilbert",
    peterx@redhat.com, Vitaly Kuznetsov
Subject: [PATCH RFC 15/15] KVM: selftests: Test dirty ring waitqueue
Date: Fri, 29 Nov 2019 16:35:05 -0500
Message-Id: <20191129213505.18472-16-peterx@redhat.com>
X-Mailer: git-send-email 2.21.0
In-Reply-To: <20191129213505.18472-1-peterx@redhat.com>
References: <20191129213505.18472-1-peterx@redhat.com>
List-ID: kvm.vger.kernel.org

This is a bit tricky, but should still be reasonable.
Firstly, we introduce a completely new dirty log test type, because we
need to force the vcpu into a blocked state by dead-looping on vcpu_run
even when it wants to exit to userspace.  The tricky part is that we
need to read procfs to make sure the vcpu thread has entered
TASK_UNINTERRUPTIBLE.  After that, we reset the ring, and the reset
should kick the vcpu out of that state.

Signed-off-by: Peter Xu <peterx@redhat.com>
---
 tools/testing/selftests/kvm/dirty_log_test.c | 101 +++++++++++++++++++
 1 file changed, 101 insertions(+)

diff --git a/tools/testing/selftests/kvm/dirty_log_test.c b/tools/testing/selftests/kvm/dirty_log_test.c
index c9db136a1f12..41bc015131e1 100644
--- a/tools/testing/selftests/kvm/dirty_log_test.c
+++ b/tools/testing/selftests/kvm/dirty_log_test.c
@@ -16,6 +16,7 @@
 #include
 #include
 #include
+#include
 #include
 #include
 #include
@@ -151,12 +152,16 @@ enum log_mode_t {
 	/* Use dirty ring for logging */
 	LOG_MODE_DIRTY_RING = 2,
 
+	/* Dirty ring test but tailored for the waitqueue */
+	LOG_MODE_DIRTY_RING_WP = 3,
+
 	LOG_MODE_NUM,
 };
 
 /* Mode of logging.  Default is LOG_MODE_DIRTY_LOG */
 static enum log_mode_t host_log_mode;
 pthread_t vcpu_thread;
+pid_t vcpu_thread_tid;
 static uint32_t test_dirty_ring_count = TEST_DIRTY_RING_COUNT;
 
 /* Only way to pass this to the signal handler */
@@ -221,6 +226,18 @@ static void dirty_ring_create_vm_done(struct kvm_vm *vm)
 			    sizeof(struct kvm_dirty_gfn));
 }
 
+static void dirty_ring_wq_create_vm_done(struct kvm_vm *vm)
+{
+	/*
+	 * Force to use a relatively small ring size, so it is easier
+	 * to get full.  Better bigger than PML size, hence 1024.
+	 */
+	test_dirty_ring_count = 1024;
+	DEBUG("Forcing ring size: %u\n", test_dirty_ring_count);
+	vm_enable_dirty_ring(vm, test_dirty_ring_count *
+			     sizeof(struct kvm_dirty_gfn));
+}
+
 static uint32_t dirty_ring_collect_one(struct kvm_dirty_gfn *dirty_gfns,
 				       struct kvm_dirty_ring_indexes *indexes,
 				       int slot, void *bitmap,
@@ -295,6 +312,81 @@ static void dirty_ring_collect_dirty_pages(struct kvm_vm *vm, int slot,
 	DEBUG("Iteration %ld collected %u pages\n", iteration, count);
 }
 
+/*
+ * Return 'D' for uninterruptible, 'R' for running, 'S' for
+ * interruptible, etc.
+ */
+static char read_tid_status_char(unsigned int tid)
+{
+	int fd, ret, line = 0;
+	char buf[128], *c;
+
+	snprintf(buf, sizeof(buf) - 1, "/proc/%u/status", tid);
+	fd = open(buf, O_RDONLY);
+	TEST_ASSERT(fd >= 0, "open status file failed: %s", buf);
+	ret = read(fd, buf, sizeof(buf) - 1);
+	TEST_ASSERT(ret > 0, "read status file failed: %d, %d", ret, errno);
+	close(fd);
+
+	/* Skip 2 lines */
+	for (c = buf; c < buf + sizeof(buf) && line < 2; c++) {
+		if (*c == '\n') {
+			line++;
+			continue;
+		}
+	}
+
+	/* Skip "State:" and the whitespace after it */
+	while (*c != ':') c++;
+	c++;
+	while (*c == ' ') c++;
+	c++;
+
+	return *c;
+}
+
+static void dirty_ring_wq_collect_dirty_pages(struct kvm_vm *vm, int slot,
+					      void *bitmap, uint32_t num_pages)
+{
+	uint32_t count = test_dirty_ring_count;
+	struct kvm_run *state = vcpu_state(vm, VCPU_ID);
+	struct kvm_dirty_ring_indexes *indexes = &state->vcpu_ring_indexes;
+	uint32_t avail;
+
+	while (count--) {
+		/*
+		 * Force vcpu to run enough time to make sure we
+		 * trigger the ring full case
+		 */
+		sem_post(&dirty_ring_vcpu_cont);
+	}
+
+	/* Make sure it's stuck */
+	TEST_ASSERT(vcpu_thread_tid, "TID not inited");
+	/*
+	 * Wait for the "State:" field in /proc/pid/status to change
+	 * to "D".  "D" stands for "D (disk sleep)",
+	 * TASK_UNINTERRUPTIBLE
+	 */
+	while (read_tid_status_char(vcpu_thread_tid) != 'D') {
+		usleep(1000);
+	}
+	DEBUG("Now VCPU thread dirty ring full\n");
+
+	avail = READ_ONCE(indexes->avail_index);
+	/* Assuming we've consumed all */
+	WRITE_ONCE(indexes->fetch_index, avail);
+
+	kvm_vm_reset_dirty_ring(vm);
+
+	/* Wait for it to be awake */
+	while (read_tid_status_char(vcpu_thread_tid) == 'D') {
+		usleep(1000);
+	}
+	DEBUG("VCPU thread successfully woken up\n");
+
+	exit(0);
+}
+
 static void dirty_ring_after_vcpu_run(struct kvm_vm *vm, int ret, int err)
 {
 	struct kvm_run *run = vcpu_state(vm, VCPU_ID);
@@ -353,6 +445,12 @@ struct log_mode {
 		.before_vcpu_join = dirty_ring_before_vcpu_join,
 		.after_vcpu_run = dirty_ring_after_vcpu_run,
 	},
+	{
+		.name = "dirty-ring-wait-queue",
+		.create_vm_done = dirty_ring_wq_create_vm_done,
+		.collect_dirty_pages = dirty_ring_wq_collect_dirty_pages,
+		.after_vcpu_run = dirty_ring_after_vcpu_run,
+	},
 };
 
 /*
@@ -422,6 +520,9 @@ static void *vcpu_worker(void *data)
 	uint64_t *guest_array;
 	struct sigaction sigact;
 
+	vcpu_thread_tid = syscall(SYS_gettid);
+	printf("VCPU Thread ID: %u\n", vcpu_thread_tid);
+
 	current_vm = vm;
 	memset(&sigact, 0, sizeof(sigact));
 	sigact.sa_handler = vcpu_sig_handler;