From patchwork Tue Mar 31 18:59:47 2020
X-Patchwork-Submitter: Peter Xu
X-Patchwork-Id: 11468259
From: Peter Xu
To: linux-kernel@vger.kernel.org, kvm@vger.kernel.org
Cc: Kevin Tian, "Michael S. Tsirkin", Jason Wang, Sean Christopherson,
 Christophe de Dinechin, Yan Zhao, Alex Williamson, Paolo Bonzini,
 Vitaly Kuznetsov, "Dr. David Alan Gilbert", peterx@redhat.com
Subject: [PATCH v8 01/14] KVM: X86: Change parameter for fast_page_fault tracepoint
Date: Tue, 31 Mar 2020 14:59:47 -0400
Message-Id: <20200331190000.659614-2-peterx@redhat.com>
In-Reply-To: <20200331190000.659614-1-peterx@redhat.com>
References: <20200331190000.659614-1-peterx@redhat.com>

It would be clearer to dump the return value, so that it is easy to
see whether we went through the fast path for handling the current
page fault. Remove the last two parameters, because the old/new sptes
are already dumped on the same line anyway.

Signed-off-by: Peter Xu
---
 arch/x86/kvm/mmutrace.h | 9 ++-------
 1 file changed, 2 insertions(+), 7 deletions(-)

diff --git a/arch/x86/kvm/mmutrace.h b/arch/x86/kvm/mmutrace.h
index ffcd96fc02d0..ef523e760743 100644
--- a/arch/x86/kvm/mmutrace.h
+++ b/arch/x86/kvm/mmutrace.h
@@ -244,9 +244,6 @@ TRACE_EVENT(
 		  __entry->access)
 );
 
-#define __spte_satisfied(__spte)				\
-	(__entry->retry && is_writable_pte(__entry->__spte))
-
 TRACE_EVENT(
 	fast_page_fault,
 	TP_PROTO(struct kvm_vcpu *vcpu, gpa_t cr2_or_gpa, u32 error_code,
@@ -274,12 +271,10 @@ TRACE_EVENT(
 	),
 
 	TP_printk("vcpu %d gva %llx error_code %s sptep %p old %#llx"
-		  " new %llx spurious %d fixed %d", __entry->vcpu_id,
+		  " new %llx ret %d", __entry->vcpu_id,
 		  __entry->cr2_or_gpa, __print_flags(__entry->error_code, "|",
 		  kvm_mmu_trace_pferr_flags), __entry->sptep,
-		  __entry->old_spte, __entry->new_spte,
-		  __spte_satisfied(old_spte), __spte_satisfied(new_spte)
-	)
+		  __entry->old_spte, __entry->new_spte, __entry->retry)
 );
 
 TRACE_EVENT(

From patchwork Tue Mar 31 18:59:48 2020
X-Patchwork-Submitter: Peter Xu
X-Patchwork-Id: 11468261
From: Peter Xu
To: linux-kernel@vger.kernel.org, kvm@vger.kernel.org
Cc: Kevin Tian, "Michael S. Tsirkin", Jason Wang, Sean Christopherson,
 Christophe de Dinechin, Yan Zhao, Alex Williamson, Paolo Bonzini,
 Vitaly Kuznetsov, "Dr. David Alan Gilbert", peterx@redhat.com
Subject: [PATCH v8 02/14] KVM: Cache as_id in kvm_memory_slot
Date: Tue, 31 Mar 2020 14:59:48 -0400
Message-Id: <20200331190000.659614-3-peterx@redhat.com>
In-Reply-To: <20200331190000.659614-1-peterx@redhat.com>
References: <20200331190000.659614-1-peterx@redhat.com>

Cache the address space ID just like the slot ID. It will be used in
order to fill in the dirty ring entries.
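(Illustrative sketch, not part of this patch: later in this series the
cached as_id is packed together with the slot ID into the single u32
"slot" field of a dirty ring entry. The helper names below are
hypothetical; the bit layout matches the code added in patch 05.)

	#include <stdint.h>

	/* High 16 bits: address space ID; low 16 bits: slot ID. */
	static inline uint32_t pack_slot(uint16_t as_id, uint16_t id)
	{
		return ((uint32_t)as_id << 16) | id;
	}

	static inline uint16_t slot_as_id(uint32_t slot)
	{
		return slot >> 16;
	}

	static inline uint16_t slot_id(uint32_t slot)
	{
		return (uint16_t)slot;
	}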
Suggested-by: Paolo Bonzini
Suggested-by: Sean Christopherson
Signed-off-by: Peter Xu
---
 include/linux/kvm_host.h | 1 +
 virt/kvm/kvm_main.c      | 1 +
 2 files changed, 2 insertions(+)

diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h
index f6a1905da9bf..515570116b60 100644
--- a/include/linux/kvm_host.h
+++ b/include/linux/kvm_host.h
@@ -346,6 +346,7 @@ struct kvm_memory_slot {
 	unsigned long userspace_addr;
 	u32 flags;
 	short id;
+	u16 as_id;
 };
 
 static inline unsigned long kvm_dirty_bitmap_bytes(struct kvm_memory_slot *memslot)
diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
index f744bc603c53..c04612726e85 100644
--- a/virt/kvm/kvm_main.c
+++ b/virt/kvm/kvm_main.c
@@ -1243,6 +1243,7 @@ int __kvm_set_memory_region(struct kvm *kvm,
 	if (!mem->memory_size)
 		return kvm_delete_memslot(kvm, mem, &old, as_id);
 
+	new.as_id = as_id;
 	new.id = id;
 	new.base_gfn = mem->guest_phys_addr >> PAGE_SHIFT;
 	new.npages = mem->memory_size >> PAGE_SHIFT;

From patchwork Tue Mar 31 18:59:49 2020
X-Patchwork-Submitter: Peter Xu
X-Patchwork-Id: 11468267
From: Peter Xu
To: linux-kernel@vger.kernel.org, kvm@vger.kernel.org
Cc: Kevin Tian, "Michael S. Tsirkin", Jason Wang, Sean Christopherson,
 Christophe de Dinechin, Yan Zhao, Alex Williamson, Paolo Bonzini,
 Vitaly Kuznetsov, "Dr. David Alan Gilbert", peterx@redhat.com
Subject: [PATCH v8 03/14] KVM: X86: Don't track dirty for KVM_SET_[TSS_ADDR|IDENTITY_MAP_ADDR]
Date: Tue, 31 Mar 2020 14:59:49 -0400
Message-Id: <20200331190000.659614-4-peterx@redhat.com>
In-Reply-To: <20200331190000.659614-1-peterx@redhat.com>
References: <20200331190000.659614-1-peterx@redhat.com>

Originally, we have three code paths that can dirty a page without a
vcpu context on X86:

  - init_rmode_identity_map
  - init_rmode_tss
  - kvmgt_rw_gpa

init_rmode_identity_map and init_rmode_tss will be set up on the
destination VM no matter what (and the guest cannot even see them), so
it does not make sense to track them at all.

To do this, allow __x86_set_memory_region() to return the userspace
address that was just allocated to the caller. Then, in both of the
functions, we directly write to the userspace address instead of
calling the kvm_write_*() APIs.

Another trivial change is that we don't need to explicitly clear the
identity page table root in init_rmode_identity_map(), because we
write the whole page with 4M huge page entries no matter what.
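(A minimal sketch of the new calling convention, illustrative only and
not part of the patch: __x86_set_memory_region() now returns a __user
pointer that encodes errors via ERR_PTR, so callers check it with
IS_ERR() and then write through it with __copy_to_user(). The entry
value below is an arbitrary example.)

	void __user *uaddr;
	u32 entry = _PAGE_PRESENT | _PAGE_RW;

	uaddr = __x86_set_memory_region(kvm,
					IDENTITY_PAGETABLE_PRIVATE_MEMSLOT,
					VMX_EPT_IDENTITY_PAGETABLE_ADDR,
					PAGE_SIZE);
	if (IS_ERR(uaddr))
		return PTR_ERR(uaddr);

	/* Write through the HVA directly; no dirty tracking involved. */
	if (__copy_to_user(uaddr, &entry, sizeof(entry)))
		return -EFAULT;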
Suggested-by: Paolo Bonzini Signed-off-by: Peter Xu --- arch/x86/include/asm/kvm_host.h | 3 +- arch/x86/kvm/svm.c | 9 ++-- arch/x86/kvm/vmx/vmx.c | 82 ++++++++++++++++----------------- arch/x86/kvm/x86.c | 39 +++++++++++++--- 4 files changed, 81 insertions(+), 52 deletions(-) diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h index 9a183e9d4cb1..a8c68f626fb5 100644 --- a/arch/x86/include/asm/kvm_host.h +++ b/arch/x86/include/asm/kvm_host.h @@ -1645,7 +1645,8 @@ void __kvm_request_immediate_exit(struct kvm_vcpu *vcpu); int kvm_is_in_guest(void); -int __x86_set_memory_region(struct kvm *kvm, int id, gpa_t gpa, u32 size); +void __user *__x86_set_memory_region(struct kvm *kvm, int id, gpa_t gpa, + u32 size); bool kvm_vcpu_is_reset_bsp(struct kvm_vcpu *vcpu); bool kvm_vcpu_is_bsp(struct kvm_vcpu *vcpu); diff --git a/arch/x86/kvm/svm.c b/arch/x86/kvm/svm.c index 05cb45bc0e08..140bff1946b1 100644 --- a/arch/x86/kvm/svm.c +++ b/arch/x86/kvm/svm.c @@ -1785,7 +1785,8 @@ static u64 *avic_get_physical_id_entry(struct kvm_vcpu *vcpu, */ static int avic_update_access_page(struct kvm *kvm, bool activate) { - int ret = 0; + void __user *ret; + int r = 0; mutex_lock(&kvm->slots_lock); /* @@ -1801,13 +1802,15 @@ static int avic_update_access_page(struct kvm *kvm, bool activate) APIC_ACCESS_PAGE_PRIVATE_MEMSLOT, APIC_DEFAULT_PHYS_BASE, activate ? PAGE_SIZE : 0); - if (ret) + if (IS_ERR(ret)) { + r = PTR_ERR(ret); goto out; + } kvm->arch.apic_access_page_done = activate; out: mutex_unlock(&kvm->slots_lock); - return ret; + return r; } static int avic_init_backing_page(struct kvm_vcpu *vcpu) diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c index a7dd67859bd4..529b04ca0ac8 100644 --- a/arch/x86/kvm/vmx/vmx.c +++ b/arch/x86/kvm/vmx/vmx.c @@ -3432,34 +3432,26 @@ static bool guest_state_valid(struct kvm_vcpu *vcpu) return true; } -static int init_rmode_tss(struct kvm *kvm) +static int init_rmode_tss(struct kvm *kvm, void __user *ua) { - gfn_t fn; - u16 data = 0; - int idx, r; + const void *zero_page = (const void *) __va(page_to_phys(ZERO_PAGE(0))); + u16 data; + int i, r; + + for (i = 0; i < 3; i++) { + r = __copy_to_user(ua + PAGE_SIZE * i, zero_page, PAGE_SIZE); + if (r) + return -EFAULT; + } - idx = srcu_read_lock(&kvm->srcu); - fn = to_kvm_vmx(kvm)->tss_addr >> PAGE_SHIFT; - r = kvm_clear_guest_page(kvm, fn, 0, PAGE_SIZE); - if (r < 0) - goto out; data = TSS_BASE_SIZE + TSS_REDIRECTION_SIZE; - r = kvm_write_guest_page(kvm, fn++, &data, - TSS_IOPB_BASE_OFFSET, sizeof(u16)); - if (r < 0) - goto out; - r = kvm_clear_guest_page(kvm, fn++, 0, PAGE_SIZE); - if (r < 0) - goto out; - r = kvm_clear_guest_page(kvm, fn, 0, PAGE_SIZE); - if (r < 0) - goto out; + r = __copy_to_user(ua + TSS_IOPB_BASE_OFFSET, &data, sizeof(u16)); + if (r) + return -EFAULT; + data = ~0; - r = kvm_write_guest_page(kvm, fn, &data, - RMODE_TSS_SIZE - 2 * PAGE_SIZE - 1, - sizeof(u8)); -out: - srcu_read_unlock(&kvm->srcu, idx); + r = __copy_to_user(ua + RMODE_TSS_SIZE - 1, &data, sizeof(u8)); + return r; } @@ -3468,6 +3460,7 @@ static int init_rmode_identity_map(struct kvm *kvm) struct kvm_vmx *kvm_vmx = to_kvm_vmx(kvm); int i, r = 0; kvm_pfn_t identity_map_pfn; + void __user *uaddr; u32 tmp; /* Protect kvm_vmx->ept_identity_pagetable_done. 
*/ @@ -3480,22 +3473,24 @@ static int init_rmode_identity_map(struct kvm *kvm) kvm_vmx->ept_identity_map_addr = VMX_EPT_IDENTITY_PAGETABLE_ADDR; identity_map_pfn = kvm_vmx->ept_identity_map_addr >> PAGE_SHIFT; - r = __x86_set_memory_region(kvm, IDENTITY_PAGETABLE_PRIVATE_MEMSLOT, - kvm_vmx->ept_identity_map_addr, PAGE_SIZE); - if (r < 0) + uaddr = __x86_set_memory_region(kvm, + IDENTITY_PAGETABLE_PRIVATE_MEMSLOT, + kvm_vmx->ept_identity_map_addr, + PAGE_SIZE); + if (IS_ERR(uaddr)) { + r = PTR_ERR(uaddr); goto out; + } - r = kvm_clear_guest_page(kvm, identity_map_pfn, 0, PAGE_SIZE); - if (r < 0) - goto out; /* Set up identity-mapping pagetable for EPT in real mode */ for (i = 0; i < PT32_ENT_PER_PAGE; i++) { tmp = (i << 22) + (_PAGE_PRESENT | _PAGE_RW | _PAGE_USER | _PAGE_ACCESSED | _PAGE_DIRTY | _PAGE_PSE); - r = kvm_write_guest_page(kvm, identity_map_pfn, - &tmp, i * sizeof(tmp), sizeof(tmp)); - if (r < 0) + r = __copy_to_user(uaddr + i * sizeof(tmp), &tmp, sizeof(tmp)); + if (r) { + r = -EFAULT; goto out; + } } kvm_vmx->ept_identity_pagetable_done = true; @@ -3522,19 +3517,22 @@ static void seg_setup(int seg) static int alloc_apic_access_page(struct kvm *kvm) { struct page *page; - int r = 0; + void __user *r; + int ret = 0; mutex_lock(&kvm->slots_lock); if (kvm->arch.apic_access_page_done) goto out; r = __x86_set_memory_region(kvm, APIC_ACCESS_PAGE_PRIVATE_MEMSLOT, APIC_DEFAULT_PHYS_BASE, PAGE_SIZE); - if (r) + if (IS_ERR(r)) { + ret = PTR_ERR(r); goto out; + } page = gfn_to_page(kvm, APIC_DEFAULT_PHYS_BASE >> PAGE_SHIFT); if (is_error_page(page)) { - r = -EFAULT; + ret = -EFAULT; goto out; } @@ -3546,7 +3544,7 @@ static int alloc_apic_access_page(struct kvm *kvm) kvm->arch.apic_access_page_done = true; out: mutex_unlock(&kvm->slots_lock); - return r; + return ret; } int allocate_vpid(void) @@ -4473,7 +4471,7 @@ static int vmx_interrupt_allowed(struct kvm_vcpu *vcpu) static int vmx_set_tss_addr(struct kvm *kvm, unsigned int addr) { - int ret; + void __user *ret; if (enable_unrestricted_guest) return 0; @@ -4483,10 +4481,12 @@ static int vmx_set_tss_addr(struct kvm *kvm, unsigned int addr) PAGE_SIZE * 3); mutex_unlock(&kvm->slots_lock); - if (ret) - return ret; + if (IS_ERR(ret)) + return PTR_ERR(ret); + to_kvm_vmx(kvm)->tss_addr = addr; - return init_rmode_tss(kvm); + + return init_rmode_tss(kvm, ret); } static int vmx_set_identity_map_addr(struct kvm *kvm, u64 ident_addr) diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c index 1b6d9ac9533c..faa702c4d37b 100644 --- a/arch/x86/kvm/x86.c +++ b/arch/x86/kvm/x86.c @@ -9791,7 +9791,32 @@ void kvm_arch_sync_events(struct kvm *kvm) kvm_free_pit(kvm); } -int __x86_set_memory_region(struct kvm *kvm, int id, gpa_t gpa, u32 size) +#define ERR_PTR_USR(e) ((void __user *)ERR_PTR(e)) + +/** + * __x86_set_memory_region: Setup KVM internal memory slot + * + * @kvm: the kvm pointer to the VM. + * @id: the slot ID to setup. + * @gpa: the GPA to install the slot (unused when @size == 0). + * @size: the size of the slot. Set to zero to uninstall a slot. + * + * This function helps to setup a KVM internal memory slot. Specify + * @size > 0 to install a new slot, while @size == 0 to uninstall a + * slot. The return code can be one of the following: + * + * HVA: on success (uninstall will return a bogus HVA) + * -errno: on error + * + * The caller should always use IS_ERR() to check the return value + * before use. 
Note, the KVM internal memory slots are guaranteed to + * remain valid and unchanged until the VM is destroyed, i.e., the + * GPA->HVA translation will not change. However, the HVA is a user + * address, i.e. its accessibility is not guaranteed, and must be + * accessed via __copy_{to,from}_user(). + */ +void __user * __x86_set_memory_region(struct kvm *kvm, int id, gpa_t gpa, + u32 size) { int i, r; unsigned long hva, uninitialized_var(old_npages); @@ -9800,12 +9825,12 @@ int __x86_set_memory_region(struct kvm *kvm, int id, gpa_t gpa, u32 size) /* Called with kvm->slots_lock held. */ if (WARN_ON(id >= KVM_MEM_SLOTS_NUM)) - return -EINVAL; + return ERR_PTR_USR(-EINVAL); slot = id_to_memslot(slots, id); if (size) { if (slot && slot->npages) - return -EEXIST; + return ERR_PTR_USR(-EEXIST); /* * MAP_SHARED to prevent internal slot pages from being moved @@ -9814,10 +9839,10 @@ int __x86_set_memory_region(struct kvm *kvm, int id, gpa_t gpa, u32 size) hva = vm_mmap(NULL, 0, size, PROT_READ | PROT_WRITE, MAP_SHARED | MAP_ANONYMOUS, 0); if (IS_ERR((void *)hva)) - return PTR_ERR((void *)hva); + return (void __user *)hva; } else { if (!slot || !slot->npages) - return 0; + return ERR_PTR_USR(0); /* * Stuff a non-canonical value to catch use-after-delete. This @@ -9838,13 +9863,13 @@ int __x86_set_memory_region(struct kvm *kvm, int id, gpa_t gpa, u32 size) m.memory_size = size; r = __kvm_set_memory_region(kvm, &m); if (r < 0) - return r; + return ERR_PTR_USR(r); } if (!size) vm_munmap(hva, old_npages * PAGE_SIZE); - return 0; + return (void __user *)hva; } EXPORT_SYMBOL_GPL(__x86_set_memory_region);

From patchwork Tue Mar 31 18:59:50 2020
X-Patchwork-Submitter: Peter Xu
X-Patchwork-Id: 11468265
From: Peter Xu
To: linux-kernel@vger.kernel.org, kvm@vger.kernel.org
Cc: Kevin Tian, "Michael S. Tsirkin", Jason Wang, Sean Christopherson,
 Christophe de Dinechin, Yan Zhao, Alex Williamson, Paolo Bonzini,
 Vitaly Kuznetsov, "Dr. David Alan Gilbert", peterx@redhat.com
Subject: [PATCH v8 04/14] KVM: Pass in kvm pointer into mark_page_dirty_in_slot()
Date: Tue, 31 Mar 2020 14:59:50 -0400
Message-Id: <20200331190000.659614-5-peterx@redhat.com>
In-Reply-To: <20200331190000.659614-1-peterx@redhat.com>
References: <20200331190000.659614-1-peterx@redhat.com>

The context will be needed to implement the kvm dirty ring.
Signed-off-by: Peter Xu
---
 virt/kvm/kvm_main.c | 33 +++++++++++++++++++--------------
 1 file changed, 19 insertions(+), 14 deletions(-)

diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
index c04612726e85..1f869dda8110 100644
--- a/virt/kvm/kvm_main.c
+++ b/virt/kvm/kvm_main.c
@@ -144,7 +144,9 @@ static void hardware_disable_all(void);
 
 static void kvm_io_bus_destroy(struct kvm_io_bus *bus);
 
-static void mark_page_dirty_in_slot(struct kvm_memory_slot *memslot, gfn_t gfn);
+static void mark_page_dirty_in_slot(struct kvm *kvm,
+				    struct kvm_memory_slot *memslot,
+				    gfn_t gfn);
 
 __visible bool kvm_rebooting;
 EXPORT_SYMBOL_GPL(kvm_rebooting);
@@ -2120,7 +2122,8 @@ int kvm_vcpu_map(struct kvm_vcpu *vcpu, gfn_t gfn, struct kvm_host_map *map)
 }
 EXPORT_SYMBOL_GPL(kvm_vcpu_map);
 
-static void __kvm_unmap_gfn(struct kvm_memory_slot *memslot,
+static void __kvm_unmap_gfn(struct kvm *kvm,
+			struct kvm_memory_slot *memslot,
 			struct kvm_host_map *map,
 			struct gfn_to_pfn_cache *cache,
 			bool dirty, bool atomic)
@@ -2145,7 +2148,7 @@ static void __kvm_unmap_gfn(struct kvm_memory_slot *memslot,
 #endif
 
 	if (dirty)
-		mark_page_dirty_in_slot(memslot, map->gfn);
+		mark_page_dirty_in_slot(kvm, memslot, map->gfn);
 
 	if (cache)
 		cache->dirty |= dirty;
@@ -2159,7 +2162,7 @@ static void __kvm_unmap_gfn(struct kvm_memory_slot *memslot,
 int kvm_unmap_gfn(struct kvm_vcpu *vcpu, struct kvm_host_map *map,
 		  struct gfn_to_pfn_cache *cache, bool dirty, bool atomic)
 {
-	__kvm_unmap_gfn(gfn_to_memslot(vcpu->kvm, map->gfn), map,
+	__kvm_unmap_gfn(vcpu->kvm, gfn_to_memslot(vcpu->kvm, map->gfn), map,
 			cache, dirty, atomic);
 	return 0;
 }
@@ -2167,8 +2170,8 @@ EXPORT_SYMBOL_GPL(kvm_unmap_gfn);
 
 void kvm_vcpu_unmap(struct kvm_vcpu *vcpu, struct kvm_host_map *map, bool dirty)
 {
-	__kvm_unmap_gfn(kvm_vcpu_gfn_to_memslot(vcpu, map->gfn), map, NULL,
-			dirty, false);
+	__kvm_unmap_gfn(vcpu->kvm, kvm_vcpu_gfn_to_memslot(vcpu, map->gfn),
+			map, NULL, dirty, false);
 }
 EXPORT_SYMBOL_GPL(kvm_vcpu_unmap);
 
@@ -2342,7 +2345,8 @@ int kvm_vcpu_read_guest_atomic(struct kvm_vcpu *vcpu, gpa_t gpa,
 }
 EXPORT_SYMBOL_GPL(kvm_vcpu_read_guest_atomic);
 
-static int __kvm_write_guest_page(struct kvm_memory_slot *memslot, gfn_t gfn,
+static int __kvm_write_guest_page(struct kvm *kvm,
+				  struct kvm_memory_slot *memslot, gfn_t gfn,
 			          const void *data, int offset, int len)
 {
 	int r;
@@ -2354,7 +2358,7 @@ static int __kvm_write_guest_page(struct kvm_memory_slot *memslot, gfn_t gfn,
 	r = __copy_to_user((void __user *)addr + offset, data, len);
 	if (r)
 		return -EFAULT;
-	mark_page_dirty_in_slot(memslot, gfn);
+	mark_page_dirty_in_slot(kvm, memslot, gfn);
 	return 0;
 }
 
@@ -2363,7 +2367,7 @@ int kvm_write_guest_page(struct kvm *kvm, gfn_t gfn,
 {
 	struct kvm_memory_slot *slot = gfn_to_memslot(kvm, gfn);
 
-	return __kvm_write_guest_page(slot, gfn, data, offset, len);
+	return __kvm_write_guest_page(kvm, slot, gfn, data, offset, len);
 }
 EXPORT_SYMBOL_GPL(kvm_write_guest_page);
 
@@ -2372,7 +2376,7 @@ int kvm_vcpu_write_guest_page(struct kvm_vcpu *vcpu, gfn_t gfn,
 {
 	struct kvm_memory_slot *slot = kvm_vcpu_gfn_to_memslot(vcpu, gfn);
 
-	return __kvm_write_guest_page(slot, gfn, data, offset, len);
+	return __kvm_write_guest_page(vcpu->kvm, slot, gfn, data, offset, len);
 }
 EXPORT_SYMBOL_GPL(kvm_vcpu_write_guest_page);
 
@@ -2491,7 +2495,7 @@ int kvm_write_guest_offset_cached(struct kvm *kvm, struct gfn_to_hva_cache *ghc,
 	r = __copy_to_user((void __user *)ghc->hva + offset, data, len);
 	if (r)
 		return -EFAULT;
-	mark_page_dirty_in_slot(ghc->memslot, gpa >> PAGE_SHIFT);
+	mark_page_dirty_in_slot(kvm, ghc->memslot, gpa >> PAGE_SHIFT);
 
 	return 0;
 }
@@ -2558,7 +2562,8 @@ int kvm_clear_guest(struct kvm *kvm, gpa_t gpa, unsigned long len)
 }
 EXPORT_SYMBOL_GPL(kvm_clear_guest);
 
-static void mark_page_dirty_in_slot(struct kvm_memory_slot *memslot,
+static void mark_page_dirty_in_slot(struct kvm *kvm,
+				    struct kvm_memory_slot *memslot,
 				    gfn_t gfn)
 {
 	if (memslot && memslot->dirty_bitmap) {
@@ -2573,7 +2578,7 @@ void mark_page_dirty(struct kvm *kvm, gfn_t gfn)
 	struct kvm_memory_slot *memslot;
 
 	memslot = gfn_to_memslot(kvm, gfn);
-	mark_page_dirty_in_slot(memslot, gfn);
+	mark_page_dirty_in_slot(kvm, memslot, gfn);
 }
 EXPORT_SYMBOL_GPL(mark_page_dirty);
 
@@ -2582,7 +2587,7 @@ void kvm_vcpu_mark_page_dirty(struct kvm_vcpu *vcpu, gfn_t gfn)
 	struct kvm_memory_slot *memslot;
 
 	memslot = kvm_vcpu_gfn_to_memslot(vcpu, gfn);
-	mark_page_dirty_in_slot(memslot, gfn);
+	mark_page_dirty_in_slot(vcpu->kvm, memslot, gfn);
 }
 EXPORT_SYMBOL_GPL(kvm_vcpu_mark_page_dirty);

From patchwork Tue Mar 31 18:59:51 2020
X-Patchwork-Submitter: Peter Xu
X-Patchwork-Id: 11468269
From: Peter Xu
To: linux-kernel@vger.kernel.org, kvm@vger.kernel.org
Cc: Kevin Tian, "Michael S. Tsirkin", Jason Wang, Sean Christopherson,
 Christophe de Dinechin, Yan Zhao, Alex Williamson, Paolo Bonzini,
 Vitaly Kuznetsov, "Dr. David Alan Gilbert", peterx@redhat.com, Lei Cao
Subject: [PATCH v8 05/14] KVM: X86: Implement ring-based dirty memory tracking
Date: Tue, 31 Mar 2020 14:59:51 -0400
Message-Id: <20200331190000.659614-6-peterx@redhat.com>
In-Reply-To: <20200331190000.659614-1-peterx@redhat.com>
References: <20200331190000.659614-1-peterx@redhat.com>

This patch is heavily based on previous work from Lei Cao and Paolo
Bonzini. [1]

KVM currently uses large bitmaps to track dirty memory. These bitmaps
are copied to userspace when userspace queries KVM for its dirty page
information. The use of bitmaps is mostly sufficient for live
migration, as large parts of memory are dirtied from one log-dirty
pass to another. However, in a checkpointing system, the number of
dirty pages is small and in fact it is often bounded---the VM is
paused when it has dirtied a pre-defined number of pages. Traversing a
large, sparsely populated bitmap to find set bits is time-consuming,
as is copying the bitmap to user-space.

A similar issue exists for live migration when the guest memory is
huge while the page dirtying rate is low. In that case, for each dirty
sync we need to pull the whole dirty bitmap to userspace and analyse
every bit even if most bits are zeros.

The preferred data structure for the above scenarios is a dense list
of guest frame numbers (GFNs). This patch series stores the dirty list
in kernel memory that can be memory-mapped into userspace to allow
speedy harvesting.

This patch enables the dirty ring for x86 only. However, it should be
easy to extend it to other archs as well.
[1] https://patchwork.kernel.org/patch/10471409/

Signed-off-by: Lei Cao
Signed-off-by: Paolo Bonzini
Signed-off-by: Peter Xu
---
 Documentation/virt/kvm/api.rst  | 116 +++++++++++++++++++
 arch/x86/include/asm/kvm_host.h |   3 +
 arch/x86/include/uapi/asm/kvm.h |   1 +
 arch/x86/kvm/Makefile           |   3 +-
 arch/x86/kvm/mmu/mmu.c          |   6 +
 arch/x86/kvm/vmx/vmx.c          |   7 ++
 arch/x86/kvm/x86.c              |   9 ++
 include/linux/kvm_dirty_ring.h  | 103 +++++++++++++++++
 include/linux/kvm_host.h        |  13 +++
 include/trace/events/kvm.h      |  78 +++++++++++++
 include/uapi/linux/kvm.h        |  53 +++++++++
 virt/kvm/dirty_ring.c           | 195 ++++++++++++++++++++++++++++++++
 virt/kvm/kvm_main.c             | 112 +++++++++++++++++-
 13 files changed, 697 insertions(+), 2 deletions(-)
 create mode 100644 include/linux/kvm_dirty_ring.h
 create mode 100644 virt/kvm/dirty_ring.c

diff --git a/Documentation/virt/kvm/api.rst b/Documentation/virt/kvm/api.rst
index efbbe570aa9b..aa54a34077b7 100644
--- a/Documentation/virt/kvm/api.rst
+++ b/Documentation/virt/kvm/api.rst
@@ -249,6 +249,7 @@ Based on their initialization different VMs may have different capabilities.
 It is thus encouraged to use the vm ioctl to query for capabilities (available
 with KVM_CAP_CHECK_EXTENSION_VM on the vm fd)
 
+
 4.5 KVM_GET_VCPU_MMAP_SIZE
 --------------------------
 
@@ -262,6 +263,18 @@ The KVM_RUN ioctl (cf.) communicates with userspace via a shared
 memory region. This ioctl returns the size of that region. See the
 KVM_RUN documentation for details.
 
+Besides the size of the KVM_RUN communication region, other areas of
+the VCPU file descriptor can be mmap-ed, including:
+
+- if KVM_CAP_COALESCED_MMIO is available, a page at
+  KVM_COALESCED_MMIO_PAGE_OFFSET * PAGE_SIZE; for historical reasons,
+  this page is included in the result of KVM_GET_VCPU_MMAP_SIZE.
+  KVM_CAP_COALESCED_MMIO is not documented yet.
+
+- if KVM_CAP_DIRTY_LOG_RING is available, a number of pages at
+  KVM_DIRTY_LOG_PAGE_OFFSET * PAGE_SIZE. For more information on
+  KVM_CAP_DIRTY_LOG_RING, see section 8.24.
+
 4.6 KVM_SET_MEMORY_REGION
 -------------------------
 
@@ -6109,3 +6122,106 @@ KVM can therefore start protected VMs.
 This capability governs the KVM_S390_PV_COMMAND ioctl and the
 KVM_MP_STATE_LOAD MP_STATE. KVM_SET_MP_STATE can fail for protected
 guests when the state change is invalid.
+
+8.24 KVM_CAP_DIRTY_LOG_RING
+
+Architectures: x86
+Parameters: args[0] - size of the dirty log ring
+
+KVM is capable of tracking dirty memory using ring buffers that are
+mmaped into userspace; there is one dirty ring per vcpu.
+
+One dirty ring is defined internally as below:
+
+struct kvm_dirty_ring {
+	u32 dirty_index;
+	u32 reset_index;
+	u32 size;
+	u32 soft_limit;
+	struct kvm_dirty_gfn *dirty_gfns;
+	int index;
+};
+
+Dirty GFNs (Guest Frame Numbers) are stored in the dirty_gfns array.
+Each dirty entry is defined as:
+
+struct kvm_dirty_gfn {
+	__u32 flags;
+	__u32 slot; /* as_id | slot_id */
+	__u64 offset;
+};
+
+Each GFN is a state machine itself. The state is embedded in the
+flags field, as defined in the uapi header:
+
+/*
+ * KVM dirty GFN flags, defined as:
+ *
+ * |---------------+---------------+--------------|
+ * | bit 1 (reset) | bit 0 (dirty) | Status       |
+ * |---------------+---------------+--------------|
+ * |             0 |             0 | Invalid GFN  |
+ * |             0 |             1 | Dirty GFN    |
+ * |             1 |             X | GFN to reset |
+ * |---------------+---------------+--------------|
+ *
+ * Lifecycle of a dirty GFN goes like:
+ *
+ *      dirtied         collected         reset
+ * 00 -----------> 01 -------------> 1X -------+
+ *  ^                                          |
+ *  |                                          |
+ *  +------------------------------------------+
+ *
+ * The userspace program is only responsible for the 01->1X state
+ * conversion (to collect dirty bits). Also, it must not skip any
+ * dirty bits so that dirty bits are always collected in sequence.
+ */
+#define KVM_DIRTY_GFN_F_DIRTY           BIT(0)
+#define KVM_DIRTY_GFN_F_RESET           BIT(1)
+#define KVM_DIRTY_GFN_F_MASK            0x3
+
+Userspace calls the KVM_ENABLE_CAP ioctl right after the KVM_CREATE_VM
+ioctl to enable this capability for the new guest and to set the size
+of the rings. It is only allowed before creating any vCPU, and the
+size of the ring must be a power of two. The larger the ring buffer,
+the less likely the ring is full and the VM is forced to exit to
+userspace. The optimal size depends on the workload, but it is
+recommended that it be at least 64 KiB (4096 entries).
+
+Just like for dirty page bitmaps, the buffer tracks writes to
+all user memory regions for which the KVM_MEM_LOG_DIRTY_PAGES flag was
+set in KVM_SET_USER_MEMORY_REGION. Once a memory region is registered
+with the flag set, userspace can start harvesting dirty pages from the
+ring buffer.
+
+To harvest the dirty pages, userspace accesses the mmaped ring buffer
+to read the dirty GFNs starting from zero. If the flags field has the
+DIRTY bit set (at this stage the RESET bit must be cleared), then it
+means this GFN is a dirty GFN. Userspace should collect this GFN and
+move the flags from state 01b to 1Xb (bit 0 will be ignored by KVM,
+but bit 1 must be set to show that this GFN is collected and waiting
+for a reset), and move on to the next GFN. Userspace should continue
+to do this until the flags of a GFN have the DIRTY bit cleared, which
+means all the dirty GFNs available so far have been collected.
+Userspace need not collect all of the dirty GFNs at once; however, it
+must collect the dirty GFNs in sequence, i.e., the userspace program
+cannot skip one dirty GFN to collect the one next to it.
+
+After processing one or more entries in the ring buffer, userspace
+calls the VM ioctl KVM_RESET_DIRTY_RINGS to notify the kernel about
+it, so that the kernel will reprotect those collected GFNs.
+Therefore, the ioctl must be called *before* reading the content of
+the dirty pages.
+
+The dirty ring interface has one major difference from the
+KVM_GET_DIRTY_LOG interface: when reading the dirty ring from
+userspace, it's still possible that the kernel has not yet flushed the
+hardware dirty buffers into the kernel buffer (with KVM_GET_DIRTY_LOG,
+the flushing is done by that ioctl). To achieve that, one needs to
+kick the vcpu out for a hardware buffer flush (vmexit) to make sure
+all the existing dirty gfns are flushed to the dirty rings.
+
+The dirty ring can get full. When that happens, the KVM_RUN of the
+vcpu will return with exit reason KVM_EXIT_DIRTY_RING_FULL.
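(To make the protocol above concrete, here is a minimal illustrative
userspace sketch; it is not part of the patch. The flow and ioctl/uapi
names follow the documentation above; the helper names, the omitted
error handling, and the missing memory barriers are simplifications.)

	#include <linux/kvm.h>
	#include <stdint.h>
	#include <sys/ioctl.h>
	#include <sys/mman.h>
	#include <unistd.h>

	static struct kvm_dirty_gfn *ring;  /* mmap-ed ring of one vcpu */
	static uint32_t fetch_index;        /* next entry to examine    */
	static uint32_t ring_entries;

	/* Enable right after KVM_CREATE_VM, before any vCPU is created. */
	static int enable_dirty_ring(int vm_fd, uint32_t bytes)
	{
		struct kvm_enable_cap cap = {
			.cap = KVM_CAP_DIRTY_LOG_RING,
			.args = { bytes },  /* must be a power of two */
		};

		ring_entries = bytes / sizeof(struct kvm_dirty_gfn);
		return ioctl(vm_fd, KVM_ENABLE_CAP, &cap);
	}

	/* Map the per-vcpu ring at KVM_DIRTY_LOG_PAGE_OFFSET. */
	static int map_dirty_ring(int vcpu_fd, uint32_t bytes)
	{
		long psz = sysconf(_SC_PAGESIZE);

		ring = mmap(NULL, bytes, PROT_READ | PROT_WRITE, MAP_SHARED,
			    vcpu_fd, KVM_DIRTY_LOG_PAGE_OFFSET * psz);
		return ring == MAP_FAILED ? -1 : 0;
	}

	/* Collect dirty GFNs in sequence (01b -> 1Xb), then ask KVM to
	 * reprotect them.  Returns the number of entries KVM reset.
	 */
	static int harvest_and_reset(int vm_fd)
	{
		struct kvm_dirty_gfn *e;

		for (;;) {
			e = &ring[fetch_index & (ring_entries - 1)];
			if (!(e->flags & KVM_DIRTY_GFN_F_DIRTY))
				break;
			/* e->slot is "as_id | slot_id"; e->offset is the
			 * gfn offset within that memslot.  Record it here.
			 */
			e->flags |= KVM_DIRTY_GFN_F_RESET;
			fetch_index++;
		}

		return ioctl(vm_fd, KVM_RESET_DIRTY_RINGS);
	}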
diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h index a8c68f626fb5..970770bfafbd 100644 --- a/arch/x86/include/asm/kvm_host.h +++ b/arch/x86/include/asm/kvm_host.h @@ -1201,6 +1201,7 @@ struct kvm_x86_ops { struct kvm_memory_slot *slot, gfn_t offset, unsigned long mask); int (*write_log_dirty)(struct kvm_vcpu *vcpu); + int (*cpu_dirty_log_size)(void); /* pmu operations of sub-arch */ const struct kvm_pmu_ops *pmu_ops; @@ -1693,4 +1694,6 @@ static inline int kvm_cpu_get_apicid(int mps_cpu) #define GET_SMSTATE(type, buf, offset) \ (*(type *)((buf) + (offset) - 0x7e00)) +int kvm_cpu_dirty_log_size(void); + #endif /* _ASM_X86_KVM_HOST_H */ diff --git a/arch/x86/include/uapi/asm/kvm.h b/arch/x86/include/uapi/asm/kvm.h index 3f3f780c8c65..99b15ce39e75 100644 --- a/arch/x86/include/uapi/asm/kvm.h +++ b/arch/x86/include/uapi/asm/kvm.h @@ -12,6 +12,7 @@ #define KVM_PIO_PAGE_OFFSET 1 #define KVM_COALESCED_MMIO_PAGE_OFFSET 2 +#define KVM_DIRTY_LOG_PAGE_OFFSET 64 #define DE_VECTOR 0 #define DB_VECTOR 1 diff --git a/arch/x86/kvm/Makefile b/arch/x86/kvm/Makefile index e553f0fdd87d..8e12a6fe1524 100644 --- a/arch/x86/kvm/Makefile +++ b/arch/x86/kvm/Makefile @@ -6,7 +6,8 @@ ccflags-$(CONFIG_KVM_WERROR) += -Werror KVM := ../../../virt/kvm kvm-y += $(KVM)/kvm_main.o $(KVM)/coalesced_mmio.o \ - $(KVM)/eventfd.o $(KVM)/irqchip.o $(KVM)/vfio.o + $(KVM)/eventfd.o $(KVM)/irqchip.o $(KVM)/vfio.o \ + $(KVM)/dirty_ring.o kvm-$(CONFIG_KVM_ASYNC_PF) += $(KVM)/async_pf.o kvm-y += x86.o emulate.o i8259.o irq.o lapic.o \ diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c index 560e85ebdf22..e770a5dd0c30 100644 --- a/arch/x86/kvm/mmu/mmu.c +++ b/arch/x86/kvm/mmu/mmu.c @@ -1749,7 +1749,13 @@ int kvm_arch_write_log_dirty(struct kvm_vcpu *vcpu) { if (kvm_x86_ops->write_log_dirty) return kvm_x86_ops->write_log_dirty(vcpu); + return 0; +} +int kvm_cpu_dirty_log_size(void) +{ + if (kvm_x86_ops->cpu_dirty_log_size) + return kvm_x86_ops->cpu_dirty_log_size(); return 0; } diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c index 529b04ca0ac8..3c6bc41cd6a6 100644 --- a/arch/x86/kvm/vmx/vmx.c +++ b/arch/x86/kvm/vmx/vmx.c @@ -7761,6 +7761,7 @@ static __init int hardware_setup(void) kvm_x86_ops->slot_disable_log_dirty = NULL; kvm_x86_ops->flush_log_dirty = NULL; kvm_x86_ops->enable_log_dirty_pt_masked = NULL; + kvm_x86_ops->cpu_dirty_log_size = NULL; } if (!cpu_has_vmx_preemption_timer()) @@ -7835,6 +7836,11 @@ static bool vmx_check_apicv_inhibit_reasons(ulong bit) return supported & BIT(bit); } +static int vmx_cpu_dirty_log_size(void) +{ + return enable_pml ? 
PML_ENTITY_NUM : 0; +} + static struct kvm_x86_ops vmx_x86_ops __ro_after_init = { .cpu_has_kvm_support = cpu_has_kvm_support, .disabled_by_bios = vmx_disabled_by_bios, @@ -7945,6 +7951,7 @@ static struct kvm_x86_ops vmx_x86_ops __ro_after_init = { .flush_log_dirty = vmx_flush_log_dirty, .enable_log_dirty_pt_masked = vmx_enable_log_dirty_pt_masked, .write_log_dirty = vmx_write_pml_buffer, + .cpu_dirty_log_size = vmx_cpu_dirty_log_size, .pre_block = vmx_pre_block, .post_block = vmx_post_block, diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c index faa702c4d37b..3a12f931a045 100644 --- a/arch/x86/kvm/x86.c +++ b/arch/x86/kvm/x86.c @@ -8176,6 +8176,15 @@ static int vcpu_enter_guest(struct kvm_vcpu *vcpu) bool req_immediate_exit = false; + /* Forbid vmenter if vcpu dirty ring is soft-full */ + if (unlikely(vcpu->kvm->dirty_ring_size && + kvm_dirty_ring_soft_full(&vcpu->dirty_ring))) { + vcpu->run->exit_reason = KVM_EXIT_DIRTY_RING_FULL; + trace_kvm_dirty_ring_exit(vcpu); + r = 0; + goto out; + } + if (kvm_request_pending(vcpu)) { if (kvm_check_request(KVM_REQ_GET_VMCS12_PAGES, vcpu)) { if (unlikely(!kvm_x86_ops->get_vmcs12_pages(vcpu))) { diff --git a/include/linux/kvm_dirty_ring.h b/include/linux/kvm_dirty_ring.h new file mode 100644 index 000000000000..120e5e90fa1d --- /dev/null +++ b/include/linux/kvm_dirty_ring.h @@ -0,0 +1,103 @@ +#ifndef KVM_DIRTY_RING_H +#define KVM_DIRTY_RING_H + +#include + +/** + * kvm_dirty_ring: KVM internal dirty ring structure + * + * @dirty_index: free running counter that points to the next slot in + * dirty_ring->dirty_gfns, where a new dirty page should go + * @reset_index: free running counter that points to the next dirty page + * in dirty_ring->dirty_gfns for which dirty trap needs to + * be reenabled + * @size: size of the compact list, dirty_ring->dirty_gfns + * @soft_limit: when the number of dirty pages in the list reaches this + * limit, vcpu that owns this ring should exit to userspace + * to allow userspace to harvest all the dirty pages + * @dirty_gfns: the array to keep the dirty gfns + * @index: index of this dirty ring + */ +struct kvm_dirty_ring { + u32 dirty_index; + u32 reset_index; + u32 size; + u32 soft_limit; + struct kvm_dirty_gfn *dirty_gfns; + int index; +}; + +#if (KVM_DIRTY_LOG_PAGE_OFFSET == 0) +/* + * If KVM_DIRTY_LOG_PAGE_OFFSET not defined, kvm_dirty_ring.o should + * not be included as well, so define these nop functions for the arch. + */ +static inline u32 kvm_dirty_ring_get_rsvd_entries(void) +{ + return 0; +} + +static inline int kvm_dirty_ring_alloc(struct kvm_dirty_ring *ring, + int index, u32 size) +{ + return 0; +} + +static inline struct kvm_dirty_ring *kvm_dirty_ring_get(struct kvm *kvm) +{ + return NULL; +} + +static inline int kvm_dirty_ring_reset(struct kvm *kvm, + struct kvm_dirty_ring *ring) +{ + return 0; +} + +static inline void kvm_dirty_ring_push(struct kvm_dirty_ring *ring, + u32 slot, u64 offset) +{ +} + +static inline struct page *kvm_dirty_ring_get_page(struct kvm_dirty_ring *ring, + u32 offset) +{ + return NULL; +} + +static inline void kvm_dirty_ring_free(struct kvm_dirty_ring *ring) +{ +} + +static inline bool kvm_dirty_ring_soft_full(struct kvm_dirty_ring *ring) +{ + return true; +} + +#else /* KVM_DIRTY_LOG_PAGE_OFFSET == 0 */ + +u32 kvm_dirty_ring_get_rsvd_entries(void); +int kvm_dirty_ring_alloc(struct kvm_dirty_ring *ring, int index, u32 size); +struct kvm_dirty_ring *kvm_dirty_ring_get(struct kvm *kvm); + +/* + * called with kvm->slots_lock held, returns the number of + * processed pages. 
+ */ +int kvm_dirty_ring_reset(struct kvm *kvm, struct kvm_dirty_ring *ring); + +/* + * returns =0: successfully pushed + * <0: unable to push, need to wait + */ +void kvm_dirty_ring_push(struct kvm_dirty_ring *ring, u32 slot, u64 offset); + +/* for use in vm_operations_struct */ +struct page *kvm_dirty_ring_get_page(struct kvm_dirty_ring *ring, u32 offset); + +void kvm_dirty_ring_free(struct kvm_dirty_ring *ring); +bool kvm_dirty_ring_soft_full(struct kvm_dirty_ring *ring); + +#endif /* KVM_DIRTY_LOG_PAGE_OFFSET == 0 */ + +#endif /* KVM_DIRTY_RING_H */ diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h index 515570116b60..291a9a9a1239 100644 --- a/include/linux/kvm_host.h +++ b/include/linux/kvm_host.h @@ -34,6 +34,7 @@ #include #include +#include #ifndef KVM_MAX_VCPU_ID #define KVM_MAX_VCPU_ID KVM_MAX_VCPUS @@ -319,6 +320,7 @@ struct kvm_vcpu { bool ready; struct kvm_vcpu_arch arch; struct dentry *debugfs_dentry; + struct kvm_dirty_ring dirty_ring; }; static inline int kvm_vcpu_exiting_guest_mode(struct kvm_vcpu *vcpu) @@ -504,6 +506,7 @@ struct kvm { struct srcu_struct srcu; struct srcu_struct irq_srcu; pid_t userspace_pid; + u32 dirty_ring_size; }; #define kvm_err(fmt, ...) \ @@ -1422,4 +1425,14 @@ int kvm_vm_create_worker_thread(struct kvm *kvm, kvm_vm_thread_fn_t thread_fn, uintptr_t data, const char *name, struct task_struct **thread_ptr); +/* + * This defines how many reserved entries we want to keep before we + * kick the vcpu to the userspace to avoid dirty ring full. This + * value can be tuned to higher if e.g. PML is enabled on the host. + */ +#define KVM_DIRTY_RING_RSVD_ENTRIES 64 + +/* Max number of entries allowed for each kvm dirty ring */ +#define KVM_DIRTY_RING_MAX_ENTRIES 65536 + #endif diff --git a/include/trace/events/kvm.h b/include/trace/events/kvm.h index 2c735a3e6613..3d850997940c 100644 --- a/include/trace/events/kvm.h +++ b/include/trace/events/kvm.h @@ -399,6 +399,84 @@ TRACE_EVENT(kvm_halt_poll_ns, #define trace_kvm_halt_poll_ns_shrink(vcpu_id, new, old) \ trace_kvm_halt_poll_ns(false, vcpu_id, new, old) +TRACE_EVENT(kvm_dirty_ring_push, + TP_PROTO(struct kvm_dirty_ring *ring, u32 slot, u64 offset), + TP_ARGS(ring, slot, offset), + + TP_STRUCT__entry( + __field(int, index) + __field(u32, dirty_index) + __field(u32, reset_index) + __field(u32, slot) + __field(u64, offset) + ), + + TP_fast_assign( + __entry->index = ring->index; + __entry->dirty_index = ring->dirty_index; + __entry->reset_index = ring->reset_index; + __entry->slot = slot; + __entry->offset = offset; + ), + + TP_printk("ring %d: dirty 0x%x reset 0x%x " + "slot %u offset 0x%llx (used %u)", + __entry->index, __entry->dirty_index, + __entry->reset_index, __entry->slot, __entry->offset, + __entry->dirty_index - __entry->reset_index) +); + +TRACE_EVENT(kvm_dirty_ring_reset, + TP_PROTO(struct kvm_dirty_ring *ring), + TP_ARGS(ring), + + TP_STRUCT__entry( + __field(int, index) + __field(u32, dirty_index) + __field(u32, reset_index) + ), + + TP_fast_assign( + __entry->index = ring->index; + __entry->dirty_index = ring->dirty_index; + __entry->reset_index = ring->reset_index; + ), + + TP_printk("ring %d: dirty 0x%x reset 0x%x (used %u)", + __entry->index, __entry->dirty_index, __entry->reset_index, + __entry->dirty_index - __entry->reset_index) +); + +TRACE_EVENT(kvm_dirty_ring_waitqueue, + TP_PROTO(bool enter), + TP_ARGS(enter), + + TP_STRUCT__entry( + __field(bool, enter) + ), + + TP_fast_assign( + __entry->enter = enter; + ), + + TP_printk("%s", __entry->enter ? 
"wait" : "awake") +); + +TRACE_EVENT(kvm_dirty_ring_exit, + TP_PROTO(struct kvm_vcpu *vcpu), + TP_ARGS(vcpu), + + TP_STRUCT__entry( + __field(int, vcpu_id) + ), + + TP_fast_assign( + __entry->vcpu_id = vcpu->vcpu_id; + ), + + TP_printk("vcpu %d", __entry->vcpu_id) +); + #endif /* _TRACE_KVM_MAIN_H */ /* This part must be outside protection */ diff --git a/include/uapi/linux/kvm.h b/include/uapi/linux/kvm.h index 428c7dde6b4b..74f150c69ee6 100644 --- a/include/uapi/linux/kvm.h +++ b/include/uapi/linux/kvm.h @@ -236,6 +236,7 @@ struct kvm_hyperv_exit { #define KVM_EXIT_IOAPIC_EOI 26 #define KVM_EXIT_HYPERV 27 #define KVM_EXIT_ARM_NISV 28 +#define KVM_EXIT_DIRTY_RING_FULL 29 /* For KVM_EXIT_INTERNAL_ERROR */ /* Emulate instruction failed. */ @@ -1017,6 +1018,7 @@ struct kvm_ppc_resize_hpt { #define KVM_CAP_S390_VCPU_RESETS 179 #define KVM_CAP_S390_PROTECTED 180 #define KVM_CAP_PPC_SECURE_GUEST 181 +#define KVM_CAP_DIRTY_LOG_RING 182 #ifdef KVM_CAP_IRQ_ROUTING @@ -1518,6 +1520,9 @@ struct kvm_pv_cmd { /* Available with KVM_CAP_S390_PROTECTED */ #define KVM_S390_PV_COMMAND _IOWR(KVMIO, 0xc5, struct kvm_pv_cmd) +/* Available with KVM_CAP_DIRTY_LOG_RING */ +#define KVM_RESET_DIRTY_RINGS _IO(KVMIO, 0xc6) + /* Secure Encrypted Virtualization command */ enum sev_cmd_id { /* Guest initialization commands */ @@ -1671,4 +1676,52 @@ struct kvm_hyperv_eventfd { #define KVM_DIRTY_LOG_MANUAL_PROTECT_ENABLE (1 << 0) #define KVM_DIRTY_LOG_INITIALLY_SET (1 << 1) +/* + * Arch needs to define the macro after implementing the dirty ring + * feature. KVM_DIRTY_LOG_PAGE_OFFSET should be defined as the + * starting page offset of the dirty ring structures. + */ +#ifndef KVM_DIRTY_LOG_PAGE_OFFSET +#define KVM_DIRTY_LOG_PAGE_OFFSET 0 +#endif + +/* + * KVM dirty GFN flags, defined as: + * + * |---------------+---------------+--------------| + * | bit 1 (reset) | bit 0 (dirty) | Status | + * |---------------+---------------+--------------| + * | 0 | 0 | Invalid GFN | + * | 0 | 1 | Dirty GFN | + * | 1 | X | GFN to reset | + * |---------------+---------------+--------------| + * + * Lifecycle of a dirty GFN goes like: + * + * dirtied collected reset + * 00 -----------> 01 -------------> 1X -------+ + * ^ | + * | | + * +------------------------------------------+ + * + * The userspace program is only responsible for the 01->1X state + * conversion (to collect dirty bits). Also, it must not skip any + * dirty bits so that dirty bits are always collected in sequence. + */ +#define KVM_DIRTY_GFN_F_DIRTY BIT(0) +#define KVM_DIRTY_GFN_F_RESET BIT(1) +#define KVM_DIRTY_GFN_F_MASK 0x3 + +/* + * KVM dirty rings should be mapped at KVM_DIRTY_LOG_PAGE_OFFSET of + * per-vcpu mmaped regions as an array of struct kvm_dirty_gfn. The + * size of the gfn buffer is decided by the first argument when + * enabling KVM_CAP_DIRTY_LOG_RING. + */ +struct kvm_dirty_gfn { + __u32 flags; + __u32 slot; + __u64 offset; +}; + #endif /* __LINUX_KVM_H */ diff --git a/virt/kvm/dirty_ring.c b/virt/kvm/dirty_ring.c new file mode 100644 index 000000000000..12d09802b8a7 --- /dev/null +++ b/virt/kvm/dirty_ring.c @@ -0,0 +1,195 @@ +/* SPDX-License-Identifier: GPL-2.0-only */ +/* + * KVM dirty ring implementation + * + * Copyright 2019 Red Hat, Inc. 
+ */ +#include +#include +#include +#include +#include + +int __weak kvm_cpu_dirty_log_size(void) +{ + return 0; +} + +u32 kvm_dirty_ring_get_rsvd_entries(void) +{ + return KVM_DIRTY_RING_RSVD_ENTRIES + kvm_cpu_dirty_log_size(); +} + +static u32 kvm_dirty_ring_used(struct kvm_dirty_ring *ring) +{ + return READ_ONCE(ring->dirty_index) - READ_ONCE(ring->reset_index); +} + +bool kvm_dirty_ring_soft_full(struct kvm_dirty_ring *ring) +{ + return kvm_dirty_ring_used(ring) >= ring->soft_limit; +} + +bool kvm_dirty_ring_full(struct kvm_dirty_ring *ring) +{ + return kvm_dirty_ring_used(ring) >= ring->size; +} + +struct kvm_dirty_ring *kvm_dirty_ring_get(struct kvm *kvm) +{ + struct kvm_vcpu *vcpu = kvm_get_running_vcpu(); + + WARN_ON_ONCE(vcpu->kvm != kvm); + + return &vcpu->dirty_ring; +} + +static void kvm_reset_dirty_gfn(struct kvm *kvm, u32 slot, u64 offset, u64 mask) +{ + struct kvm_memory_slot *memslot; + int as_id, id; + + as_id = slot >> 16; + id = (u16)slot; + if (as_id >= KVM_ADDRESS_SPACE_NUM || id >= KVM_USER_MEM_SLOTS) + return; + + memslot = id_to_memslot(__kvm_memslots(kvm, as_id), id); + if (offset >= memslot->npages) + return; + + spin_lock(&kvm->mmu_lock); + kvm_arch_mmu_enable_log_dirty_pt_masked(kvm, memslot, offset, mask); + spin_unlock(&kvm->mmu_lock); +} + +int kvm_dirty_ring_alloc(struct kvm_dirty_ring *ring, int index, u32 size) +{ + ring->dirty_gfns = vmalloc(size); + if (!ring->dirty_gfns) + return -ENOMEM; + memset(ring->dirty_gfns, 0, size); + + ring->size = size / sizeof(struct kvm_dirty_gfn); + ring->soft_limit = ring->size - kvm_dirty_ring_get_rsvd_entries(); + ring->dirty_index = 0; + ring->reset_index = 0; + ring->index = index; + + return 0; +} + +static inline void kvm_dirty_gfn_set_invalid(struct kvm_dirty_gfn *gfn) +{ + gfn->flags = 0; +} + +static inline void kvm_dirty_gfn_set_dirtied(struct kvm_dirty_gfn *gfn) +{ + gfn->flags = KVM_DIRTY_GFN_F_DIRTY; +} + +static inline bool kvm_dirty_gfn_invalid(struct kvm_dirty_gfn *gfn) +{ + return gfn->flags == 0; +} + +static inline bool kvm_dirty_gfn_collected(struct kvm_dirty_gfn *gfn) +{ + return gfn->flags & KVM_DIRTY_GFN_F_RESET; +} + +int kvm_dirty_ring_reset(struct kvm *kvm, struct kvm_dirty_ring *ring) +{ + u32 cur_slot, next_slot; + u64 cur_offset, next_offset; + unsigned long mask; + int count = 0; + struct kvm_dirty_gfn *entry; + bool first_round = true; + + /* This is only needed to make compilers happy */ + cur_slot = cur_offset = mask = 0; + + while (true) { + entry = &ring->dirty_gfns[ring->reset_index & (ring->size - 1)]; + + if (!kvm_dirty_gfn_collected(entry)) + break; + + next_slot = READ_ONCE(entry->slot); + next_offset = READ_ONCE(entry->offset); + + /* Update the flags to reflect that this GFN is reset */ + kvm_dirty_gfn_set_invalid(entry); + + ring->reset_index++; + count++; + /* + * Try to coalesce the reset operations when the guest is + * scanning pages in the same slot. + */ + if (!first_round && next_slot == cur_slot) { + s64 delta = next_offset - cur_offset; + + if (delta >= 0 && delta < BITS_PER_LONG) { + mask |= 1ull << delta; + continue; + } + + /* Backwards visit, careful about overflows! 
*/ + if (delta > -BITS_PER_LONG && delta < 0 && + (mask << -delta >> -delta) == mask) { + cur_offset = next_offset; + mask = (mask << -delta) | 1; + continue; + } + } + kvm_reset_dirty_gfn(kvm, cur_slot, cur_offset, mask); + cur_slot = next_slot; + cur_offset = next_offset; + mask = 1; + first_round = false; + } + + kvm_reset_dirty_gfn(kvm, cur_slot, cur_offset, mask); + + trace_kvm_dirty_ring_reset(ring); + + return count; +} + +void kvm_dirty_ring_push(struct kvm_dirty_ring *ring, u32 slot, u64 offset) +{ + struct kvm_dirty_gfn *entry; + + /* It should never get full */ + WARN_ON_ONCE(kvm_dirty_ring_full(ring)); + + entry = &ring->dirty_gfns[ring->dirty_index & (ring->size - 1)]; + + /* It should always be an invalid entry to fill in */ + WARN_ON_ONCE(!kvm_dirty_gfn_invalid(entry)); + + entry->slot = slot; + entry->offset = offset; + /* + * Make sure the data is filled in before we publish this to + * the userspace program. There's no paired kernel-side reader. + */ + smp_wmb(); + kvm_dirty_gfn_set_dirtied(entry); + ring->dirty_index++; + trace_kvm_dirty_ring_push(ring, slot, offset); +} + +struct page *kvm_dirty_ring_get_page(struct kvm_dirty_ring *ring, u32 offset) +{ + return vmalloc_to_page((void *)ring->dirty_gfns + offset * PAGE_SIZE); +} + +void kvm_dirty_ring_free(struct kvm_dirty_ring *ring) +{ + vfree(ring->dirty_gfns); + ring->dirty_gfns = NULL; +} diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c index 1f869dda8110..eacdedf8d122 100644 --- a/virt/kvm/kvm_main.c +++ b/virt/kvm/kvm_main.c @@ -64,6 +64,8 @@ #define CREATE_TRACE_POINTS #include +#include + /* Worst case buffer size needed for holding an integer. */ #define ITOA_MAX_LEN 12 @@ -358,6 +360,7 @@ static void kvm_vcpu_init(struct kvm_vcpu *vcpu, struct kvm *kvm, unsigned id) void kvm_vcpu_destroy(struct kvm_vcpu *vcpu) { + kvm_dirty_ring_free(&vcpu->dirty_ring); kvm_arch_vcpu_destroy(vcpu); /* @@ -2568,8 +2571,13 @@ static void mark_page_dirty_in_slot(struct kvm *kvm, { if (memslot && memslot->dirty_bitmap) { unsigned long rel_gfn = gfn - memslot->base_gfn; + u32 slot = (memslot->as_id << 16) | memslot->id; - set_bit_le(rel_gfn, memslot->dirty_bitmap); + if (kvm->dirty_ring_size) + kvm_dirty_ring_push(kvm_dirty_ring_get(kvm), + slot, rel_gfn); + else + set_bit_le(rel_gfn, memslot->dirty_bitmap); } } @@ -2916,6 +2924,16 @@ void kvm_vcpu_on_spin(struct kvm_vcpu *me, bool yield_to_kernel_mode) } EXPORT_SYMBOL_GPL(kvm_vcpu_on_spin); +static bool kvm_page_in_dirty_ring(struct kvm *kvm, unsigned long pgoff) +{ + if (!KVM_DIRTY_LOG_PAGE_OFFSET) + return false; + + return (pgoff >= KVM_DIRTY_LOG_PAGE_OFFSET) && + (pgoff < KVM_DIRTY_LOG_PAGE_OFFSET + + kvm->dirty_ring_size / PAGE_SIZE); +} + static vm_fault_t kvm_vcpu_fault(struct vm_fault *vmf) { struct kvm_vcpu *vcpu = vmf->vma->vm_file->private_data; @@ -2931,6 +2949,10 @@ static vm_fault_t kvm_vcpu_fault(struct vm_fault *vmf) else if (vmf->pgoff == KVM_COALESCED_MMIO_PAGE_OFFSET) page = virt_to_page(vcpu->kvm->coalesced_mmio_ring); #endif + else if (kvm_page_in_dirty_ring(vcpu->kvm, vmf->pgoff)) + page = kvm_dirty_ring_get_page( + &vcpu->dirty_ring, + vmf->pgoff - KVM_DIRTY_LOG_PAGE_OFFSET); else return kvm_arch_vcpu_fault(vcpu, vmf); get_page(page); @@ -2944,6 +2966,14 @@ static const struct vm_operations_struct kvm_vcpu_vm_ops = { static int kvm_vcpu_mmap(struct file *file, struct vm_area_struct *vma) { + struct kvm_vcpu *vcpu = file->private_data; + unsigned long pages = (vma->vm_end - vma->vm_start) >> PAGE_SHIFT; + + if ((kvm_page_in_dirty_ring(vcpu->kvm, 
vma->vm_pgoff) || + kvm_page_in_dirty_ring(vcpu->kvm, vma->vm_pgoff + pages - 1)) && + ((vma->vm_flags & VM_EXEC) || !(vma->vm_flags & VM_SHARED))) + return -EINVAL; + vma->vm_ops = &kvm_vcpu_vm_ops; return 0; } @@ -3037,6 +3067,13 @@ static int kvm_vm_ioctl_create_vcpu(struct kvm *kvm, u32 id) if (r) goto vcpu_free_run_page; + if (kvm->dirty_ring_size) { + r = kvm_dirty_ring_alloc(&vcpu->dirty_ring, + id, kvm->dirty_ring_size); + if (r) + goto arch_vcpu_destroy; + } + kvm_create_vcpu_debugfs(vcpu); mutex_lock(&kvm->lock); @@ -3072,6 +3109,8 @@ static int kvm_vm_ioctl_create_vcpu(struct kvm *kvm, u32 id) unlock_vcpu_destroy: mutex_unlock(&kvm->lock); debugfs_remove_recursive(vcpu->debugfs_dentry); + kvm_dirty_ring_free(&vcpu->dirty_ring); +arch_vcpu_destroy: kvm_arch_vcpu_destroy(vcpu); vcpu_free_run_page: free_page((unsigned long)vcpu->run); @@ -3543,12 +3582,78 @@ static long kvm_vm_ioctl_check_extension_generic(struct kvm *kvm, long arg) #endif case KVM_CAP_NR_MEMSLOTS: return KVM_USER_MEM_SLOTS; + case KVM_CAP_DIRTY_LOG_RING: +#ifdef CONFIG_X86 + return KVM_DIRTY_RING_MAX_ENTRIES * sizeof(struct kvm_dirty_gfn); +#else + return 0; +#endif default: break; } return kvm_vm_ioctl_check_extension(kvm, arg); } +static int kvm_vm_ioctl_enable_dirty_log_ring(struct kvm *kvm, u32 size) +{ + int r; + + if (!KVM_DIRTY_LOG_PAGE_OFFSET) + return -EINVAL; + + /* the size should be power of 2 */ + if (!size || (size & (size - 1))) + return -EINVAL; + + /* Should be bigger to keep the reserved entries, or a page */ + if (size < kvm_dirty_ring_get_rsvd_entries() * + sizeof(struct kvm_dirty_gfn) || size < PAGE_SIZE) + return -EINVAL; + + if (size > KVM_DIRTY_RING_MAX_ENTRIES * + sizeof(struct kvm_dirty_gfn)) + return -E2BIG; + + /* We only allow it to set once */ + if (kvm->dirty_ring_size) + return -EINVAL; + + mutex_lock(&kvm->lock); + + if (kvm->created_vcpus) { + /* We don't allow to change this value after vcpu created */ + r = -EINVAL; + } else { + kvm->dirty_ring_size = size; + r = 0; + } + + mutex_unlock(&kvm->lock); + return r; +} + +static int kvm_vm_ioctl_reset_dirty_pages(struct kvm *kvm) +{ + int i; + struct kvm_vcpu *vcpu; + int cleared = 0; + + if (!kvm->dirty_ring_size) + return -EINVAL; + + mutex_lock(&kvm->slots_lock); + + kvm_for_each_vcpu(i, vcpu, kvm) + cleared += kvm_dirty_ring_reset(vcpu->kvm, &vcpu->dirty_ring); + + mutex_unlock(&kvm->slots_lock); + + if (cleared) + kvm_flush_remote_tlbs(kvm); + + return cleared; +} + int __attribute__((weak)) kvm_vm_ioctl_enable_cap(struct kvm *kvm, struct kvm_enable_cap *cap) { @@ -3572,6 +3677,8 @@ static int kvm_vm_ioctl_enable_cap_generic(struct kvm *kvm, return 0; } #endif + case KVM_CAP_DIRTY_LOG_RING: + return kvm_vm_ioctl_enable_dirty_log_ring(kvm, cap->args[0]); default: return kvm_vm_ioctl_enable_cap(kvm, cap); } @@ -3759,6 +3866,9 @@ static long kvm_vm_ioctl(struct file *filp, case KVM_CHECK_EXTENSION: r = kvm_vm_ioctl_check_extension_generic(kvm, arg); break; + case KVM_RESET_DIRTY_RINGS: + r = kvm_vm_ioctl_reset_dirty_pages(kvm); + break; default: r = kvm_arch_vm_ioctl(filp, ioctl, arg); } From patchwork Tue Mar 31 18:59:52 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Peter Xu X-Patchwork-Id: 11468273 Return-Path: Received: from mail.kernel.org (pdx-korg-mail-1.web.codeaurora.org [172.30.200.123]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id CA8D714B4 for ; Tue, 31 Mar 2020 19:00:58 +0000 (UTC) Received: from vger.kernel.org 
From: Peter Xu To: linux-kernel@vger.kernel.org, kvm@vger.kernel.org Cc: Kevin Tian, "Michael S. Tsirkin", Jason Wang, Sean Christopherson, Christophe de Dinechin, Yan Zhao, Alex Williamson, Paolo Bonzini, Vitaly Kuznetsov, "Dr. David Alan Gilbert", peterx@redhat.com Subject: [PATCH v8 06/14] KVM: Make dirty ring exclusive to dirty bitmap log Date: Tue, 31 Mar 2020 14:59:52 -0400 Message-Id: <20200331190000.659614-7-peterx@redhat.com> X-Mailer: git-send-email 2.24.1 In-Reply-To: <20200331190000.659614-1-peterx@redhat.com> References: <20200331190000.659614-1-peterx@redhat.com> MIME-Version: 1.0 Sender: kvm-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: kvm@vger.kernel.org There's no good reason to use both the dirty bitmap logging and the new dirty ring buffer to track dirty bits. We could probably support both at the same time, but it would complicate things while helping little. Let's simply make it the rule, before we enable dirty ring on any arch, that the two interfaces are not allowed to be used together. The switch point is the enablement of the KVM_CAP_DIRTY_LOG_RING capability: that's where we move from the default dirty logging to the dirty ring. As long as kvm->dirty_ring_size is set up correctly, the current virtual machine switches to dirty ring mode once and for all. Signed-off-by: Peter Xu --- Documentation/virt/kvm/api.rst | 7 +++++++ virt/kvm/kvm_main.c | 12 ++++++++++++ 2 files changed, 19 insertions(+) diff --git a/Documentation/virt/kvm/api.rst b/Documentation/virt/kvm/api.rst index aa54a34077b7..d56f86ba05a0 100644 --- a/Documentation/virt/kvm/api.rst +++ b/Documentation/virt/kvm/api.rst @@ -6225,3 +6225,10 @@ make sure all the existing dirty gfns are flushed to the dirty rings. The dirty ring can get full. When that happens, the KVM_RUN of the vcpu will return with exit reason KVM_EXIT_DIRTY_RING_FULL. + +NOTE: the capability KVM_CAP_DIRTY_LOG_RING and the corresponding +ioctl KVM_RESET_DIRTY_RINGS are mutually exclusive with the existing +ioctl KVM_GET_DIRTY_LOG. After enabling KVM_CAP_DIRTY_LOG_RING with an +acceptable dirty ring size, the virtual machine will switch to the +dirty ring tracking mode. Further ioctls to either KVM_GET_DIRTY_LOG +or KVM_CLEAR_DIRTY_LOG will fail.
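For reference, here is a minimal userspace sketch of the flow the documentation above describes (not part of the patch: the wrapper functions and RING_ENTRIES are made up for illustration; only the uapi names come from this series; memory barriers and error handling are elided):

#include <linux/kvm.h>
#include <stdint.h>
#include <sys/ioctl.h>
#include <sys/mman.h>
#include <unistd.h>

#define RING_ENTRIES 4096	/* illustrative: any power of 2 within the cap */

static struct kvm_dirty_gfn *ring;
static uint32_t fetch_index;	/* free-running, like the kernel indices */

/* Must run after KVM_CREATE_VM but before any vcpu is created */
static void enable_dirty_ring(int vm_fd)
{
	struct kvm_enable_cap cap = { 0 };

	cap.cap = KVM_CAP_DIRTY_LOG_RING;
	cap.args[0] = RING_ENTRIES * sizeof(struct kvm_dirty_gfn);
	ioctl(vm_fd, KVM_ENABLE_CAP, &cap);
}

/* Map one vcpu's ring; it lives at KVM_DIRTY_LOG_PAGE_OFFSET pages
 * into the vcpu mmap region (the arch must define a non-zero offset) */
static void map_dirty_ring(int vcpu_fd)
{
	ring = mmap(NULL, RING_ENTRIES * sizeof(struct kvm_dirty_gfn),
		    PROT_READ | PROT_WRITE, MAP_SHARED, vcpu_fd,
		    KVM_DIRTY_LOG_PAGE_OFFSET * getpagesize());
}

/* Collect dirty GFNs (the 01->1X transition), then let KVM reset them */
static int harvest_dirty_ring(int vm_fd)
{
	int count = 0;

	for (;;) {
		struct kvm_dirty_gfn *e = &ring[fetch_index % RING_ENTRIES];

		if (!(e->flags & KVM_DIRTY_GFN_F_DIRTY))
			break;
		/* e->slot and e->offset identify the dirty page here */
		e->flags |= KVM_DIRTY_GFN_F_RESET;
		fetch_index++;
		count++;
	}
	ioctl(vm_fd, KVM_RESET_DIRTY_RINGS, 0);
	return count;
}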
diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c index eacdedf8d122..1a4cc20c5a3c 100644 --- a/virt/kvm/kvm_main.c +++ b/virt/kvm/kvm_main.c @@ -1355,6 +1355,10 @@ int kvm_get_dirty_log(struct kvm *kvm, struct kvm_dirty_log *log, unsigned long n; unsigned long any = 0; + /* Dirty ring tracking is exclusive to dirty log tracking */ + if (kvm->dirty_ring_size) + return -EINVAL; + *memslot = NULL; *is_dirty = 0; @@ -1416,6 +1420,10 @@ static int kvm_get_dirty_log_protect(struct kvm *kvm, struct kvm_dirty_log *log) unsigned long *dirty_bitmap_buffer; bool flush; + /* Dirty ring tracking is exclusive to dirty log tracking */ + if (kvm->dirty_ring_size) + return -EINVAL; + as_id = log->slot >> 16; id = (u16)log->slot; if (as_id >= KVM_ADDRESS_SPACE_NUM || id >= KVM_USER_MEM_SLOTS) @@ -1524,6 +1532,10 @@ static int kvm_clear_dirty_log_protect(struct kvm *kvm, unsigned long *dirty_bitmap_buffer; bool flush; + /* Dirty ring tracking is exclusive to dirty log tracking */ + if (kvm->dirty_ring_size) + return -EINVAL; + as_id = log->slot >> 16; id = (u16)log->slot; if (as_id >= KVM_ADDRESS_SPACE_NUM || id >= KVM_USER_MEM_SLOTS) From patchwork Tue Mar 31 18:59:53 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Peter Xu X-Patchwork-Id: 11468271 Return-Path: Received: from mail.kernel.org (pdx-korg-mail-1.web.codeaurora.org [172.30.200.123]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id 23BB31667 for ; Tue, 31 Mar 2020 19:00:57 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [209.132.180.67]) by mail.kernel.org (Postfix) with ESMTP id EC8AB2137B for ; Tue, 31 Mar 2020 19:00:56 +0000 (UTC) Authentication-Results: mail.kernel.org; dkim=pass (1024-bit key) header.d=redhat.com header.i=@redhat.com header.b="dI1kXfuK" Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1730699AbgCaTAz (ORCPT ); Tue, 31 Mar 2020 15:00:55 -0400 Received: from us-smtp-1.mimecast.com ([207.211.31.81]:55256 "EHLO us-smtp-1.mimecast.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1730081AbgCaTAy (ORCPT ); Tue, 31 Mar 2020 15:00:54 -0400 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com; s=mimecast20190719; t=1585681252; h=from:from:reply-to:subject:subject:date:date:message-id:message-id: to:to:cc:cc:mime-version:mime-version: content-transfer-encoding:content-transfer-encoding: in-reply-to:in-reply-to:references:references; bh=SESLTHxgqRYHhPsVLzqJvHWTJelt1XN9kzslX/6S8Zk=; b=dI1kXfuKm2OvtKmG/+gAjg2MWojtbzUlE0F+5C9+vwMxaZQWslrRYYAgjuTx1TvNITH7K9 Bn1c+o9uSdchG3NRYTAucTem5MmZgbRl4jHqsRjVmqEK75X1Kpc64rAt7IaEclV9c7qgjL qbJcM1qnS3Q/GxgroGwNZfI67gJNoMQ= Received: from mail-wm1-f70.google.com (mail-wm1-f70.google.com [209.85.128.70]) (Using TLS) by relay.mimecast.com with ESMTP id us-mta-9-k687vTMUOMu--lMc8x0lqA-1; Tue, 31 Mar 2020 15:00:48 -0400 X-MC-Unique: k687vTMUOMu--lMc8x0lqA-1 Received: by mail-wm1-f70.google.com with SMTP id l13so706752wme.7 for ; Tue, 31 Mar 2020 12:00:48 -0700 (PDT) X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to :references:mime-version:content-transfer-encoding; bh=SESLTHxgqRYHhPsVLzqJvHWTJelt1XN9kzslX/6S8Zk=; b=kNIiKdv7Mw5lThOfNznymRaKBAt1wPL4KLM5A/kS2B+clFiHdAxc14fLFy7GuYj+25 3xUETcMleeWBdM7kAokxli//SPMryHRUcKm5n/qn2EI/Y9Asr91fpENZI78Lmff0WUPl RUJoNZ0cArilaH9FeX+zzxrxgqP02zedfAMvfEfQHoJOuPPEUbdhKTPYFAqPAXEwO53x 
From: Peter Xu To: linux-kernel@vger.kernel.org, kvm@vger.kernel.org Cc: Kevin Tian, "Michael S. Tsirkin", Jason Wang, Sean Christopherson, Christophe de Dinechin, Yan Zhao, Alex Williamson, Paolo Bonzini, Vitaly Kuznetsov, "Dr. David Alan Gilbert", peterx@redhat.com Subject: [PATCH v8 07/14] KVM: Don't allocate dirty bitmap if dirty ring is enabled Date: Tue, 31 Mar 2020 14:59:53 -0400 Message-Id: <20200331190000.659614-8-peterx@redhat.com> X-Mailer: git-send-email 2.24.1 In-Reply-To: <20200331190000.659614-1-peterx@redhat.com> References: <20200331190000.659614-1-peterx@redhat.com> MIME-Version: 1.0 Sender: kvm-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: kvm@vger.kernel.org Because KVM dirty rings and the KVM dirty log are used in an exclusive way, let's avoid creating the dirty_bitmap when the dirty ring is enabled. Meanwhile, since the dirty_bitmap is now conditionally created, we can no longer use it as a sign of "whether this memory slot enabled dirty tracking". Change such users to check against the KVM memory slot flags instead. Note that there is still a chance for a memory slot to end up with a dirty_bitmap allocated: if the slot is created with dirty tracking enabled before the dirty ring is enabled, it will keep the dirty_bitmap. However this should not hurt much (e.g., the bitmap will always be freed when the slot is destroyed), and real users normally won't trigger it, because the dirty tracking flag is in most cases only applied to slots right before migration starts, which is far later than KVM initialization (VM start).
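To illustrate the corner case described above, a hedged sketch of the ordering (raw ioctls; vm_fd and host_mem are assumed to exist already; the ring size is arbitrary):

#include <linux/kvm.h>
#include <sys/ioctl.h>

static void slot_before_ring(int vm_fd, void *host_mem)
{
	struct kvm_userspace_memory_region region = {
		.slot = 0,
		.flags = KVM_MEM_LOG_DIRTY_PAGES,	/* bitmap gets allocated */
		.guest_phys_addr = 0,
		.memory_size = 0x100000,
		.userspace_addr = (unsigned long)host_mem,
	};
	struct kvm_enable_cap cap = {
		.cap = KVM_CAP_DIRTY_LOG_RING,
		.args[0] = 4096 * sizeof(struct kvm_dirty_gfn),
	};

	/* Slot created while dirty_ring_size is still zero: it gets a bitmap */
	ioctl(vm_fd, KVM_SET_USER_MEMORY_REGION, &region);
	/* Ring enabled afterwards (no vcpus yet), which is allowed; the
	 * already-allocated bitmap simply stays until the slot is freed. */
	ioctl(vm_fd, KVM_ENABLE_CAP, &cap);
}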
Signed-off-by: Peter Xu --- arch/x86/kvm/mmu/mmu.c | 4 ++-- include/linux/kvm_host.h | 5 +++++ virt/kvm/kvm_main.c | 4 ++-- 3 files changed, 9 insertions(+), 4 deletions(-) diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c index e770a5dd0c30..970025875abc 100644 --- a/arch/x86/kvm/mmu/mmu.c +++ b/arch/x86/kvm/mmu/mmu.c @@ -1276,8 +1276,8 @@ gfn_to_memslot_dirty_bitmap(struct kvm_vcpu *vcpu, gfn_t gfn, slot = kvm_vcpu_gfn_to_memslot(vcpu, gfn); if (!slot || slot->flags & KVM_MEMSLOT_INVALID) return NULL; - if (no_dirty_log && slot->dirty_bitmap) - return NULL; + if (no_dirty_log && kvm_slot_dirty_track_enabled(slot)) + return NULL; return slot; } diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h index 291a9a9a1239..2061cafaf56d 100644 --- a/include/linux/kvm_host.h +++ b/include/linux/kvm_host.h @@ -351,6 +351,11 @@ struct kvm_memory_slot { u16 as_id; }; +static inline bool kvm_slot_dirty_track_enabled(struct kvm_memory_slot *slot) +{ + return slot->flags & KVM_MEM_LOG_DIRTY_PAGES; +} + static inline unsigned long kvm_dirty_bitmap_bytes(struct kvm_memory_slot *memslot) { return ALIGN(memslot->npages, BITS_PER_LONG) / 8; } diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c index 1a4cc20c5a3c..ae4930e404d1 100644 --- a/virt/kvm/kvm_main.c +++ b/virt/kvm/kvm_main.c @@ -1294,7 +1294,7 @@ int __kvm_set_memory_region(struct kvm *kvm, /* Allocate/free page dirty bitmap as needed */ if (!(new.flags & KVM_MEM_LOG_DIRTY_PAGES)) new.dirty_bitmap = NULL; - else if (!new.dirty_bitmap) { + else if (!new.dirty_bitmap && !kvm->dirty_ring_size) { r = kvm_alloc_dirty_bitmap(&new); if (r) return r; @@ -2581,7 +2581,7 @@ static void mark_page_dirty_in_slot(struct kvm *kvm, struct kvm_memory_slot *memslot, gfn_t gfn) { - if (memslot && memslot->dirty_bitmap) { + if (memslot && kvm_slot_dirty_track_enabled(memslot)) { unsigned long rel_gfn = gfn - memslot->base_gfn; u32 slot = (memslot->as_id << 16) | memslot->id; From patchwork Tue Mar 31 18:59:54 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Peter Xu X-Patchwork-Id: 11468277 From: Peter Xu To: linux-kernel@vger.kernel.org, kvm@vger.kernel.org Cc: Kevin Tian, "Michael S. Tsirkin", Jason Wang, Sean Christopherson, Christophe de Dinechin, Yan Zhao, Alex Williamson, Paolo Bonzini, Vitaly Kuznetsov, "Dr. David Alan Gilbert", peterx@redhat.com Subject: [PATCH v8 08/14] KVM: selftests: Always clear dirty bitmap after iteration Date: Tue, 31 Mar 2020 14:59:54 -0400 Message-Id: <20200331190000.659614-9-peterx@redhat.com> X-Mailer: git-send-email 2.24.1 In-Reply-To: <20200331190000.659614-1-peterx@redhat.com> References: <20200331190000.659614-1-peterx@redhat.com> MIME-Version: 1.0 Sender: kvm-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: kvm@vger.kernel.org We didn't clear the dirty bitmap before, because KVM_GET_DIRTY_LOG would clear it for us before copying the dirty log onto it. However, it's better to clear it explicitly instead of assuming the kernel will always do it for us. More importantly, in the upcoming dirty ring tests we'll start to fetch dirty pages from a ring buffer, so no one is going to clear the dirty bitmap for us.
Signed-off-by: Peter Xu Reviewed-by: Andrew Jones --- tools/testing/selftests/kvm/dirty_log_test.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/tools/testing/selftests/kvm/dirty_log_test.c b/tools/testing/selftests/kvm/dirty_log_test.c index 752ec158ac59..6a8275a22861 100644 --- a/tools/testing/selftests/kvm/dirty_log_test.c +++ b/tools/testing/selftests/kvm/dirty_log_test.c @@ -195,7 +195,7 @@ static void vm_dirty_log_verify(enum vm_guest_mode mode, unsigned long *bmap) page); } - if (test_bit_le(page, bmap)) { + if (test_and_clear_bit_le(page, bmap)) { host_dirty_count++; /* * If the bit is set, the value written onto From patchwork Tue Mar 31 18:59:55 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Peter Xu X-Patchwork-Id: 11468275 Return-Path: Received: from mail.kernel.org (pdx-korg-mail-1.web.codeaurora.org [172.30.200.123]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id C9BEE1667 for ; Tue, 31 Mar 2020 19:01:08 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [209.132.180.67]) by mail.kernel.org (Postfix) with ESMTP id 9E73A2137B for ; Tue, 31 Mar 2020 19:01:08 +0000 (UTC) Authentication-Results: mail.kernel.org; dkim=pass (1024-bit key) header.d=redhat.com header.i=@redhat.com header.b="CwhxjFW8" Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1731082AbgCaTBH (ORCPT ); Tue, 31 Mar 2020 15:01:07 -0400 Received: from us-smtp-delivery-1.mimecast.com ([205.139.110.120]:28661 "EHLO us-smtp-1.mimecast.com" rhost-flags-OK-OK-OK-FAIL) by vger.kernel.org with ESMTP id S1727932AbgCaTBE (ORCPT ); Tue, 31 Mar 2020 15:01:04 -0400 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com; s=mimecast20190719; t=1585681262; h=from:from:reply-to:subject:subject:date:date:message-id:message-id: to:to:cc:cc:mime-version:mime-version: content-transfer-encoding:content-transfer-encoding: in-reply-to:in-reply-to:references:references; bh=lvoidnZPqPUSWh6jETn/g9SWiiflMsbax7YPeLocRiM=; b=CwhxjFW8F9nYTjCKhhWEJl4fGbksJKWECExhYQRkf0NqwJ4BQFPuK6oVBeezIVxgXWaLCE VKb27xAAQ7IoBPsaNDK32i0WWxsi0APdGL4CTU2RutM3wqgKyd6FsWvdw6S6hDyEDrwE7G GVTV5sspwTGeMDdtgyPLSMJfoqDpXhs= Received: from mail-wr1-f72.google.com (mail-wr1-f72.google.com [209.85.221.72]) (Using TLS) by relay.mimecast.com with ESMTP id us-mta-256-_p9apof4N4Ck3Xz2-l5AWg-1; Tue, 31 Mar 2020 15:01:00 -0400 X-MC-Unique: _p9apof4N4Ck3Xz2-l5AWg-1 Received: by mail-wr1-f72.google.com with SMTP id e10so13460843wrm.2 for ; Tue, 31 Mar 2020 12:01:00 -0700 (PDT) X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to :references:mime-version:content-transfer-encoding; bh=lvoidnZPqPUSWh6jETn/g9SWiiflMsbax7YPeLocRiM=; b=mRy/gcMoPI04wcHzrurbmlKCdfkSaWQTyy/Dqho7r5MMOcf0fNkZUcgUqgJ5jmdryj /BjGN68r8F3sHhpbWPTFr9lsUd8/+uK/xZ5rzDbJUe2VU3JW5aeNIMXq8KmerHLHwyDY 62ViRYVn0KQdXkAqDCJfPjrALniHtcIpAT7hWra1kVWjXQ35h6aym4DpJe6UnPKLbArz pQVFpZ4yw7HBmYSSHWBjyjKGVsxMRi/vMrgjI6IBS4vIS+ftWAxzDghGbOjIhExdx+Lp qyiB2CdBGEwhuCEyFd4dqNAa0Z550hFr+Ks3QuMv6c/BR6B6blKHeonLDQSgUmQSSrqQ Biyw== X-Gm-Message-State: AGi0PuYX3ADEvHBbn8+7k4n+Vz+J+p6qken/1LZfXbdDSM5ZsEZfuJAQ PHDFhhJfKsPWxdNpSodzBPhgfwVRK/zR3iJqHNYhVik7r4pi5wcU2XP7GxlCbgTypRnHEZqHN0X zp4QqoLccLIbl X-Received: by 2002:a1c:2842:: with SMTP id o63mr291269wmo.73.1585681259085; Tue, 31 Mar 2020 12:00:59 -0700 (PDT) X-Google-Smtp-Source: 
APiQypIMEbB8FgDIHKfz2jBBidIdJtnxskEPlPzxV7OjJvwOKEURIRmixY0ZEEnS5wY0iKDzb75OcA== X-Received: by 2002:a1c:2842:: with SMTP id o63mr291244wmo.73.1585681258814; Tue, 31 Mar 2020 12:00:58 -0700 (PDT) Received: from xz-x1.redhat.com ([2607:9880:19c0:32::2]) by smtp.gmail.com with ESMTPSA id a8sm4819848wmb.39.2020.03.31.12.00.56 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Tue, 31 Mar 2020 12:00:58 -0700 (PDT) From: Peter Xu To: linux-kernel@vger.kernel.org, kvm@vger.kernel.org Cc: Kevin Tian , "Michael S . Tsirkin" , Jason Wang , Sean Christopherson , Christophe de Dinechin , Yan Zhao , Alex Williamson , Paolo Bonzini , Vitaly Kuznetsov , "Dr . David Alan Gilbert" , peterx@redhat.com Subject: [PATCH v8 09/14] KVM: selftests: Sync uapi/linux/kvm.h to tools/ Date: Tue, 31 Mar 2020 14:59:55 -0400 Message-Id: <20200331190000.659614-10-peterx@redhat.com> X-Mailer: git-send-email 2.24.1 In-Reply-To: <20200331190000.659614-1-peterx@redhat.com> References: <20200331190000.659614-1-peterx@redhat.com> MIME-Version: 1.0 Sender: kvm-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: kvm@vger.kernel.org This will be needed to extend the kvm selftest program. Signed-off-by: Peter Xu --- tools/include/uapi/linux/kvm.h | 100 ++++++++++++++++++++++++++++++++- 1 file changed, 98 insertions(+), 2 deletions(-) diff --git a/tools/include/uapi/linux/kvm.h b/tools/include/uapi/linux/kvm.h index 4b95f9a31a2f..74f150c69ee6 100644 --- a/tools/include/uapi/linux/kvm.h +++ b/tools/include/uapi/linux/kvm.h @@ -236,6 +236,7 @@ struct kvm_hyperv_exit { #define KVM_EXIT_IOAPIC_EOI 26 #define KVM_EXIT_HYPERV 27 #define KVM_EXIT_ARM_NISV 28 +#define KVM_EXIT_DIRTY_RING_FULL 29 /* For KVM_EXIT_INTERNAL_ERROR */ /* Emulate instruction failed. */ @@ -474,12 +475,17 @@ struct kvm_s390_mem_op { __u32 size; /* amount of bytes */ __u32 op; /* type of operation */ __u64 buf; /* buffer in userspace */ - __u8 ar; /* the access register number */ - __u8 reserved[31]; /* should be set to 0 */ + union { + __u8 ar; /* the access register number */ + __u32 sida_offset; /* offset into the sida */ + __u8 reserved[32]; /* should be set to 0 */ + }; }; /* types for kvm_s390_mem_op->op */ #define KVM_S390_MEMOP_LOGICAL_READ 0 #define KVM_S390_MEMOP_LOGICAL_WRITE 1 +#define KVM_S390_MEMOP_SIDA_READ 2 +#define KVM_S390_MEMOP_SIDA_WRITE 3 /* flags for kvm_s390_mem_op->flags */ #define KVM_S390_MEMOP_F_CHECK_ONLY (1ULL << 0) #define KVM_S390_MEMOP_F_INJECT_EXCEPTION (1ULL << 1) @@ -1010,6 +1016,9 @@ struct kvm_ppc_resize_hpt { #define KVM_CAP_ARM_NISV_TO_USER 177 #define KVM_CAP_ARM_INJECT_EXT_DABT 178 #define KVM_CAP_S390_VCPU_RESETS 179 +#define KVM_CAP_S390_PROTECTED 180 +#define KVM_CAP_PPC_SECURE_GUEST 181 +#define KVM_CAP_DIRTY_LOG_RING 182 #ifdef KVM_CAP_IRQ_ROUTING @@ -1478,6 +1487,42 @@ struct kvm_enc_region { #define KVM_S390_NORMAL_RESET _IO(KVMIO, 0xc3) #define KVM_S390_CLEAR_RESET _IO(KVMIO, 0xc4) +struct kvm_s390_pv_sec_parm { + __u64 origin; + __u64 length; +}; + +struct kvm_s390_pv_unp { + __u64 addr; + __u64 size; + __u64 tweak; +}; + +enum pv_cmd_id { + KVM_PV_ENABLE, + KVM_PV_DISABLE, + KVM_PV_SET_SEC_PARMS, + KVM_PV_UNPACK, + KVM_PV_VERIFY, + KVM_PV_PREP_RESET, + KVM_PV_UNSHARE_ALL, +}; + +struct kvm_pv_cmd { + __u32 cmd; /* Command to be executed */ + __u16 rc; /* Ultravisor return code */ + __u16 rrc; /* Ultravisor return reason code */ + __u64 data; /* Data or address */ + __u32 flags; /* flags for future extensions. 
Must be 0 for now */ + __u32 reserved[3]; +}; + +/* Available with KVM_CAP_S390_PROTECTED */ +#define KVM_S390_PV_COMMAND _IOWR(KVMIO, 0xc5, struct kvm_pv_cmd) + +/* Available with KVM_CAP_DIRTY_LOG_RING */ +#define KVM_RESET_DIRTY_RINGS _IO(KVMIO, 0xc6) + /* Secure Encrypted Virtualization command */ enum sev_cmd_id { /* Guest initialization commands */ @@ -1628,4 +1673,55 @@ struct kvm_hyperv_eventfd { #define KVM_HYPERV_CONN_ID_MASK 0x00ffffff #define KVM_HYPERV_EVENTFD_DEASSIGN (1 << 0) +#define KVM_DIRTY_LOG_MANUAL_PROTECT_ENABLE (1 << 0) +#define KVM_DIRTY_LOG_INITIALLY_SET (1 << 1) + +/* + * Arch needs to define the macro after implementing the dirty ring + * feature. KVM_DIRTY_LOG_PAGE_OFFSET should be defined as the + * starting page offset of the dirty ring structures. + */ +#ifndef KVM_DIRTY_LOG_PAGE_OFFSET +#define KVM_DIRTY_LOG_PAGE_OFFSET 0 +#endif + +/* + * KVM dirty GFN flags, defined as: + * + * |---------------+---------------+--------------| + * | bit 1 (reset) | bit 0 (dirty) | Status | + * |---------------+---------------+--------------| + * | 0 | 0 | Invalid GFN | + * | 0 | 1 | Dirty GFN | + * | 1 | X | GFN to reset | + * |---------------+---------------+--------------| + * + * Lifecycle of a dirty GFN goes like: + * + * dirtied collected reset + * 00 -----------> 01 -------------> 1X -------+ + * ^ | + * | | + * +------------------------------------------+ + * + * The userspace program is only responsible for the 01->1X state + * conversion (to collect dirty bits). Also, it must not skip any + * dirty bits so that dirty bits are always collected in sequence. + */ +#define KVM_DIRTY_GFN_F_DIRTY BIT(0) +#define KVM_DIRTY_GFN_F_RESET BIT(1) +#define KVM_DIRTY_GFN_F_MASK 0x3 + +/* + * KVM dirty rings should be mapped at KVM_DIRTY_LOG_PAGE_OFFSET of + * per-vcpu mmaped regions as an array of struct kvm_dirty_gfn. The + * size of the gfn buffer is decided by the first argument when + * enabling KVM_CAP_DIRTY_LOG_RING. 
+ */ +struct kvm_dirty_gfn { + __u32 flags; + __u32 slot; + __u64 offset; +}; + #endif /* __LINUX_KVM_H */ From patchwork Tue Mar 31 18:59:56 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Peter Xu X-Patchwork-Id: 11468279 Return-Path: Received: from mail.kernel.org (pdx-korg-mail-1.web.codeaurora.org [172.30.200.123]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id DEF5914B4 for ; Tue, 31 Mar 2020 19:01:11 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [209.132.180.67]) by mail.kernel.org (Postfix) with ESMTP id AA2272137B for ; Tue, 31 Mar 2020 19:01:11 +0000 (UTC) Authentication-Results: mail.kernel.org; dkim=pass (1024-bit key) header.d=redhat.com header.i=@redhat.com header.b="ZZPAUVDu" Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1731144AbgCaTBK (ORCPT ); Tue, 31 Mar 2020 15:01:10 -0400 Received: from us-smtp-delivery-1.mimecast.com ([205.139.110.120]:34263 "EHLO us-smtp-1.mimecast.com" rhost-flags-OK-OK-OK-FAIL) by vger.kernel.org with ESMTP id S1730677AbgCaTBK (ORCPT ); Tue, 31 Mar 2020 15:01:10 -0400 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com; s=mimecast20190719; t=1585681269; h=from:from:reply-to:subject:subject:date:date:message-id:message-id: to:to:cc:cc:mime-version:mime-version: content-transfer-encoding:content-transfer-encoding: in-reply-to:in-reply-to:references:references; bh=95vXa10yfbD57ZoMXDp8pO1CirBxQyw3OEmxh+K3pR8=; b=ZZPAUVDutT4jSUo4gpTay4uVkxyrdYbHBj6ZfKe9sT6RbPSQs5QFI/v51kuZu2GtMGg0dD EFB2VE8LXohW/aiBhfSTJiDZAR6FKjN6/kPLtsdO2VxrVGKwswAW7OD7KDTfytYnwHvUZc LfKzRJovpG0SgabN94LZEoV8bt+rFNQ= Received: from mail-wr1-f69.google.com (mail-wr1-f69.google.com [209.85.221.69]) (Using TLS) by relay.mimecast.com with ESMTP id us-mta-172-rKS8Hwl8M1en4i_IgjJQZw-1; Tue, 31 Mar 2020 15:01:07 -0400 X-MC-Unique: rKS8Hwl8M1en4i_IgjJQZw-1 Received: by mail-wr1-f69.google.com with SMTP id e10so13367228wru.6 for ; Tue, 31 Mar 2020 12:01:07 -0700 (PDT) X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to :references:mime-version:content-transfer-encoding; bh=95vXa10yfbD57ZoMXDp8pO1CirBxQyw3OEmxh+K3pR8=; b=WTCx9zUUQjJrezUpJc4DrMMxMom6tM8xSN2vxskDf/Y6wGbiyr6A5JFSpH6Mqa3ywc ZfBZzMJQF8QFwycdFXNzvzffgb8YH7H8pVDA51vQX/XGJ0ctCtemxCQixUyeCD4tWBXZ doso4lp4ZXpS8Z+3pcbMQo+7f/t+1xycZ9MgUP5g4KbRF+7jarbECrvXVSPn7JcKeiwc xZYCXR6yCDdSos/bAOnTvQ5DlIVrqO+O1UaHfVC9u221ueAw6RYkI3JCd7TXckxy/SKD OvDZ0v6X+6JKQv7tGLUUuDVEcztiIChfZKmBC/TkDx/6+N79eED8uTob71DDzr9thTTe vhKg== X-Gm-Message-State: ANhLgQ3uZNYL7D+F8APxY5D2wbWZ3VbxYEJd7erN3wvBDuZriCUcma5Y Bwr0LiffwY1YEHAXDUyt4SRaIa6Ap3nhwepNa7tv4jPK2m8D9QOoHkLUsYY20AW4vwR9qZH4pOx 6LxMR7naYp4ii X-Received: by 2002:adf:9d88:: with SMTP id p8mr21687173wre.257.1585681266314; Tue, 31 Mar 2020 12:01:06 -0700 (PDT) X-Google-Smtp-Source: ADFU+vuWY2exGj47N7yHGFbBoHIrl7B15ca2osKiacBp1KQGZwK5EyFMkDL+vO3EFOHvnhdr0ZRJvA== X-Received: by 2002:adf:9d88:: with SMTP id p8mr21687126wre.257.1585681265944; Tue, 31 Mar 2020 12:01:05 -0700 (PDT) Received: from xz-x1.redhat.com ([2607:9880:19c0:32::2]) by smtp.gmail.com with ESMTPSA id b67sm4986124wmh.29.2020.03.31.12.01.02 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Tue, 31 Mar 2020 12:01:05 -0700 (PDT) From: Peter Xu To: linux-kernel@vger.kernel.org, kvm@vger.kernel.org Cc: Kevin Tian , "Michael S . 
Tsirkin" , Jason Wang , Sean Christopherson , Christophe de Dinechin , Yan Zhao , Alex Williamson , Paolo Bonzini , Vitaly Kuznetsov , "Dr . David Alan Gilbert" , peterx@redhat.com, Andrew Jones Subject: [PATCH v8 10/14] KVM: selftests: Use a single binary for dirty/clear log test Date: Tue, 31 Mar 2020 14:59:56 -0400 Message-Id: <20200331190000.659614-11-peterx@redhat.com> X-Mailer: git-send-email 2.24.1 In-Reply-To: <20200331190000.659614-1-peterx@redhat.com> References: <20200331190000.659614-1-peterx@redhat.com> MIME-Version: 1.0 Sender: kvm-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: kvm@vger.kernel.org Remove the clear_dirty_log test, instead merge it into the existing dirty_log_test. It should be cleaner to use this single binary to do both tests, also it's a preparation for the upcoming dirty ring test. The default behavior will run all the modes in sequence. Reviewed-by: Andrew Jones Signed-off-by: Peter Xu --- tools/testing/selftests/kvm/Makefile | 2 - .../selftests/kvm/clear_dirty_log_test.c | 6 - tools/testing/selftests/kvm/dirty_log_test.c | 187 +++++++++++++++--- 3 files changed, 156 insertions(+), 39 deletions(-) delete mode 100644 tools/testing/selftests/kvm/clear_dirty_log_test.c diff --git a/tools/testing/selftests/kvm/Makefile b/tools/testing/selftests/kvm/Makefile index 712a2ddd2a27..fee0393f10da 100644 --- a/tools/testing/selftests/kvm/Makefile +++ b/tools/testing/selftests/kvm/Makefile @@ -28,13 +28,11 @@ TEST_GEN_PROGS_x86_64 += x86_64/vmx_dirty_log_test TEST_GEN_PROGS_x86_64 += x86_64/vmx_set_nested_state_test TEST_GEN_PROGS_x86_64 += x86_64/vmx_tsc_adjust_test TEST_GEN_PROGS_x86_64 += x86_64/xss_msr_test -TEST_GEN_PROGS_x86_64 += clear_dirty_log_test TEST_GEN_PROGS_x86_64 += demand_paging_test TEST_GEN_PROGS_x86_64 += dirty_log_test TEST_GEN_PROGS_x86_64 += kvm_create_max_vcpus TEST_GEN_PROGS_x86_64 += steal_time -TEST_GEN_PROGS_aarch64 += clear_dirty_log_test TEST_GEN_PROGS_aarch64 += demand_paging_test TEST_GEN_PROGS_aarch64 += dirty_log_test TEST_GEN_PROGS_aarch64 += kvm_create_max_vcpus diff --git a/tools/testing/selftests/kvm/clear_dirty_log_test.c b/tools/testing/selftests/kvm/clear_dirty_log_test.c deleted file mode 100644 index 11672ec6f74e..000000000000 --- a/tools/testing/selftests/kvm/clear_dirty_log_test.c +++ /dev/null @@ -1,6 +0,0 @@ -#define USE_CLEAR_DIRTY_LOG -#define KVM_DIRTY_LOG_MANUAL_PROTECT_ENABLE (1 << 0) -#define KVM_DIRTY_LOG_INITIALLY_SET (1 << 1) -#define KVM_DIRTY_LOG_MANUAL_CAPS (KVM_DIRTY_LOG_MANUAL_PROTECT_ENABLE | \ - KVM_DIRTY_LOG_INITIALLY_SET) -#include "dirty_log_test.c" diff --git a/tools/testing/selftests/kvm/dirty_log_test.c b/tools/testing/selftests/kvm/dirty_log_test.c index 6a8275a22861..139ccb550618 100644 --- a/tools/testing/selftests/kvm/dirty_log_test.c +++ b/tools/testing/selftests/kvm/dirty_log_test.c @@ -128,6 +128,78 @@ static uint64_t host_dirty_count; static uint64_t host_clear_count; static uint64_t host_track_next_count; +enum log_mode_t { + /* Only use KVM_GET_DIRTY_LOG for logging */ + LOG_MODE_DIRTY_LOG = 0, + + /* Use both KVM_[GET|CLEAR]_DIRTY_LOG for logging */ + LOG_MODE_CLEAR_LOG = 1, + + LOG_MODE_NUM, + + /* Run all supported modes */ + LOG_MODE_ALL = LOG_MODE_NUM, +}; + +/* Mode of logging to test. 
Default is to run all supported modes */ +static enum log_mode_t host_log_mode_option = LOG_MODE_ALL; +/* Logging mode for current run */ +static enum log_mode_t host_log_mode; + +static bool clear_log_supported(void) +{ + return kvm_check_cap(KVM_CAP_MANUAL_DIRTY_LOG_PROTECT2); +} + +static void clear_log_create_vm_done(struct kvm_vm *vm) +{ + struct kvm_enable_cap cap = {}; + u64 manual_caps; + + manual_caps = kvm_check_cap(KVM_CAP_MANUAL_DIRTY_LOG_PROTECT2); + TEST_ASSERT(manual_caps, "MANUAL_CAPS is zero!"); + manual_caps &= (KVM_DIRTY_LOG_MANUAL_PROTECT_ENABLE | + KVM_DIRTY_LOG_INITIALLY_SET); + cap.cap = KVM_CAP_MANUAL_DIRTY_LOG_PROTECT2; + cap.args[0] = manual_caps; + vm_enable_cap(vm, &cap); +} + +static void dirty_log_collect_dirty_pages(struct kvm_vm *vm, int slot, + void *bitmap, uint32_t num_pages) +{ + kvm_vm_get_dirty_log(vm, slot, bitmap); +} + +static void clear_log_collect_dirty_pages(struct kvm_vm *vm, int slot, + void *bitmap, uint32_t num_pages) +{ + kvm_vm_get_dirty_log(vm, slot, bitmap); + kvm_vm_clear_dirty_log(vm, slot, bitmap, 0, num_pages); +} + +struct log_mode { + const char *name; + /* Return true if this mode is supported, otherwise false */ + bool (*supported)(void); + /* Hook when the vm creation is done (before vcpu creation) */ + void (*create_vm_done)(struct kvm_vm *vm); + /* Hook to collect the dirty pages into the bitmap provided */ + void (*collect_dirty_pages) (struct kvm_vm *vm, int slot, + void *bitmap, uint32_t num_pages); +} log_modes[LOG_MODE_NUM] = { + { + .name = "dirty-log", + .collect_dirty_pages = dirty_log_collect_dirty_pages, + }, + { + .name = "clear-log", + .supported = clear_log_supported, + .create_vm_done = clear_log_create_vm_done, + .collect_dirty_pages = clear_log_collect_dirty_pages, + }, +}; + /* * We use this bitmap to track some pages that should have its dirty * bit set in the _next_ iteration. 
For example, if we detected the @@ -137,6 +209,44 @@ static uint64_t host_track_next_count; */ static unsigned long *host_bmap_track; +static void log_modes_dump(void) +{ + int i; + + printf("all"); + for (i = 0; i < LOG_MODE_NUM; i++) + printf(", %s", log_modes[i].name); + printf("\n"); +} + +static bool log_mode_supported(void) +{ + struct log_mode *mode = &log_modes[host_log_mode]; + + if (mode->supported) + return mode->supported(); + + return true; +} + +static void log_mode_create_vm_done(struct kvm_vm *vm) +{ + struct log_mode *mode = &log_modes[host_log_mode]; + + if (mode->create_vm_done) + mode->create_vm_done(vm); +} + +static void log_mode_collect_dirty_pages(struct kvm_vm *vm, int slot, + void *bitmap, uint32_t num_pages) +{ + struct log_mode *mode = &log_modes[host_log_mode]; + + TEST_ASSERT(mode->collect_dirty_pages != NULL, + "collect_dirty_pages() is required for any log mode!"); + mode->collect_dirty_pages(vm, slot, bitmap, num_pages); +} + static void generate_random_array(uint64_t *guest_array, uint64_t size) { uint64_t i; @@ -257,6 +367,7 @@ static struct kvm_vm *create_vm(enum vm_guest_mode mode, uint32_t vcpuid, #ifdef __x86_64__ vm_create_irqchip(vm); #endif + log_mode_create_vm_done(vm); vm_vcpu_add_default(vm, vcpuid, guest_code); return vm; } @@ -264,10 +375,6 @@ static struct kvm_vm *create_vm(enum vm_guest_mode mode, uint32_t vcpuid, #define DIRTY_MEM_BITS 30 /* 1G */ #define PAGE_SHIFT_4K 12 -#ifdef USE_CLEAR_DIRTY_LOG -static u64 dirty_log_manual_caps; -#endif - static void run_test(enum vm_guest_mode mode, unsigned long iterations, unsigned long interval, uint64_t phys_offset) { @@ -275,6 +382,12 @@ static void run_test(enum vm_guest_mode mode, unsigned long iterations, struct kvm_vm *vm; unsigned long *bmap; + if (!log_mode_supported()) { + print_skip("Log mode '%s' not supported", + log_modes[host_log_mode].name); + return; + } + /* * We reserve page table for 2 times of extra dirty mem which * will definitely cover the original (1G+) test range. Here @@ -317,14 +430,6 @@ static void run_test(enum vm_guest_mode mode, unsigned long iterations, bmap = bitmap_alloc(host_num_pages); host_bmap_track = bitmap_alloc(host_num_pages); -#ifdef USE_CLEAR_DIRTY_LOG - struct kvm_enable_cap cap = {}; - - cap.cap = KVM_CAP_MANUAL_DIRTY_LOG_PROTECT2; - cap.args[0] = dirty_log_manual_caps; - vm_enable_cap(vm, &cap); -#endif - /* Add an extra memory slot for testing dirty logging */ vm_userspace_mem_region_add(vm, VM_MEM_SRC_ANONYMOUS, guest_test_phys_mem, @@ -362,11 +467,8 @@ static void run_test(enum vm_guest_mode mode, unsigned long iterations, while (iteration < iterations) { /* Give the vcpu thread some time to dirty some pages */ usleep(interval * 1000); - kvm_vm_get_dirty_log(vm, TEST_MEM_SLOT_INDEX, bmap); -#ifdef USE_CLEAR_DIRTY_LOG - kvm_vm_clear_dirty_log(vm, TEST_MEM_SLOT_INDEX, bmap, 0, - host_num_pages); -#endif + log_mode_collect_dirty_pages(vm, TEST_MEM_SLOT_INDEX, + bmap, host_num_pages); vm_dirty_log_verify(mode, bmap); iteration++; sync_global_to_guest(vm, iteration); @@ -410,6 +512,9 @@ static void help(char *name) TEST_HOST_LOOP_INTERVAL); printf(" -p: specify guest physical test memory offset\n" " Warning: a low offset can conflict with the loaded test code.\n"); + printf(" -M: specify the host logging mode " + "(default: run all log modes). 
Supported modes: \n\t"); + log_modes_dump(); printf(" -m: specify the guest mode ID to test " "(default: test all supported modes)\n" " This option may be used multiple times.\n" @@ -429,18 +534,7 @@ int main(int argc, char *argv[]) bool mode_selected = false; uint64_t phys_offset = 0; unsigned int mode; - int opt, i; - -#ifdef USE_CLEAR_DIRTY_LOG - dirty_log_manual_caps = - kvm_check_cap(KVM_CAP_MANUAL_DIRTY_LOG_PROTECT2); - if (!dirty_log_manual_caps) { - print_skip("KVM_CLEAR_DIRTY_LOG not available"); - exit(KSFT_SKIP); - } - dirty_log_manual_caps &= (KVM_DIRTY_LOG_MANUAL_PROTECT_ENABLE | - KVM_DIRTY_LOG_INITIALLY_SET); -#endif + int opt, i, j; #ifdef __x86_64__ guest_mode_init(VM_MODE_PXXV48_4K, true, true); @@ -464,7 +558,7 @@ int main(int argc, char *argv[]) guest_mode_init(VM_MODE_P40V48_4K, true, true); #endif - while ((opt = getopt(argc, argv, "hi:I:p:m:")) != -1) { + while ((opt = getopt(argc, argv, "hi:I:p:m:M:")) != -1) { switch (opt) { case 'i': iterations = strtol(optarg, NULL, 10); @@ -486,6 +580,26 @@ int main(int argc, char *argv[]) "Guest mode ID %d too big", mode); guest_modes[mode].enabled = true; break; + case 'M': + if (!strcmp(optarg, "all")) { + host_log_mode_option = LOG_MODE_ALL; + break; + } + for (i = 0; i < LOG_MODE_NUM; i++) { + if (!strcmp(optarg, log_modes[i].name)) { + pr_info("Setting log mode to: '%s'\n", + optarg); + host_log_mode_option = i; + break; + } + } + if (i == LOG_MODE_NUM) { + printf("Log mode '%s' invalid. Please choose " + "from: ", optarg); + log_modes_dump(); + exit(1); + } + break; case 'h': default: help(argv[0]); @@ -507,7 +621,18 @@ int main(int argc, char *argv[]) TEST_ASSERT(guest_modes[i].supported, "Guest mode ID %d (%s) not supported.", i, vm_guest_mode_string(i)); - run_test(i, iterations, interval, phys_offset); + if (host_log_mode_option == LOG_MODE_ALL) { + /* Run each log mode */ + for (j = 0; j < LOG_MODE_NUM; j++) { + pr_info("Testing Log Mode '%s'\n", + log_modes[j].name); + host_log_mode = j; + run_test(i, iterations, interval, phys_offset); + } + } else { + host_log_mode = host_log_mode_option; + run_test(i, iterations, interval, phys_offset); + } } return 0; From patchwork Tue Mar 31 18:59:57 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Peter Xu X-Patchwork-Id: 11468281 Return-Path: Received: from mail.kernel.org (pdx-korg-mail-1.web.codeaurora.org [172.30.200.123]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id 8946A14B4 for ; Tue, 31 Mar 2020 19:01:21 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [209.132.180.67]) by mail.kernel.org (Postfix) with ESMTP id 5EC4721582 for ; Tue, 31 Mar 2020 19:01:21 +0000 (UTC) Authentication-Results: mail.kernel.org; dkim=pass (1024-bit key) header.d=redhat.com header.i=@redhat.com header.b="XMxJaWud" Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1731271AbgCaTBU (ORCPT ); Tue, 31 Mar 2020 15:01:20 -0400 Received: from us-smtp-2.mimecast.com ([205.139.110.61]:30237 "EHLO us-smtp-delivery-1.mimecast.com" rhost-flags-OK-OK-OK-FAIL) by vger.kernel.org with ESMTP id S1731173AbgCaTBS (ORCPT ); Tue, 31 Mar 2020 15:01:18 -0400 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com; s=mimecast20190719; t=1585681276; h=from:from:reply-to:subject:subject:date:date:message-id:message-id: to:to:cc:cc:mime-version:mime-version: content-transfer-encoding:content-transfer-encoding: in-reply-to:in-reply-to:references:references; 
From: Peter Xu To: linux-kernel@vger.kernel.org, kvm@vger.kernel.org Cc: Kevin Tian, "Michael S. Tsirkin", Jason Wang, Sean Christopherson, Christophe de Dinechin, Yan Zhao, Alex Williamson, Paolo Bonzini, Vitaly Kuznetsov, "Dr. David Alan Gilbert", peterx@redhat.com Subject: [PATCH v8 11/14] KVM: selftests: Introduce after_vcpu_run hook for dirty log test Date: Tue, 31 Mar 2020 14:59:57 -0400 Message-Id: <20200331190000.659614-12-peterx@redhat.com> X-Mailer: git-send-email 2.24.1 In-Reply-To: <20200331190000.659614-1-peterx@redhat.com> References: <20200331190000.659614-1-peterx@redhat.com> MIME-Version: 1.0 Sender: kvm-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: kvm@vger.kernel.org Provide a hook for the checks after vcpu_run() completes, in preparation for the dirty ring test where we'll need to take care of another exit reason. While at it, move the pages_count accounting out of the ucall check (we already have a better summary with the statistics) and clean the loop up a bit.
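As a preview of how the hook will be used, a rough sketch of a dirty-ring mode hook (the semaphore and exit-reason handling anticipate the next patch in the series; this is an illustration, not the final code):

static void dirty_ring_after_vcpu_run(struct kvm_vm *vm)
{
	struct kvm_run *run = vcpu_state(vm, VCPU_ID);

	if (run->exit_reason == KVM_EXIT_DIRTY_RING_FULL) {
		/* Ring full: let the main thread reap it, then resume */
		sem_post(&dirty_ring_vcpu_stop);
		sem_wait(&dirty_ring_vcpu_cont);
	} else {
		/* Otherwise we expect the normal guest sync ucall */
		default_after_vcpu_run(vm);
	}
}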
Signed-off-by: Peter Xu Reviewed-by: Andrew Jones --- tools/testing/selftests/kvm/dirty_log_test.c | 36 +++++++++++++------- 1 file changed, 24 insertions(+), 12 deletions(-) diff --git a/tools/testing/selftests/kvm/dirty_log_test.c b/tools/testing/selftests/kvm/dirty_log_test.c index 139ccb550618..a2160946bcf5 100644 --- a/tools/testing/selftests/kvm/dirty_log_test.c +++ b/tools/testing/selftests/kvm/dirty_log_test.c @@ -178,6 +178,15 @@ static void clear_log_collect_dirty_pages(struct kvm_vm *vm, int slot, kvm_vm_clear_dirty_log(vm, slot, bitmap, 0, num_pages); } +static void default_after_vcpu_run(struct kvm_vm *vm) +{ + struct kvm_run *run = vcpu_state(vm, VCPU_ID); + + TEST_ASSERT(get_ucall(vm, VCPU_ID, NULL) == UCALL_SYNC, + "Invalid guest sync status: exit_reason=%s\n", + exit_reason_str(run->exit_reason)); +} + struct log_mode { const char *name; /* Return true if this mode is supported, otherwise false */ @@ -187,16 +196,20 @@ struct log_mode { /* Hook to collect the dirty pages into the bitmap provided */ void (*collect_dirty_pages) (struct kvm_vm *vm, int slot, void *bitmap, uint32_t num_pages); + /* Hook to call when after each vcpu run */ + void (*after_vcpu_run)(struct kvm_vm *vm); } log_modes[LOG_MODE_NUM] = { { .name = "dirty-log", .collect_dirty_pages = dirty_log_collect_dirty_pages, + .after_vcpu_run = default_after_vcpu_run, }, { .name = "clear-log", .supported = clear_log_supported, .create_vm_done = clear_log_create_vm_done, .collect_dirty_pages = clear_log_collect_dirty_pages, + .after_vcpu_run = default_after_vcpu_run, }, }; @@ -247,6 +260,14 @@ static void log_mode_collect_dirty_pages(struct kvm_vm *vm, int slot, mode->collect_dirty_pages(vm, slot, bitmap, num_pages); } +static void log_mode_after_vcpu_run(struct kvm_vm *vm) +{ + struct log_mode *mode = &log_modes[host_log_mode]; + + if (mode->after_vcpu_run) + mode->after_vcpu_run(vm); +} + static void generate_random_array(uint64_t *guest_array, uint64_t size) { uint64_t i; @@ -261,25 +282,16 @@ static void *vcpu_worker(void *data) struct kvm_vm *vm = data; uint64_t *guest_array; uint64_t pages_count = 0; - struct kvm_run *run; - - run = vcpu_state(vm, VCPU_ID); guest_array = addr_gva2hva(vm, (vm_vaddr_t)random_array); - generate_random_array(guest_array, TEST_PAGES_PER_LOOP); while (!READ_ONCE(host_quit)) { + generate_random_array(guest_array, TEST_PAGES_PER_LOOP); + pages_count += TEST_PAGES_PER_LOOP; /* Let the guest dirty the random pages */ ret = _vcpu_run(vm, VCPU_ID); TEST_ASSERT(ret == 0, "vcpu_run failed: %d\n", ret); - if (get_ucall(vm, VCPU_ID, NULL) == UCALL_SYNC) { - pages_count += TEST_PAGES_PER_LOOP; - generate_random_array(guest_array, TEST_PAGES_PER_LOOP); - } else { - TEST_FAIL("Invalid guest sync status: " - "exit_reason=%s\n", - exit_reason_str(run->exit_reason)); - } + log_mode_after_vcpu_run(vm); } pr_info("Dirtied %"PRIu64" pages\n", pages_count); From patchwork Tue Mar 31 18:59:58 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Peter Xu X-Patchwork-Id: 11468283 Return-Path: Received: from mail.kernel.org (pdx-korg-mail-1.web.codeaurora.org [172.30.200.123]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id 029481667 for ; Tue, 31 Mar 2020 19:01:26 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [209.132.180.67]) by mail.kernel.org (Postfix) with ESMTP id C286C2137B for ; Tue, 31 Mar 2020 19:01:25 +0000 (UTC) Authentication-Results: mail.kernel.org; dkim=pass (1024-bit key) 
header.d=redhat.com header.i=@redhat.com header.b="WP4j0K23" Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1731214AbgCaTBY (ORCPT ); Tue, 31 Mar 2020 15:01:24 -0400 Received: from us-smtp-2.mimecast.com ([207.211.31.81]:52621 "EHLO us-smtp-delivery-1.mimecast.com" rhost-flags-OK-OK-OK-FAIL) by vger.kernel.org with ESMTP id S1726295AbgCaTBY (ORCPT ); Tue, 31 Mar 2020 15:01:24 -0400 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com; s=mimecast20190719; t=1585681282; h=from:from:reply-to:subject:subject:date:date:message-id:message-id: to:to:cc:cc:mime-version:mime-version: content-transfer-encoding:content-transfer-encoding: in-reply-to:in-reply-to:references:references; bh=wGJ8j7RpgylgeWWs+vuYTa8pkNvFmtJH98WcalowqmI=; b=WP4j0K23qKBspmKnMh7fpuXxd+I9+V4wSCaDmS9xvOjsUfmirGRvfhEFpK/nMWpSZx5SIR g+/NPRXhAf3tQqO24n47X+f01hKkH7vn36eSdIkMTzw5URtcLY2dng18XCxAITxOXeGKDI fe1w8zUODtccZPKTqn0DEZ7zhJctcis= Received: from mail-wr1-f72.google.com (mail-wr1-f72.google.com [209.85.221.72]) (Using TLS) by relay.mimecast.com with ESMTP id us-mta-355-paUG2B_6MAOOH1yMmFAidA-1; Tue, 31 Mar 2020 15:01:21 -0400 X-MC-Unique: paUG2B_6MAOOH1yMmFAidA-1 Received: by mail-wr1-f72.google.com with SMTP id t25so11018191wrb.16 for ; Tue, 31 Mar 2020 12:01:20 -0700 (PDT) X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to :references:mime-version:content-transfer-encoding; bh=wGJ8j7RpgylgeWWs+vuYTa8pkNvFmtJH98WcalowqmI=; b=aVUAB27cFS9LTy49h9e+nNkVrXUT1caZv53cZTg4t+U2XsYirFcNKIN9rvE3njTDWh WCKFPpdfMJV42SqX/V+2g8GwG4GiRB4uFmhExFp72wWZJxH8hRPBKuNyFOXodt/KJcSS jQtQBsGXLAGpdvr6qRdT0APsQmSbM4l6pt1b+0tkzG8INZtJLlBzXz4DALx4UsLmlduE 7pbYwgVwufx47+wDEz3sUHjzMSCdaNk8zZTj+9GLgT+grwLXw0/sfe14HdfjjhJ7jQQB qiItakOpZvNAkGuexNcaOF0wxRToEBzVsLDD6JxzhbdKOAXfT0su+yeOLBC/fX4pxCqB YNPg== X-Gm-Message-State: ANhLgQ2UgBuamkm2jZ5QiyU+LZXpCGI9Qrx1mliypASUFt9rZJ0ewT2J 0WeyGpXiQXb3z6JhNrK9qqyXu+ZH57koThYmmAmWsWiuOxkgexwnnQqq0YreQjmvwVkPvFdWEu/ ctDmPm/vXJrZJ X-Received: by 2002:a5d:5547:: with SMTP id g7mr22353946wrw.263.1585681279376; Tue, 31 Mar 2020 12:01:19 -0700 (PDT) X-Google-Smtp-Source: ADFU+vsa/xyfPcJ8TKEHqbrhTbd8IDjCVeWisUDzhFLvpCYmAY0ehw3wnjuQwbsfZVsM7waFzvUeSQ== X-Received: by 2002:a5d:5547:: with SMTP id g7mr22353903wrw.263.1585681278885; Tue, 31 Mar 2020 12:01:18 -0700 (PDT) Received: from xz-x1.redhat.com ([2607:9880:19c0:32::2]) by smtp.gmail.com with ESMTPSA id n7sm5141791wmf.4.2020.03.31.12.01.16 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Tue, 31 Mar 2020 12:01:18 -0700 (PDT) From: Peter Xu To: linux-kernel@vger.kernel.org, kvm@vger.kernel.org Cc: Kevin Tian , "Michael S . Tsirkin" , Jason Wang , Sean Christopherson , Christophe de Dinechin , Yan Zhao , Alex Williamson , Paolo Bonzini , Vitaly Kuznetsov , "Dr . David Alan Gilbert" , peterx@redhat.com Subject: [PATCH v8 12/14] KVM: selftests: Add dirty ring buffer test Date: Tue, 31 Mar 2020 14:59:58 -0400 Message-Id: <20200331190000.659614-13-peterx@redhat.com> X-Mailer: git-send-email 2.24.1 In-Reply-To: <20200331190000.659614-1-peterx@redhat.com> References: <20200331190000.659614-1-peterx@redhat.com> MIME-Version: 1.0 Sender: kvm-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: kvm@vger.kernel.org Add the initial dirty ring buffer test. The current test implements the userspace dirty ring collection, by only reaping the dirty ring when the ring is full. 
Signed-off-by: Peter Xu --- tools/testing/selftests/kvm/dirty_log_test.c | 201 +++++++++++++++++- .../testing/selftests/kvm/include/kvm_util.h | 3 + tools/testing/selftests/kvm/lib/kvm_util.c | 59 +++++ .../selftests/kvm/lib/kvm_util_internal.h | 4 + 4 files changed, 265 insertions(+), 2 deletions(-) diff --git a/tools/testing/selftests/kvm/dirty_log_test.c b/tools/testing/selftests/kvm/dirty_log_test.c index a2160946bcf5..531431cff4fc 100644 --- a/tools/testing/selftests/kvm/dirty_log_test.c +++ b/tools/testing/selftests/kvm/dirty_log_test.c @@ -12,8 +12,10 @@ #include #include #include +#include #include #include +#include #include "test_util.h" #include "kvm_util.h" @@ -57,6 +59,8 @@ # define test_and_clear_bit_le test_and_clear_bit #endif +#define TEST_DIRTY_RING_COUNT 1024 + /* * Guest/Host shared variables. Ensure addr_gva2hva() and/or * sync_global_to/from_guest() are used when accessing from @@ -128,6 +132,10 @@ static uint64_t host_dirty_count; static uint64_t host_clear_count; static uint64_t host_track_next_count; +/* Whether dirty ring reset is requested, or finished */ +static sem_t dirty_ring_vcpu_stop; +static sem_t dirty_ring_vcpu_cont; + enum log_mode_t { /* Only use KVM_GET_DIRTY_LOG for logging */ LOG_MODE_DIRTY_LOG = 0, @@ -135,6 +143,9 @@ enum log_mode_t { /* Use both KVM_[GET|CLEAR]_DIRTY_LOG for logging */ LOG_MODE_CLEAR_LOG = 1, + /* Use dirty ring for logging */ + LOG_MODE_DIRTY_RING = 2, + LOG_MODE_NUM, /* Run all supported modes */ @@ -187,6 +198,120 @@ static void default_after_vcpu_run(struct kvm_vm *vm) exit_reason_str(run->exit_reason)); } +static bool dirty_ring_supported(void) +{ + return kvm_check_cap(KVM_CAP_DIRTY_LOG_RING); +} + +static void dirty_ring_create_vm_done(struct kvm_vm *vm) +{ + /* + * Switch to dirty ring mode after VM creation but before any + * of the vcpu creation.
+ */ + vm_enable_dirty_ring(vm, TEST_DIRTY_RING_COUNT * + sizeof(struct kvm_dirty_gfn)); +} + +static inline bool dirty_gfn_is_dirtied(struct kvm_dirty_gfn *gfn) +{ + return gfn->flags == KVM_DIRTY_GFN_F_DIRTY; +} + +static inline void dirty_gfn_set_collected(struct kvm_dirty_gfn *gfn) +{ + gfn->flags = KVM_DIRTY_GFN_F_RESET; +} + +static uint32_t dirty_ring_collect_one(struct kvm_dirty_gfn *dirty_gfns, + int slot, void *bitmap, + uint32_t num_pages, uint32_t *fetch_index) +{ + struct kvm_dirty_gfn *cur; + uint32_t count = 0; + + while (true) { + cur = &dirty_gfns[*fetch_index % TEST_DIRTY_RING_COUNT]; + if (!dirty_gfn_is_dirtied(cur)) + break; + TEST_ASSERT(cur->slot == slot, "Slot number didn't match: " + "%u != %u", cur->slot, slot); + TEST_ASSERT(cur->offset < num_pages, "Offset overflow: " + "0x%llx >= 0x%x", cur->offset, num_pages); + pr_info("fetch 0x%x page %llu\n", *fetch_index, cur->offset); + set_bit(cur->offset, bitmap); + dirty_gfn_set_collected(cur); + (*fetch_index)++; + count++; + } + + return count; +} + +static void dirty_ring_collect_dirty_pages(struct kvm_vm *vm, int slot, + void *bitmap, uint32_t num_pages) +{ + /* We only have one vcpu */ + static uint32_t fetch_index = 0; + uint32_t count = 0, cleared; + + /* + * Before fetching the dirty pages, we need a vmexit of the + * worker vcpu to make sure the hardware dirty buffers were + * flushed. This is not needed for dirty-log/clear-log tests + * because get dirty log will natually do so. + * + * For now we do it in the simple way - we simply wait until + * the vcpu uses up the soft dirty ring, then it'll always + * do a vmexit to make sure that PML buffers will be flushed. + * In real hypervisors, we probably need a vcpu kick or to + * stop the vcpus (before the final sync) to make sure we'll + * get all the existing dirty PFNs even cached in hardware. 
+ */ + sem_wait(&dirty_ring_vcpu_stop); + + /* Only have one vcpu */ + count = dirty_ring_collect_one(vcpu_map_dirty_ring(vm, VCPU_ID), + slot, bitmap, num_pages, &fetch_index); + + cleared = kvm_vm_reset_dirty_ring(vm); + + /* Cleared pages should be the same as collected */ + TEST_ASSERT(cleared == count, "Reset dirty pages (%u) mismatch " + "with collected (%u)", cleared, count); + + pr_info("Notifying vcpu to continue\n"); + sem_post(&dirty_ring_vcpu_cont); + + pr_info("Iteration %ld collected %u pages\n", iteration, count); +} + +static void dirty_ring_after_vcpu_run(struct kvm_vm *vm) +{ + struct kvm_run *run = vcpu_state(vm, VCPU_ID); + + /* A ucall-sync or ring-full event is allowed */ + if (get_ucall(vm, VCPU_ID, NULL) == UCALL_SYNC) { + /* We should allow this to continue */ + ; + } else if (run->exit_reason == KVM_EXIT_DIRTY_RING_FULL) { + sem_post(&dirty_ring_vcpu_stop); + pr_info("vcpu stops because dirty ring full...\n"); + sem_wait(&dirty_ring_vcpu_cont); + pr_info("vcpu continues now.\n"); + } else { + TEST_ASSERT(false, "Invalid guest sync status: " + "exit_reason=%s\n", + exit_reason_str(run->exit_reason)); + } +} + +static void dirty_ring_before_vcpu_join(void) +{ + /* Kick another round of vcpu just to make sure it will quit */ + sem_post(&dirty_ring_vcpu_cont); +} + struct log_mode { const char *name; /* Return true if this mode is supported, otherwise false */ @@ -198,6 +323,7 @@ struct log_mode { void *bitmap, uint32_t num_pages); /* Hook to call when after each vcpu run */ void (*after_vcpu_run)(struct kvm_vm *vm); + void (*before_vcpu_join) (void); } log_modes[LOG_MODE_NUM] = { { .name = "dirty-log", @@ -211,6 +337,14 @@ struct log_mode { .collect_dirty_pages = clear_log_collect_dirty_pages, .after_vcpu_run = default_after_vcpu_run, }, + { + .name = "dirty-ring", + .supported = dirty_ring_supported, + .create_vm_done = dirty_ring_create_vm_done, + .collect_dirty_pages = dirty_ring_collect_dirty_pages, + .before_vcpu_join = dirty_ring_before_vcpu_join, + .after_vcpu_run = dirty_ring_after_vcpu_run, + }, }; /* @@ -268,6 +402,14 @@ static void log_mode_after_vcpu_run(struct kvm_vm *vm) mode->after_vcpu_run(vm); } +static void log_mode_before_vcpu_join(void) +{ + struct log_mode *mode = &log_modes[host_log_mode]; + + if (mode->before_vcpu_join) + mode->before_vcpu_join(); +} + static void generate_random_array(uint64_t *guest_array, uint64_t size) { uint64_t i; @@ -318,14 +460,65 @@ static void vm_dirty_log_verify(enum vm_guest_mode mode, unsigned long *bmap) } if (test_and_clear_bit_le(page, bmap)) { + bool matched; + host_dirty_count++; + /* * If the bit is set, the value written onto * the corresponding page should be either the * previous iteration number or the current one. */ - TEST_ASSERT(*value_ptr == iteration || - *value_ptr == iteration - 1, + matched = (*value_ptr == iteration || + *value_ptr == iteration - 1); + + if (host_log_mode == LOG_MODE_DIRTY_RING && !matched) { + if (*value_ptr == iteration - 2) { + /* + * Short answer: this case is special + * only for dirty ring test where the + * page is the last page before a kvm + * dirty ring full in iteration N-2. + * + * Long answer: Assuming ring size R, + * one possible condition is: + * + * main thr vcpu thr + * -------- -------- + * iter=1 + * write 1 to page 0~(R-1) + * full, vmexit + * collect 0~(R-1) + * kick vcpu + * write 1 to (R-1)~(2R-2) + * full, vmexit + * iter=2 + * collect (R-1)~(2R-2) + * kick vcpu + * write 1 to (2R-2) + * (NOTE!!! 
"1" cached in cpu reg) + * write 2 to (2R-1)~(3R-3) + * full, vmexit + * iter=3 + * collect (2R-2)~(3R-3) + * (here if we read value on page + * "2R-2" is 1, while iter=3!!!) + */ + matched = true; + } else { + /* + * This is also special for dirty ring + * when this page is exactly the last + * page touched before vcpu ring full. + * If it happens, we should expect the + * value to change in the next round. + */ + set_bit_le(page, host_bmap_track); + continue; + } + } + + TEST_ASSERT(matched, "Set page %"PRIu64" value %"PRIu64 " incorrect (iteration=%"PRIu64")", page, *value_ptr, iteration); @@ -488,6 +681,7 @@ static void run_test(enum vm_guest_mode mode, unsigned long iterations, /* Tell the vcpu thread to quit */ host_quit = true; + log_mode_before_vcpu_join(); pthread_join(vcpu_thread, NULL); pr_info("Total bits checked: dirty (%"PRIu64"), clear (%"PRIu64"), " @@ -548,6 +742,9 @@ int main(int argc, char *argv[]) unsigned int mode; int opt, i, j; + sem_init(&dirty_ring_vcpu_stop, 0, 0); + sem_init(&dirty_ring_vcpu_cont, 0, 0); + #ifdef __x86_64__ guest_mode_init(VM_MODE_PXXV48_4K, true, true); #endif diff --git a/tools/testing/selftests/kvm/include/kvm_util.h b/tools/testing/selftests/kvm/include/kvm_util.h index a99b875f50d2..554fdb294bef 100644 --- a/tools/testing/selftests/kvm/include/kvm_util.h +++ b/tools/testing/selftests/kvm/include/kvm_util.h @@ -62,6 +62,7 @@ enum vm_mem_backing_src_type { int kvm_check_cap(long cap); int vm_enable_cap(struct kvm_vm *vm, struct kvm_enable_cap *cap); +void vm_enable_dirty_ring(struct kvm_vm *vm, uint32_t ring_size); struct kvm_vm *vm_create(enum vm_guest_mode mode, uint64_t phy_pages, int perm); struct kvm_vm *_vm_create(enum vm_guest_mode mode, uint64_t phy_pages, int perm); @@ -71,6 +72,7 @@ void kvm_vm_release(struct kvm_vm *vmp); void kvm_vm_get_dirty_log(struct kvm_vm *vm, int slot, void *log); void kvm_vm_clear_dirty_log(struct kvm_vm *vm, int slot, void *log, uint64_t first_page, uint32_t num_pages); +uint32_t kvm_vm_reset_dirty_ring(struct kvm_vm *vm); int kvm_memcmp_hva_gva(void *hva, struct kvm_vm *vm, const vm_vaddr_t gva, size_t len); @@ -192,6 +194,7 @@ void vcpu_nested_state_get(struct kvm_vm *vm, uint32_t vcpuid, int vcpu_nested_state_set(struct kvm_vm *vm, uint32_t vcpuid, struct kvm_nested_state *state, bool ignore_error); #endif +void *vcpu_map_dirty_ring(struct kvm_vm *vm, uint32_t vcpuid); const char *exit_reason_str(unsigned int exit_reason); diff --git a/tools/testing/selftests/kvm/lib/kvm_util.c b/tools/testing/selftests/kvm/lib/kvm_util.c index 8a3523d4434f..e632d1f4a112 100644 --- a/tools/testing/selftests/kvm/lib/kvm_util.c +++ b/tools/testing/selftests/kvm/lib/kvm_util.c @@ -85,6 +85,16 @@ int vm_enable_cap(struct kvm_vm *vm, struct kvm_enable_cap *cap) return ret; } +void vm_enable_dirty_ring(struct kvm_vm *vm, uint32_t ring_size) +{ + struct kvm_enable_cap cap = { 0 }; + + cap.cap = KVM_CAP_DIRTY_LOG_RING; + cap.args[0] = ring_size; + vm_enable_cap(vm, &cap); + vm->dirty_ring_size = ring_size; +} + static void vm_open(struct kvm_vm *vm, int perm) { vm->kvm_fd = open(KVM_DEV_PATH, perm); @@ -295,6 +305,11 @@ void kvm_vm_clear_dirty_log(struct kvm_vm *vm, int slot, void *log, __func__, strerror(-ret)); } +uint32_t kvm_vm_reset_dirty_ring(struct kvm_vm *vm) +{ + return ioctl(vm->fd, KVM_RESET_DIRTY_RINGS); +} + /* * Userspace Memory Region Find * @@ -406,6 +421,13 @@ static void vm_vcpu_rm(struct kvm_vm *vm, uint32_t vcpuid) struct vcpu *vcpu = vcpu_find(vm, vcpuid); int ret; + if (vcpu->dirty_gfns) { + ret = 
munmap(vcpu->dirty_gfns, vm->dirty_ring_size); + TEST_ASSERT(ret == 0, "munmap of VCPU dirty ring failed, " + "rc: %i errno: %i", ret, errno); + vcpu->dirty_gfns = NULL; + } + ret = munmap(vcpu->state, sizeof(*vcpu->state)); TEST_ASSERT(ret == 0, "munmap of VCPU fd failed, rc: %i " "errno: %i", ret, errno); @@ -1475,6 +1497,42 @@ int _vcpu_ioctl(struct kvm_vm *vm, uint32_t vcpuid, return ret; } +void *vcpu_map_dirty_ring(struct kvm_vm *vm, uint32_t vcpuid) +{ + struct vcpu *vcpu; + uint32_t size = vm->dirty_ring_size; + + TEST_ASSERT(size > 0, "Should enable dirty ring first"); + + vcpu = vcpu_find(vm, vcpuid); + + TEST_ASSERT(vcpu, "Cannot find vcpu %u", vcpuid); + + if (!vcpu->dirty_gfns) { + void *addr; + + addr = mmap(NULL, size, PROT_READ, + MAP_PRIVATE, vcpu->fd, + vm->page_size * KVM_DIRTY_LOG_PAGE_OFFSET); + TEST_ASSERT(addr == MAP_FAILED, "Dirty ring mapped private"); + + addr = mmap(NULL, size, PROT_READ | PROT_EXEC, + MAP_PRIVATE, vcpu->fd, + vm->page_size * KVM_DIRTY_LOG_PAGE_OFFSET); + TEST_ASSERT(addr == MAP_FAILED, "Dirty ring mapped exec"); + + addr = mmap(NULL, size, PROT_READ | PROT_WRITE, + MAP_SHARED, vcpu->fd, + vm->page_size * KVM_DIRTY_LOG_PAGE_OFFSET); + TEST_ASSERT(addr != MAP_FAILED, "Dirty ring map failed"); + + vcpu->dirty_gfns = addr; + vcpu->dirty_gfns_count = size / sizeof(struct kvm_dirty_gfn); + } + + return vcpu->dirty_gfns; +} + /* * VM Ioctl * @@ -1569,6 +1627,7 @@ static struct exit_reason { {KVM_EXIT_INTERNAL_ERROR, "INTERNAL_ERROR"}, {KVM_EXIT_OSI, "OSI"}, {KVM_EXIT_PAPR_HCALL, "PAPR_HCALL"}, + {KVM_EXIT_DIRTY_RING_FULL, "DIRTY_RING_FULL"}, #ifdef KVM_EXIT_MEMORY_NOT_PRESENT {KVM_EXIT_MEMORY_NOT_PRESENT, "MEMORY_NOT_PRESENT"}, #endif diff --git a/tools/testing/selftests/kvm/lib/kvm_util_internal.h b/tools/testing/selftests/kvm/lib/kvm_util_internal.h index ca56a0133127..22c84d9c8b03 100644 --- a/tools/testing/selftests/kvm/lib/kvm_util_internal.h +++ b/tools/testing/selftests/kvm/lib/kvm_util_internal.h @@ -28,6 +28,9 @@ struct vcpu { uint32_t id; int fd; struct kvm_run *state; + struct kvm_dirty_gfn *dirty_gfns; + uint32_t fetch_index; + uint32_t dirty_gfns_count; }; struct kvm_vm { @@ -50,6 +53,7 @@ struct kvm_vm { vm_paddr_t pgd; vm_vaddr_t gdt; vm_vaddr_t tss; + uint32_t dirty_ring_size; }; struct vcpu *vcpu_find(struct kvm_vm *vm, uint32_t vcpuid);
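The vcpu_map_dirty_ring() asserts above encode the mapping contract: the ring must be mapped shared and writable, never private or executable, at page offset KVM_DIRTY_LOG_PAGE_OFFSET of the vcpu fd. Putting the pieces of this patch together, a stand-alone consumer could look roughly like this (a sketch assuming a kernel with this series applied so linux/kvm.h carries the definitions; vcpu_fd, vm_fd, page_size and ring_count come from elsewhere):

    #include <stdint.h>
    #include <sys/ioctl.h>
    #include <sys/mman.h>
    #include <linux/kvm.h>

    static struct kvm_dirty_gfn *ring;  /* mapped once, kept around */

    static int map_dirty_ring(int vcpu_fd, long page_size, uint32_t ring_count)
    {
            size_t size = ring_count * sizeof(struct kvm_dirty_gfn);

            /* Shared + read/write is the only mapping KVM accepts. */
            ring = mmap(NULL, size, PROT_READ | PROT_WRITE, MAP_SHARED,
                        vcpu_fd, page_size * KVM_DIRTY_LOG_PAGE_OFFSET);
            return ring == MAP_FAILED ? -1 : 0;
    }

    /* Reap every published entry, then let KVM recycle them. */
    static uint32_t reap_dirty_ring(int vm_fd, uint32_t ring_count,
                                    uint32_t *fetch_index)
    {
            uint32_t count = 0;

            for (;;) {
                    struct kvm_dirty_gfn *cur =
                            &ring[*fetch_index % ring_count];

                    if (cur->flags != KVM_DIRTY_GFN_F_DIRTY)
                            break;  /* no more published entries */
                    /* ... record (cur->slot, cur->offset) somewhere ... */
                    cur->flags = KVM_DIRTY_GFN_F_RESET;
                    (*fetch_index)++;
                    count++;
            }
            ioctl(vm_fd, KVM_RESET_DIRTY_RINGS);
            return count;
    }

This mirrors dirty_ring_collect_one() and kvm_vm_reset_dirty_ring() above, minus the test assertions.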
From patchwork Tue Mar 31 18:59:59 2020
From: Peter Xu
To: linux-kernel@vger.kernel.org, kvm@vger.kernel.org
Cc: Kevin Tian, "Michael S. Tsirkin", Jason Wang, Sean Christopherson, Christophe de Dinechin, Yan Zhao, Alex Williamson, Paolo Bonzini, Vitaly Kuznetsov, "Dr. David Alan Gilbert", peterx@redhat.com
Subject: [PATCH v8 13/14] KVM: selftests: Let dirty_log_test async for dirty ring test
Date: Tue, 31 Mar 2020 14:59:59 -0400
Message-Id: <20200331190000.659614-14-peterx@redhat.com>
In-Reply-To: <20200331190000.659614-1-peterx@redhat.com>

Previously the dirty ring test ran synchronously, because only a vmexit (specifically, the ring-full event) guaranteed that the hardware dirty bits had been flushed to the dirty ring.

This patch first introduces a vcpu kick mechanism using SIGUSR1; the kick guarantees a vmexit, and with it the flushing of the hardware dirty bits. With that, the vcpu dirtying work can run asynchronously alongside the whole collection procedure. Still, we need to be careful: we can only run async while the vcpu has not reached the soft limit (no KVM_EXIT_DIRTY_RING_FULL); otherwise we must collect the dirty bits before letting the vcpu continue.

Further, increase the dirty ring size to the current maximum, to make sure we torture the no-ring-full case more; that should be the major scenario for hypervisors like QEMU that want to use this feature.
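The kick itself is small. Condensed from the diff below, it is an empty SIGUSR1 handler plus pthread_kill(), with the vcpu loop treating EINTR from KVM_RUN as a normal, kick-induced exit (a sketch with names simplified from the patch; run_vcpu_once is hypothetical, and the sigaction() setup shown in the diff is omitted):

    #include <errno.h>
    #include <pthread.h>
    #include <signal.h>
    #include <sys/ioctl.h>
    #include <linux/kvm.h>

    #define SIG_IPI SIGUSR1

    /* The handler's only job is to make ioctl(KVM_RUN) return -1/EINTR. */
    static void vcpu_sig_handler(int sig)
    {
            (void)sig;      /* nothing to do; the interruption is the point */
    }

    /* Collector side: force the vcpu out of guest mode. */
    static void vcpu_kick(pthread_t vcpu_thread)
    {
            pthread_kill(vcpu_thread, SIG_IPI);
    }

    /* Vcpu side, once the handler is installed via sigaction(): */
    static int run_vcpu_once(int vcpu_fd)
    {
            int ret = ioctl(vcpu_fd, KVM_RUN, NULL);

            if (ret == -1 && errno == EINTR)
                    return 0;       /* kicked; hardware dirty bits flushed */
            return ret;
    }

The collector's own sem_wait() must then tolerate EINTR too, which is what the sem_wait_until() helper in the diff below is for.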
Signed-off-by: Peter Xu Reviewed-by: Andrew Jones --- tools/testing/selftests/kvm/dirty_log_test.c | 126 +++++++++++++----- .../testing/selftests/kvm/include/kvm_util.h | 1 + tools/testing/selftests/kvm/lib/kvm_util.c | 9 ++ 3 files changed, 106 insertions(+), 30 deletions(-) diff --git a/tools/testing/selftests/kvm/dirty_log_test.c b/tools/testing/selftests/kvm/dirty_log_test.c index 531431cff4fc..4b404dfdc2f9 100644 --- a/tools/testing/selftests/kvm/dirty_log_test.c +++ b/tools/testing/selftests/kvm/dirty_log_test.c @@ -13,6 +13,9 @@ #include #include #include +#include +#include +#include #include #include #include @@ -59,7 +62,9 @@ # define test_and_clear_bit_le test_and_clear_bit #endif -#define TEST_DIRTY_RING_COUNT 1024 +#define TEST_DIRTY_RING_COUNT 65536 + +#define SIG_IPI SIGUSR1 /* * Guest/Host shared variables. Ensure addr_gva2hva() and/or @@ -135,6 +140,12 @@ static uint64_t host_track_next_count; /* Whether dirty ring reset is requested, or finished */ static sem_t dirty_ring_vcpu_stop; static sem_t dirty_ring_vcpu_cont; +/* + * This is updated by the vcpu thread to tell the host whether it's a + * ring-full event. It should only be read until a sem_wait() of + * dirty_ring_vcpu_stop and before vcpu continues to run. + */ +static bool dirty_ring_vcpu_ring_full; enum log_mode_t { /* Only use KVM_GET_DIRTY_LOG for logging */ @@ -156,6 +167,33 @@ enum log_mode_t { static enum log_mode_t host_log_mode_option = LOG_MODE_ALL; /* Logging mode for current run */ static enum log_mode_t host_log_mode; +static pthread_t vcpu_thread; + +/* Only way to pass this to the signal handler */ +static struct kvm_vm *current_vm; + +static void vcpu_sig_handler(int sig) +{ + TEST_ASSERT(sig == SIG_IPI, "unknown signal: %d", sig); +} + +static void vcpu_kick(void) +{ + pthread_kill(vcpu_thread, SIG_IPI); +} + +/* + * In our test we do signal tricks, let's use a better version of + * sem_wait to avoid signal interrupts + */ +static void sem_wait_until(sem_t *sem) +{ + int ret; + + do + ret = sem_wait(sem); + while (ret == -1 && errno == EINTR); +} static bool clear_log_supported(void) { @@ -189,10 +227,13 @@ static void clear_log_collect_dirty_pages(struct kvm_vm *vm, int slot, kvm_vm_clear_dirty_log(vm, slot, bitmap, 0, num_pages); } -static void default_after_vcpu_run(struct kvm_vm *vm) +static void default_after_vcpu_run(struct kvm_vm *vm, int ret, int err) { struct kvm_run *run = vcpu_state(vm, VCPU_ID); + TEST_ASSERT(ret == 0 || (ret == -1 && err == EINTR), + "vcpu run failed: errno=%d", err); + TEST_ASSERT(get_ucall(vm, VCPU_ID, NULL) == UCALL_SYNC, "Invalid guest sync status: exit_reason=%s\n", exit_reason_str(run->exit_reason)); @@ -248,27 +289,37 @@ static uint32_t dirty_ring_collect_one(struct kvm_dirty_gfn *dirty_gfns, return count; } +static void dirty_ring_wait_vcpu(void) +{ + /* This makes sure that hardware PML cache flushed */ + vcpu_kick(); + sem_wait_until(&dirty_ring_vcpu_stop); +} + +static void dirty_ring_continue_vcpu(void) +{ + pr_info("Notifying vcpu to continue\n"); + sem_post(&dirty_ring_vcpu_cont); +} + static void dirty_ring_collect_dirty_pages(struct kvm_vm *vm, int slot, void *bitmap, uint32_t num_pages) { /* We only have one vcpu */ static uint32_t fetch_index = 0; uint32_t count = 0, cleared; + bool continued_vcpu = false; - /* - * Before fetching the dirty
pages, we need a vmexit of the - * worker vcpu to make sure the hardware dirty buffers were - * flushed. This is not needed for dirty-log/clear-log tests - * because get dirty log will natually do so. - * - * For now we do it in the simple way - we simply wait until - * the vcpu uses up the soft dirty ring, then it'll always - * do a vmexit to make sure that PML buffers will be flushed. - * In real hypervisors, we probably need a vcpu kick or to - * stop the vcpus (before the final sync) to make sure we'll - * get all the existing dirty PFNs even cached in hardware. - */ - sem_wait(&dirty_ring_vcpu_stop); + dirty_ring_wait_vcpu(); + + if (!dirty_ring_vcpu_ring_full) { + /* + * This is not a ring-full event, it's safe to allow + * vcpu to continue + */ + dirty_ring_continue_vcpu(); + continued_vcpu = true; + } /* Only have one vcpu */ count = dirty_ring_collect_one(vcpu_map_dirty_ring(vm, VCPU_ID), @@ -280,13 +331,16 @@ static void dirty_ring_collect_dirty_pages(struct kvm_vm *vm, int slot, TEST_ASSERT(cleared == count, "Reset dirty pages (%u) mismatch " "with collected (%u)", cleared, count); - pr_info("Notifying vcpu to continue\n"); - sem_post(&dirty_ring_vcpu_cont); + if (!continued_vcpu) { + TEST_ASSERT(dirty_ring_vcpu_ring_full, + "Didn't continue vcpu even without ring full"); + dirty_ring_continue_vcpu(); + } pr_info("Iteration %ld collected %u pages\n", iteration, count); } -static void dirty_ring_after_vcpu_run(struct kvm_vm *vm) +static void dirty_ring_after_vcpu_run(struct kvm_vm *vm, int ret, int err) { struct kvm_run *run = vcpu_state(vm, VCPU_ID); @@ -294,10 +348,16 @@ static void dirty_ring_after_vcpu_run(struct kvm_vm *vm) if (get_ucall(vm, VCPU_ID, NULL) == UCALL_SYNC) { /* We should allow this to continue */ ; - } else if (run->exit_reason == KVM_EXIT_DIRTY_RING_FULL) { + } else if (run->exit_reason == KVM_EXIT_DIRTY_RING_FULL || + (ret == -1 && err == EINTR)) { + /* Update the flag first before pause */ + WRITE_ONCE(dirty_ring_vcpu_ring_full, + run->exit_reason == KVM_EXIT_DIRTY_RING_FULL); sem_post(&dirty_ring_vcpu_stop); - pr_info("vcpu stops because dirty ring full...\n"); - sem_wait(&dirty_ring_vcpu_cont); + pr_info("vcpu stops because %s...\n", + dirty_ring_vcpu_ring_full ? 
+ "dirty ring is full" : "vcpu is kicked out"); + sem_wait_until(&dirty_ring_vcpu_cont); pr_info("vcpu continues now.\n"); } else { TEST_ASSERT(false, "Invalid guest sync status: " @@ -322,7 +382,7 @@ struct log_mode { void (*collect_dirty_pages) (struct kvm_vm *vm, int slot, void *bitmap, uint32_t num_pages); /* Hook to call when after each vcpu run */ - void (*after_vcpu_run)(struct kvm_vm *vm); + void (*after_vcpu_run)(struct kvm_vm *vm, int ret, int err); void (*before_vcpu_join) (void); } log_modes[LOG_MODE_NUM] = { { @@ -394,12 +454,12 @@ static void log_mode_collect_dirty_pages(struct kvm_vm *vm, int slot, mode->collect_dirty_pages(vm, slot, bitmap, num_pages); } -static void log_mode_after_vcpu_run(struct kvm_vm *vm) +static void log_mode_after_vcpu_run(struct kvm_vm *vm, int ret, int err) { struct log_mode *mode = &log_modes[host_log_mode]; if (mode->after_vcpu_run) - mode->after_vcpu_run(vm); + mode->after_vcpu_run(vm, ret, err); } static void log_mode_before_vcpu_join(void) @@ -420,20 +480,27 @@ static void generate_random_array(uint64_t *guest_array, uint64_t size) static void *vcpu_worker(void *data) { - int ret; + int ret, vcpu_fd; struct kvm_vm *vm = data; uint64_t *guest_array; uint64_t pages_count = 0; + struct sigaction sigact; + + current_vm = vm; + vcpu_fd = vcpu_get_fd(vm, VCPU_ID); + memset(&sigact, 0, sizeof(sigact)); + sigact.sa_handler = vcpu_sig_handler; + sigaction(SIG_IPI, &sigact, NULL); guest_array = addr_gva2hva(vm, (vm_vaddr_t)random_array); while (!READ_ONCE(host_quit)) { + /* Clear any existing kick signals */ generate_random_array(guest_array, TEST_PAGES_PER_LOOP); pages_count += TEST_PAGES_PER_LOOP; /* Let the guest dirty the random pages */ - ret = _vcpu_run(vm, VCPU_ID); - TEST_ASSERT(ret == 0, "vcpu_run failed: %d\n", ret); - log_mode_after_vcpu_run(vm); + ret = ioctl(vcpu_fd, KVM_RUN, NULL); + log_mode_after_vcpu_run(vm, ret, errno); } pr_info("Dirtied %"PRIu64" pages\n", pages_count); @@ -583,7 +650,6 @@ static struct kvm_vm *create_vm(enum vm_guest_mode mode, uint32_t vcpuid, static void run_test(enum vm_guest_mode mode, unsigned long iterations, unsigned long interval, uint64_t phys_offset) { - pthread_t vcpu_thread; struct kvm_vm *vm; unsigned long *bmap; diff --git a/tools/testing/selftests/kvm/include/kvm_util.h b/tools/testing/selftests/kvm/include/kvm_util.h index 554fdb294bef..62254375ec50 100644 --- a/tools/testing/selftests/kvm/include/kvm_util.h +++ b/tools/testing/selftests/kvm/include/kvm_util.h @@ -144,6 +144,7 @@ vm_paddr_t addr_gva2gpa(struct kvm_vm *vm, vm_vaddr_t gva); struct kvm_run *vcpu_state(struct kvm_vm *vm, uint32_t vcpuid); void vcpu_run(struct kvm_vm *vm, uint32_t vcpuid); int _vcpu_run(struct kvm_vm *vm, uint32_t vcpuid); +int vcpu_get_fd(struct kvm_vm *vm, uint32_t vcpuid); void vcpu_run_complete_io(struct kvm_vm *vm, uint32_t vcpuid); void vcpu_set_mp_state(struct kvm_vm *vm, uint32_t vcpuid, struct kvm_mp_state *mp_state); diff --git a/tools/testing/selftests/kvm/lib/kvm_util.c b/tools/testing/selftests/kvm/lib/kvm_util.c index e632d1f4a112..0e79bde7a2a8 100644 --- a/tools/testing/selftests/kvm/lib/kvm_util.c +++ b/tools/testing/selftests/kvm/lib/kvm_util.c @@ -1207,6 +1207,15 @@ int _vcpu_run(struct kvm_vm *vm, uint32_t vcpuid) return rc; } +int vcpu_get_fd(struct kvm_vm *vm, uint32_t vcpuid) +{ + struct vcpu *vcpu = vcpu_find(vm, vcpuid); + + TEST_ASSERT(vcpu != NULL, "vcpu not found, vcpuid: %u", vcpuid); + + return vcpu->fd; +} + void vcpu_run_complete_io(struct kvm_vm *vm, uint32_t vcpuid) { struct vcpu *vcpu = 
vcpu_find(vm, vcpuid);
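One subtlety in the patch above: dirty_ring_vcpu_ring_full is written with WRITE_ONCE() before sem_post(&dirty_ring_vcpu_stop) and only read after the collector's sem_wait() returns, so the semaphore pair provides the ordering between the two threads. Condensed, as a paraphrase of the code above rather than a literal excerpt:

    /* vcpu thread, on ring-full or kick: */
    WRITE_ONCE(dirty_ring_vcpu_ring_full,
               run->exit_reason == KVM_EXIT_DIRTY_RING_FULL);
    sem_post(&dirty_ring_vcpu_stop);            /* publish flag, then stop */
    sem_wait_until(&dirty_ring_vcpu_cont);

    /* collector thread: */
    vcpu_kick();
    sem_wait_until(&dirty_ring_vcpu_stop);      /* flag is now stable */
    if (!dirty_ring_vcpu_ring_full)
            dirty_ring_continue_vcpu();         /* overlap with collection */

Only when the ring really filled up does the vcpu stay paused until the collection and KVM_RESET_DIRTY_RINGS are done.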
From patchwork Tue Mar 31 19:00:00 2020
From: Peter Xu
To: linux-kernel@vger.kernel.org, kvm@vger.kernel.org
Cc: Kevin Tian, "Michael S. Tsirkin", Jason Wang, Sean Christopherson, Christophe de Dinechin, Yan Zhao, Alex Williamson, Paolo Bonzini, Vitaly Kuznetsov, "Dr. David Alan Gilbert", peterx@redhat.com, Andrew Jones
Subject: [PATCH v8 14/14] KVM: selftests: Add "-c" parameter to dirty log test
Date: Tue, 31 Mar 2020 15:00:00 -0400
Message-Id: <20200331190000.659614-15-peterx@redhat.com>
In-Reply-To: <20200331190000.659614-1-peterx@redhat.com>

The new "-c" parameter overrides the dirty ring size (in number of entries). With a bigger ring count we test the async path of the dirty ring; with a smaller ring count we test the ring-full code path. Async is the default. The parameter has no effect on non-dirty-ring tests.

Reviewed-by: Andrew Jones Signed-off-by: Peter Xu --- tools/testing/selftests/kvm/dirty_log_test.c | 13 ++++++++++--- 1 file changed, 10 insertions(+), 3 deletions(-) diff --git a/tools/testing/selftests/kvm/dirty_log_test.c b/tools/testing/selftests/kvm/dirty_log_test.c index 4b404dfdc2f9..80c42c87265e 100644 --- a/tools/testing/selftests/kvm/dirty_log_test.c +++ b/tools/testing/selftests/kvm/dirty_log_test.c @@ -168,6 +168,7 @@ static enum log_mode_t host_log_mode_option = LOG_MODE_ALL; /* Logging mode for current run */ static enum log_mode_t host_log_mode; static pthread_t vcpu_thread; +static uint32_t test_dirty_ring_count = TEST_DIRTY_RING_COUNT; /* Only way to pass this to the signal handler */ static struct kvm_vm *current_vm; @@ -250,7 +251,7 @@ static void dirty_ring_create_vm_done(struct kvm_vm *vm) * Switch to dirty ring mode after VM creation but before any * of the vcpu creation. */ - vm_enable_dirty_ring(vm, TEST_DIRTY_RING_COUNT * + vm_enable_dirty_ring(vm, test_dirty_ring_count * sizeof(struct kvm_dirty_gfn)); } @@ -272,7 +273,7 @@ static uint32_t dirty_ring_collect_one(struct kvm_dirty_gfn *dirty_gfns, uint32_t count = 0; while (true) { - cur = &dirty_gfns[*fetch_index % TEST_DIRTY_RING_COUNT]; + cur = &dirty_gfns[*fetch_index % test_dirty_ring_count]; if (!dirty_gfn_is_dirtied(cur)) break; TEST_ASSERT(cur->slot == slot, "Slot number didn't match: " @@ -778,6 +779,9 @@ static void help(char *name) printf("usage: %s [-h] [-i iterations] [-I interval] " "[-p offset] [-m mode]\n", name); puts(""); + printf(" -c: specify dirty ring size, in number of entries\n"); + printf(" (only useful for dirty-ring test; default: %"PRIu32")\n", + TEST_DIRTY_RING_COUNT); printf(" -i: specify iteration counts (default: %"PRIu64")\n", TEST_HOST_LOOP_N); printf(" -I: specify interval in ms (default: %"PRIu64" ms)\n", @@ -833,8 +837,11 @@ int main(int argc, char *argv[]) guest_mode_init(VM_MODE_P40V48_4K, true, true); #endif - while ((opt = getopt(argc, argv, "hi:I:p:m:M:")) != -1) { + while ((opt = getopt(argc, argv, "c:hi:I:p:m:M:")) != -1) { switch (opt) { + case 'c': + test_dirty_ring_count = strtol(optarg, NULL, 10); + break; case 'i': iterations = strtol(optarg, NULL, 10); break;