From patchwork Tue Mar 31 18:59:50 2020
X-Patchwork-Submitter: Peter Xu
X-Patchwork-Id: 11468265
From: Peter Xu
To: linux-kernel@vger.kernel.org, kvm@vger.kernel.org
Cc: Kevin Tian, "Michael S. Tsirkin", Jason Wang, Sean Christopherson,
    Christophe de Dinechin, Yan Zhao, Alex Williamson, Paolo Bonzini,
    Vitaly Kuznetsov, "Dr. David Alan Gilbert", peterx@redhat.com
Subject: [PATCH v8 04/14] KVM: Pass in kvm pointer into mark_page_dirty_in_slot()
Date: Tue, 31 Mar 2020 14:59:50 -0400
Message-Id: <20200331190000.659614-5-peterx@redhat.com>
X-Mailer: git-send-email 2.24.1
In-Reply-To: <20200331190000.659614-1-peterx@redhat.com>
References: <20200331190000.659614-1-peterx@redhat.com>

The kvm pointer will be needed in this context to implement the kvm
dirty ring.

Signed-off-by: Peter Xu
---
 virt/kvm/kvm_main.c | 33 +++++++++++++++++++-------------
 1 file changed, 19 insertions(+), 14 deletions(-)

diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
index c04612726e85..1f869dda8110 100644
--- a/virt/kvm/kvm_main.c
+++ b/virt/kvm/kvm_main.c
@@ -144,7 +144,9 @@ static void hardware_disable_all(void);
 
 static void kvm_io_bus_destroy(struct kvm_io_bus *bus);
 
-static void mark_page_dirty_in_slot(struct kvm_memory_slot *memslot, gfn_t gfn);
+static void mark_page_dirty_in_slot(struct kvm *kvm,
+				    struct kvm_memory_slot *memslot,
+				    gfn_t gfn);
 
 __visible bool kvm_rebooting;
 EXPORT_SYMBOL_GPL(kvm_rebooting);
@@ -2120,7 +2122,8 @@ int kvm_vcpu_map(struct kvm_vcpu *vcpu, gfn_t gfn, struct kvm_host_map *map)
 }
 EXPORT_SYMBOL_GPL(kvm_vcpu_map);
 
-static void __kvm_unmap_gfn(struct kvm_memory_slot *memslot,
+static void __kvm_unmap_gfn(struct kvm *kvm,
+			struct kvm_memory_slot *memslot,
 			struct kvm_host_map *map,
 			struct gfn_to_pfn_cache *cache,
 			bool dirty, bool atomic)
@@ -2145,7 +2148,7 @@ static void __kvm_unmap_gfn(struct kvm_memory_slot *memslot,
 #endif
 
 	if (dirty)
-		mark_page_dirty_in_slot(memslot, map->gfn);
+		mark_page_dirty_in_slot(kvm, memslot, map->gfn);
 
 	if (cache)
 		cache->dirty |= dirty;
@@ -2159,7 +2162,7 @@ static void __kvm_unmap_gfn(struct kvm_memory_slot *memslot,
 int kvm_unmap_gfn(struct kvm_vcpu *vcpu, struct kvm_host_map *map,
 		  struct gfn_to_pfn_cache *cache, bool dirty, bool atomic)
 {
-	__kvm_unmap_gfn(gfn_to_memslot(vcpu->kvm, map->gfn), map,
+	__kvm_unmap_gfn(vcpu->kvm, gfn_to_memslot(vcpu->kvm, map->gfn), map,
 			cache, dirty, atomic);
 	return 0;
 }
@@ -2167,8 +2170,8 @@ EXPORT_SYMBOL_GPL(kvm_unmap_gfn);
 
 void kvm_vcpu_unmap(struct kvm_vcpu *vcpu, struct kvm_host_map *map, bool dirty)
 {
-	__kvm_unmap_gfn(kvm_vcpu_gfn_to_memslot(vcpu, map->gfn), map, NULL,
-			dirty, false);
+	__kvm_unmap_gfn(vcpu->kvm, kvm_vcpu_gfn_to_memslot(vcpu, map->gfn),
+			map, NULL, dirty, false);
 }
 EXPORT_SYMBOL_GPL(kvm_vcpu_unmap);
 
@@ -2342,7 +2345,8 @@ int kvm_vcpu_read_guest_atomic(struct kvm_vcpu *vcpu, gpa_t gpa,
 }
 EXPORT_SYMBOL_GPL(kvm_vcpu_read_guest_atomic);
 
-static int __kvm_write_guest_page(struct kvm_memory_slot *memslot, gfn_t gfn,
+static int __kvm_write_guest_page(struct kvm *kvm,
+				  struct kvm_memory_slot *memslot, gfn_t gfn,
 				  const void *data, int offset, int len)
 {
 	int r;
@@ -2354,7 +2358,7 @@ static int __kvm_write_guest_page(struct kvm_memory_slot *memslot, gfn_t gfn,
 	r = __copy_to_user((void __user *)addr + offset, data, len);
 	if (r)
 		return -EFAULT;
-	mark_page_dirty_in_slot(memslot, gfn);
+	mark_page_dirty_in_slot(kvm, memslot, gfn);
 	return 0;
 }
 
@@ -2363,7 +2367,7 @@ int kvm_write_guest_page(struct kvm *kvm, gfn_t gfn,
 {
 	struct kvm_memory_slot *slot = gfn_to_memslot(kvm, gfn);
 
-	return __kvm_write_guest_page(slot, gfn, data, offset, len);
+	return __kvm_write_guest_page(kvm, slot, gfn, data, offset, len);
 }
 EXPORT_SYMBOL_GPL(kvm_write_guest_page);
 
@@ -2372,7 +2376,7 @@ int kvm_vcpu_write_guest_page(struct kvm_vcpu *vcpu, gfn_t gfn,
 {
 	struct kvm_memory_slot *slot = kvm_vcpu_gfn_to_memslot(vcpu, gfn);
 
-	return __kvm_write_guest_page(slot, gfn, data, offset, len);
+	return __kvm_write_guest_page(vcpu->kvm, slot, gfn, data, offset, len);
 }
 EXPORT_SYMBOL_GPL(kvm_vcpu_write_guest_page);
 
@@ -2491,7 +2495,7 @@ int kvm_write_guest_offset_cached(struct kvm *kvm, struct gfn_to_hva_cache *ghc,
 	r = __copy_to_user((void __user *)ghc->hva + offset, data, len);
 	if (r)
 		return -EFAULT;
-	mark_page_dirty_in_slot(ghc->memslot, gpa >> PAGE_SHIFT);
+	mark_page_dirty_in_slot(kvm, ghc->memslot, gpa >> PAGE_SHIFT);
 
 	return 0;
 }
@@ -2558,7 +2562,8 @@ int kvm_clear_guest(struct kvm *kvm, gpa_t gpa, unsigned long len)
 }
 EXPORT_SYMBOL_GPL(kvm_clear_guest);
 
-static void mark_page_dirty_in_slot(struct kvm_memory_slot *memslot,
+static void mark_page_dirty_in_slot(struct kvm *kvm,
+				    struct kvm_memory_slot *memslot,
 				    gfn_t gfn)
 {
 	if (memslot && memslot->dirty_bitmap) {
@@ -2573,7 +2578,7 @@ void mark_page_dirty(struct kvm *kvm, gfn_t gfn)
 	struct kvm_memory_slot *memslot;
 
 	memslot = gfn_to_memslot(kvm, gfn);
-	mark_page_dirty_in_slot(memslot, gfn);
+	mark_page_dirty_in_slot(kvm, memslot, gfn);
 }
 EXPORT_SYMBOL_GPL(mark_page_dirty);
 
@@ -2582,7 +2587,7 @@ void kvm_vcpu_mark_page_dirty(struct kvm_vcpu *vcpu, gfn_t gfn)
 	struct kvm_memory_slot *memslot;
 
 	memslot = kvm_vcpu_gfn_to_memslot(vcpu, gfn);
-	mark_page_dirty_in_slot(memslot, gfn);
+	mark_page_dirty_in_slot(vcpu->kvm, memslot, gfn);
 }
 EXPORT_SYMBOL_GPL(kvm_vcpu_mark_page_dirty);
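
For context, here is a minimal sketch (not part of this patch) of why
mark_page_dirty_in_slot() wants the struct kvm pointer: a later patch in
the series can then reach per-VM dirty-ring state from the same path that
sets the dirty bitmap. kvm_dirty_ring_push() and its (slot id, relative
gfn) arguments are hypothetical placeholders here, not necessarily the
interface the dirty-ring patches define:

/*
 * Illustrative sketch only. Once mark_page_dirty_in_slot() takes the
 * struct kvm pointer, per-VM state becomes reachable when a page is
 * dirtied. kvm_dirty_ring_push() below is a hypothetical helper.
 */
static void mark_page_dirty_in_slot(struct kvm *kvm,
				    struct kvm_memory_slot *memslot,
				    gfn_t gfn)
{
	if (memslot && memslot->dirty_bitmap) {
		unsigned long rel_gfn = gfn - memslot->base_gfn;

		set_bit_le(rel_gfn, memslot->dirty_bitmap);
		/* The new kvm argument makes a per-VM ring reachable here: */
		kvm_dirty_ring_push(kvm, memslot->id, rel_gfn);
	}
}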