From patchwork Wed Oct  7 20:53:32 2020
X-Patchwork-Submitter: Peter Xu
X-Patchwork-Id: 11821767
From: Peter Xu
To: kvm@vger.kernel.org, linux-kernel@vger.kernel.org
Cc: peterx@redhat.com, Paolo Bonzini, Sean Christopherson,
    Andrew Jones, "Dr. David Alan Gilbert"
David Alan Gilbert" Subject: [PATCH v14 04/14] KVM: Pass in kvm pointer into mark_page_dirty_in_slot() Date: Wed, 7 Oct 2020 16:53:32 -0400 Message-Id: <20201007205342.295402-5-peterx@redhat.com> X-Mailer: git-send-email 2.26.2 In-Reply-To: <20201007205342.295402-1-peterx@redhat.com> References: <20201007205342.295402-1-peterx@redhat.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: kvm@vger.kernel.org The context will be needed to implement the kvm dirty ring. Signed-off-by: Peter Xu --- virt/kvm/kvm_main.c | 33 +++++++++++++++++++-------------- 1 file changed, 19 insertions(+), 14 deletions(-) diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c index 2e8539213125..63cbf335e2e7 100644 --- a/virt/kvm/kvm_main.c +++ b/virt/kvm/kvm_main.c @@ -143,7 +143,9 @@ static void hardware_disable_all(void); static void kvm_io_bus_destroy(struct kvm_io_bus *bus); -static void mark_page_dirty_in_slot(struct kvm_memory_slot *memslot, gfn_t gfn); +static void mark_page_dirty_in_slot(struct kvm *kvm, + struct kvm_memory_slot *memslot, + gfn_t gfn); __visible bool kvm_rebooting; EXPORT_SYMBOL_GPL(kvm_rebooting); @@ -2198,7 +2200,8 @@ int kvm_vcpu_map(struct kvm_vcpu *vcpu, gfn_t gfn, struct kvm_host_map *map) } EXPORT_SYMBOL_GPL(kvm_vcpu_map); -static void __kvm_unmap_gfn(struct kvm_memory_slot *memslot, +static void __kvm_unmap_gfn(struct kvm *kvm, + struct kvm_memory_slot *memslot, struct kvm_host_map *map, struct gfn_to_pfn_cache *cache, bool dirty, bool atomic) @@ -2223,7 +2226,7 @@ static void __kvm_unmap_gfn(struct kvm_memory_slot *memslot, #endif if (dirty) - mark_page_dirty_in_slot(memslot, map->gfn); + mark_page_dirty_in_slot(kvm, memslot, map->gfn); if (cache) cache->dirty |= dirty; @@ -2237,7 +2240,7 @@ static void __kvm_unmap_gfn(struct kvm_memory_slot *memslot, int kvm_unmap_gfn(struct kvm_vcpu *vcpu, struct kvm_host_map *map, struct gfn_to_pfn_cache *cache, bool dirty, bool atomic) { - __kvm_unmap_gfn(gfn_to_memslot(vcpu->kvm, map->gfn), map, + __kvm_unmap_gfn(vcpu->kvm, gfn_to_memslot(vcpu->kvm, map->gfn), map, cache, dirty, atomic); return 0; } @@ -2245,8 +2248,8 @@ EXPORT_SYMBOL_GPL(kvm_unmap_gfn); void kvm_vcpu_unmap(struct kvm_vcpu *vcpu, struct kvm_host_map *map, bool dirty) { - __kvm_unmap_gfn(kvm_vcpu_gfn_to_memslot(vcpu, map->gfn), map, NULL, - dirty, false); + __kvm_unmap_gfn(vcpu->kvm, kvm_vcpu_gfn_to_memslot(vcpu, map->gfn), + map, NULL, dirty, false); } EXPORT_SYMBOL_GPL(kvm_vcpu_unmap); @@ -2420,7 +2423,8 @@ int kvm_vcpu_read_guest_atomic(struct kvm_vcpu *vcpu, gpa_t gpa, } EXPORT_SYMBOL_GPL(kvm_vcpu_read_guest_atomic); -static int __kvm_write_guest_page(struct kvm_memory_slot *memslot, gfn_t gfn, +static int __kvm_write_guest_page(struct kvm *kvm, + struct kvm_memory_slot *memslot, gfn_t gfn, const void *data, int offset, int len) { int r; @@ -2432,7 +2436,7 @@ static int __kvm_write_guest_page(struct kvm_memory_slot *memslot, gfn_t gfn, r = __copy_to_user((void __user *)addr + offset, data, len); if (r) return -EFAULT; - mark_page_dirty_in_slot(memslot, gfn); + mark_page_dirty_in_slot(kvm, memslot, gfn); return 0; } @@ -2441,7 +2445,7 @@ int kvm_write_guest_page(struct kvm *kvm, gfn_t gfn, { struct kvm_memory_slot *slot = gfn_to_memslot(kvm, gfn); - return __kvm_write_guest_page(slot, gfn, data, offset, len); + return __kvm_write_guest_page(kvm, slot, gfn, data, offset, len); } EXPORT_SYMBOL_GPL(kvm_write_guest_page); @@ -2450,7 +2454,7 @@ int kvm_vcpu_write_guest_page(struct kvm_vcpu *vcpu, gfn_t gfn, { struct kvm_memory_slot *slot = 
kvm_vcpu_gfn_to_memslot(vcpu, gfn); - return __kvm_write_guest_page(slot, gfn, data, offset, len); + return __kvm_write_guest_page(vcpu->kvm, slot, gfn, data, offset, len); } EXPORT_SYMBOL_GPL(kvm_vcpu_write_guest_page); @@ -2569,7 +2573,7 @@ int kvm_write_guest_offset_cached(struct kvm *kvm, struct gfn_to_hva_cache *ghc, r = __copy_to_user((void __user *)ghc->hva + offset, data, len); if (r) return -EFAULT; - mark_page_dirty_in_slot(ghc->memslot, gpa >> PAGE_SHIFT); + mark_page_dirty_in_slot(kvm, ghc->memslot, gpa >> PAGE_SHIFT); return 0; } @@ -2645,7 +2649,8 @@ int kvm_clear_guest(struct kvm *kvm, gpa_t gpa, unsigned long len) } EXPORT_SYMBOL_GPL(kvm_clear_guest); -static void mark_page_dirty_in_slot(struct kvm_memory_slot *memslot, +static void mark_page_dirty_in_slot(struct kvm *kvm, + struct kvm_memory_slot *memslot, gfn_t gfn) { if (memslot && memslot->dirty_bitmap) { @@ -2660,7 +2665,7 @@ void mark_page_dirty(struct kvm *kvm, gfn_t gfn) struct kvm_memory_slot *memslot; memslot = gfn_to_memslot(kvm, gfn); - mark_page_dirty_in_slot(memslot, gfn); + mark_page_dirty_in_slot(kvm, memslot, gfn); } EXPORT_SYMBOL_GPL(mark_page_dirty); @@ -2669,7 +2674,7 @@ void kvm_vcpu_mark_page_dirty(struct kvm_vcpu *vcpu, gfn_t gfn) struct kvm_memory_slot *memslot; memslot = kvm_vcpu_gfn_to_memslot(vcpu, gfn); - mark_page_dirty_in_slot(memslot, gfn); + mark_page_dirty_in_slot(vcpu->kvm, memslot, gfn); } EXPORT_SYMBOL_GPL(kvm_vcpu_mark_page_dirty);
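
Not part of the patch, but for context: a rough sketch of where the new
argument is headed. The dirty ring introduced later in this series hangs
off the VM rather than the memslot, so mark_page_dirty_in_slot() needs
the struct kvm pointer to reach it. kvm_dirty_ring_get(),
kvm_dirty_ring_push() and kvm->dirty_ring_size below are assumptions
taken from the later patches, not interfaces this patch adds:

static void mark_page_dirty_in_slot(struct kvm *kvm,
				    struct kvm_memory_slot *memslot,
				    gfn_t gfn)
{
	/* Illustrative sketch only, based on later patches in the series. */
	if (memslot && memslot->dirty_bitmap) {
		unsigned long rel_gfn = gfn - memslot->base_gfn;
		u32 slot = (memslot->as_id << 16) | memslot->id;

		if (kvm->dirty_ring_size)
			/* Ring enabled by userspace: publish (slot, offset). */
			kvm_dirty_ring_push(kvm_dirty_ring_get(kvm),
					    slot, rel_gfn);
		else
			/* Otherwise fall back to the classic dirty bitmap. */
			set_bit_le(rel_gfn, memslot->dirty_bitmap);
	}
}

Plumbing kvm through the callers is the cheap way to get there: memslots
carry no back-pointer to their VM, so the alternative would be adding one
to every slot.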