From patchwork Mon Nov 30 18:26:07 2015
X-Patchwork-Submitter: Xiao Guangrong
X-Patchwork-Id: 7729111
From: Xiao Guangrong
To: pbonzini@redhat.com
Cc: gleb@kernel.org, mtosatti@redhat.com, kvm@vger.kernel.org,
 linux-kernel@vger.kernel.org, Xiao Guangrong
Subject: [PATCH 05/11] KVM: page track: introduce kvm_page_track_{add,remove}_page
Date: Tue, 1 Dec 2015 02:26:07 +0800
Message-Id: <1448907973-36066-6-git-send-email-guangrong.xiao@linux.intel.com>
In-Reply-To: <1448907973-36066-1-git-send-email-guangrong.xiao@linux.intel.com>
References:
<1448907973-36066-1-git-send-email-guangrong.xiao@linux.intel.com>
List-ID: kvm@vger.kernel.org

These two functions are the user APIs:
- kvm_page_track_add_page(): add the page to the tracking pool; after
  that, the specified access on the page will be tracked
- kvm_page_track_remove_page(): remove the page from the tracking pool;
  once the last user is gone, the specified access on the page is no
  longer tracked

Both functions are called under the protection of kvm->srcu or
kvm->slots_lock.

Signed-off-by: Xiao Guangrong
---
 arch/x86/include/asm/kvm_page_track.h |  5 ++
 arch/x86/kvm/page_track.c             | 95 +++++++++++++++++++++++++++++++++++
 2 files changed, 100 insertions(+)

diff --git a/arch/x86/include/asm/kvm_page_track.h b/arch/x86/include/asm/kvm_page_track.h
index 347d5c9..9cc17c6 100644
--- a/arch/x86/include/asm/kvm_page_track.h
+++ b/arch/x86/include/asm/kvm_page_track.h
@@ -10,4 +10,9 @@ int kvm_page_track_create_memslot(struct kvm_memory_slot *slot,
 				  unsigned long npages);
 void kvm_page_track_free_memslot(struct kvm_memory_slot *free,
 				 struct kvm_memory_slot *dont);
+
+void kvm_page_track_add_page(struct kvm *kvm, gfn_t gfn,
+			     enum kvm_page_track_mode mode);
+void kvm_page_track_remove_page(struct kvm *kvm, gfn_t gfn,
+				enum kvm_page_track_mode mode);
 #endif
diff --git a/arch/x86/kvm/page_track.c b/arch/x86/kvm/page_track.c
index 0338d36..ad510db 100644
--- a/arch/x86/kvm/page_track.c
+++ b/arch/x86/kvm/page_track.c
@@ -56,3 +56,98 @@ void kvm_page_track_free_memslot(struct kvm_memory_slot *free,
 	if (!dont || free->arch.gfn_track != dont->arch.gfn_track)
 		page_track_slot_free(free);
 }
+
+static bool check_mode(enum kvm_page_track_mode mode)
+{
+	if (mode < 0 || mode >= KVM_PAGE_TRACK_MAX)
+		return false;
+
+	return true;
+}
+
+static void update_gfn_track(struct kvm_memory_slot *slot, gfn_t gfn,
+			     enum kvm_page_track_mode mode, int count)
+{
+	int index, val;
+
+	index = gfn_to_index(gfn, slot->base_gfn, PT_PAGE_TABLE_LEVEL);
+
+	slot->arch.gfn_track[mode][index] += count;
+	val = slot->arch.gfn_track[mode][index];
+	WARN_ON(val < 0);
+}
+
+/*
+ * add guest page to the tracking pool so that corresponding access on that
+ * page will be intercepted.
+ *
+ * It should be called under the protection of kvm->srcu or kvm->slots_lock.
+ *
+ * @kvm: the guest instance we are interested in.
+ * @gfn: the guest page.
+ * @mode: tracking mode, currently only write track is supported.
+ */
+void kvm_page_track_add_page(struct kvm *kvm, gfn_t gfn,
+			     enum kvm_page_track_mode mode)
+{
+	struct kvm_memslots *slots;
+	struct kvm_memory_slot *slot;
+	int i;
+
+	WARN_ON(!check_mode(mode));
+
+	for (i = 0; i < KVM_ADDRESS_SPACE_NUM; i++) {
+		slots = __kvm_memslots(kvm, i);
+		slot = __gfn_to_memslot(slots, gfn);
+
+		spin_lock(&kvm->mmu_lock);
+		update_gfn_track(slot, gfn, mode, 1);
+
+		/*
+		 * new track stops large page mapping for the
+		 * tracked page.
+		 */
+		kvm_mmu_gfn_disallow_lpage(slot, gfn);
+
+		if (mode == KVM_PAGE_TRACK_WRITE)
+			if (kvm_mmu_slot_gfn_write_protect(kvm, slot, gfn))
+				kvm_flush_remote_tlbs(kvm);
+		spin_unlock(&kvm->mmu_lock);
+	}
+}
+
+/*
+ * remove the guest page from the tracking pool which stops the interception
+ * of corresponding access on that page. It is the opposite operation of
+ * kvm_page_track_add_page().
+ *
+ * It should be called under the protection of kvm->srcu or kvm->slots_lock.
+ *
+ * @kvm: the guest instance we are interested in.
+ * @gfn: the guest page.
+ * @mode: tracking mode, currently only write track is supported.
+ */
+void kvm_page_track_remove_page(struct kvm *kvm, gfn_t gfn,
+				enum kvm_page_track_mode mode)
+{
+	struct kvm_memslots *slots;
+	struct kvm_memory_slot *slot;
+	int i;
+
+	WARN_ON(!check_mode(mode));
+
+	for (i = 0; i < KVM_ADDRESS_SPACE_NUM; i++) {
+		slots = __kvm_memslots(kvm, i);
+		slot = __gfn_to_memslot(slots, gfn);
+
+		spin_lock(&kvm->mmu_lock);
+		update_gfn_track(slot, gfn, mode, -1);
+
+		/*
+		 * allow large page mapping for the tracked page
+		 * after the tracker is gone.
+		 */
+		kvm_mmu_gfn_allow_lpage(slot, gfn);
+		spin_unlock(&kvm->mmu_lock);
+	}
+}