From patchwork Mon Dec  3 19:25:26 2018
X-Patchwork-Submitter: Alexander Duyck <alexander.h.duyck@linux.intel.com>
X-Patchwork-Id: 10710403
Subject: [PATCH RFC 1/3] kvm: Split use cases for kvm_is_reserved_pfn to
 kvm_is_refcounted_pfn
From: Alexander Duyck <alexander.h.duyck@linux.intel.com>
To: dan.j.williams@intel.com, pbonzini@redhat.com, yi.z.zhang@linux.intel.com,
 brho@google.com, kvm@vger.kernel.org, linux-nvdimm@lists.01.org
Cc: yu.c.zhang@intel.com, david@redhat.com, rkrcmar@redhat.com,
 linux-kernel@vger.kernel.org, linux-mm@kvack.org, jglisse@redhat.com,
 jack@suse.cz, hch@lst.de
Date: Mon, 03 Dec 2018 11:25:26 -0800
Message-ID: <154386512606.27193.13867450982940890636.stgit@ahduyck-desk1.amr.corp.intel.com>
In-Reply-To: <154386493754.27193.1300965403157243427.stgit@ahduyck-desk1.amr.corp.intel.com>
References: <154386493754.27193.1300965403157243427.stgit@ahduyck-desk1.amr.corp.intel.com>

The function kvm_is_reserved_pfn really has two uses.
One is to test whether we should update the reference count on a page when
we are accessing it. The other is to determine whether we should update the
dirty flag or mark pages as accessed.

In preparation for blurring the lines between ZONE_DEVICE and system RAM, I
am splitting the dirty/accessed cases out into their own checks. Doing this
allows us to add ZONE_DEVICE to the list of refcounted pages without having
to worry about those pages being marked dirty or accessed, which could lead
to attempted LRU accesses on the ZONE_DEVICE pages.

Signed-off-by: Alexander Duyck <alexander.h.duyck@linux.intel.com>
---
 arch/x86/kvm/mmu.c       |    6 +++---
 include/linux/kvm_host.h |    2 +-
 virt/kvm/kvm_main.c      |   22 +++++++++++++---------
 3 files changed, 17 insertions(+), 13 deletions(-)

diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
index 7c03c0f35444..7c61cc260c23 100644
--- a/arch/x86/kvm/mmu.c
+++ b/arch/x86/kvm/mmu.c
@@ -798,7 +798,7 @@ static int mmu_spte_clear_track_bits(u64 *sptep)
 	 * kvm mmu, before reclaiming the page, we should
 	 * unmap it from mmu first.
 	 */
-	WARN_ON(!kvm_is_reserved_pfn(pfn) && !page_count(pfn_to_page(pfn)));
+	WARN_ON(kvm_is_refcounted_pfn(pfn) && !page_count(pfn_to_page(pfn)));
 
 	if (is_accessed_spte(old_spte))
 		kvm_set_pfn_accessed(pfn);
@@ -3166,7 +3166,7 @@ static void transparent_hugepage_adjust(struct kvm_vcpu *vcpu,
 	 * PT_PAGE_TABLE_LEVEL and there would be no adjustment done
 	 * here.
 	 */
-	if (!is_error_noslot_pfn(pfn) && !kvm_is_reserved_pfn(pfn) &&
+	if (!is_error_noslot_pfn(pfn) && kvm_is_refcounted_pfn(pfn) &&
 	    level == PT_PAGE_TABLE_LEVEL &&
 	    PageTransCompoundMap(pfn_to_page(pfn)) &&
 	    !mmu_gfn_lpage_is_disallowed(vcpu, gfn, PT_DIRECTORY_LEVEL)) {
@@ -5668,7 +5668,7 @@ static bool kvm_mmu_zap_collapsible_spte(struct kvm *kvm,
 		 * mapping if the indirect sp has level = 1.
 		 */
 		if (sp->role.direct &&
-		    !kvm_is_reserved_pfn(pfn) &&
+		    kvm_is_refcounted_pfn(pfn) &&
 		    PageTransCompoundMap(pfn_to_page(pfn))) {
 			pte_list_remove(rmap_head, sptep);
 			need_tlb_flush = 1;
diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h
index c926698040e0..132e5dbc9049 100644
--- a/include/linux/kvm_host.h
+++ b/include/linux/kvm_host.h
@@ -906,7 +906,7 @@ void kvm_arch_sync_events(struct kvm *kvm);
 int kvm_cpu_has_pending_timer(struct kvm_vcpu *vcpu);
 void kvm_vcpu_kick(struct kvm_vcpu *vcpu);
 
-bool kvm_is_reserved_pfn(kvm_pfn_t pfn);
+bool kvm_is_refcounted_pfn(kvm_pfn_t pfn);
 
 struct kvm_irq_ack_notifier {
 	struct hlist_node link;
diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
index 2679e476b6c3..5e666df5666d 100644
--- a/virt/kvm/kvm_main.c
+++ b/virt/kvm/kvm_main.c
@@ -146,7 +146,15 @@ __weak int kvm_arch_mmu_notifier_invalidate_range(struct kvm *kvm,
 	return 0;
 }
 
-bool kvm_is_reserved_pfn(kvm_pfn_t pfn)
+bool kvm_is_refcounted_pfn(kvm_pfn_t pfn)
+{
+	if (pfn_valid(pfn))
+		return !PageReserved(pfn_to_page(pfn));
+
+	return false;
+}
+
+static bool kvm_is_reserved_pfn(kvm_pfn_t pfn)
 {
 	if (pfn_valid(pfn))
 		return PageReserved(pfn_to_page(pfn));
@@ -1678,7 +1686,7 @@ EXPORT_SYMBOL_GPL(kvm_release_page_clean);
 
 void kvm_release_pfn_clean(kvm_pfn_t pfn)
 {
-	if (!is_error_noslot_pfn(pfn) && !kvm_is_reserved_pfn(pfn))
+	if (!is_error_noslot_pfn(pfn) && kvm_is_refcounted_pfn(pfn))
 		put_page(pfn_to_page(pfn));
 }
 EXPORT_SYMBOL_GPL(kvm_release_pfn_clean);
@@ -1700,12 +1708,8 @@ EXPORT_SYMBOL_GPL(kvm_release_pfn_dirty);
 
 void kvm_set_pfn_dirty(kvm_pfn_t pfn)
 {
-	if (!kvm_is_reserved_pfn(pfn)) {
-		struct page *page = pfn_to_page(pfn);
-
-		if (!PageReserved(page))
-			SetPageDirty(page);
-	}
+	if (!kvm_is_reserved_pfn(pfn))
+		SetPageDirty(pfn_to_page(pfn));
 }
 EXPORT_SYMBOL_GPL(kvm_set_pfn_dirty);
 
@@ -1718,7 +1722,7 @@ EXPORT_SYMBOL_GPL(kvm_set_pfn_accessed);
 
 void kvm_get_pfn(kvm_pfn_t pfn)
 {
-	if (!kvm_is_reserved_pfn(pfn))
+	if (kvm_is_refcounted_pfn(pfn))
 		get_page(pfn_to_page(pfn));
 }
 EXPORT_SYMBOL_GPL(kvm_get_pfn);
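
As context for where this split is headed (the cover letter says the goal is
to add ZONE_DEVICE to the set of refcounted pages), here is a hypothetical
sketch, not part of this patch, of how kvm_is_refcounted_pfn() could later
diverge from the reserved-page test. It assumes is_zone_device_page() from
include/linux/mm.h and that ZONE_DEVICE pages are marked PageReserved, as
they are in this era of the kernel:

/*
 * Hypothetical follow-up, NOT part of this patch: report ZONE_DEVICE
 * pages as refcounted so that kvm_get_pfn() and kvm_release_pfn_clean()
 * take/drop references on them, while the dirty/accessed paths keep
 * using kvm_is_reserved_pfn() and therefore continue to skip
 * SetPageDirty() and LRU activity on ZONE_DEVICE pages.
 */
bool kvm_is_refcounted_pfn(kvm_pfn_t pfn)
{
	if (pfn_valid(pfn)) {
		struct page *page = pfn_to_page(pfn);

		/* ZONE_DEVICE pages are PageReserved but still refcounted */
		return !PageReserved(page) || is_zone_device_page(page);
	}

	return false;
}

Because every call site touched by this patch already goes through one of
the two predicates, a change along these lines would be confined to
kvm_is_refcounted_pfn() itself.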