From patchwork Wed Feb 21 17:47:15 2018
X-Patchwork-Submitter: KarimAllah Ahmed
X-Patchwork-Id: 10233729
From: KarimAllah Ahmed
To: x86@kernel.org, linux-kernel@vger.kernel.org, kvm@vger.kernel.org
Cc: hpa@zytor.com, jmattson@google.com, mingo@redhat.com, pbonzini@redhat.com,
    rkrcmar@redhat.com, tglx@linutronix.de, KarimAllah Ahmed
Subject: [PATCH 04/10] KVM: Introduce a new guest mapping API
Date: Wed, 21 Feb 2018 18:47:15 +0100
Message-Id: <1519235241-6500-5-git-send-email-karahmed@amazon.de>
In-Reply-To: <1519235241-6500-1-git-send-email-karahmed@amazon.de>
References: <1519235241-6500-1-git-send-email-karahmed@amazon.de>
X-Mailing-List: kvm@vger.kernel.org

In KVM, especially for nested guests, there is a dominant pattern of:

	=> map guest memory -> do_something -> unmap guest memory

In addition to the boilerplate noise this pattern adds to the code, most
of the time the mapping function does not properly handle memory that is
not backed by "struct page".
This new guest mapping API encapsulates most of this boilerplate code and
also handles guest memory that is not backed by "struct page". Keep in
mind that memremap() is horribly slow, so this mapping API should not be
used for high-frequency mapping operations, but rather for low-frequency
ones.

Signed-off-by: KarimAllah Ahmed
---
 include/linux/kvm_host.h | 15 +++++++++++++++
 virt/kvm/kvm_main.c      | 50 ++++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 65 insertions(+)

diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h
index ac0062b..6cc2c29 100644
--- a/include/linux/kvm_host.h
+++ b/include/linux/kvm_host.h
@@ -204,6 +204,13 @@ enum {
 	READING_SHADOW_PAGE_TABLES,
 };
 
+struct kvm_host_map {
+	struct page *page;
+	void *kaddr;
+	kvm_pfn_t pfn;
+	kvm_pfn_t gfn;
+};
+
 /*
  * Sometimes a large or cross-page mmio needs to be broken up into separate
  * exits for userspace servicing.
@@ -700,6 +707,9 @@ struct kvm_memslots *kvm_vcpu_memslots(struct kvm_vcpu *vcpu);
 struct kvm_memory_slot *kvm_vcpu_gfn_to_memslot(struct kvm_vcpu *vcpu, gfn_t gfn);
 kvm_pfn_t kvm_vcpu_gfn_to_pfn_atomic(struct kvm_vcpu *vcpu, gfn_t gfn);
 kvm_pfn_t kvm_vcpu_gfn_to_pfn(struct kvm_vcpu *vcpu, gfn_t gfn);
+bool kvm_vcpu_map(struct kvm_vcpu *vcpu, gfn_t gfn,
+		  struct kvm_host_map *map);
+void kvm_vcpu_unmap(struct kvm_host_map *map);
 struct page *kvm_vcpu_gfn_to_page(struct kvm_vcpu *vcpu, gfn_t gfn);
 unsigned long kvm_vcpu_gfn_to_hva(struct kvm_vcpu *vcpu, gfn_t gfn);
 unsigned long kvm_vcpu_gfn_to_hva_prot(struct kvm_vcpu *vcpu, gfn_t gfn, bool *writable);
@@ -996,6 +1006,11 @@ static inline struct page *kvm_vcpu_gpa_to_page(struct kvm_vcpu *vcpu,
 	return kvm_vcpu_gfn_to_page(vcpu, gpa_to_gfn(gpa));
 }
 
+static inline bool kvm_vcpu_map_valid(struct kvm_host_map *map)
+{
+	return map->kaddr != NULL;
+}
+
 static inline bool kvm_is_error_gpa(struct kvm *kvm, gpa_t gpa)
 {
 	unsigned long hva = gfn_to_hva(kvm, gpa_to_gfn(gpa));
diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
index 4501e65..54e7329 100644
--- a/virt/kvm/kvm_main.c
+++ b/virt/kvm/kvm_main.c
@@ -1632,6 +1632,56 @@ struct page *gfn_to_page(struct kvm *kvm, gfn_t gfn)
 }
 EXPORT_SYMBOL_GPL(gfn_to_page);
 
+bool kvm_vcpu_map(struct kvm_vcpu *vcpu, gfn_t gfn, struct kvm_host_map *map)
+{
+	kvm_pfn_t pfn;
+	void *kaddr = NULL;
+	struct page *page = NULL;
+
+	if (map->kaddr && map->gfn == gfn)
+		/* The mapping is valid and the guest memory is already mapped */
+		return true;
+	else if (map->kaddr)
+		/* The mapping is valid but covers a different guest pfn */
+		kvm_vcpu_unmap(map);
+
+	pfn = kvm_vcpu_gfn_to_pfn(vcpu, gfn);
+	if (is_error_pfn(pfn))
+		return false;
+
+	if (pfn_valid(pfn)) {
+		page = pfn_to_page(pfn);
+		kaddr = vmap(&page, 1, VM_MAP, PAGE_KERNEL);
+	} else {
+		kaddr = memremap(pfn_to_hpa(pfn), PAGE_SIZE, MEMREMAP_WB);
+	}
+
+	if (!kaddr)
+		return false;
+
+	map->page = page;
+	map->kaddr = kaddr;
+	map->pfn = pfn;
+	map->gfn = gfn;
+
+	return true;
+}
+EXPORT_SYMBOL_GPL(kvm_vcpu_map);
+
+void kvm_vcpu_unmap(struct kvm_host_map *map)
+{
+	if (!map->kaddr)
+		return;
+
+	if (map->page)
+		vunmap(map->kaddr);
+	else
+		memunmap(map->kaddr);
+
+	kvm_release_pfn_dirty(map->pfn);
+	memset(map, 0, sizeof(*map));
+}
+
 struct page *kvm_vcpu_gfn_to_page(struct kvm_vcpu *vcpu, gfn_t gfn)
 {
 	kvm_pfn_t pfn;
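The caching logic at the top of kvm_vcpu_map() — reuse the mapping when the
same gfn is requested again, tear it down before remapping a different one —
can be sketched in userspace. The names below (host_map, map_gfn, unmap_gfn)
and the malloc() stand-in for vmap()/memremap() are purely illustrative, not
part of the kernel API:

```c
#include <stdlib.h>
#include <string.h>
#include <stdint.h>
#include <stdbool.h>

/* Hypothetical userspace analogue of struct kvm_host_map. */
struct host_map {
	void *kaddr;   /* stand-in for the host virtual address */
	uint64_t gfn;  /* guest frame number currently mapped */
};

/* Map 'gfn'; reuse the existing mapping when it already covers this gfn. */
static bool map_gfn(struct host_map *map, uint64_t gfn)
{
	if (map->kaddr && map->gfn == gfn)
		return true;		/* already mapped: nothing to do */

	if (map->kaddr) {		/* mapped, but to a different gfn */
		free(map->kaddr);
		memset(map, 0, sizeof(*map));
	}

	map->kaddr = malloc(4096);	/* stand-in for vmap()/memremap() */
	if (!map->kaddr)
		return false;
	map->gfn = gfn;
	return true;
}

/* Tear the mapping down and reset the state, like kvm_vcpu_unmap(). */
static void unmap_gfn(struct host_map *map)
{
	if (!map->kaddr)
		return;
	free(map->kaddr);
	memset(map, 0, sizeof(*map));
}
```

A caller following the map -> do_something -> unmap pattern from the commit
message would call map_gfn() once per access and unmap_gfn() when done; the
cache makes back-to-back accesses to the same gfn cheap.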