From patchwork Wed Mar 6 06:00:15 2019
X-Patchwork-Submitter: Suraj Jitindar Singh
X-Patchwork-Id: 10840403
From: Suraj Jitindar Singh
To: kvm-ppc@vger.kernel.org
Cc: kvm@vger.kernel.org, paulus@samba.org, Suraj Jitindar Singh
Subject: [PATCH 1/2] KVM: Implement kvm_copy_guest() to perform copy of guest memory in place
Date: Wed, 6 Mar 2019 17:00:15 +1100
Message-Id: <20190306060016.18733-1-sjitindarsingh@gmail.com>

Implement the function kvm_copy_guest(), which performs a variable-length
memory copy from one guest physical address to another.

This provides functionality similar to kvm_read_guest() and
kvm_write_guest(), except that both addresses point to guest memory. The
copy is performed in place using raw_copy_in_user() to avoid having to
buffer the data in the kernel. The source and destination may reside in
different memslots, and the copy length may span memslot boundaries.

Signed-off-by: Suraj Jitindar Singh
---
I suspect additional checking may be required around the
raw_copy_in_user() call. A user-space sketch of the page-segmenting
arithmetic is appended below the diff.
---
 include/linux/kvm_host.h |  1 +
 virt/kvm/kvm_main.c      | 69 ++++++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 70 insertions(+)

diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h
index c38cc5eb7e73..53b266c04041 100644
--- a/include/linux/kvm_host.h
+++ b/include/linux/kvm_host.h
@@ -697,6 +697,7 @@ int kvm_write_guest_cached(struct kvm *kvm, struct gfn_to_hva_cache *ghc,
 int kvm_write_guest_offset_cached(struct kvm *kvm, struct gfn_to_hva_cache *ghc,
                                   void *data, unsigned int offset,
                                   unsigned long len);
+int kvm_copy_guest(struct kvm *kvm, gpa_t to, gpa_t from, unsigned long len);
 int kvm_gfn_to_hva_cache_init(struct kvm *kvm, struct gfn_to_hva_cache *ghc,
                               gpa_t gpa, unsigned long len);
 int kvm_clear_guest_page(struct kvm *kvm, gfn_t gfn, int offset, int len);
diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
index 076bc38963bf..0922d45baf72 100644
--- a/virt/kvm/kvm_main.c
+++ b/virt/kvm/kvm_main.c
@@ -1999,6 +1999,75 @@ int kvm_vcpu_write_guest(struct kvm_vcpu *vcpu, gpa_t gpa, const void *data,
 }
 EXPORT_SYMBOL_GPL(kvm_vcpu_write_guest);
 
+static int __kvm_copy_guest_page(struct kvm_memory_slot *to_memslot,
+                                 gfn_t to_gfn, int to_offset,
+                                 struct kvm_memory_slot *from_memslot,
+                                 gfn_t from_gfn, int from_offset, int len)
+{
+        int r;
+        unsigned long to_addr, from_addr;
+
+        to_addr = gfn_to_hva_memslot(to_memslot, to_gfn);
+        if (kvm_is_error_hva(to_addr))
+                return -EFAULT;
+        from_addr = gfn_to_hva_memslot(from_memslot, from_gfn);
+        if (kvm_is_error_hva(from_addr))
+                return -EFAULT;
+        r = raw_copy_in_user((void __user *)to_addr + to_offset,
+                             (void __user *)from_addr + from_offset, len);
+        if (r)
+                return -EFAULT;
+        mark_page_dirty_in_slot(to_memslot, to_gfn);
+        return 0;
+}
+
+static int next_segment_many(unsigned long len, int offset1, int offset2)
+{
+        int size = min(PAGE_SIZE - offset1, PAGE_SIZE - offset2);
+
+        if (len > size)
+                return size;
+        else
+                return len;
+}
+
+int kvm_copy_guest(struct kvm *kvm, gpa_t to, gpa_t from, unsigned long len)
+{
+        struct kvm_memory_slot *to_memslot = NULL;
+        struct kvm_memory_slot *from_memslot = NULL;
+        gfn_t to_gfn = to >> PAGE_SHIFT;
+        gfn_t from_gfn = from >> PAGE_SHIFT;
+        int seg;
+        int to_offset = offset_in_page(to);
+        int from_offset = offset_in_page(from);
+        int ret;
+
+        while ((seg = next_segment_many(len, to_offset, from_offset)) != 0) {
+                if (!to_memslot || (to_gfn >= (to_memslot->base_gfn +
+                                               to_memslot->npages)))
+                        to_memslot = gfn_to_memslot(kvm, to_gfn);
+                if (!from_memslot || (from_gfn >= (from_memslot->base_gfn +
+                                                   from_memslot->npages)))
+                        from_memslot = gfn_to_memslot(kvm, from_gfn);
+
+                ret = __kvm_copy_guest_page(to_memslot, to_gfn, to_offset,
+                                            from_memslot, from_gfn, from_offset,
+                                            seg);
+                if (ret < 0)
+                        return ret;
+
+                to_offset = (to_offset + seg) & (PAGE_SIZE - 1);
+                from_offset = (from_offset + seg) & (PAGE_SIZE - 1);
+                len -= seg;
+                if (!to_offset)
+                        to_gfn += 1;
+                if (!from_offset)
+                        from_gfn += 1;
+        }
+        return 0;
+}
+EXPORT_SYMBOL_GPL(kvm_copy_guest);
+
 static int __kvm_gfn_to_hva_cache_init(struct kvm_memslots *slots,
                                        struct gfn_to_hva_cache *ghc,
                                        gpa_t gpa, unsigned long len)
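
Below is a minimal user-space sketch of the page-segmenting arithmetic
used by kvm_copy_guest(), for anyone who wants to sanity-check the loop
structure without building the kernel. It mirrors next_segment_many()
and the offset/length bookkeeping from the patch; the 4 KiB page size,
the offsets and the copy length are illustrative assumptions, and no
data is actually copied.

#include <stdio.h>

#define PAGE_SIZE 4096UL
#define PAGE_MASK (PAGE_SIZE - 1)

/* Largest chunk that crosses neither the source page end nor the
 * destination page end, capped at the remaining length. */
static unsigned long next_segment_many(unsigned long len,
                                       unsigned long off1,
                                       unsigned long off2)
{
        unsigned long max_off = off1 > off2 ? off1 : off2;
        unsigned long size = PAGE_SIZE - max_off;

        return len > size ? size : len;
}

int main(void)
{
        /* Hypothetical copy: 10000 bytes, both ends misaligned. */
        unsigned long len = 10000, to_off = 0x100, from_off = 0xff0;
        unsigned long seg;

        while ((seg = next_segment_many(len, to_off, from_off)) != 0) {
                printf("copy %4lu bytes  to_off=0x%03lx  from_off=0x%03lx\n",
                       seg, to_off, from_off);
                /* Same bookkeeping as the loop tail in kvm_copy_guest(). */
                to_off = (to_off + seg) & PAGE_MASK;
                from_off = (from_off + seg) & PAGE_MASK;
                len -= seg;
        }
        return 0;
}

Each iteration stops at the nearer of the two next page boundaries, so
every chunk handed to __kvm_copy_guest_page() lies within a single page
on both the source and destination side, which is what allows the copy
to cross page and memslot boundaries safely.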