From patchwork Mon Oct 12 12:09:18 2015
X-Patchwork-Submitter: Paolo Bonzini
X-Patchwork-Id: 7374311
From: Paolo Bonzini
To: linux-kernel@vger.kernel.org, kvm@vger.kernel.org
Cc: aderumier@odiso.com, rkrcmar@redhat.com, stable@vger.kernel.org
Subject: [PATCH 2/2] KVM: x86: map/unmap private slots in __x86_set_memory_region
Date: Mon, 12 Oct 2015 14:09:18 +0200
Message-Id: <1444651758-6926-3-git-send-email-pbonzini@redhat.com>
In-Reply-To: <1444651758-6926-1-git-send-email-pbonzini@redhat.com>
References: <1444651758-6926-1-git-send-email-pbonzini@redhat.com>
List-ID: <kvm.vger.kernel.org>

Otherwise, two copies (one of them never used and thus bogus) are
allocated for the regular and SMM address spaces.  This breaks SMM
with EPT but without unrestricted guest support, because the SMM
copy of the identity page map is all zeros.
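
To make the failure mode concrete, here is a paraphrased sketch of the
pre-patch flow (not the exact code; KVM_ADDRESS_SPACE_NUM is 2 when SMM
is supported):

        for (i = 0; i < KVM_ADDRESS_SPACE_NUM; i++) {
                m.slot = id | (i << 16);
                r = __kvm_set_memory_region(kvm, &m);
                /*
                 * Each iteration reached kvm_arch_prepare_memory_region(),
                 * which called vm_mmap() for a private slot on KVM_MR_CREATE:
                 * two separate, freshly zeroed anonymous mappings, one per
                 * address space.  Only the slot-0 copy of the identity page
                 * map was ever initialized; the SMM copy stayed all zeros.
                 */
        }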
By moving the allocation to the caller, we also remove the last
vestiges of kernel-allocated memory regions (not accessible anymore
in userspace since commit b74a07beed0e, "KVM: Remove kernel-allocated
memory regions", 2010-06-21); that is a nice bonus.

Reported-by: Alexandre DERUMIER
Cc: stable@vger.kernel.org
Fixes: 9da0e4d5ac969909f6b435ce28ea28135a9cbd69
Signed-off-by: Paolo Bonzini
Reviewed-by: Radim Krčmář
---
 arch/x86/kvm/x86.c | 62 ++++++++++++++++++++++++++----------------------------
 1 file changed, 30 insertions(+), 32 deletions(-)

diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index a3a4cf900e0c..ab59eccb9e78 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -7717,23 +7717,53 @@ void kvm_arch_sync_events(struct kvm *kvm)
 int __x86_set_memory_region(struct kvm *kvm, int id, gpa_t gpa, u32 size)
 {
         int i, r;
+        u64 hva;
+        struct kvm_memslots *slots = kvm_memslots(kvm);
+        struct kvm_memory_slot *slot, old;
 
         /* Called with kvm->slots_lock held. */
         if (WARN_ON(id >= KVM_MEM_SLOTS_NUM))
                 return -EINVAL;
 
+        slot = &slots->memslots[slots->id_to_index[id]];
+        if (size) {
+                if (WARN_ON(slot->npages))
+                        return -EEXIST;
+
+                /*
+                 * MAP_SHARED to prevent internal slot pages from being moved
+                 * by fork()/COW.
+                 */
+                hva = vm_mmap(NULL, 0, size, PROT_READ | PROT_WRITE,
+                              MAP_SHARED | MAP_ANONYMOUS, 0);
+                if (IS_ERR((void *)hva))
+                        return PTR_ERR((void *)hva);
+        } else {
+                if (!slot->npages)
+                        return 0;
+
+                hva = 0;
+        }
+
+        old = *slot;
         for (i = 0; i < KVM_ADDRESS_SPACE_NUM; i++) {
                 struct kvm_userspace_memory_region m;
 
                 m.slot = id | (i << 16);
                 m.flags = 0;
                 m.guest_phys_addr = gpa;
+                m.userspace_addr = hva;
                 m.memory_size = size;
                 r = __kvm_set_memory_region(kvm, &m);
                 if (r < 0)
                         return r;
         }
 
+        if (!size) {
+                r = vm_munmap(old.userspace_addr, old.npages * PAGE_SIZE);
+                WARN_ON(r < 0);
+        }
+
         return 0;
 }
 EXPORT_SYMBOL_GPL(__x86_set_memory_region);
@@ -7863,27 +7893,6 @@ int kvm_arch_prepare_memory_region(struct kvm *kvm,
                                 const struct kvm_userspace_memory_region *mem,
                                 enum kvm_mr_change change)
 {
-        /*
-         * Only private memory slots need to be mapped here since
-         * KVM_SET_MEMORY_REGION ioctl is no longer supported.
-         */
-        if ((memslot->id >= KVM_USER_MEM_SLOTS) && (change == KVM_MR_CREATE)) {
-                unsigned long userspace_addr;
-
-                /*
-                 * MAP_SHARED to prevent internal slot pages from being moved
-                 * by fork()/COW.
-                 */
-                userspace_addr = vm_mmap(NULL, 0, memslot->npages * PAGE_SIZE,
-                                         PROT_READ | PROT_WRITE,
-                                         MAP_SHARED | MAP_ANONYMOUS, 0);
-
-                if (IS_ERR((void *)userspace_addr))
-                        return PTR_ERR((void *)userspace_addr);
-
-                memslot->userspace_addr = userspace_addr;
-        }
-
         return 0;
 }
@@ -7945,17 +7954,6 @@ void kvm_arch_commit_memory_region(struct kvm *kvm,
 {
         int nr_mmu_pages = 0;
 
-        if (change == KVM_MR_DELETE && old->id >= KVM_USER_MEM_SLOTS) {
-                int ret;
-
-                ret = vm_munmap(old->userspace_addr,
-                                old->npages * PAGE_SIZE);
-                if (ret < 0)
-                        printk(KERN_WARNING
-                               "kvm_vm_ioctl_set_memory_region: "
-                               "failed to munmap memory\n");
-        }
-
         if (!kvm->arch.n_requested_mmu_pages)
                 nr_mmu_pages = kvm_mmu_calculate_mmu_pages(kvm);
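
For reference, a caller-side usage sketch.  It is patterned on the
4.3-era vmx.c identity-map setup; the function name and exact call
site here are illustrative, not part of this patch:

        /*
         * Illustrative caller: with this patch, the anonymous mapping is
         * created once inside __x86_set_memory_region() and shared by the
         * regular and SMM address spaces.  Called with kvm->slots_lock held.
         */
        static int alloc_identity_pagetable(struct kvm *kvm)
        {
                return __x86_set_memory_region(kvm,
                                               IDENTITY_PAGETABLE_PRIVATE_MEMSLOT,
                                               kvm->arch.ept_identity_map_addr,
                                               PAGE_SIZE);
        }

Passing size == 0 for the same slot id unmaps it again, with the
vm_munmap() now done by __x86_set_memory_region() itself rather than
by the commit hook.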