From patchwork Thu Sep 5 12:21:53 2013
X-Patchwork-Submitter: Paolo Bonzini
X-Patchwork-Id: 2854077
From: Paolo Bonzini
To: linux-kernel@vger.kernel.org
Cc: stable@vger.kernel.org, kvm@vger.kernel.org, Xiao Guangrong, Gleb Natapov
Subject: [PATCH v2] KVM: mmu: allow page tables to be in read-only slots
Date: Thu, 5 Sep 2013 14:21:53 +0200
Message-Id: <1378383714-9723-1-git-send-email-pbonzini@redhat.com>
X-Mailer: git-send-email 1.8.3.1

Page tables in a read-only memory slot will currently cause a triple
fault when running with shadow paging, because the page walker uses
gfn_to_hva and that function fails on such a slot.

TianoCore uses such a page table.  The idea is that, on real hardware,
the firmware can already run in 64-bit flat mode while it is still
setting up the memory controller.  Real hardware seems to be fine with
that as long as the accessed/dirty bits are already set in the page
tables.

Thus, this patch records whether the slot is read-only, and later checks
that flag when updating the accessed and dirty bits.

Note that this scenario is not supported by NPT at all, as explained by
comments in the code.
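For context, the read-only slots in question are the ones userspace
creates with the KVM_MEM_READONLY flag: guest reads are served from the
backing host memory, while guest writes exit to userspace as MMIO.  A
minimal sketch of such a registration follows; the slot number, guest
physical address, and helper name are illustrative, not taken from any
real VMM, and error handling is omitted.

#include <linux/kvm.h>
#include <sys/ioctl.h>

/* Illustrative only: map a firmware ROM image into the guest as a
 * read-only memory slot.  Reads hit the backing memory directly;
 * writes cause an MMIO exit to userspace. */
static int map_rom_slot(int vm_fd, void *rom, unsigned long size)
{
	struct kvm_userspace_memory_region region = {
		.slot            = 1,          /* hypothetical slot number */
		.flags           = KVM_MEM_READONLY,
		.guest_phys_addr = 0xffe00000, /* hypothetical ROM base */
		.memory_size     = size,
		.userspace_addr  = (unsigned long)rom,
	};

	return ioctl(vm_fd, KVM_SET_USER_MEMORY_REGION, &region);
}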
Cc: stable@vger.kernel.org
Cc: kvm@vger.kernel.org
Cc: Xiao Guangrong
Cc: Gleb Natapov
Signed-off-by: Paolo Bonzini
Reviewed-by: Xiao Guangrong
Reviewed-by: Gleb Natapov
---
 arch/x86/kvm/paging_tmpl.h | 20 +++++++++++++++++++-
 include/linux/kvm_host.h   |  1 +
 virt/kvm/kvm_main.c        | 14 +++++++++-----
 3 files changed, 29 insertions(+), 6 deletions(-)

diff --git a/arch/x86/kvm/paging_tmpl.h b/arch/x86/kvm/paging_tmpl.h
index 0433301..aa18aca 100644
--- a/arch/x86/kvm/paging_tmpl.h
+++ b/arch/x86/kvm/paging_tmpl.h
@@ -99,6 +99,7 @@ struct guest_walker {
 	pt_element_t prefetch_ptes[PTE_PREFETCH_NUM];
 	gpa_t pte_gpa[PT_MAX_FULL_LEVELS];
 	pt_element_t __user *ptep_user[PT_MAX_FULL_LEVELS];
+	bool pte_writable[PT_MAX_FULL_LEVELS];
 	unsigned pt_access;
 	unsigned pte_access;
 	gfn_t gfn;
@@ -235,6 +236,22 @@ static int FNAME(update_accessed_dirty_bits)(struct kvm_vcpu *vcpu,
 		if (pte == orig_pte)
 			continue;
 
+		/*
+		 * If the slot is read-only, simply do not process the accessed
+		 * and dirty bits.  This is the correct thing to do if the slot
+		 * is ROM, and page tables in read-as-ROM/write-as-MMIO slots
+		 * are only supported if the accessed and dirty bits are already
+		 * set in the ROM (so that MMIO writes are never needed).
+		 *
+		 * Note that NPT does not allow this at all and faults, since
+		 * it always wants nested page table entries for the guest
+		 * page tables to be writable.  And EPT works but will simply
+		 * overwrite the read-only memory to set the accessed and dirty
+		 * bits.
+		 */
+		if (unlikely(!walker->pte_writable[level - 1]))
+			continue;
+
 		ret = FNAME(cmpxchg_gpte)(vcpu, mmu, ptep_user, index, orig_pte, pte);
 		if (ret)
 			return ret;
@@ -309,7 +326,8 @@ retry_walk:
 			goto error;
 		real_gfn = gpa_to_gfn(real_gfn);
 
-		host_addr = gfn_to_hva(vcpu->kvm, real_gfn);
+		host_addr = gfn_to_hva_read(vcpu->kvm, real_gfn,
+					    &walker->pte_writable[walker->level - 1]);
 		if (unlikely(kvm_is_error_hva(host_addr)))
 			goto error;
 
diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h
index ca645a0..22f9cdf 100644
--- a/include/linux/kvm_host.h
+++ b/include/linux/kvm_host.h
@@ -533,6 +533,7 @@ int gfn_to_page_many_atomic(struct kvm *kvm, gfn_t gfn, struct page **pages,
 
 struct page *gfn_to_page(struct kvm *kvm, gfn_t gfn);
 unsigned long gfn_to_hva(struct kvm *kvm, gfn_t gfn);
+unsigned long gfn_to_hva_read(struct kvm *kvm, gfn_t gfn, bool *writable);
 unsigned long gfn_to_hva_memslot(struct kvm_memory_slot *slot, gfn_t gfn);
 void kvm_release_page_clean(struct page *page);
 void kvm_release_page_dirty(struct page *page);
diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
index f7e4334..418d037 100644
--- a/virt/kvm/kvm_main.c
+++ b/virt/kvm/kvm_main.c
@@ -1078,11 +1078,15 @@ unsigned long gfn_to_hva(struct kvm *kvm, gfn_t gfn)
 EXPORT_SYMBOL_GPL(gfn_to_hva);
 
 /*
- * The hva returned by this function is only allowed to be read.
- * It should pair with kvm_read_hva() or kvm_read_hva_atomic().
+ * If writable is set to false, the hva returned by this function is only
+ * allowed to be read.
  */
-static unsigned long gfn_to_hva_read(struct kvm *kvm, gfn_t gfn)
+unsigned long gfn_to_hva_read(struct kvm *kvm, gfn_t gfn, bool *writable)
 {
+	struct kvm_memory_slot *slot = gfn_to_memslot(kvm, gfn);
+	if (writable)
+		*writable = !memslot_is_readonly(slot);
+
 	return __gfn_to_hva_many(gfn_to_memslot(kvm, gfn), gfn, NULL, false);
 }
 
@@ -1450,7 +1454,7 @@ int kvm_read_guest_page(struct kvm *kvm, gfn_t gfn, void *data, int offset,
 	int r;
 	unsigned long addr;
 
-	addr = gfn_to_hva_read(kvm, gfn);
+	addr = gfn_to_hva_read(kvm, gfn, NULL);
 	if (kvm_is_error_hva(addr))
 		return -EFAULT;
 	r = kvm_read_hva(data, (void __user *)addr + offset, len);
@@ -1488,7 +1492,7 @@ int kvm_read_guest_atomic(struct kvm *kvm, gpa_t gpa, void *data,
 	gfn_t gfn = gpa >> PAGE_SHIFT;
 	int offset = offset_in_page(gpa);
 
-	addr = gfn_to_hva_read(kvm, gfn);
+	addr = gfn_to_hva_read(kvm, gfn, NULL);
 	if (kvm_is_error_hva(addr))
 		return -EFAULT;
 	pagefault_disable();
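
The guest-side contract this relies on, per the comment in
update_accessed_dirty_bits above: firmware that keeps its page tables in
ROM must build the entries with the accessed and dirty bits already set,
so that neither the hardware walker nor KVM's software walker ever needs
to write them back.  A sketch of such a pre-set entry follows; the bit
positions are architectural on x86, while the mapped address and names
are illustrative.

#include <stdint.h>

/* x86 page-table entry flag bits (architectural). */
#define PTE_PRESENT  (1ULL << 0)
#define PTE_RW       (1ULL << 1)
#define PTE_ACCESSED (1ULL << 5)
#define PTE_DIRTY    (1ULL << 6)
#define PTE_PSE      (1ULL << 7)	/* 2 MiB page when set in a PDE */

/* Illustrative: a 2 MiB identity mapping baked into ROM with the
 * accessed and dirty bits pre-set, so no write-back is ever needed. */
static const uint64_t rom_pde_example =
	0x200000ULL | PTE_PRESENT | PTE_RW | PTE_PSE |
	PTE_ACCESSED | PTE_DIRTY;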