From patchwork Mon Jan 9 06:24:43 2017
X-Patchwork-Submitter: Jintack Lim
X-Patchwork-Id: 9504085
From: Jintack Lim
To: christoffer.dall@linaro.org, marc.zyngier@arm.com, pbonzini@redhat.com,
	rkrcmar@redhat.com, linux@armlinux.org.uk, catalin.marinas@arm.com,
	will.deacon@arm.com, vladimir.murzin@arm.com, suzuki.poulose@arm.com,
	mark.rutland@arm.com, james.morse@arm.com, lorenzo.pieralisi@arm.com,
	kevin.brodsky@arm.com, wcohen@redhat.com, shankerd@codeaurora.org,
	geoff@infradead.org, andre.przywara@arm.com, eric.auger@redhat.com,
	anna-maria@linutronix.de, shihwei@cs.columbia.edu,
	linux-arm-kernel@lists.infradead.org, kvmarm@lists.cs.columbia.edu,
	kvm@vger.kernel.org, linux-kernel@vger.kernel.org
Cc: jintack@cs.columbia.edu
Subject: [RFC 47/55] KVM: arm/arm64: Forward the guest hypervisor's stage 2 permission faults
Date: Mon, 9 Jan 2017 01:24:43 -0500
Message-Id: <1483943091-1364-48-git-send-email-jintack@cs.columbia.edu>
X-Mailer: git-send-email 1.9.1
In-Reply-To: <1483943091-1364-1-git-send-email-jintack@cs.columbia.edu>
References: <1483943091-1364-1-git-send-email-jintack@cs.columbia.edu>

From: Christoffer Dall

When faulting on a shadow stage 2 page table, we have to check whether
the fault was a permission fault and, if so, whether that fault needs to
be handled by the guest hypervisor before us, in case the guest
hypervisor has created a less permissive stage 2 entry than the
operation required.

Check if this is the case, and inject a fault into the guest hypervisor
if it is.
Signed-off-by: Christoffer Dall
Signed-off-by: Jintack Lim
---
 arch/arm/include/asm/kvm_mmu.h   |  7 +++++++
 arch/arm/kvm/mmu.c               |  5 +++++
 arch/arm64/include/asm/kvm_mmu.h |  9 +++++++++
 arch/arm64/kvm/mmu-nested.c      | 33 +++++++++++++++++++++++++++++++++
 4 files changed, 54 insertions(+)

diff --git a/arch/arm/include/asm/kvm_mmu.h b/arch/arm/include/asm/kvm_mmu.h
index ab41a10..0d106ae 100644
--- a/arch/arm/include/asm/kvm_mmu.h
+++ b/arch/arm/include/asm/kvm_mmu.h
@@ -241,6 +241,13 @@ static inline int kvm_walk_nested_s2(struct kvm_vcpu *vcpu, phys_addr_t gipa,
 	return 0;
 }
 
+static inline int kvm_s2_handle_perm_fault(struct kvm_vcpu *vcpu,
+					   phys_addr_t fault_ipa,
+					   struct kvm_s2_trans *trans)
+{
+	return 0;
+}
+
 static inline void kvm_nested_s2_unmap(struct kvm_vcpu *vcpu) { }
 static inline int kvm_nested_s2_init(struct kvm_vcpu *vcpu) { return 0; }
 static inline void kvm_nested_s2_teardown(struct kvm_vcpu *vcpu) { }
diff --git a/arch/arm/kvm/mmu.c b/arch/arm/kvm/mmu.c
index abdf345..68fc8e8 100644
--- a/arch/arm/kvm/mmu.c
+++ b/arch/arm/kvm/mmu.c
@@ -1542,6 +1542,11 @@ int kvm_handle_guest_abort(struct kvm_vcpu *vcpu, struct kvm_run *run)
 		ret = kvm_walk_nested_s2(vcpu, fault_ipa, &nested_trans);
 		if (ret)
 			goto out_unlock;
+
+		ret = kvm_s2_handle_perm_fault(vcpu, fault_ipa, &nested_trans);
+		if (ret)
+			goto out_unlock;
+
 		ipa = nested_trans.output;
 	}
 
diff --git a/arch/arm64/include/asm/kvm_mmu.h b/arch/arm64/include/asm/kvm_mmu.h
index 2ac603d..2086296 100644
--- a/arch/arm64/include/asm/kvm_mmu.h
+++ b/arch/arm64/include/asm/kvm_mmu.h
@@ -338,6 +338,8 @@ struct kvm_s2_trans {
 bool handle_vttbr_update(struct kvm_vcpu *vcpu, u64 vttbr);
 int kvm_walk_nested_s2(struct kvm_vcpu *vcpu, phys_addr_t gipa,
 		       struct kvm_s2_trans *result);
+int kvm_s2_handle_perm_fault(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
+			     struct kvm_s2_trans *trans);
 void kvm_nested_s2_unmap(struct kvm_vcpu *vcpu);
 int kvm_nested_s2_init(struct kvm_vcpu *vcpu);
 void kvm_nested_s2_teardown(struct kvm_vcpu *vcpu);
@@ -366,6 +368,13 @@ static inline int kvm_walk_nested_s2(struct kvm_vcpu *vcpu, phys_addr_t gipa,
 	return 0;
 }
 
+static inline int kvm_s2_handle_perm_fault(struct kvm_vcpu *vcpu,
+					   phys_addr_t fault_ipa,
+					   struct kvm_s2_trans *trans)
+{
+	return 0;
+}
+
 static inline void kvm_nested_s2_unmap(struct kvm_vcpu *vcpu) { }
 static inline int kvm_nested_s2_init(struct kvm_vcpu *vcpu) { return 0; }
 static inline void kvm_nested_s2_teardown(struct kvm_vcpu *vcpu) { }
diff --git a/arch/arm64/kvm/mmu-nested.c b/arch/arm64/kvm/mmu-nested.c
index b579d23..65ad0da 100644
--- a/arch/arm64/kvm/mmu-nested.c
+++ b/arch/arm64/kvm/mmu-nested.c
@@ -52,6 +52,19 @@ static unsigned int pa_max(void)
 	return ps_to_output_size(parange);
 }
 
+static int vcpu_inject_s2_perm_fault(struct kvm_vcpu *vcpu, gpa_t ipa,
+				     int level)
+{
+	u32 esr;
+
+	vcpu->arch.ctxt.el2_regs[FAR_EL2] = vcpu->arch.fault.far_el2;
+	vcpu->arch.ctxt.el2_regs[HPFAR_EL2] = vcpu->arch.fault.hpfar_el2;
+	esr = kvm_vcpu_get_hsr(vcpu) & ~ESR_ELx_FSC;
+	esr |= ESR_ELx_FSC_PERM;
+	esr |= level & 0x3;
+	return kvm_inject_nested_sync(vcpu, esr);
+}
+
 static int vcpu_inject_s2_trans_fault(struct kvm_vcpu *vcpu, gpa_t ipa,
 				      int level)
 {
@@ -268,6 +281,26 @@ int kvm_walk_nested_s2(struct kvm_vcpu *vcpu, phys_addr_t gipa,
 	return walk_nested_s2_pgd(vcpu, gipa, &wi, result);
 }
 
+/*
+ * Returns non-zero if permission fault is handled by injecting it to the next
+ * level hypervisor.
+ */
+int kvm_s2_handle_perm_fault(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
+			     struct kvm_s2_trans *trans)
+{
+	unsigned long fault_status = kvm_vcpu_trap_get_fault_type(vcpu);
+	bool write_fault = kvm_is_write_fault(vcpu);
+
+	if (fault_status != FSC_PERM)
+		return 0;
+
+	if ((write_fault && !trans->writable) ||
+	    (!write_fault && !trans->readable))
+		return vcpu_inject_s2_perm_fault(vcpu, fault_ipa, trans->level);
+
+	return 0;
+}
+
 /* expects kvm->mmu_lock to be held */
 void kvm_nested_s2_all_vcpus_wp(struct kvm *kvm)
 {
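
For readers who want to experiment with the decision outside the kernel
tree, below is a minimal, self-contained sketch of the policy this patch
adds: a permission fault on the shadow stage 2 table is forwarded to the
guest hypervisor only when its own stage 2 entry is less permissive than
the access requires. The names in the sketch (s2_trans_model,
forward_to_guest_hyp, FAULT_PERMISSION) are illustrative stand-ins, not
kernel API; the in-tree implementation is kvm_s2_handle_perm_fault()
above, which reads the fault class via kvm_vcpu_trap_get_fault_type()
and kvm_is_write_fault().

/*
 * Stand-alone model of the check performed by kvm_s2_handle_perm_fault().
 * The types and helpers here are illustrative only.
 */
#include <stdbool.h>
#include <stdio.h>

/* Permissions the guest hypervisor granted in its (virtual) stage 2 entry. */
struct s2_trans_model {
	bool readable;
	bool writable;
};

/* The two fault classes that matter for this decision. */
enum fault_class { FAULT_TRANSLATION, FAULT_PERMISSION };

/*
 * A permission fault is forwarded to the guest hypervisor only if its own
 * stage 2 entry does not allow the access; otherwise the host keeps
 * handling it, because the shadow stage 2 entry was simply more
 * restrictive than the guest hypervisor's mapping.
 */
static bool forward_to_guest_hyp(enum fault_class fault, bool write_fault,
				 const struct s2_trans_model *trans)
{
	if (fault != FAULT_PERMISSION)
		return false;

	if (write_fault && !trans->writable)
		return true;
	if (!write_fault && !trans->readable)
		return true;

	return false;
}

int main(void)
{
	struct s2_trans_model ro = { .readable = true, .writable = false };

	/* Guest writes a page its hypervisor mapped read-only: forward. */
	printf("write to RO mapping -> forward: %d\n",
	       forward_to_guest_hyp(FAULT_PERMISSION, true, &ro));

	/* Guest reads the same page: the host resolves it itself. */
	printf("read of RO mapping  -> forward: %d\n",
	       forward_to_guest_hyp(FAULT_PERMISSION, false, &ro));

	return 0;
}

In the patch itself, the forwarding is done by vcpu_inject_s2_perm_fault(),
which mirrors FAR_EL2/HPFAR_EL2 into the virtual EL2 context and rewrites
the ESR fault status code to ESR_ELx_FSC_PERM at the faulting level before
injecting a nested synchronous exception.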