From patchwork Mon Dec 15 07:28:05 2014
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Mario Smarduch
X-Patchwork-Id: 5491301
From: Mario Smarduch
To: pbonzini@redhat.com, james.hogan@imgtec.com, christoffer.dall@linaro.org,
	agraf@suse.de, marc.zyngier@arm.com, cornelia.huck@de.ibm.com,
	borntraeger@de.ibm.com, catalin.marinas@arm.com
Subject: [PATCH v15 08/11] KVM: arm64: ARMv8 header changes for page logging
Date: Sun, 14 Dec 2014 23:28:05 -0800
Message-id: <1418628488-3696-9-git-send-email-m.smarduch@samsung.com>
X-Mailer: git-send-email 1.7.9.5
In-reply-to: <1418628488-3696-1-git-send-email-m.smarduch@samsung.com>
References: <1418628488-3696-1-git-send-email-m.smarduch@samsung.com>
Cc: peter.maydell@linaro.org, kvm@vger.kernel.org, steve.capper@arm.com,
	kvm-ia64@vger.kernel.org, kvm-ppc@vger.kernel.org,
	kvmarm@lists.cs.columbia.edu, linux-arm-kernel@lists.infradead.org,
	Mario Smarduch

This patch adds arm64 helpers to write protect pmds/ptes and retrieve
permissions while logging dirty pages. It also adds a prototype to write
protect a memory slot and a pmd define to check for read-only pmds.

Reviewed-by: Christoffer Dall
Signed-off-by: Mario Smarduch
---
 arch/arm64/include/asm/kvm_asm.h       |  1 +
 arch/arm64/include/asm/kvm_host.h      |  1 +
 arch/arm64/include/asm/kvm_mmu.h       | 21 +++++++++++++++++++++
 arch/arm64/include/asm/pgtable-hwdef.h |  1 +
 4 files changed, 24 insertions(+)

diff --git a/arch/arm64/include/asm/kvm_asm.h b/arch/arm64/include/asm/kvm_asm.h
index 4838421..4f7310f 100644
--- a/arch/arm64/include/asm/kvm_asm.h
+++ b/arch/arm64/include/asm/kvm_asm.h
@@ -126,6 +126,7 @@ extern char __kvm_hyp_vector[];
 
 extern void __kvm_flush_vm_context(void);
 extern void __kvm_tlb_flush_vmid_ipa(struct kvm *kvm, phys_addr_t ipa);
+extern void __kvm_tlb_flush_vmid(struct kvm *kvm);
 
 extern int __kvm_vcpu_run(struct kvm_vcpu *vcpu);
 
diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
index 2012c4b..8b60c0f 100644
--- a/arch/arm64/include/asm/kvm_host.h
+++ b/arch/arm64/include/asm/kvm_host.h
@@ -200,6 +200,7 @@ struct kvm_vcpu *kvm_arm_get_running_vcpu(void);
 struct kvm_vcpu * __percpu *kvm_get_running_vcpus(void);
 
 u64 kvm_call_hyp(void *hypfn, ...);
+void kvm_mmu_wp_memory_region(struct kvm *kvm, int slot);
 
 int handle_exit(struct kvm_vcpu *vcpu, struct kvm_run *run,
 		int exception_index);
diff --git a/arch/arm64/include/asm/kvm_mmu.h b/arch/arm64/include/asm/kvm_mmu.h
index 123b521..f925e40 100644
--- a/arch/arm64/include/asm/kvm_mmu.h
+++ b/arch/arm64/include/asm/kvm_mmu.h
@@ -117,6 +117,27 @@ static inline void kvm_set_s2pmd_writable(pmd_t *pmd)
 	pmd_val(*pmd) |= PMD_S2_RDWR;
 }
 
+static inline void kvm_set_s2pte_readonly(pte_t *pte)
+{
+	pte_val(*pte) = (pte_val(*pte) & ~PTE_S2_RDWR) | PTE_S2_RDONLY;
+}
+
+static inline bool kvm_s2pte_readonly(pte_t *pte)
+{
+	return (pte_val(*pte) & PTE_S2_RDWR) == PTE_S2_RDONLY;
+}
+
+static inline void kvm_set_s2pmd_readonly(pmd_t *pmd)
+{
+	pmd_val(*pmd) = (pmd_val(*pmd) & ~PMD_S2_RDWR) | PMD_S2_RDONLY;
+}
+
+static inline bool kvm_s2pmd_readonly(pmd_t *pmd)
+{
+	return (pmd_val(*pmd) & PMD_S2_RDWR) == PMD_S2_RDONLY;
+}
+
+
 #define kvm_pgd_addr_end(addr, end)	pgd_addr_end(addr, end)
 #define kvm_pud_addr_end(addr, end)	pud_addr_end(addr, end)
 #define kvm_pmd_addr_end(addr, end)	pmd_addr_end(addr, end)
diff --git a/arch/arm64/include/asm/pgtable-hwdef.h b/arch/arm64/include/asm/pgtable-hwdef.h
index 88174e0..5f930cc 100644
--- a/arch/arm64/include/asm/pgtable-hwdef.h
+++ b/arch/arm64/include/asm/pgtable-hwdef.h
@@ -119,6 +119,7 @@
 #define PTE_S2_RDONLY		(_AT(pteval_t, 1) << 6)   /* HAP[2:1] */
 #define PTE_S2_RDWR		(_AT(pteval_t, 3) << 6)   /* HAP[2:1] */
 
+#define PMD_S2_RDONLY		(_AT(pmdval_t, 1) << 6)   /* HAP[2:1] */
 #define PMD_S2_RDWR		(_AT(pmdval_t, 3) << 6)   /* HAP[2:1] */
 
 /*
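
For readers unfamiliar with the stage-2 HAP[2:1] permission encoding that the
helpers above manipulate, the following is a minimal, self-contained userspace
sketch of the same bit logic, not part of the patch. The local pteval_t/pte_t
typedefs, the pte_val() macro, and the set_s2pte_readonly()/s2pte_readonly()
names are stand-ins for the kernel definitions so the snippet compiles on its
own; only the bit manipulation mirrors the new helpers.

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

/* Stand-ins for the kernel's pte types and accessors. */
typedef uint64_t pteval_t;
typedef struct { pteval_t pte; } pte_t;
#define pte_val(x)	((x).pte)

/* Stage-2 HAP[2:1] encodings, as in pgtable-hwdef.h. */
#define PTE_S2_RDONLY	((pteval_t)1 << 6)	/* HAP[2:1] = 01: read-only  */
#define PTE_S2_RDWR	((pteval_t)3 << 6)	/* HAP[2:1] = 11: read/write */

/* Same logic as kvm_set_s2pte_readonly(): clear HAP[2:1], keep read bit. */
static inline void set_s2pte_readonly(pte_t *pte)
{
	pte_val(*pte) = (pte_val(*pte) & ~PTE_S2_RDWR) | PTE_S2_RDONLY;
}

/* Same logic as kvm_s2pte_readonly(): true once only the read bit is set. */
static inline bool s2pte_readonly(pte_t *pte)
{
	return (pte_val(*pte) & PTE_S2_RDWR) == PTE_S2_RDONLY;
}

int main(void)
{
	pte_t pte = { .pte = PTE_S2_RDWR };	/* writable stage-2 mapping */

	printf("before write-protect: readonly=%d\n", s2pte_readonly(&pte));
	set_s2pte_readonly(&pte);	/* write-protect, e.g. for dirty logging */
	printf("after  write-protect: readonly=%d\n", s2pte_readonly(&pte));
	return 0;
}

Faults taken on such a write-protected mapping are what let the dirty-page
logging code (introduced elsewhere in this series) mark pages dirty before
restoring write access.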