From patchwork Tue Dec 18 07:56:45 2018 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Amit Daniel Kachhap X-Patchwork-Id: 10734975 Return-Path: Received: from mail.wl.linuxfoundation.org (pdx-wl-mail.web.codeaurora.org [172.30.200.125]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id 20FAA6C2 for ; Tue, 18 Dec 2018 07:58:07 +0000 (UTC) Received: from mail.wl.linuxfoundation.org (localhost [127.0.0.1]) by mail.wl.linuxfoundation.org (Postfix) with ESMTP id 0DA402A1A6 for ; Tue, 18 Dec 2018 07:58:07 +0000 (UTC) Received: by mail.wl.linuxfoundation.org (Postfix, from userid 486) id F1E1B2A444; Tue, 18 Dec 2018 07:58:06 +0000 (UTC) X-Spam-Checker-Version: SpamAssassin 3.3.1 (2010-03-16) on pdx-wl-mail.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-5.2 required=2.0 tests=BAYES_00,DKIM_SIGNED, DKIM_VALID,MAILING_LIST_MULTI,RCVD_IN_DNSWL_MED autolearn=ham version=3.3.1 Received: from bombadil.infradead.org (bombadil.infradead.org [198.137.202.133]) (using TLSv1.2 with cipher AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by mail.wl.linuxfoundation.org (Postfix) with ESMTPS id 57E832A1A6 for ; Tue, 18 Dec 2018 07:58:06 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=lists.infradead.org; s=bombadil.20170209; h=Sender: Content-Transfer-Encoding:Content-Type:MIME-Version:Cc:List-Subscribe: List-Help:List-Post:List-Archive:List-Unsubscribe:List-Id:References: In-Reply-To:Message-Id:Date:Subject:To:From:Reply-To:Content-ID: Content-Description:Resent-Date:Resent-From:Resent-Sender:Resent-To:Resent-Cc :Resent-Message-ID:List-Owner; bh=U897+w11u1ovvOVduwY9mw9Q2qSJcbkY0uV0rwHSfR0=; b=dkL3ckwl0/+mkZLgLpNDRKp8ut mhjoPZHEQIEjq6jFNmVBIbhdAP8O1/itjSvI1iXvhgxmHnZmE+Ma4vhUGeAqDtpxRFRcAtbB7P08y 7zCLgnA33x2FWbnzy1EBLL9CKcAUHK52j2NgSBRYa6uKBv2ZEE5511gpUr9ru4qif8O/dwGgrXvP3 BtsHLZaV2U8lvMjJZzCAHvjoeS3mOQpUleEvZW3NYkbbK391q8IgaCPKhsMbx3BmMBndDMGHCcYVh SDrSeTabOBBCrxgQTIXzJp6LYA+U+f0S4SqbyrEGbuXjwDXyBFJlOBYGicWwgf/MrHq34xUnL7oo6 sGfvgghw==; Received: from localhost ([127.0.0.1] helo=bombadil.infradead.org) by bombadil.infradead.org with esmtp (Exim 4.90_1 #2 (Red Hat Linux)) id 1gZAGQ-00023y-Oz; Tue, 18 Dec 2018 07:58:02 +0000 Received: from usa-sjc-mx-foss1.foss.arm.com ([217.140.101.70] helo=foss.arm.com) by bombadil.infradead.org with esmtp (Exim 4.90_1 #2 (Red Hat Linux)) id 1gZAFp-0001Qp-FO for linux-arm-kernel@lists.infradead.org; Tue, 18 Dec 2018 07:57:28 +0000 Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.72.51.249]) by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id 9C2F115AB; Mon, 17 Dec 2018 23:57:17 -0800 (PST) Received: from a75553-lin.blr.arm.com (a75553-lin.blr.arm.com [10.162.0.175]) by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPA id C80FA3F575; Mon, 17 Dec 2018 23:57:13 -0800 (PST) From: Amit Daniel Kachhap To: linux-arm-kernel@lists.infradead.org Subject: [PATCH v4 1/6] arm64/kvm: preserve host HCR_EL2 value Date: Tue, 18 Dec 2018 13:26:45 +0530 Message-Id: <1545119810-12182-2-git-send-email-amit.kachhap@arm.com> X-Mailer: git-send-email 2.7.4 In-Reply-To: <1545119810-12182-1-git-send-email-amit.kachhap@arm.com> References: <1545119810-12182-1-git-send-email-amit.kachhap@arm.com> X-CRM114-Version: 20100106-BlameMichelson ( TRE 0.8.0 (BSD) ) MR-646709E3 X-CRM114-CacheID: sfid-20181217_235725_759601_09253458 X-CRM114-Status: GOOD ( 19.31 ) X-BeenThere: linux-arm-kernel@lists.infradead.org X-Mailman-Version: 2.1.21 
Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Cc: Mark Rutland , Andrew Jones , Marc Zyngier , Catalin Marinas , Will Deacon , Christoffer Dall , Kristina Martsenko , kvmarm@lists.cs.columbia.edu, Ramana Radhakrishnan , Amit Daniel Kachhap , Dave Martin , linux-kernel@vger.kernel.org MIME-Version: 1.0 Sender: "linux-arm-kernel" Errors-To: linux-arm-kernel-bounces+patchwork-linux-arm=patchwork.kernel.org@lists.infradead.org X-Virus-Scanned: ClamAV using ClamSMTP When restoring HCR_EL2 for the host, KVM uses HCR_HOST_VHE_FLAGS, which is a constant value. This works today, as the host HCR_EL2 value is always the same, but this will get in the way of supporting extensions that require HCR_EL2 bits to be set conditionally for the host. To allow such features to work without KVM having to explicitly handle every possible host feature combination, this patch has KVM save/restore the host HCR when switching to/from a guest HCR. The host value of the register is saved once during per-CPU hypervisor initialisation and restored after every switch back from a guest. To fetch HCR_EL2 during KVM initialisation, a hyp call is made using kvm_call_hyp, which is needed for the nVHE case. For the hyp TLB maintenance code, __tlb_switch_to_host_vhe() is updated to toggle the TGE bit with a RMW sequence, as we already do in __tlb_switch_to_guest_vhe(). Signed-off-by: Mark Rutland Signed-off-by: Amit Daniel Kachhap Cc: Marc Zyngier Cc: Christoffer Dall Cc: kvmarm@lists.cs.columbia.edu --- arch/arm/include/asm/kvm_host.h | 2 ++ arch/arm64/include/asm/kvm_asm.h | 2 ++ arch/arm64/include/asm/kvm_host.h | 14 ++++++++++++-- arch/arm64/kvm/hyp/switch.c | 15 +++++++++------ arch/arm64/kvm/hyp/sysreg-sr.c | 11 +++++++++++ arch/arm64/kvm/hyp/tlb.c | 6 +++++- virt/kvm/arm/arm.c | 2 ++ 7 files changed, 43 insertions(+), 9 deletions(-) diff --git a/arch/arm/include/asm/kvm_host.h b/arch/arm/include/asm/kvm_host.h index 5ca5d9a..0f012c8 100644 --- a/arch/arm/include/asm/kvm_host.h +++ b/arch/arm/include/asm/kvm_host.h @@ -273,6 +273,8 @@ static inline void __cpu_init_stage2(void) kvm_call_hyp(__init_stage2_translation); } +static inline void __cpu_copy_host_registers(void) {} + static inline int kvm_arch_vm_ioctl_check_extension(struct kvm *kvm, long ext) { return 0; diff --git a/arch/arm64/include/asm/kvm_asm.h b/arch/arm64/include/asm/kvm_asm.h index aea01a0..25ac9fa 100644 --- a/arch/arm64/include/asm/kvm_asm.h +++ b/arch/arm64/include/asm/kvm_asm.h @@ -73,6 +73,8 @@ extern void __vgic_v3_init_lrs(void); extern u32 __kvm_get_mdcr_el2(void); +extern u64 __read_hyp_hcr_el2(void); + /* Home-grown __this_cpu_{ptr,read} variants that always work at HYP */ #define __hyp_this_cpu_ptr(sym) \ ({ \ diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h index 52fbc82..1b9eed9 100644 --- a/arch/arm64/include/asm/kvm_host.h +++ b/arch/arm64/include/asm/kvm_host.h @@ -196,13 +196,17 @@ enum vcpu_sysreg { #define NR_COPRO_REGS (NR_SYS_REGS * 2) +struct kvm_cpu_init_host_regs { + u64 hcr_el2; +}; + struct kvm_cpu_context { struct kvm_regs gp_regs; union { u64 sys_regs[NR_SYS_REGS]; u32 copro[NR_COPRO_REGS]; }; - + struct kvm_cpu_init_host_regs init_regs; struct kvm_vcpu *__hyp_running_vcpu; }; @@ -211,7 +215,7 @@ typedef struct kvm_cpu_context kvm_cpu_context_t; struct kvm_vcpu_arch { struct kvm_cpu_context ctxt; - /* HYP configuration */ + /* Guest HYP configuration */ u64 hcr_el2; u32 mdcr_el2; @@ -455,6 +459,12 @@ int kvm_arm_vcpu_arch_has_attr(struct kvm_vcpu
*vcpu, static inline void __cpu_init_stage2(void) {} +static inline void __cpu_copy_host_registers(void) +{ + kvm_cpu_context_t *host_cxt = this_cpu_ptr(&kvm_host_cpu_state); + host_cxt->init_regs.hcr_el2 = kvm_call_hyp(__read_hyp_hcr_el2); +} + /* Guest/host FPSIMD coordination helpers */ int kvm_arch_vcpu_run_map_fp(struct kvm_vcpu *vcpu); void kvm_arch_vcpu_load_fp(struct kvm_vcpu *vcpu); diff --git a/arch/arm64/kvm/hyp/switch.c b/arch/arm64/kvm/hyp/switch.c index f6e02cc..85a2a5c 100644 --- a/arch/arm64/kvm/hyp/switch.c +++ b/arch/arm64/kvm/hyp/switch.c @@ -139,15 +139,15 @@ static void __hyp_text __activate_traps(struct kvm_vcpu *vcpu) __activate_traps_nvhe(vcpu); } -static void deactivate_traps_vhe(void) +static void deactivate_traps_vhe(struct kvm_cpu_context *host_ctxt) { extern char vectors[]; /* kernel exception vectors */ - write_sysreg(HCR_HOST_VHE_FLAGS, hcr_el2); + write_sysreg(host_ctxt->init_regs.hcr_el2, hcr_el2); write_sysreg(CPACR_EL1_DEFAULT, cpacr_el1); write_sysreg(vectors, vbar_el1); } -static void __hyp_text __deactivate_traps_nvhe(void) +static void __hyp_text __deactivate_traps_nvhe(struct kvm_cpu_context *host_ctxt) { u64 mdcr_el2 = read_sysreg(mdcr_el2); @@ -157,12 +157,15 @@ static void __hyp_text __deactivate_traps_nvhe(void) mdcr_el2 |= MDCR_EL2_E2PB_MASK << MDCR_EL2_E2PB_SHIFT; write_sysreg(mdcr_el2, mdcr_el2); - write_sysreg(HCR_HOST_NVHE_FLAGS, hcr_el2); + write_sysreg(host_ctxt->init_regs.hcr_el2, hcr_el2); write_sysreg(CPTR_EL2_DEFAULT, cptr_el2); } static void __hyp_text __deactivate_traps(struct kvm_vcpu *vcpu) { + struct kvm_cpu_context *host_ctxt; + + host_ctxt = vcpu->arch.host_cpu_context; /* * If we pended a virtual abort, preserve it until it gets * cleared. See D1.14.3 (Virtual Interrupts) for details, but @@ -173,9 +176,9 @@ static void __hyp_text __deactivate_traps(struct kvm_vcpu *vcpu) vcpu->arch.hcr_el2 = read_sysreg(hcr_el2); if (has_vhe()) - deactivate_traps_vhe(); + deactivate_traps_vhe(host_ctxt); else - __deactivate_traps_nvhe(); + __deactivate_traps_nvhe(host_ctxt); } void activate_traps_vhe_load(struct kvm_vcpu *vcpu) diff --git a/arch/arm64/kvm/hyp/sysreg-sr.c b/arch/arm64/kvm/hyp/sysreg-sr.c index 68d6f7c..29ef34d 100644 --- a/arch/arm64/kvm/hyp/sysreg-sr.c +++ b/arch/arm64/kvm/hyp/sysreg-sr.c @@ -316,3 +316,14 @@ void __hyp_text __kvm_enable_ssbs(void) "msr sctlr_el2, %0" : "=&r" (tmp) : "L" (SCTLR_ELx_DSSBS)); } + +/** + * __read_hyp_hcr_el2 - Returns hcr_el2 register value + * + * This function acts as a function handler parameter for kvm_call_hyp and + * may be called from EL1 exception level to fetch the register value. + */ +u64 __hyp_text __read_hyp_hcr_el2(void) +{ + return read_sysreg(hcr_el2); +} diff --git a/arch/arm64/kvm/hyp/tlb.c b/arch/arm64/kvm/hyp/tlb.c index 4dbd9c6..ad6282d 100644 --- a/arch/arm64/kvm/hyp/tlb.c +++ b/arch/arm64/kvm/hyp/tlb.c @@ -50,12 +50,16 @@ static hyp_alternate_select(__tlb_switch_to_guest, static void __hyp_text __tlb_switch_to_host_vhe(struct kvm *kvm) { + u64 val; + /* * We're done with the TLB operation, let's restore the host's * view of HCR_EL2. 
*/ write_sysreg(0, vttbr_el2); - write_sysreg(HCR_HOST_VHE_FLAGS, hcr_el2); + val = read_sysreg(hcr_el2); + val |= HCR_TGE; + write_sysreg(val, hcr_el2); } static void __hyp_text __tlb_switch_to_host_nvhe(struct kvm *kvm) diff --git a/virt/kvm/arm/arm.c b/virt/kvm/arm/arm.c index 2377497..feda456 100644 --- a/virt/kvm/arm/arm.c +++ b/virt/kvm/arm/arm.c @@ -1318,6 +1318,8 @@ static void cpu_hyp_reinit(void) if (vgic_present) kvm_vgic_init_cpu_hardware(); + + __cpu_copy_host_registers(); } static void _kvm_arch_hardware_enable(void *discard) From patchwork Tue Dec 18 07:56:46 2018 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Amit Daniel Kachhap X-Patchwork-Id: 10734981 Return-Path: Received: from mail.wl.linuxfoundation.org (pdx-wl-mail.web.codeaurora.org [172.30.200.125]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id 147E11399 for ; Tue, 18 Dec 2018 07:58:58 +0000 (UTC) Received: from mail.wl.linuxfoundation.org (localhost [127.0.0.1]) by mail.wl.linuxfoundation.org (Postfix) with ESMTP id 03EDB2A171 for ; Tue, 18 Dec 2018 07:58:58 +0000 (UTC) Received: by mail.wl.linuxfoundation.org (Postfix, from userid 486) id EC3812A17C; Tue, 18 Dec 2018 07:58:57 +0000 (UTC) X-Spam-Checker-Version: SpamAssassin 3.3.1 (2010-03-16) on pdx-wl-mail.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-5.2 required=2.0 tests=BAYES_00,DKIM_SIGNED, DKIM_VALID,MAILING_LIST_MULTI,RCVD_IN_DNSWL_MED autolearn=ham version=3.3.1 Received: from bombadil.infradead.org (bombadil.infradead.org [198.137.202.133]) (using TLSv1.2 with cipher AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by mail.wl.linuxfoundation.org (Postfix) with ESMTPS id 0A8592A171 for ; Tue, 18 Dec 2018 07:58:57 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=lists.infradead.org; s=bombadil.20170209; h=Sender: Content-Transfer-Encoding:Content-Type:MIME-Version:Cc:List-Subscribe: List-Help:List-Post:List-Archive:List-Unsubscribe:List-Id:References: In-Reply-To:Message-Id:Date:Subject:To:From:Reply-To:Content-ID: Content-Description:Resent-Date:Resent-From:Resent-Sender:Resent-To:Resent-Cc :Resent-Message-ID:List-Owner; bh=iWXiFxxkysCZBNO1fb8I4mmHwe3jy1EKpwqNDVxj++w=; b=OgE83jrZWRSmLT3+lCEODw3vNH KqnXDwsIFP5bP6oJza9iaqpczu0bm4MPlFMnYok+idJj6dCZFWVQjAJpG4/4xKPELf5sUP4OISHYp ln7EEeYyizJN+Umq3zBoCoAvlSrEjElcH5JKeJvYvNR93+zgAYogQNTtcM5O6FJCwKy+Osod14vEu zcoCG9s5ZWGZtFztJJUpG5LsMLiKpzaW0IbxngmNPoDLvecEvcvUbGS72JL4dx9O+2mQ1NFYm9Lc6 hdHDT5LCfr6T7+Dpxj6Z1eWr/jc0DCpsgGronrEahbdLcjRh4+T9orLVIPANJ9oBkdK3lOxDoF5Bq Jmps3o+g==; Received: from localhost ([127.0.0.1] helo=bombadil.infradead.org) by bombadil.infradead.org with esmtp (Exim 4.90_1 #2 (Red Hat Linux)) id 1gZAHF-0002by-HI; Tue, 18 Dec 2018 07:58:53 +0000 Received: from foss.arm.com ([217.140.101.70]) by bombadil.infradead.org with esmtp (Exim 4.90_1 #2 (Red Hat Linux)) id 1gZAFy-0001VS-Nz for linux-arm-kernel@lists.infradead.org; Tue, 18 Dec 2018 07:57:37 +0000 Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.72.51.249]) by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id 3BFA115BF; Mon, 17 Dec 2018 23:57:23 -0800 (PST) Received: from a75553-lin.blr.arm.com (a75553-lin.blr.arm.com [10.162.0.175]) by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPA id 678D93F575; Mon, 17 Dec 2018 23:57:19 -0800 (PST) From: Amit Daniel Kachhap To: linux-arm-kernel@lists.infradead.org Subject: [PATCH v4 2/6] arm64/kvm: context-switch ptrauth registers Date: 
Tue, 18 Dec 2018 13:26:46 +0530 Message-Id: <1545119810-12182-3-git-send-email-amit.kachhap@arm.com> X-Mailer: git-send-email 2.7.4 In-Reply-To: <1545119810-12182-1-git-send-email-amit.kachhap@arm.com> References: <1545119810-12182-1-git-send-email-amit.kachhap@arm.com> X-CRM114-Version: 20100106-BlameMichelson ( TRE 0.8.0 (BSD) ) MR-646709E3 X-CRM114-CacheID: sfid-20181217_235734_839049_F33886C2 X-CRM114-Status: GOOD ( 25.32 ) X-BeenThere: linux-arm-kernel@lists.infradead.org X-Mailman-Version: 2.1.21 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Cc: Mark Rutland , Andrew Jones , Marc Zyngier , Catalin Marinas , Will Deacon , Christoffer Dall , Kristina Martsenko , kvmarm@lists.cs.columbia.edu, Ramana Radhakrishnan , Amit Daniel Kachhap , Dave Martin , linux-kernel@vger.kernel.org MIME-Version: 1.0 Sender: "linux-arm-kernel" Errors-To: linux-arm-kernel-bounces+patchwork-linux-arm=patchwork.kernel.org@lists.infradead.org X-Virus-Scanned: ClamAV using ClamSMTP When pointer authentication is supported, a guest may wish to use it. This patch adds the necessary KVM infrastructure for this to work, with a semi-lazy context switch of the pointer auth state. The pointer authentication feature is only enabled when VHE is built into the kernel and present in the CPU implementation, so only VHE code paths are modified. When we schedule a vcpu, we disable guest usage of pointer authentication instructions and accesses to the keys. While these are disabled, we avoid context-switching the keys. When we trap the guest trying to use pointer authentication functionality, we change to eagerly context-switching the keys, and enable the feature. The next time the vcpu is scheduled out/in, we start again. Pointer authentication consists of address authentication and generic authentication, and CPUs in a system might have varied support for either. Where support for either feature is not uniform, it is hidden from guests via ID register emulation, as a result of the cpufeature framework in the host. Unfortunately, address authentication and generic authentication cannot be trapped separately, as the architecture provides a single EL2 trap covering both. If we wish to expose one without the other, we cannot prevent a (badly-written) guest from intermittently using a feature which is not uniformly supported (when scheduled on a physical CPU which supports the relevant feature). When the guest is scheduled on a physical CPU lacking the feature, these attempts will result in an UNDEF being taken by the guest.
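As a rough sketch of the flow above (simplified; the helpers and HCR_EL2 bits are the ones added in the hunks below, the surrounding control flow is elided):

	/* vcpu load: trap guest ptrauth use, don't switch the keys yet */
	kvm_arm_vcpu_ptrauth_disable(vcpu);	/* clears HCR_API | HCR_APK */

	/* first guest use of ptrauth traps to EL2 */
	kvm_arm_vcpu_ptrauth_trap(vcpu);	/* calls kvm_arm_vcpu_ptrauth_enable() */

	/* later world switches eagerly switch the five key pairs */
	if (vcpu->arch.hcr_el2 & (HCR_API | HCR_APK)) {
		__ptrauth_save_state(host_ctxt);
		__ptrauth_restore_state(guest_ctxt);
	}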
Signed-off-by: Mark Rutland Signed-off-by: Amit Daniel Kachhap Cc: Marc Zyngier Cc: Christoffer Dall Cc: kvmarm@lists.cs.columbia.edu --- arch/arm/include/asm/kvm_host.h | 1 + arch/arm64/include/asm/cpufeature.h | 6 +++ arch/arm64/include/asm/kvm_host.h | 24 ++++++++++++ arch/arm64/include/asm/kvm_hyp.h | 7 ++++ arch/arm64/kernel/traps.c | 1 + arch/arm64/kvm/handle_exit.c | 24 +++++++----- arch/arm64/kvm/hyp/Makefile | 1 + arch/arm64/kvm/hyp/ptrauth-sr.c | 73 +++++++++++++++++++++++++++++++++++++ arch/arm64/kvm/hyp/switch.c | 4 ++ arch/arm64/kvm/sys_regs.c | 40 ++++++++++++++++---- virt/kvm/arm/arm.c | 2 + 11 files changed, 166 insertions(+), 17 deletions(-) create mode 100644 arch/arm64/kvm/hyp/ptrauth-sr.c diff --git a/arch/arm/include/asm/kvm_host.h b/arch/arm/include/asm/kvm_host.h index 0f012c8..02d9bfc 100644 --- a/arch/arm/include/asm/kvm_host.h +++ b/arch/arm/include/asm/kvm_host.h @@ -351,6 +351,7 @@ static inline int kvm_arm_have_ssbd(void) static inline void kvm_vcpu_load_sysregs(struct kvm_vcpu *vcpu) {} static inline void kvm_vcpu_put_sysregs(struct kvm_vcpu *vcpu) {} +static inline void kvm_arm_vcpu_ptrauth_config(struct kvm_vcpu *vcpu) {} #define __KVM_HAVE_ARCH_VM_ALLOC struct kvm *kvm_arch_alloc_vm(void); diff --git a/arch/arm64/include/asm/cpufeature.h b/arch/arm64/include/asm/cpufeature.h index 1c8393f..ac7d496 100644 --- a/arch/arm64/include/asm/cpufeature.h +++ b/arch/arm64/include/asm/cpufeature.h @@ -526,6 +526,12 @@ static inline bool system_supports_generic_auth(void) cpus_have_const_cap(ARM64_HAS_GENERIC_AUTH); } +static inline bool system_supports_ptrauth(void) +{ + return IS_ENABLED(CONFIG_ARM64_PTR_AUTH) && + cpus_have_const_cap(ARM64_HAS_ADDRESS_AUTH); +} + #define ARM64_SSBD_UNKNOWN -1 #define ARM64_SSBD_FORCE_DISABLE 0 #define ARM64_SSBD_KERNEL 1 diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h index 1b9eed9..629712d 100644 --- a/arch/arm64/include/asm/kvm_host.h +++ b/arch/arm64/include/asm/kvm_host.h @@ -146,6 +146,18 @@ enum vcpu_sysreg { PMSWINC_EL0, /* Software Increment Register */ PMUSERENR_EL0, /* User Enable Register */ + /* Pointer Authentication Registers */ + APIAKEYLO_EL1, + APIAKEYHI_EL1, + APIBKEYLO_EL1, + APIBKEYHI_EL1, + APDAKEYLO_EL1, + APDAKEYHI_EL1, + APDBKEYLO_EL1, + APDBKEYHI_EL1, + APGAKEYLO_EL1, + APGAKEYHI_EL1, + /* 32bit specific registers. 
Keep them at the end of the range */ DACR32_EL2, /* Domain Access Control Register */ IFSR32_EL2, /* Instruction Fault Status Register */ @@ -439,6 +451,18 @@ static inline bool kvm_arch_check_sve_has_vhe(void) return true; } +void kvm_arm_vcpu_ptrauth_enable(struct kvm_vcpu *vcpu); +void kvm_arm_vcpu_ptrauth_disable(struct kvm_vcpu *vcpu); + +static inline void kvm_arm_vcpu_ptrauth_config(struct kvm_vcpu *vcpu) +{ + /* Disable ptrauth and use it in a lazy context via traps */ + if (has_vhe() && system_supports_ptrauth()) + kvm_arm_vcpu_ptrauth_disable(vcpu); +} + +void kvm_arm_vcpu_ptrauth_trap(struct kvm_vcpu *vcpu); + static inline void kvm_arch_hardware_unsetup(void) {} static inline void kvm_arch_sync_events(struct kvm *kvm) {} static inline void kvm_arch_vcpu_uninit(struct kvm_vcpu *vcpu) {} diff --git a/arch/arm64/include/asm/kvm_hyp.h b/arch/arm64/include/asm/kvm_hyp.h index 23aca66..ebffd44 100644 --- a/arch/arm64/include/asm/kvm_hyp.h +++ b/arch/arm64/include/asm/kvm_hyp.h @@ -152,6 +152,13 @@ bool __fpsimd_enabled(void); void activate_traps_vhe_load(struct kvm_vcpu *vcpu); void deactivate_traps_vhe_put(void); +void __ptrauth_switch_to_guest(struct kvm_vcpu *vcpu, + struct kvm_cpu_context *host_ctxt, + struct kvm_cpu_context *guest_ctxt); +void __ptrauth_switch_to_host(struct kvm_vcpu *vcpu, + struct kvm_cpu_context *host_ctxt, + struct kvm_cpu_context *guest_ctxt); + u64 __guest_enter(struct kvm_vcpu *vcpu, struct kvm_cpu_context *host_ctxt); void __noreturn __hyp_do_panic(unsigned long, ...); diff --git a/arch/arm64/kernel/traps.c b/arch/arm64/kernel/traps.c index 5f4d9ac..90e18de 100644 --- a/arch/arm64/kernel/traps.c +++ b/arch/arm64/kernel/traps.c @@ -748,6 +748,7 @@ static const char *esr_class_str[] = { [ESR_ELx_EC_CP14_LS] = "CP14 LDC/STC", [ESR_ELx_EC_FP_ASIMD] = "ASIMD", [ESR_ELx_EC_CP10_ID] = "CP10 MRC/VMRS", + [ESR_ELx_EC_PAC] = "Pointer authentication trap", [ESR_ELx_EC_CP14_64] = "CP14 MCRR/MRRC", [ESR_ELx_EC_ILL] = "PSTATE.IL", [ESR_ELx_EC_SVC32] = "SVC (AArch32)", diff --git a/arch/arm64/kvm/handle_exit.c b/arch/arm64/kvm/handle_exit.c index ab35929..5c47a8f47 100644 --- a/arch/arm64/kvm/handle_exit.c +++ b/arch/arm64/kvm/handle_exit.c @@ -174,19 +174,25 @@ static int handle_sve(struct kvm_vcpu *vcpu, struct kvm_run *run) } /* + * Handle the guest trying to use a ptrauth instruction, or trying to access a + * ptrauth register. This trap should not occur as we enable ptrauth during + * vcpu schedule itself but is anyway kept here for any unfortunate scenario. + */ +void kvm_arm_vcpu_ptrauth_trap(struct kvm_vcpu *vcpu) +{ + if (system_supports_ptrauth()) + kvm_arm_vcpu_ptrauth_enable(vcpu); + else + kvm_inject_undefined(vcpu); +} + +/* * Guest usage of a ptrauth instruction (which the guest EL1 did not turn into - * a NOP). + * a NOP), or guest EL1 access to a ptrauth register. */ static int kvm_handle_ptrauth(struct kvm_vcpu *vcpu, struct kvm_run *run) { - /* - * We don't currently support ptrauth in a guest, and we mask the ID - * registers to prevent well-behaved guests from trying to make use of - * it. - * - * Inject an UNDEF, as if the feature really isn't present. 
- */ - kvm_inject_undefined(vcpu); + kvm_arm_vcpu_ptrauth_trap(vcpu); return 1; } diff --git a/arch/arm64/kvm/hyp/Makefile b/arch/arm64/kvm/hyp/Makefile index 82d1904..17cec99 100644 --- a/arch/arm64/kvm/hyp/Makefile +++ b/arch/arm64/kvm/hyp/Makefile @@ -19,6 +19,7 @@ obj-$(CONFIG_KVM_ARM_HOST) += switch.o obj-$(CONFIG_KVM_ARM_HOST) += fpsimd.o obj-$(CONFIG_KVM_ARM_HOST) += tlb.o obj-$(CONFIG_KVM_ARM_HOST) += hyp-entry.o +obj-$(CONFIG_KVM_ARM_HOST) += ptrauth-sr.o # KVM code is run at a different exception code with a different map, so # compiler instrumentation that inserts callbacks or checks into the code may diff --git a/arch/arm64/kvm/hyp/ptrauth-sr.c b/arch/arm64/kvm/hyp/ptrauth-sr.c new file mode 100644 index 0000000..1bfaf74 --- /dev/null +++ b/arch/arm64/kvm/hyp/ptrauth-sr.c @@ -0,0 +1,73 @@ +// SPDX-License-Identifier: GPL-2.0 +/* + * arch/arm64/kvm/hyp/ptrauth-sr.c: Guest/host ptrauth save/restore + * + * Copyright 2018 Arm Limited + * Author: Mark Rutland + * Amit Daniel Kachhap + */ +#include +#include + +#include +#include +#include +#include +#include + +static __always_inline bool __hyp_text __ptrauth_is_enabled(struct kvm_vcpu *vcpu) +{ + return vcpu->arch.hcr_el2 & (HCR_API | HCR_APK); +} + +#define __ptrauth_save_key(regs, key) \ +({ \ + regs[key ## KEYLO_EL1] = read_sysreg_s(SYS_ ## key ## KEYLO_EL1); \ + regs[key ## KEYHI_EL1] = read_sysreg_s(SYS_ ## key ## KEYHI_EL1); \ +}) + +static __always_inline void __hyp_text __ptrauth_save_state(struct kvm_cpu_context *ctxt) +{ + __ptrauth_save_key(ctxt->sys_regs, APIA); + __ptrauth_save_key(ctxt->sys_regs, APIB); + __ptrauth_save_key(ctxt->sys_regs, APDA); + __ptrauth_save_key(ctxt->sys_regs, APDB); + __ptrauth_save_key(ctxt->sys_regs, APGA); +} + +#define __ptrauth_restore_key(regs, key) \ +({ \ + write_sysreg_s(regs[key ## KEYLO_EL1], SYS_ ## key ## KEYLO_EL1); \ + write_sysreg_s(regs[key ## KEYHI_EL1], SYS_ ## key ## KEYHI_EL1); \ +}) + +static __always_inline void __hyp_text __ptrauth_restore_state(struct kvm_cpu_context *ctxt) +{ + __ptrauth_restore_key(ctxt->sys_regs, APIA); + __ptrauth_restore_key(ctxt->sys_regs, APIB); + __ptrauth_restore_key(ctxt->sys_regs, APDA); + __ptrauth_restore_key(ctxt->sys_regs, APDB); + __ptrauth_restore_key(ctxt->sys_regs, APGA); +} + +void __hyp_text __ptrauth_switch_to_guest(struct kvm_vcpu *vcpu, + struct kvm_cpu_context *host_ctxt, + struct kvm_cpu_context *guest_ctxt) +{ + if (!__ptrauth_is_enabled(vcpu)) + return; + + __ptrauth_save_state(host_ctxt); + __ptrauth_restore_state(guest_ctxt); +} + +void __hyp_text __ptrauth_switch_to_host(struct kvm_vcpu *vcpu, + struct kvm_cpu_context *host_ctxt, + struct kvm_cpu_context *guest_ctxt) +{ + if (!__ptrauth_is_enabled(vcpu)) + return; + + __ptrauth_save_state(guest_ctxt); + __ptrauth_restore_state(host_ctxt); +} diff --git a/arch/arm64/kvm/hyp/switch.c b/arch/arm64/kvm/hyp/switch.c index 85a2a5c..6c57552 100644 --- a/arch/arm64/kvm/hyp/switch.c +++ b/arch/arm64/kvm/hyp/switch.c @@ -508,6 +508,8 @@ int kvm_vcpu_run_vhe(struct kvm_vcpu *vcpu) sysreg_restore_guest_state_vhe(guest_ctxt); __debug_switch_to_guest(vcpu); + __ptrauth_switch_to_guest(vcpu, host_ctxt, guest_ctxt); + __set_guest_arch_workaround_state(vcpu); do { @@ -519,6 +521,8 @@ int kvm_vcpu_run_vhe(struct kvm_vcpu *vcpu) __set_host_arch_workaround_state(vcpu); + __ptrauth_switch_to_host(vcpu, host_ctxt, guest_ctxt); + sysreg_save_guest_state_vhe(guest_ctxt); __deactivate_traps(vcpu); diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c index 1ca592d..6af6c7d 100644 --- 
a/arch/arm64/kvm/sys_regs.c +++ b/arch/arm64/kvm/sys_regs.c @@ -986,6 +986,32 @@ static bool access_pmuserenr(struct kvm_vcpu *vcpu, struct sys_reg_params *p, { SYS_DESC(SYS_PMEVTYPERn_EL0(n)), \ access_pmu_evtyper, reset_unknown, (PMEVTYPER0_EL0 + n), } + +void kvm_arm_vcpu_ptrauth_enable(struct kvm_vcpu *vcpu) +{ + vcpu->arch.hcr_el2 |= (HCR_API | HCR_APK); +} + +void kvm_arm_vcpu_ptrauth_disable(struct kvm_vcpu *vcpu) +{ + vcpu->arch.hcr_el2 &= ~(HCR_API | HCR_APK); +} + +static bool trap_ptrauth(struct kvm_vcpu *vcpu, + struct sys_reg_params *p, + const struct sys_reg_desc *rd) +{ + kvm_arm_vcpu_ptrauth_trap(vcpu); + return false; +} + +#define __PTRAUTH_KEY(k) \ + { SYS_DESC(SYS_## k), trap_ptrauth, reset_unknown, k } + +#define PTRAUTH_KEY(k) \ + __PTRAUTH_KEY(k ## KEYLO_EL1), \ + __PTRAUTH_KEY(k ## KEYHI_EL1) + static bool access_cntp_tval(struct kvm_vcpu *vcpu, struct sys_reg_params *p, const struct sys_reg_desc *r) @@ -1040,14 +1066,6 @@ static u64 read_id_reg(struct sys_reg_desc const *r, bool raz) kvm_debug("SVE unsupported for guests, suppressing\n"); val &= ~(0xfUL << ID_AA64PFR0_SVE_SHIFT); - } else if (id == SYS_ID_AA64ISAR1_EL1) { - const u64 ptrauth_mask = (0xfUL << ID_AA64ISAR1_APA_SHIFT) | - (0xfUL << ID_AA64ISAR1_API_SHIFT) | - (0xfUL << ID_AA64ISAR1_GPA_SHIFT) | - (0xfUL << ID_AA64ISAR1_GPI_SHIFT); - if (val & ptrauth_mask) - kvm_debug("ptrauth unsupported for guests, suppressing\n"); - val &= ~ptrauth_mask; } else if (id == SYS_ID_AA64MMFR1_EL1) { if (val & (0xfUL << ID_AA64MMFR1_LOR_SHIFT)) kvm_debug("LORegions unsupported for guests, suppressing\n"); @@ -1316,6 +1334,12 @@ static const struct sys_reg_desc sys_reg_descs[] = { { SYS_DESC(SYS_TTBR1_EL1), access_vm_reg, reset_unknown, TTBR1_EL1 }, { SYS_DESC(SYS_TCR_EL1), access_vm_reg, reset_val, TCR_EL1, 0 }, + PTRAUTH_KEY(APIA), + PTRAUTH_KEY(APIB), + PTRAUTH_KEY(APDA), + PTRAUTH_KEY(APDB), + PTRAUTH_KEY(APGA), + { SYS_DESC(SYS_AFSR0_EL1), access_vm_reg, reset_unknown, AFSR0_EL1 }, { SYS_DESC(SYS_AFSR1_EL1), access_vm_reg, reset_unknown, AFSR1_EL1 }, { SYS_DESC(SYS_ESR_EL1), access_vm_reg, reset_unknown, ESR_EL1 }, diff --git a/virt/kvm/arm/arm.c b/virt/kvm/arm/arm.c index feda456..1f3b6ed 100644 --- a/virt/kvm/arm/arm.c +++ b/virt/kvm/arm/arm.c @@ -388,6 +388,8 @@ void kvm_arch_vcpu_load(struct kvm_vcpu *vcpu, int cpu) vcpu_clear_wfe_traps(vcpu); else vcpu_set_wfe_traps(vcpu); + + kvm_arm_vcpu_ptrauth_config(vcpu); } void kvm_arch_vcpu_put(struct kvm_vcpu *vcpu) From patchwork Tue Dec 18 07:56:47 2018 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Amit Daniel Kachhap X-Patchwork-Id: 10734991 Return-Path: Received: from mail.wl.linuxfoundation.org (pdx-wl-mail.web.codeaurora.org [172.30.200.125]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id 067126C2 for ; Tue, 18 Dec 2018 07:59:29 +0000 (UTC) Received: from mail.wl.linuxfoundation.org (localhost [127.0.0.1]) by mail.wl.linuxfoundation.org (Postfix) with ESMTP id E9F222A47B for ; Tue, 18 Dec 2018 07:59:28 +0000 (UTC) Received: by mail.wl.linuxfoundation.org (Postfix, from userid 486) id DDB352A486; Tue, 18 Dec 2018 07:59:28 +0000 (UTC) X-Spam-Checker-Version: SpamAssassin 3.3.1 (2010-03-16) on pdx-wl-mail.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-5.2 required=2.0 tests=BAYES_00,DKIM_SIGNED, DKIM_VALID,MAILING_LIST_MULTI,RCVD_IN_DNSWL_MED autolearn=ham version=3.3.1 Received: from bombadil.infradead.org (bombadil.infradead.org [198.137.202.133]) (using TLSv1.2 
with cipher AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by mail.wl.linuxfoundation.org (Postfix) with ESMTPS id 3E63D2A47B for ; Tue, 18 Dec 2018 07:59:28 +0000 (UTC) From: Amit Daniel Kachhap To: linux-arm-kernel@lists.infradead.org Subject: [PATCH v4 3/6] arm64/kvm: add a userspace option to enable pointer authentication Date: Tue, 18 Dec 2018 13:26:47 +0530 Message-Id: <1545119810-12182-4-git-send-email-amit.kachhap@arm.com> X-Mailer: git-send-email 2.7.4 In-Reply-To: <1545119810-12182-1-git-send-email-amit.kachhap@arm.com> References: <1545119810-12182-1-git-send-email-amit.kachhap@arm.com> X-CRM114-Version: 20100106-BlameMichelson ( TRE 0.8.0 (BSD) ) MR-646709E3 X-CRM114-CacheID: sfid-20181217_235734_832875_EF246462 X-CRM114-Status: GOOD ( 19.32 ) X-BeenThere: linux-arm-kernel@lists.infradead.org X-Mailman-Version: 2.1.21 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Cc: Mark Rutland , Andrew Jones , Marc Zyngier , Catalin Marinas , Will Deacon , Christoffer Dall , Kristina Martsenko , kvmarm@lists.cs.columbia.edu, Ramana Radhakrishnan , Amit Daniel Kachhap , Dave Martin , linux-kernel@vger.kernel.org MIME-Version: 1.0 Sender: "linux-arm-kernel" Errors-To: linux-arm-kernel-bounces+patchwork-linux-arm=patchwork.kernel.org@lists.infradead.org X-Virus-Scanned: ClamAV using ClamSMTP This feature allows userspace to enable pointer authentication for a KVM guest; if the flag is not set, guest use of the pointer authentication instructions is treated as undefined. It uses the existing vcpu API KVM_ARM_VCPU_INIT to supply this parameter instead of creating a new API. A new register is not created to pass this parameter via the SET/GET_ONE_REG interface, as a single flag (KVM_ARM_VCPU_PTRAUTH) is enough to select this feature.
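For illustration, a VMM would request this roughly as follows (a sketch only: KVM_CAP_ARM_PTRAUTH and KVM_ARM_VCPU_PTRAUTH come from this series, while the pre-existing ioctls and the open kvm/vm/vcpu file descriptors are assumed):

	struct kvm_vcpu_init init;

	/* illustrative only: kvm_fd, vm_fd and vcpu_fd are assumed open */
	ioctl(vm_fd, KVM_ARM_PREFERRED_TARGET, &init);
	if (ioctl(kvm_fd, KVM_CHECK_EXTENSION, KVM_CAP_ARM_PTRAUTH) > 0)
		init.features[0] |= 1UL << KVM_ARM_VCPU_PTRAUTH;
	ioctl(vcpu_fd, KVM_ARM_VCPU_INIT, &init);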
Signed-off-by: Amit Daniel Kachhap Cc: Mark Rutland Cc: Marc Zyngier Cc: Christoffer Dall Cc: kvmarm@lists.cs.columbia.edu --- Documentation/arm64/pointer-authentication.txt | 9 +++++---- Documentation/virtual/kvm/api.txt | 4 ++++ arch/arm/include/asm/kvm_host.h | 4 ++++ arch/arm64/include/asm/kvm_host.h | 7 ++++--- arch/arm64/include/uapi/asm/kvm.h | 1 + arch/arm64/kvm/handle_exit.c | 2 +- arch/arm64/kvm/hyp/ptrauth-sr.c | 16 ++++++++++++++++ arch/arm64/kvm/reset.c | 3 +++ include/uapi/linux/kvm.h | 1 + 9 files changed, 39 insertions(+), 8 deletions(-) diff --git a/Documentation/arm64/pointer-authentication.txt b/Documentation/arm64/pointer-authentication.txt index 5baca42..8c0f338 100644 --- a/Documentation/arm64/pointer-authentication.txt +++ b/Documentation/arm64/pointer-authentication.txt @@ -87,7 +87,8 @@ used to get and set the keys for a thread. Virtualization -------------- -Pointer authentication is not currently supported in KVM guests. KVM -will mask the feature bits from ID_AA64ISAR1_EL1, and attempted use of -the feature will result in an UNDEFINED exception being injected into -the guest. +Pointer authentication is enabled in KVM guest when virtual machine is +created by passing a flag (KVM_ARM_VCPU_PTRAUTH) requesting this feature +to be enabled. Without this flag, pointer authentication is not enabled +in KVM guests and attempted use of the feature will result in an UNDEFINED +exception being injected into the guest. diff --git a/Documentation/virtual/kvm/api.txt b/Documentation/virtual/kvm/api.txt index cd209f7..e20583a 100644 --- a/Documentation/virtual/kvm/api.txt +++ b/Documentation/virtual/kvm/api.txt @@ -2634,6 +2634,10 @@ Possible features: Depends on KVM_CAP_ARM_PSCI_0_2. - KVM_ARM_VCPU_PMU_V3: Emulate PMUv3 for the CPU. Depends on KVM_CAP_ARM_PMU_V3. + - KVM_ARM_VCPU_PTRAUTH: Emulate Pointer authentication for the CPU. + Depends on KVM_CAP_ARM_PTRAUTH and only on arm64 architecture. If + set, then the KVM guest allows the execution of pointer authentication + instructions or treats them as undefined if not set. 
4.83 KVM_ARM_PREFERRED_TARGET diff --git a/arch/arm/include/asm/kvm_host.h b/arch/arm/include/asm/kvm_host.h index 02d9bfc..62a85d9 100644 --- a/arch/arm/include/asm/kvm_host.h +++ b/arch/arm/include/asm/kvm_host.h @@ -352,6 +352,10 @@ static inline int kvm_arm_have_ssbd(void) static inline void kvm_vcpu_load_sysregs(struct kvm_vcpu *vcpu) {} static inline void kvm_vcpu_put_sysregs(struct kvm_vcpu *vcpu) {} static inline void kvm_arm_vcpu_ptrauth_config(struct kvm_vcpu *vcpu) {} +static inline bool kvm_arm_vcpu_ptrauth_allowed(struct kvm_vcpu *vcpu) +{ + return false; +} #define __KVM_HAVE_ARCH_VM_ALLOC struct kvm *kvm_arch_alloc_vm(void); diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h index 629712d..f853a95 100644 --- a/arch/arm64/include/asm/kvm_host.h +++ b/arch/arm64/include/asm/kvm_host.h @@ -43,7 +43,7 @@ #define KVM_MAX_VCPUS VGIC_V3_MAX_CPUS -#define KVM_VCPU_MAX_FEATURES 4 +#define KVM_VCPU_MAX_FEATURES 5 #define KVM_REQ_SLEEP \ KVM_ARCH_REQ_FLAGS(0, KVM_REQUEST_WAIT | KVM_REQUEST_NO_WAKEUP) @@ -453,14 +453,15 @@ static inline bool kvm_arch_check_sve_has_vhe(void) void kvm_arm_vcpu_ptrauth_enable(struct kvm_vcpu *vcpu); void kvm_arm_vcpu_ptrauth_disable(struct kvm_vcpu *vcpu); +bool kvm_arm_vcpu_ptrauth_allowed(struct kvm_vcpu *vcpu); static inline void kvm_arm_vcpu_ptrauth_config(struct kvm_vcpu *vcpu) { /* Disable ptrauth and use it in a lazy context via traps */ - if (has_vhe() && system_supports_ptrauth()) + if (has_vhe() && system_supports_ptrauth() + && kvm_arm_vcpu_ptrauth_allowed(vcpu)) kvm_arm_vcpu_ptrauth_disable(vcpu); } - void kvm_arm_vcpu_ptrauth_trap(struct kvm_vcpu *vcpu); static inline void kvm_arch_hardware_unsetup(void) {} diff --git a/arch/arm64/include/uapi/asm/kvm.h b/arch/arm64/include/uapi/asm/kvm.h index 97c3478..5f82ca1 100644 --- a/arch/arm64/include/uapi/asm/kvm.h +++ b/arch/arm64/include/uapi/asm/kvm.h @@ -102,6 +102,7 @@ struct kvm_regs { #define KVM_ARM_VCPU_EL1_32BIT 1 /* CPU running a 32bit VM */ #define KVM_ARM_VCPU_PSCI_0_2 2 /* CPU uses PSCI v0.2 */ #define KVM_ARM_VCPU_PMU_V3 3 /* Support guest PMUv3 */ +#define KVM_ARM_VCPU_PTRAUTH 4 /* VCPU uses address authentication */ struct kvm_vcpu_init { __u32 target; diff --git a/arch/arm64/kvm/handle_exit.c b/arch/arm64/kvm/handle_exit.c index 5c47a8f47..3394028 100644 --- a/arch/arm64/kvm/handle_exit.c +++ b/arch/arm64/kvm/handle_exit.c @@ -180,7 +180,7 @@ static int handle_sve(struct kvm_vcpu *vcpu, struct kvm_run *run) */ void kvm_arm_vcpu_ptrauth_trap(struct kvm_vcpu *vcpu) { - if (system_supports_ptrauth()) + if (system_supports_ptrauth() && kvm_arm_vcpu_ptrauth_allowed(vcpu)) kvm_arm_vcpu_ptrauth_enable(vcpu); else kvm_inject_undefined(vcpu); diff --git a/arch/arm64/kvm/hyp/ptrauth-sr.c b/arch/arm64/kvm/hyp/ptrauth-sr.c index 1bfaf74..03999bb 100644 --- a/arch/arm64/kvm/hyp/ptrauth-sr.c +++ b/arch/arm64/kvm/hyp/ptrauth-sr.c @@ -71,3 +71,19 @@ void __hyp_text __ptrauth_switch_to_host(struct kvm_vcpu *vcpu, __ptrauth_save_state(guest_ctxt); __ptrauth_restore_state(host_ctxt); } + +/** + * kvm_arm_vcpu_ptrauth_allowed - checks if ptrauth feature is present in vcpu + * + * @vcpu: The VCPU pointer + * + * This function will be used to enable/disable ptrauth in guest as configured + * by the KVM userspace API. 
+ */ +bool kvm_arm_vcpu_ptrauth_allowed(struct kvm_vcpu *vcpu) +{ + if (test_bit(KVM_ARM_VCPU_PTRAUTH, vcpu->arch.features)) + return true; + else + return false; +} diff --git a/arch/arm64/kvm/reset.c b/arch/arm64/kvm/reset.c index b72a3dd..4656070 100644 --- a/arch/arm64/kvm/reset.c +++ b/arch/arm64/kvm/reset.c @@ -91,6 +91,9 @@ int kvm_arch_vm_ioctl_check_extension(struct kvm *kvm, long ext) case KVM_CAP_ARM_VM_IPA_SIZE: r = kvm_ipa_limit; break; + case KVM_CAP_ARM_PTRAUTH: + r = system_supports_ptrauth(); + break; default: r = 0; } diff --git a/include/uapi/linux/kvm.h b/include/uapi/linux/kvm.h index 2b7a652..fa48660 100644 --- a/include/uapi/linux/kvm.h +++ b/include/uapi/linux/kvm.h @@ -975,6 +975,7 @@ struct kvm_ppc_resize_hpt { #define KVM_CAP_HYPERV_ENLIGHTENED_VMCS 163 #define KVM_CAP_EXCEPTION_PAYLOAD 164 #define KVM_CAP_ARM_VM_IPA_SIZE 165 +#define KVM_CAP_ARM_PTRAUTH 166 #ifdef KVM_CAP_IRQ_ROUTING From patchwork Tue Dec 18 07:56:48 2018 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Amit Daniel Kachhap X-Patchwork-Id: 10734989 Return-Path: Received: from mail.wl.linuxfoundation.org (pdx-wl-mail.web.codeaurora.org [172.30.200.125]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id 99C8A1399 for ; Tue, 18 Dec 2018 07:59:12 +0000 (UTC) Received: from mail.wl.linuxfoundation.org (localhost [127.0.0.1]) by mail.wl.linuxfoundation.org (Postfix) with ESMTP id 8A7492A47B for ; Tue, 18 Dec 2018 07:59:12 +0000 (UTC) Received: by mail.wl.linuxfoundation.org (Postfix, from userid 486) id 7E4562A486; Tue, 18 Dec 2018 07:59:12 +0000 (UTC) X-Spam-Checker-Version: SpamAssassin 3.3.1 (2010-03-16) on pdx-wl-mail.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-5.2 required=2.0 tests=BAYES_00,DKIM_SIGNED, DKIM_VALID,MAILING_LIST_MULTI,RCVD_IN_DNSWL_MED autolearn=ham version=3.3.1 Received: from bombadil.infradead.org (bombadil.infradead.org [198.137.202.133]) (using TLSv1.2 with cipher AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by mail.wl.linuxfoundation.org (Postfix) with ESMTPS id 0039F2A47B for ; Tue, 18 Dec 2018 07:59:11 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=lists.infradead.org; s=bombadil.20170209; h=Sender: Content-Transfer-Encoding:Content-Type:MIME-Version:Cc:List-Subscribe: List-Help:List-Post:List-Archive:List-Unsubscribe:List-Id:References: In-Reply-To:Message-Id:Date:Subject:To:From:Reply-To:Content-ID: Content-Description:Resent-Date:Resent-From:Resent-Sender:Resent-To:Resent-Cc :Resent-Message-ID:List-Owner; bh=P69x1YCsr5SZqDrfWMwCOw3st0IfPK78ktsAPdm4wVM=; b=ciJLAVPLDboZ7qsC4ZrJJcRp9q yTWcqPhr4XfszEh2eiVDpgtXnv0WSTjCaXd/5CILaVKBqv0winaD2JWL4LdJhK4ID56ty4b9fVq/t HGx98m4cLlFfVigCcFKLGgv8qIrywiJfENEkjr1kO+jaSlyWlx7jfksW+N0Mr/Q7oLXhNALDOnNeF UqU9X4tCS73E1s2FSktfWcKQpWbp756eioYRGp/ks1nWhtHY+421ClCHCEykhfrjOgc+VyNLcgfE0 +g6Di8MYiv1k/kSsyB7rPAmVuq1EJXZ+t24Nyxur7Gb+gOgxravsR7kPqx9DRT3Z7GSR/qLsnrIPY fILaH2oQ==; Received: from localhost ([127.0.0.1] helo=bombadil.infradead.org) by bombadil.infradead.org with esmtp (Exim 4.90_1 #2 (Red Hat Linux)) id 1gZAHU-0002s6-6s; Tue, 18 Dec 2018 07:59:08 +0000 Received: from usa-sjc-mx-foss1.foss.arm.com ([217.140.101.70] helo=foss.arm.com) by bombadil.infradead.org with esmtp (Exim 4.90_1 #2 (Red Hat Linux)) id 1gZAFz-0001ch-5V for linux-arm-kernel@lists.infradead.org; Tue, 18 Dec 2018 07:57:39 +0000 Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.72.51.249]) by 
usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id D4E6215AB; Mon, 17 Dec 2018 23:57:34 -0800 (PST) Received: from a75553-lin.blr.arm.com (a75553-lin.blr.arm.com [10.162.0.175]) by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPA id 0C94A3F575; Mon, 17 Dec 2018 23:57:30 -0800 (PST) From: Amit Daniel Kachhap To: linux-arm-kernel@lists.infradead.org Subject: [PATCH v4 4/6] arm64/kvm: enable pointer authentication cpufeature conditionally Date: Tue, 18 Dec 2018 13:26:48 +0530 Message-Id: <1545119810-12182-5-git-send-email-amit.kachhap@arm.com> X-Mailer: git-send-email 2.7.4 In-Reply-To: <1545119810-12182-1-git-send-email-amit.kachhap@arm.com> References: <1545119810-12182-1-git-send-email-amit.kachhap@arm.com> X-CRM114-Version: 20100106-BlameMichelson ( TRE 0.8.0 (BSD) ) MR-646709E3 X-CRM114-CacheID: sfid-20181217_235735_293953_73ABA81C X-CRM114-Status: GOOD ( 15.54 ) X-BeenThere: linux-arm-kernel@lists.infradead.org X-Mailman-Version: 2.1.21 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Cc: Mark Rutland , Andrew Jones , Marc Zyngier , Catalin Marinas , Will Deacon , Christoffer Dall , Kristina Martsenko , kvmarm@lists.cs.columbia.edu, Ramana Radhakrishnan , Amit Daniel Kachhap , Dave Martin , linux-kernel@vger.kernel.org MIME-Version: 1.0 Sender: "linux-arm-kernel" Errors-To: linux-arm-kernel-bounces+patchwork-linux-arm=patchwork.kernel.org@lists.infradead.org X-Virus-Scanned: ClamAV using ClamSMTP According to userspace settings, pointer authentication cpufeature is enabled/disabled from guests. Signed-off-by: Amit Daniel Kachhap Cc: Mark Rutland Cc: Christoffer Dall Cc: Marc Zyngier Cc: kvmarm@lists.cs.columbia.edu --- Documentation/arm64/pointer-authentication.txt | 3 +++ arch/arm64/kvm/sys_regs.c | 33 ++++++++++++++++---------- 2 files changed, 24 insertions(+), 12 deletions(-) diff --git a/Documentation/arm64/pointer-authentication.txt b/Documentation/arm64/pointer-authentication.txt index 8c0f338..a65dca2 100644 --- a/Documentation/arm64/pointer-authentication.txt +++ b/Documentation/arm64/pointer-authentication.txt @@ -92,3 +92,6 @@ created by passing a flag (KVM_ARM_VCPU_PTRAUTH) requesting this feature to be enabled. Without this flag, pointer authentication is not enabled in KVM guests and attempted use of the feature will result in an UNDEFINED exception being injected into the guest. + +Additionally, when KVM_ARM_VCPU_PTRAUTH is not set then KVM will mask the +feature bits from ID_AA64ISAR1_EL1. 
diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c index 6af6c7d..ce6144a 100644 --- a/arch/arm64/kvm/sys_regs.c +++ b/arch/arm64/kvm/sys_regs.c @@ -1055,7 +1055,7 @@ static bool access_cntp_cval(struct kvm_vcpu *vcpu, } /* Read a sanitised cpufeature ID register by sys_reg_desc */ -static u64 read_id_reg(struct sys_reg_desc const *r, bool raz) +static u64 read_id_reg(struct kvm_vcpu *vcpu, struct sys_reg_desc const *r, bool raz) { u32 id = sys_reg((u32)r->Op0, (u32)r->Op1, (u32)r->CRn, (u32)r->CRm, (u32)r->Op2); @@ -1066,6 +1066,15 @@ static u64 read_id_reg(struct sys_reg_desc const *r, bool raz) kvm_debug("SVE unsupported for guests, suppressing\n"); val &= ~(0xfUL << ID_AA64PFR0_SVE_SHIFT); + } else if (id == SYS_ID_AA64ISAR1_EL1) { + const u64 ptrauth_mask = (0xfUL << ID_AA64ISAR1_APA_SHIFT) | + (0xfUL << ID_AA64ISAR1_API_SHIFT) | + (0xfUL << ID_AA64ISAR1_GPA_SHIFT) | + (0xfUL << ID_AA64ISAR1_GPI_SHIFT); + if (!kvm_arm_vcpu_ptrauth_allowed(vcpu)) { + kvm_debug("ptrauth unsupported for guests, suppressing\n"); + val &= ~ptrauth_mask; + } } else if (id == SYS_ID_AA64MMFR1_EL1) { if (val & (0xfUL << ID_AA64MMFR1_LOR_SHIFT)) kvm_debug("LORegions unsupported for guests, suppressing\n"); @@ -1086,7 +1095,7 @@ static bool __access_id_reg(struct kvm_vcpu *vcpu, if (p->is_write) return write_to_read_only(vcpu, p, r); - p->regval = read_id_reg(r, raz); + p->regval = read_id_reg(vcpu, r, raz); return true; } @@ -1115,17 +1124,17 @@ static u64 sys_reg_to_index(const struct sys_reg_desc *reg); * are stored, and for set_id_reg() we don't allow the effective value * to be changed. */ -static int __get_id_reg(const struct sys_reg_desc *rd, void __user *uaddr, - bool raz) +static int __get_id_reg(struct kvm_vcpu *vcpu, const struct sys_reg_desc *rd, + void __user *uaddr, bool raz) { const u64 id = sys_reg_to_index(rd); - const u64 val = read_id_reg(rd, raz); + const u64 val = read_id_reg(vcpu, rd, raz); return reg_to_user(uaddr, &val, id); } -static int __set_id_reg(const struct sys_reg_desc *rd, void __user *uaddr, - bool raz) +static int __set_id_reg(struct kvm_vcpu *vcpu, const struct sys_reg_desc *rd, + void __user *uaddr, bool raz) { const u64 id = sys_reg_to_index(rd); int err; @@ -1136,7 +1145,7 @@ static int __set_id_reg(const struct sys_reg_desc *rd, void __user *uaddr, return err; /* This is what we mean by invariant: you can't change it. 
*/ - if (val != read_id_reg(rd, raz)) + if (val != read_id_reg(vcpu, rd, raz)) return -EINVAL; return 0; @@ -1145,25 +1154,25 @@ static int __set_id_reg(const struct sys_reg_desc *rd, void __user *uaddr, static int get_id_reg(struct kvm_vcpu *vcpu, const struct sys_reg_desc *rd, const struct kvm_one_reg *reg, void __user *uaddr) { - return __get_id_reg(rd, uaddr, false); + return __get_id_reg(vcpu, rd, uaddr, false); } static int set_id_reg(struct kvm_vcpu *vcpu, const struct sys_reg_desc *rd, const struct kvm_one_reg *reg, void __user *uaddr) { - return __set_id_reg(rd, uaddr, false); + return __set_id_reg(vcpu, rd, uaddr, false); } static int get_raz_id_reg(struct kvm_vcpu *vcpu, const struct sys_reg_desc *rd, const struct kvm_one_reg *reg, void __user *uaddr) { - return __get_id_reg(rd, uaddr, true); + return __get_id_reg(vcpu, rd, uaddr, true); } static int set_raz_id_reg(struct kvm_vcpu *vcpu, const struct sys_reg_desc *rd, const struct kvm_one_reg *reg, void __user *uaddr) { - return __set_id_reg(rd, uaddr, true); + return __set_id_reg(vcpu, rd, uaddr, true); } /* sys_reg_desc initialiser for known cpufeature ID registers */ From patchwork Tue Dec 18 07:56:49 2018 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Amit Daniel Kachhap X-Patchwork-Id: 10734997 Return-Path: Received: from mail.wl.linuxfoundation.org (pdx-wl-mail.web.codeaurora.org [172.30.200.125]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id CD86B1399 for ; Tue, 18 Dec 2018 07:59:57 +0000 (UTC) Received: from mail.wl.linuxfoundation.org (localhost [127.0.0.1]) by mail.wl.linuxfoundation.org (Postfix) with ESMTP id B98DD2A47B for ; Tue, 18 Dec 2018 07:59:57 +0000 (UTC) Received: by mail.wl.linuxfoundation.org (Postfix, from userid 486) id AB18D2A487; Tue, 18 Dec 2018 07:59:57 +0000 (UTC) X-Spam-Checker-Version: SpamAssassin 3.3.1 (2010-03-16) on pdx-wl-mail.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-5.2 required=2.0 tests=BAYES_00,DKIM_SIGNED, DKIM_VALID,MAILING_LIST_MULTI,RCVD_IN_DNSWL_MED autolearn=ham version=3.3.1 Received: from bombadil.infradead.org (bombadil.infradead.org [198.137.202.133]) (using TLSv1.2 with cipher AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by mail.wl.linuxfoundation.org (Postfix) with ESMTPS id 173C42A47B for ; Tue, 18 Dec 2018 07:59:57 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=lists.infradead.org; s=bombadil.20170209; h=Sender: Content-Transfer-Encoding:Content-Type:MIME-Version:Cc:List-Subscribe: List-Help:List-Post:List-Archive:List-Unsubscribe:List-Id:References: In-Reply-To:Message-Id:Date:Subject:To:From:Reply-To:Content-ID: Content-Description:Resent-Date:Resent-From:Resent-Sender:Resent-To:Resent-Cc :Resent-Message-ID:List-Owner; bh=l0lPx3ScgO5pAeLcnV+Ks8ZJn0/Cg4BQ2+nOI5SctXI=; b=pzW8jcXIYCXlBT2upa5k+0fcbR dPktYx/ZZmPqTSnAFlo+TtxmUmtPwMyPW48fnBVgfLlAe6W4j6gLU3J+GYtBvruXJmZ9Sc9ug6Tcv F0htb7Bpej6B/X3BSyzc0BMu1vzAhSF5CZnfd1cX0sD0VaWMo+q8rDIqm2b8vlCGIJBEtPqKeAomg tE9uqP6osVrsZ9MIZheLyIiYkr6jyGx0uVN18nyaJjrXOM/He94+6J2HMAxgZJRPLte+J/GxVjgIn ePo2JK4Yf3Jzlignos5r8KE13nbj5w5uYUgz1Ue8MxxzV7UgEtbB78ByyWSGbwV4TJG+OggvvyeaR sPA2U6mQ==; Received: from localhost ([127.0.0.1] helo=bombadil.infradead.org) by bombadil.infradead.org with esmtp (Exim 4.90_1 #2 (Red Hat Linux)) id 1gZAIB-0003ay-BZ; Tue, 18 Dec 2018 07:59:51 +0000 Received: from usa-sjc-mx-foss1.foss.arm.com ([217.140.101.70] helo=foss.arm.com) by bombadil.infradead.org with 
esmtp (Exim 4.90_1 #2 (Red Hat Linux)) id 1gZAGF-0001hW-4L for linux-arm-kernel@lists.infradead.org; Tue, 18 Dec 2018 07:58:01 +0000 Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.72.51.249]) by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id 9401B80D; Mon, 17 Dec 2018 23:57:40 -0800 (PST) Received: from a75553-lin.blr.arm.com (a75553-lin.blr.arm.com [10.162.0.175]) by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPA id BFACE3F575; Mon, 17 Dec 2018 23:57:36 -0800 (PST) From: Amit Daniel Kachhap To: linux-arm-kernel@lists.infradead.org Subject: [PATCH v4 5/6] arm64/kvm: control accessibility of ptrauth key registers Date: Tue, 18 Dec 2018 13:26:49 +0530 Message-Id: <1545119810-12182-6-git-send-email-amit.kachhap@arm.com> X-Mailer: git-send-email 2.7.4 In-Reply-To: <1545119810-12182-1-git-send-email-amit.kachhap@arm.com> References: <1545119810-12182-1-git-send-email-amit.kachhap@arm.com> X-CRM114-Version: 20100106-BlameMichelson ( TRE 0.8.0 (BSD) ) MR-646709E3 X-CRM114-CacheID: sfid-20181217_235752_436603_E239D187 X-CRM114-Status: GOOD ( 16.73 ) X-BeenThere: linux-arm-kernel@lists.infradead.org X-Mailman-Version: 2.1.21 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Cc: Mark Rutland , Andrew Jones , Marc Zyngier , Catalin Marinas , Will Deacon , Christoffer Dall , Kristina Martsenko , kvmarm@lists.cs.columbia.edu, Ramana Radhakrishnan , Amit Daniel Kachhap , Dave Martin , linux-kernel@vger.kernel.org MIME-Version: 1.0 Sender: "linux-arm-kernel" Errors-To: linux-arm-kernel-bounces+patchwork-linux-arm=patchwork.kernel.org@lists.infradead.org X-Virus-Scanned: ClamAV using ClamSMTP According to userspace settings, ptrauth key registers are conditionally present in guest system register list based on user specified flag KVM_ARM_VCPU_PTRAUTH. Signed-off-by: Amit Daniel Kachhap Cc: Mark Rutland Cc: Christoffer Dall Cc: Marc Zyngier Cc: kvmarm@lists.cs.columbia.edu --- Documentation/arm64/pointer-authentication.txt | 3 +- arch/arm64/kvm/sys_regs.c | 42 +++++++++++++++++++------- 2 files changed, 33 insertions(+), 12 deletions(-) diff --git a/Documentation/arm64/pointer-authentication.txt b/Documentation/arm64/pointer-authentication.txt index a65dca2..729055a 100644 --- a/Documentation/arm64/pointer-authentication.txt +++ b/Documentation/arm64/pointer-authentication.txt @@ -94,4 +94,5 @@ in KVM guests and attempted use of the feature will result in an UNDEFINED exception being injected into the guest. Additionally, when KVM_ARM_VCPU_PTRAUTH is not set then KVM will mask the -feature bits from ID_AA64ISAR1_EL1. +feature bits from ID_AA64ISAR1_EL1 and pointer authentication key registers +are hidden from userspace. 
diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c index ce6144a..09302b2 100644 --- a/arch/arm64/kvm/sys_regs.c +++ b/arch/arm64/kvm/sys_regs.c @@ -1343,12 +1343,6 @@ static const struct sys_reg_desc sys_reg_descs[] = { { SYS_DESC(SYS_TTBR1_EL1), access_vm_reg, reset_unknown, TTBR1_EL1 }, { SYS_DESC(SYS_TCR_EL1), access_vm_reg, reset_val, TCR_EL1, 0 }, - PTRAUTH_KEY(APIA), - PTRAUTH_KEY(APIB), - PTRAUTH_KEY(APDA), - PTRAUTH_KEY(APDB), - PTRAUTH_KEY(APGA), - { SYS_DESC(SYS_AFSR0_EL1), access_vm_reg, reset_unknown, AFSR0_EL1 }, { SYS_DESC(SYS_AFSR1_EL1), access_vm_reg, reset_unknown, AFSR1_EL1 }, { SYS_DESC(SYS_ESR_EL1), access_vm_reg, reset_unknown, ESR_EL1 }, @@ -1500,6 +1494,14 @@ static const struct sys_reg_desc sys_reg_descs[] = { { SYS_DESC(SYS_FPEXC32_EL2), NULL, reset_val, FPEXC32_EL2, 0x70 }, }; +static const struct sys_reg_desc ptrauth_reg_descs[] = { + PTRAUTH_KEY(APIA), + PTRAUTH_KEY(APIB), + PTRAUTH_KEY(APDA), + PTRAUTH_KEY(APDB), + PTRAUTH_KEY(APGA), +}; + static bool trap_dbgidr(struct kvm_vcpu *vcpu, struct sys_reg_params *p, const struct sys_reg_desc *r) @@ -2100,6 +2102,8 @@ static int emulate_sys_reg(struct kvm_vcpu *vcpu, r = find_reg(params, table, num); if (!r) r = find_reg(params, sys_reg_descs, ARRAY_SIZE(sys_reg_descs)); + if (!r && kvm_arm_vcpu_ptrauth_allowed(vcpu)) + r = find_reg(params, ptrauth_reg_descs, ARRAY_SIZE(ptrauth_reg_descs)); if (likely(r)) { perform_access(vcpu, params, r); @@ -2213,6 +2217,8 @@ static const struct sys_reg_desc *index_to_sys_reg_desc(struct kvm_vcpu *vcpu, r = find_reg_by_id(id, ¶ms, table, num); if (!r) r = find_reg(¶ms, sys_reg_descs, ARRAY_SIZE(sys_reg_descs)); + if (!r && kvm_arm_vcpu_ptrauth_allowed(vcpu)) + r = find_reg(¶ms, ptrauth_reg_descs, ARRAY_SIZE(ptrauth_reg_descs)); /* Not saved in the sys_reg array and not otherwise accessible? */ if (r && !(r->reg || r->get_user)) @@ -2494,18 +2500,22 @@ static int walk_one_sys_reg(const struct sys_reg_desc *rd, } /* Assumed ordered tables, see kvm_sys_reg_table_init. */ -static int walk_sys_regs(struct kvm_vcpu *vcpu, u64 __user *uind) +static int walk_sys_regs(struct kvm_vcpu *vcpu, u64 __user *uind, + const struct sys_reg_desc *desc, unsigned int len) { const struct sys_reg_desc *i1, *i2, *end1, *end2; unsigned int total = 0; size_t num; int err; + if (desc == ptrauth_reg_descs && !kvm_arm_vcpu_ptrauth_allowed(vcpu)) + return total; + /* We check for duplicates here, to allow arch-specific overrides. 
*/ i1 = get_target_table(vcpu->arch.target, true, &num); end1 = i1 + num; - i2 = sys_reg_descs; - end2 = sys_reg_descs + ARRAY_SIZE(sys_reg_descs); + i2 = desc; + end2 = desc + len; BUG_ON(i1 == end1 || i2 == end2); @@ -2533,7 +2543,10 @@ unsigned long kvm_arm_num_sys_reg_descs(struct kvm_vcpu *vcpu) { return ARRAY_SIZE(invariant_sys_regs) + num_demux_regs() - + walk_sys_regs(vcpu, (u64 __user *)NULL); + + walk_sys_regs(vcpu, (u64 __user *)NULL, sys_reg_descs, + ARRAY_SIZE(sys_reg_descs)) + + walk_sys_regs(vcpu, (u64 __user *)NULL, ptrauth_reg_descs, + ARRAY_SIZE(ptrauth_reg_descs)); } int kvm_arm_copy_sys_reg_indices(struct kvm_vcpu *vcpu, u64 __user *uindices) @@ -2548,7 +2561,12 @@ int kvm_arm_copy_sys_reg_indices(struct kvm_vcpu *vcpu, u64 __user *uindices) uindices++; } - err = walk_sys_regs(vcpu, uindices); + err = walk_sys_regs(vcpu, uindices, sys_reg_descs, ARRAY_SIZE(sys_reg_descs)); + if (err < 0) + return err; + uindices += err; + + err = walk_sys_regs(vcpu, uindices, ptrauth_reg_descs, ARRAY_SIZE(ptrauth_reg_descs)); if (err < 0) return err; uindices += err; @@ -2582,6 +2600,7 @@ void kvm_sys_reg_table_init(void) BUG_ON(check_sysreg_table(cp15_regs, ARRAY_SIZE(cp15_regs))); BUG_ON(check_sysreg_table(cp15_64_regs, ARRAY_SIZE(cp15_64_regs))); BUG_ON(check_sysreg_table(invariant_sys_regs, ARRAY_SIZE(invariant_sys_regs))); + BUG_ON(check_sysreg_table(ptrauth_reg_descs, ARRAY_SIZE(ptrauth_reg_descs))); /* We abuse the reset function to overwrite the table itself. */ for (i = 0; i < ARRAY_SIZE(invariant_sys_regs); i++) @@ -2623,6 +2642,7 @@ void kvm_reset_sys_regs(struct kvm_vcpu *vcpu) /* Generic chip reset first (so target could override). */ reset_sys_reg_descs(vcpu, sys_reg_descs, ARRAY_SIZE(sys_reg_descs)); + reset_sys_reg_descs(vcpu, ptrauth_reg_descs, ARRAY_SIZE(ptrauth_reg_descs)); table = get_target_table(vcpu->arch.target, true, &num); reset_sys_reg_descs(vcpu, table, num); From patchwork Tue Dec 18 07:56:50 2018 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Amit Daniel Kachhap X-Patchwork-Id: 10734995 Return-Path: Received: from mail.wl.linuxfoundation.org (pdx-wl-mail.web.codeaurora.org [172.30.200.125]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id E37991399 for ; Tue, 18 Dec 2018 07:59:46 +0000 (UTC) Received: from mail.wl.linuxfoundation.org (localhost [127.0.0.1]) by mail.wl.linuxfoundation.org (Postfix) with ESMTP id D20D22A47B for ; Tue, 18 Dec 2018 07:59:46 +0000 (UTC) Received: by mail.wl.linuxfoundation.org (Postfix, from userid 486) id C2CEA2A487; Tue, 18 Dec 2018 07:59:46 +0000 (UTC) X-Spam-Checker-Version: SpamAssassin 3.3.1 (2010-03-16) on pdx-wl-mail.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-5.2 required=2.0 tests=BAYES_00,DKIM_SIGNED, DKIM_VALID,MAILING_LIST_MULTI,RCVD_IN_DNSWL_MED autolearn=ham version=3.3.1 Received: from bombadil.infradead.org (bombadil.infradead.org [198.137.202.133]) (using TLSv1.2 with cipher AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by mail.wl.linuxfoundation.org (Postfix) with ESMTPS id 420322A47B for ; Tue, 18 Dec 2018 07:59:46 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=lists.infradead.org; s=bombadil.20170209; h=Sender: Content-Transfer-Encoding:Content-Type:MIME-Version:Cc:List-Subscribe: List-Help:List-Post:List-Archive:List-Unsubscribe:List-Id:References: In-Reply-To:Message-Id:Date:Subject:To:From:Reply-To:Content-ID: 
From: Amit Daniel Kachhap
To: linux-arm-kernel@lists.infradead.org
Subject: [PATCH v4 6/6] arm/kvm: arm64: Add a vcpu feature for pointer authentication
Date: Tue, 18 Dec 2018 13:26:50 +0530
Message-Id: <1545119810-12182-7-git-send-email-amit.kachhap@arm.com>
In-Reply-To: <1545119810-12182-1-git-send-email-amit.kachhap@arm.com>
References: <1545119810-12182-1-git-send-email-amit.kachhap@arm.com>
Cc: Mark Rutland, Andrew Jones, Marc Zyngier, Catalin Marinas, Will Deacon,
    Christoffer Dall, Kristina Martsenko, kvmarm@lists.cs.columbia.edu,
    Ramana Radhakrishnan, Amit Daniel Kachhap, Dave Martin,
    linux-kernel@vger.kernel.org

This is a runtime feature and can be enabled with the --ptrauth option.
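As an illustration of the resulting user interface (only the --ptrauth option is added by this patch; the remaining lkvm arguments below are an assumed example invocation, not part of the change), pointer authentication would be requested for a guest with something like:

    $ lkvm run -c 2 -m 512 -k Image --ptrauth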
Signed-off-by: Amit Daniel Kachhap
Cc: Mark Rutland
Cc: Christoffer Dall
Cc: Marc Zyngier
Cc: kvmarm@lists.cs.columbia.edu
---
 arm/aarch32/include/kvm/kvm-cpu-arch.h    | 2 ++
 arm/aarch64/include/asm/kvm.h             | 3 +++
 arm/aarch64/include/kvm/kvm-arch.h        | 1 +
 arm/aarch64/include/kvm/kvm-config-arch.h | 4 +++-
 arm/aarch64/include/kvm/kvm-cpu-arch.h    | 2 ++
 arm/aarch64/kvm-cpu.c                     | 5 +++++
 arm/include/arm-common/kvm-config-arch.h  | 1 +
 arm/kvm-cpu.c                             | 7 +++++++
 include/linux/kvm.h                       | 1 +
 9 files changed, 25 insertions(+), 1 deletion(-)

diff --git a/arm/aarch32/include/kvm/kvm-cpu-arch.h b/arm/aarch32/include/kvm/kvm-cpu-arch.h
index d28ea67..5779767 100644
--- a/arm/aarch32/include/kvm/kvm-cpu-arch.h
+++ b/arm/aarch32/include/kvm/kvm-cpu-arch.h
@@ -13,4 +13,6 @@
 #define ARM_CPU_ID		0, 0, 0
 #define ARM_CPU_ID_MPIDR	5
 
+static inline unsigned int kvm__cpu_ptrauth_get_feature(void) { return 0; }
+
 #endif /* KVM__KVM_CPU_ARCH_H */
diff --git a/arm/aarch64/include/asm/kvm.h b/arm/aarch64/include/asm/kvm.h
index c286035..0fd183d 100644
--- a/arm/aarch64/include/asm/kvm.h
+++ b/arm/aarch64/include/asm/kvm.h
@@ -98,6 +98,9 @@ struct kvm_regs {
 #define KVM_ARM_VCPU_PSCI_0_2		2 /* CPU uses PSCI v0.2 */
 #define KVM_ARM_VCPU_PMU_V3		3 /* Support guest PMUv3 */
 
+/* CPU uses address authentication and A key */
+#define KVM_ARM_VCPU_PTRAUTH		4
+
 struct kvm_vcpu_init {
 	__u32 target;
 	__u32 features[7];
diff --git a/arm/aarch64/include/kvm/kvm-arch.h b/arm/aarch64/include/kvm/kvm-arch.h
index 9de623a..bd566cb 100644
--- a/arm/aarch64/include/kvm/kvm-arch.h
+++ b/arm/aarch64/include/kvm/kvm-arch.h
@@ -11,4 +11,5 @@
 
 #include "arm-common/kvm-arch.h"
 
+
 #endif /* KVM__KVM_ARCH_H */
diff --git a/arm/aarch64/include/kvm/kvm-config-arch.h b/arm/aarch64/include/kvm/kvm-config-arch.h
index 04be43d..2074684 100644
--- a/arm/aarch64/include/kvm/kvm-config-arch.h
+++ b/arm/aarch64/include/kvm/kvm-config-arch.h
@@ -8,7 +8,9 @@
 			"Create PMUv3 device"),				\
 	OPT_U64('\0', "kaslr-seed", &(cfg)->kaslr_seed,			\
 		"Specify random seed for Kernel Address Space "		\
-		"Layout Randomization (KASLR)"),
+		"Layout Randomization (KASLR)"),			\
+	OPT_BOOLEAN('\0', "ptrauth", &(cfg)->has_ptrauth,		\
+		    "Enable address authentication"),
 
 #include "arm-common/kvm-config-arch.h"
diff --git a/arm/aarch64/include/kvm/kvm-cpu-arch.h b/arm/aarch64/include/kvm/kvm-cpu-arch.h
index a9d8563..f7b64b7 100644
--- a/arm/aarch64/include/kvm/kvm-cpu-arch.h
+++ b/arm/aarch64/include/kvm/kvm-cpu-arch.h
@@ -17,4 +17,6 @@
 #define ARM_CPU_CTRL		3, 0, 1, 0
 #define ARM_CPU_CTRL_SCTLR_EL1	0
 
+unsigned int kvm__cpu_ptrauth_get_feature(void);
+
 #endif /* KVM__KVM_CPU_ARCH_H */
diff --git a/arm/aarch64/kvm-cpu.c b/arm/aarch64/kvm-cpu.c
index 1b29374..10da2cb 100644
--- a/arm/aarch64/kvm-cpu.c
+++ b/arm/aarch64/kvm-cpu.c
@@ -123,6 +123,11 @@ void kvm_cpu__reset_vcpu(struct kvm_cpu *vcpu)
 	return reset_vcpu_aarch64(vcpu);
 }
 
+unsigned int kvm__cpu_ptrauth_get_feature(void)
+{
+	return (1UL << KVM_ARM_VCPU_PTRAUTH);
+}
+
 int kvm_cpu__get_endianness(struct kvm_cpu *vcpu)
 {
 	struct kvm_one_reg reg;
diff --git a/arm/include/arm-common/kvm-config-arch.h b/arm/include/arm-common/kvm-config-arch.h
index 6a196f1..eb872db 100644
--- a/arm/include/arm-common/kvm-config-arch.h
+++ b/arm/include/arm-common/kvm-config-arch.h
@@ -10,6 +10,7 @@ struct kvm_config_arch {
 	bool		aarch32_guest;
 	bool		has_pmuv3;
 	u64		kaslr_seed;
+	bool		has_ptrauth;
 	enum irqchip_type irqchip;
 };
diff --git a/arm/kvm-cpu.c b/arm/kvm-cpu.c
index 7780251..5afd727 100644
--- a/arm/kvm-cpu.c
+++ b/arm/kvm-cpu.c
@@ -68,6 +68,13 @@ struct kvm_cpu *kvm_cpu__arch_init(struct kvm *kvm, unsigned long cpu_id)
 		vcpu_init.features[0] |= (1UL << KVM_ARM_VCPU_PSCI_0_2);
 	}
 
+	/* Set KVM_ARM_VCPU_PTRAUTH if supported and requested */
+	if (kvm__supports_extension(kvm, KVM_CAP_ARM_PTRAUTH)) {
+		if (kvm->cfg.arch.has_ptrauth)
+			vcpu_init.features[0] |=
+				kvm__cpu_ptrauth_get_feature();
+	}
+
 	/*
 	 * If the preferred target ioctl is successful then
 	 * use preferred target else try each and every target type
diff --git a/include/linux/kvm.h b/include/linux/kvm.h
index f51d508..ffd8f5c 100644
--- a/include/linux/kvm.h
+++ b/include/linux/kvm.h
@@ -883,6 +883,7 @@ struct kvm_ppc_resize_hpt {
 #define KVM_CAP_PPC_MMU_RADIX 134
 #define KVM_CAP_PPC_MMU_HASH_V3 135
 #define KVM_CAP_IMMEDIATE_EXIT 136
+#define KVM_CAP_ARM_PTRAUTH 166
 
 #ifdef KVM_CAP_IRQ_ROUTING
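For context, here is a minimal sketch of the raw KVM API sequence that the arm/kvm-cpu.c hunk above drives, written against the UAPI constants added in this series (KVM_CAP_ARM_PTRAUTH, KVM_ARM_VCPU_PTRAUTH). It is illustrative only, not code from the patch: vm_fd and vcpu_fd are assumed to be already-open KVM vm/vcpu descriptors, and the surrounding VMM plumbing and error handling are elided.

    #include <sys/ioctl.h>
    #include <linux/kvm.h>

    /*
     * Illustrative only: initialise one vcpu with pointer authentication
     * enabled when the kernel advertises KVM_CAP_ARM_PTRAUTH.
     */
    static int vcpu_init_with_ptrauth(int vm_fd, int vcpu_fd)
    {
            struct kvm_vcpu_init init;
            int ret;

            /* Let KVM pick the preferred target; features[] comes back zeroed. */
            ret = ioctl(vm_fd, KVM_ARM_PREFERRED_TARGET, &init);
            if (ret < 0)
                    return ret;

            /* Request the feature only when the capability is present. */
            if (ioctl(vm_fd, KVM_CHECK_EXTENSION, KVM_CAP_ARM_PTRAUTH) > 0)
                    init.features[0] |= 1U << KVM_ARM_VCPU_PTRAUTH;

            /* Initialise the vcpu with the chosen feature set. */
            return ioctl(vcpu_fd, KVM_ARM_VCPU_INIT, &init);
    }

kvmtool wraps the same sequence behind kvm__supports_extension() and kvm__cpu_ptrauth_get_feature(), so the aarch32 build compiles the feature away via its stub while the aarch64 build sets the feature bit only when both the kernel capability and the --ptrauth option are present.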