From patchwork Thu Apr 30 14:48:17 2020
X-Patchwork-Submitter: David Brazdil
X-Patchwork-Id: 11520603
From: David Brazdil
To: Catalin Marinas, James Morse, Julien Thierry, Marc Zyngier,
 Suzuki K Poulose, Will Deacon
Cc: David Brazdil, Quentin Perret, kvmarm@lists.cs.columbia.edu,
 linux-arm-kernel@lists.infradead.org, linux-kernel@vger.kernel.org
Subject: [PATCH 01/15] arm64: kvm: Unify users of HVC instruction
Date: Thu, 30 Apr 2020 15:48:17 +0100
Message-Id: <20200430144831.59194-2-dbrazdil@google.com>
In-Reply-To: <20200430144831.59194-1-dbrazdil@google.com>
References: <20200430144831.59194-1-dbrazdil@google.com>

From: Quentin Perret

Currently, the arm64 KVM code provides the __kvm_call_hyp assembly
procedure, which does nothing but issue the HVC instruction. It is used
to call functions by pointer in EL2 under nVHE, and is abused by
__cpu_init_hyp_mode to pass a data pointer. The hyp-stub code, on the
other hand, has its own assembly procedures for (re)setting hyp vectors.
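For illustration, the convention this patch converges on looks like this
at the call sites (a sketch only; the example_* helper names are
hypothetical, the call forms mirror the patch):

	u64 __kvm_call_hyp(unsigned long arg, ...);

	static inline void example_set_vectors(phys_addr_t vectors)
	{
		/* Stub call: the first argument is an HVC_* immediate. */
		__kvm_call_hyp(HVC_SET_VECTORS, vectors);
	}

	static inline void example_flush_vm_context(void)
	{
		/* nVHE call: the first argument is a hyp function address. */
		__kvm_call_hyp((unsigned long)kvm_ksym_ref(__kvm_flush_vm_context));
	}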
In preparation for a clean-up of the KVM hypercall interface, unify all
HVC users behind __kvm_call_hyp and remove comments about the expected
meaning of arguments. No functional changes intended.

Signed-off-by: Quentin Perret
Signed-off-by: David Brazdil
---
 arch/arm64/include/asm/kvm_host.h | 12 ++++++-----
 arch/arm64/include/asm/virt.h     | 33 ++++++++++++++++++++++++++++--
 arch/arm64/kernel/hyp-stub.S      | 34 -------------------------------
 arch/arm64/kvm/hyp.S              | 13 +-----------
 4 files changed, 39 insertions(+), 53 deletions(-)

diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
index 32c8a675e5a4..e61143d6602d 100644
--- a/arch/arm64/include/asm/kvm_host.h
+++ b/arch/arm64/include/asm/kvm_host.h
@@ -25,6 +25,7 @@
 #include
 #include
 #include
+#include

 #define __KVM_HAVE_ARCH_INTC_INITIALIZED

@@ -446,7 +447,8 @@ int kvm_test_age_hva(struct kvm *kvm, unsigned long hva);
 void kvm_arm_halt_guest(struct kvm *kvm);
 void kvm_arm_resume_guest(struct kvm *kvm);

-u64 __kvm_call_hyp(void *hypfn, ...);
+#define kvm_call_hyp_nvhe(hypfn, ...) \
+	__kvm_call_hyp((unsigned long)kvm_ksym_ref(hypfn), ##__VA_ARGS__)

 /*
  * The couple of isb() below are there to guarantee the same behaviour
@@ -459,7 +461,7 @@ u64 __kvm_call_hyp(void *hypfn, ...);
 			f(__VA_ARGS__);					\
 			isb();						\
 		} else {						\
-			__kvm_call_hyp(kvm_ksym_ref(f), ##__VA_ARGS__);	\
+			kvm_call_hyp_nvhe(f, ##__VA_ARGS__);		\
 		}							\
 	} while(0)
@@ -471,8 +473,7 @@
 			ret = f(__VA_ARGS__);				\
 			isb();						\
 		} else {						\
-			ret = __kvm_call_hyp(kvm_ksym_ref(f),		\
-					     ##__VA_ARGS__);		\
+			ret = kvm_call_hyp_nvhe(f, ##__VA_ARGS__);	\
 		}							\
 									\
 		ret;							\
@@ -551,7 +552,8 @@ static inline void __cpu_init_hyp_mode(phys_addr_t pgd_ptr,
 	 * cpus_have_const_cap() wrapper.
 	 */
 	BUG_ON(!system_capabilities_finalized());
-	__kvm_call_hyp((void *)pgd_ptr, hyp_stack_ptr, vector_ptr, tpidr_el2);
+	__kvm_call_hyp((unsigned long)pgd_ptr, hyp_stack_ptr, vector_ptr,
+		       tpidr_el2);

 	/*
 	 * Disabling SSBD on a non-VHE system requires us to enable SSBS
diff --git a/arch/arm64/include/asm/virt.h b/arch/arm64/include/asm/virt.h
index 61fd26752adc..fdc11f819b06 100644
--- a/arch/arm64/include/asm/virt.h
+++ b/arch/arm64/include/asm/virt.h
@@ -62,8 +62,37 @@
  */
 extern u32 __boot_cpu_mode[2];

-void __hyp_set_vectors(phys_addr_t phys_vector_base);
-void __hyp_reset_vectors(void);
+/* Make HVC call into the hypervisor. */
+extern u64 __kvm_call_hyp(unsigned long arg, ...);
+
+/*
+ * __hyp_set_vectors: Call this after boot to set the initial hypervisor
+ * vectors as part of hypervisor installation. On an SMP system, this should
+ * be called on each CPU.
+ *
+ * @phys_vector_base must be the physical address of the new vector table, and
+ * must be 2KB aligned.
+ *
+ * Before calling this, you must check that the stub hypervisor is installed
+ * everywhere, by waiting for any secondary CPUs to be brought up and then
+ * checking that is_hyp_mode_available() is true.
+ *
+ * If not, there is a pre-existing hypervisor, some CPUs failed to boot, or
+ * something else went wrong... in such cases, trying to install a new
+ * hypervisor is unlikely to work as desired.
+ *
+ * When you call into your shiny new hypervisor, sp_el2 will contain junk,
+ * so you will need to set that to something sensible at the new hypervisor's
+ * initialisation entry point.
+ */
+static inline void __hyp_set_vectors(phys_addr_t phys_vector_base)
+{
+	__kvm_call_hyp(HVC_SET_VECTORS, phys_vector_base);
+}
+
+static inline void __hyp_reset_vectors(void)
+{
+	__kvm_call_hyp(HVC_RESET_VECTORS);
+}

 /* Reports the availability of HYP mode */
 static inline bool is_hyp_mode_available(void)
diff --git a/arch/arm64/kernel/hyp-stub.S b/arch/arm64/kernel/hyp-stub.S
index e473ead806ed..78d4ec5c4290 100644
--- a/arch/arm64/kernel/hyp-stub.S
+++ b/arch/arm64/kernel/hyp-stub.S
@@ -84,37 +84,3 @@ ENDPROC(\label)
 	invalid_vector	el1_irq_invalid
 	invalid_vector	el1_fiq_invalid
 	invalid_vector	el1_error_invalid
-
-/*
- * __hyp_set_vectors: Call this after boot to set the initial hypervisor
- * vectors as part of hypervisor installation. On an SMP system, this should
- * be called on each CPU.
- *
- * x0 must be the physical address of the new vector table, and must be
- * 2KB aligned.
- *
- * Before calling this, you must check that the stub hypervisor is installed
- * everywhere, by waiting for any secondary CPUs to be brought up and then
- * checking that is_hyp_mode_available() is true.
- *
- * If not, there is a pre-existing hypervisor, some CPUs failed to boot, or
- * something else went wrong... in such cases, trying to install a new
- * hypervisor is unlikely to work as desired.
- *
- * When you call into your shiny new hypervisor, sp_el2 will contain junk,
- * so you will need to set that to something sensible at the new hypervisor's
- * initialisation entry point.
- */
-
-ENTRY(__hyp_set_vectors)
-	mov	x1, x0
-	mov	x0, #HVC_SET_VECTORS
-	hvc	#0
-	ret
-ENDPROC(__hyp_set_vectors)
-
-ENTRY(__hyp_reset_vectors)
-	mov	x0, #HVC_RESET_VECTORS
-	hvc	#0
-	ret
-ENDPROC(__hyp_reset_vectors)
diff --git a/arch/arm64/kvm/hyp.S b/arch/arm64/kvm/hyp.S
index 3c79a1124af2..f6c9501ddfc9 100644
--- a/arch/arm64/kvm/hyp.S
+++ b/arch/arm64/kvm/hyp.S
@@ -11,22 +11,11 @@
 #include

 /*
- * u64 __kvm_call_hyp(void *hypfn, ...);
+ * u64 __kvm_call_hyp(unsigned long arg, ...);
  *
  * This is not really a variadic function in the classic C-way and care must
  * be taken when calling this to ensure parameters are passed in registers
  * only, since the stack will change between the caller and the callee.
- *
- * Call the function with the first argument containing a pointer to the
- * function you wish to call in Hyp mode, and subsequent arguments will be
- * passed as x0, x1, and x2 (a maximum of 3 arguments in addition to the
- * function pointer can be passed). The function being called must be mapped
- * in Hyp mode (see init_hyp_mode in arch/arm/kvm/arm.c). Return values are
- * passed in x0.
- *
- * A function pointer with a value less than 0xfff has a special meaning,
- * and is used to implement hyp stubs in the same way as in
- * arch/arm64/kernel/hyp_stub.S.
  */
 SYM_FUNC_START(__kvm_call_hyp)
 	hvc	#0

From patchwork Thu Apr 30 14:48:18 2020
X-Patchwork-Submitter: David Brazdil
X-Patchwork-Id: 11520605
From: David Brazdil
To: Catalin Marinas, James Morse, Julien Thierry, Marc Zyngier,
 Suzuki K Poulose, Will Deacon
Cc: David Brazdil, Quentin Perret, kvmarm@lists.cs.columbia.edu,
 linux-arm-kernel@lists.infradead.org, linux-kernel@vger.kernel.org
Subject: [PATCH 02/15] arm64: kvm: Formalize host-hyp hypcall ABI
Date: Thu, 30 Apr 2020 15:48:18 +0100
Message-Id: <20200430144831.59194-3-dbrazdil@google.com>
In-Reply-To: <20200430144831.59194-1-dbrazdil@google.com>
References: <20200430144831.59194-1-dbrazdil@google.com>

From: Quentin Perret

In preparation for removing the hyp code from the TCB, convert the
current EL1 - EL2 KVM ABI to use hypercall numbers in lieu of direct
function pointers. While this in itself does not provide any isolation,
it is a first step towards having a properly defined EL2 ABI.

The implementation is based on a kvm_hcall_table which holds function
pointers to the hyp_text functions corresponding to each hypercall.
This is strongly inspired by the arm64 sys_call_table, the main
difference being the lack of hcall wrappers at this stage.
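The table is populated with an X-macro include, in the same spirit as
the syscall table. A minimal self-contained sketch of the technique
(made-up hcall names and simplified types, not the kernel's):

	/* hcalls.h -- one block per hypercall: a fixed index + an X-macro hook. */
	#ifndef HCALL
	#define HCALL(x)
	#endif

	#define HCALL_TABLE_IDX_hc_first	0
	HCALL(hc_first)

	#define HCALL_TABLE_IDX_hc_second	1
	HCALL(hc_second)

	#define HCALL_TABLE_IDX_SIZE		2

	/* table.c -- redefine HCALL as an initializer and re-include the list. */
	typedef long (*hcall_fn_t)(void);

	long hc_first(void)  { return 100; }
	long hc_second(void) { return 200; }

	/* Default handler for unimplemented slots. */
	static long hc_ni(void)
	{
		return -1;	/* stand-in for -ENOSYS */
	}

	#undef HCALL
	#define HCALL(name)	[HCALL_TABLE_IDX_##name] = (hcall_fn_t)name,

	const hcall_fn_t hcall_table[HCALL_TABLE_IDX_SIZE] = {
		[0 ... HCALL_TABLE_IDX_SIZE - 1] = hc_ni, /* GNU range initializer */
	#include "hcalls.h"
	};

The host then passes a base offset plus the table index in place of a
function pointer, and EL2 indexes the table after a range check.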
Signed-off-by: Quentin Perret
Signed-off-by: David Brazdil
---
 arch/arm64/include/asm/kvm_host.h            | 20 ++++++-
 arch/arm64/include/asm/kvm_host_hypercalls.h | 62 ++++++++++++++++++++
 arch/arm64/kvm/hyp/Makefile                  |  2 +-
 arch/arm64/kvm/hyp/host_hypercall.c          | 22 +++++++
 arch/arm64/kvm/hyp/hyp-entry.S               | 36 +++++++++++-
 5 files changed, 137 insertions(+), 5 deletions(-)
 create mode 100644 arch/arm64/include/asm/kvm_host_hypercalls.h
 create mode 100644 arch/arm64/kvm/hyp/host_hypercall.c

diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
index e61143d6602d..5dec3b06f2b7 100644
--- a/arch/arm64/include/asm/kvm_host.h
+++ b/arch/arm64/include/asm/kvm_host.h
@@ -24,6 +24,7 @@
 #include
 #include
 #include
+#include
 #include
 #include

@@ -447,10 +448,25 @@ int kvm_test_age_hva(struct kvm *kvm, unsigned long hva);
 void kvm_arm_halt_guest(struct kvm *kvm);
 void kvm_arm_resume_guest(struct kvm *kvm);

-#define kvm_call_hyp_nvhe(hypfn, ...) \
-	__kvm_call_hyp((unsigned long)kvm_ksym_ref(hypfn), ##__VA_ARGS__)
+/*
+ * Call the hypervisor via HVC. The hcall parameter must be the name of
+ * a hypercall as defined in <asm/kvm_host_hypercalls.h>.
+ *
+ * Hypercalls take at most 6 parameters.
+ */
+#define kvm_call_hyp_nvhe(hcall, ...) \
+	__kvm_call_hyp(KVM_HOST_HCALL_NR(hcall), ##__VA_ARGS__)

 /*
+ * u64 kvm_call_hyp(hcall, ...);
+ *
+ * Call the hypervisor. The hcall parameter must be the name of a hypercall
+ * defined in <asm/kvm_host_hypercalls.h>. In the VHE case, this will be
+ * translated into a direct function call.
+ *
+ * A hcall value of less than 0xfff has a special meaning and is used to
+ * implement hyp stubs in the same way as in arch/arm64/kernel/hyp_stub.S.
+ *
  * The couple of isb() below are there to guarantee the same behaviour
  * on VHE as on !VHE, where the eret to EL1 acts as a context
  * synchronization event.
diff --git a/arch/arm64/include/asm/kvm_host_hypercalls.h b/arch/arm64/include/asm/kvm_host_hypercalls.h
new file mode 100644
index 000000000000..af8ad505d816
--- /dev/null
+++ b/arch/arm64/include/asm/kvm_host_hypercalls.h
@@ -0,0 +1,62 @@
+/* SPDX-License-Identifier: GPL-2.0-only */
+/*
+ * Copyright (C) 2020 Google, inc
+ */
+
+#ifndef __KVM_HOST_HCALL
+#define __KVM_HOST_HCALL(x)
+#endif
+
+#define __KVM_HOST_HCALL_TABLE_IDX___kvm_enable_ssbs		0
+__KVM_HOST_HCALL(__kvm_enable_ssbs)
+
+#define __KVM_HOST_HCALL_TABLE_IDX___kvm_get_mdcr_el2		1
+__KVM_HOST_HCALL(__kvm_get_mdcr_el2)
+
+#define __KVM_HOST_HCALL_TABLE_IDX___kvm_timer_set_cntvoff	2
+__KVM_HOST_HCALL(__kvm_timer_set_cntvoff)
+
+#define __KVM_HOST_HCALL_TABLE_IDX___kvm_tlb_flush_local_vmid	3
+__KVM_HOST_HCALL(__kvm_tlb_flush_local_vmid)
+
+#define __KVM_HOST_HCALL_TABLE_IDX___kvm_flush_vm_context	4
+__KVM_HOST_HCALL(__kvm_flush_vm_context)
+
+#define __KVM_HOST_HCALL_TABLE_IDX___kvm_vcpu_run_nvhe		5
+__KVM_HOST_HCALL(__kvm_vcpu_run_nvhe)
+
+#define __KVM_HOST_HCALL_TABLE_IDX___kvm_tlb_flush_vmid		6
+__KVM_HOST_HCALL(__kvm_tlb_flush_vmid)
+
+#define __KVM_HOST_HCALL_TABLE_IDX___kvm_tlb_flush_vmid_ipa	7
+__KVM_HOST_HCALL(__kvm_tlb_flush_vmid_ipa)
+
+#define __KVM_HOST_HCALL_TABLE_IDX___vgic_v3_init_lrs		8
+__KVM_HOST_HCALL(__vgic_v3_init_lrs)
+
+#define __KVM_HOST_HCALL_TABLE_IDX___vgic_v3_get_ich_vtr_el2	9
+__KVM_HOST_HCALL(__vgic_v3_get_ich_vtr_el2)
+
+#define __KVM_HOST_HCALL_TABLE_IDX___vgic_v3_write_vmcr		10
+__KVM_HOST_HCALL(__vgic_v3_write_vmcr)
+
+#define __KVM_HOST_HCALL_TABLE_IDX___vgic_v3_restore_aprs	11
+__KVM_HOST_HCALL(__vgic_v3_restore_aprs)
+
+#define __KVM_HOST_HCALL_TABLE_IDX___vgic_v3_read_vmcr		12
+__KVM_HOST_HCALL(__vgic_v3_read_vmcr)
+
+#define __KVM_HOST_HCALL_TABLE_IDX___vgic_v3_save_aprs		13
+__KVM_HOST_HCALL(__vgic_v3_save_aprs)
+
+#define __KVM_HOST_HCALL_TABLE_IDX_SIZE				14
+
+/* XXX - Arbitrary offset for KVM-related hypercalls */
+#define __KVM_HOST_HCALL_BASE_SHIFT 8
+#define __KVM_HOST_HCALL_BASE (1ULL << __KVM_HOST_HCALL_BASE_SHIFT)
+#define __KVM_HOST_HCALL_END (__KVM_HOST_HCALL_BASE + \
+			      __KVM_HOST_HCALL_TABLE_IDX_SIZE)
+
+/* Hypercall number = kvm offset + table idx */
+#define KVM_HOST_HCALL_NR(name) (__KVM_HOST_HCALL_TABLE_IDX_##name + \
+				 __KVM_HOST_HCALL_BASE)
diff --git a/arch/arm64/kvm/hyp/Makefile b/arch/arm64/kvm/hyp/Makefile
index 8c9880783839..29e2b2cd2fbc 100644
--- a/arch/arm64/kvm/hyp/Makefile
+++ b/arch/arm64/kvm/hyp/Makefile
@@ -9,7 +9,7 @@ ccflags-y += -fno-stack-protector -DDISABLE_BRANCH_PROFILING \
 obj-$(CONFIG_KVM) += hyp.o

 hyp-y := vgic-v3-sr.o timer-sr.o aarch32.o vgic-v2-cpuif-proxy.o sysreg-sr.o \
-	debug-sr.o entry.o switch.o fpsimd.o tlb.o hyp-entry.o
+	debug-sr.o entry.o switch.o fpsimd.o tlb.o host_hypercall.o hyp-entry.o

 # KVM code is run at a different exception code with a different map, so
 # compiler instrumentation that inserts callbacks or checks into the code may
diff --git a/arch/arm64/kvm/hyp/host_hypercall.c b/arch/arm64/kvm/hyp/host_hypercall.c
new file mode 100644
index 000000000000..6b31310f36a8
--- /dev/null
+++ b/arch/arm64/kvm/hyp/host_hypercall.c
@@ -0,0 +1,22 @@
+// SPDX-License-Identifier: GPL-2.0-only
+/*
+ * Copyright (C) 2020 Google, inc
+ */
+
+#include
+
+typedef long (*kvm_hcall_fn_t)(void);
+
+static long __hyp_text kvm_hc_ni(void)
+{
+	return -ENOSYS;
+}
+
+#undef __KVM_HOST_HCALL
+#define __KVM_HOST_HCALL(name) \
+	[__KVM_HOST_HCALL_TABLE_IDX_##name] = (long (*)(void))name,
+
+const kvm_hcall_fn_t kvm_hcall_table[__KVM_HOST_HCALL_TABLE_IDX_SIZE] = {
+	[0 ... __KVM_HOST_HCALL_TABLE_IDX_SIZE-1] = kvm_hc_ni,
+#include
+};
diff --git a/arch/arm64/kvm/hyp/hyp-entry.S b/arch/arm64/kvm/hyp/hyp-entry.S
index c2a13ab3c471..1a51a0958504 100644
--- a/arch/arm64/kvm/hyp/hyp-entry.S
+++ b/arch/arm64/kvm/hyp/hyp-entry.S
@@ -13,6 +13,7 @@
 #include
 #include
 #include
+#include
 #include

 .text
@@ -21,17 +22,26 @@
 .macro do_el2_call
 	/*
 	 * Shuffle the parameters before calling the function
-	 * pointed to in x0. Assumes parameters in x[1,2,3].
+	 * pointed to in x0. Assumes parameters in x[1,2,3,4,5,6].
 	 */
 	str	lr, [sp, #-16]!
 	mov	lr, x0
 	mov	x0, x1
 	mov	x1, x2
 	mov	x2, x3
+	mov	x3, x4
+	mov	x4, x5
+	mov	x5, x6
 	blr	lr
 	ldr	lr, [sp], #16
 .endm

+/* kern_hyp_va(lm_alias(ksym)) */
+.macro ksym_hyp_va ksym, lm_offset
+	sub	\ksym, \ksym, \lm_offset
+	kern_hyp_va	\ksym
+.endm
+
 el1_sync:				// Guest trapped into EL2

 	mrs	x0, esr_el2
@@ -66,10 +76,32 @@ el1_sync:				// Guest trapped into EL2
 	br	x5

 1:
+	/* Check if the hcall number is valid */
+	cmp	x0, #__KVM_HOST_HCALL_BASE
+	b.lt	2f
+	cmp	x0, #__KVM_HOST_HCALL_END
+	b.lt	3f
+2:
+	mov	x0, -ENOSYS
+	eret
+
+3:
+	/* Compute lm_alias() offset in x9 */
+	ldr_l	x9, kimage_voffset
+	ldr_l	x10, physvirt_offset
+	add	x9, x9, x10
+
+	/* Find hcall function pointer in the table */
+	ldr	x10, =kvm_hcall_table
+	ksym_hyp_va	x10, x9
+	sub	x0, x0, #__KVM_HOST_HCALL_BASE
+	add	x0, x10, x0, lsl 3
+	ldr	x0, [x0]
+	ksym_hyp_va	x0, x9
+
 	/*
 	 * Perform the EL2 call
 	 */
-	kern_hyp_va	x0
 	do_el2_call
 	eret

From patchwork Thu Apr 30 14:48:19 2020
X-Patchwork-Submitter: David Brazdil
X-Patchwork-Id: 11520631
From: David Brazdil
To: Catalin Marinas, James Morse, Julien Thierry, Marc Zyngier,
 Suzuki K Poulose, Will Deacon
Cc: David Brazdil, kvmarm@lists.cs.columbia.edu,
 linux-arm-kernel@lists.infradead.org, linux-kernel@vger.kernel.org
Subject: [PATCH 03/15] arm64: kvm: Fix symbol dependency in __hyp_call_panic_nvhe
Date: Thu, 30 Apr 2020 15:48:19 +0100
Message-Id: <20200430144831.59194-4-dbrazdil@google.com>
In-Reply-To: <20200430144831.59194-1-dbrazdil@google.com>
References: <20200430144831.59194-1-dbrazdil@google.com>

__hyp_call_panic_nvhe contains inline assembly which did not declare
its dependency on the __hyp_panic_string symbol.
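For context, an illustrative sketch (not the kernel's code) of the
difference: passing the symbol as an explicit input operand, using the
arm64 "S" (symbolic address) constraint as the patch does, makes the
reference visible to the toolchain, whereas naming the symbol only
inside the template string does not:

	static const char example_string[] = "example";

	static const char *load_string_address(void)
	{
		const char *va;

		/*
		 * example_string is an explicit input operand, so the
		 * compiler knows it is referenced and must keep it (and
		 * its relocation) alive; with the name hard-coded in the
		 * template, nothing records the dependency.
		 */
		asm volatile("ldr %0, =%1" : "=r" (va) : "S" (example_string));
		return va;
	}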
Signed-off-by: David Brazdil
---
 arch/arm64/kvm/hyp/switch.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/arch/arm64/kvm/hyp/switch.c b/arch/arm64/kvm/hyp/switch.c
index 8a1e81a400e0..7a7c08029d81 100644
--- a/arch/arm64/kvm/hyp/switch.c
+++ b/arch/arm64/kvm/hyp/switch.c
@@ -836,7 +836,7 @@ static void __hyp_text __hyp_call_panic_nvhe(u64 spsr, u64 elr, u64 par,
 	 * making sure it is a kernel address and not a PC-relative
 	 * reference.
 	 */
-	asm volatile("ldr %0, =__hyp_panic_string" : "=r" (str_va));
+	asm volatile("ldr %0, =%1" : "=r" (str_va) : "S" (__hyp_panic_string));

 	__hyp_do_panic(str_va, spsr, elr,

From patchwork Thu Apr 30 14:48:20 2020
X-Patchwork-Submitter: David Brazdil
X-Patchwork-Id: 11520611
From: David Brazdil
To: Catalin Marinas, James Morse, Julien Thierry, Marc Zyngier,
 Suzuki K Poulose, Will Deacon
Cc: David Brazdil, kvmarm@lists.cs.columbia.edu,
 linux-arm-kernel@lists.infradead.org, linux-kernel@vger.kernel.org
Subject: [PATCH 04/15] arm64: kvm: Add build rules for separate nVHE object files
Date: Thu, 30 Apr 2020 15:48:20 +0100
Message-Id: <20200430144831.59194-5-dbrazdil@google.com>
In-Reply-To: <20200430144831.59194-1-dbrazdil@google.com>
References: <20200430144831.59194-1-dbrazdil@google.com>

Add a new folder arch/arm64/kvm/hyp/nvhe and a Makefile for building
code that runs in EL2 under nVHE KVM. Compile each source file into a
`.hyp.tmp.o` object first, then prefix all its symbols with
"__hyp_text_" using `objcopy` and produce a `.hyp.o`.

Suffixes were chosen so that it would be possible for VHE and nVHE to
share some source files, but compiled with different CFLAGS. nVHE build
rules add -D__HYPERVISOR__.

Macros are added for prefixing a nVHE symbol name when referenced from
kernel proper.
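The resulting contract, seen from kernel proper, can be sketched as
follows (based on the macros this patch adds; the symbol name
example_hyp_vector is hypothetical):

	/*
	 * After objcopy has applied --prefix-symbols=__hyp_text_ to the
	 * nVHE objects, each shared symbol exists twice and the kernel
	 * picks one variant at runtime.
	 */
	#define kvm_nvhe_sym(sym)	__hyp_text_##sym
	#define kvm_hyp_sym(sym)	(has_vhe() ? sym : kvm_nvhe_sym(sym))

	extern char example_hyp_vector[];			/* VHE copy */
	extern char kvm_nvhe_sym(example_hyp_vector)[];		/* nVHE copy */

	static inline char *pick_hyp_vector(void)
	{
		/*
		 * Expands to:
		 *   has_vhe() ? example_hyp_vector
		 *             : __hyp_text_example_hyp_vector
		 */
		return kvm_hyp_sym(example_hyp_vector);
	}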
Signed-off-by: David Brazdil
---
 arch/arm64/include/asm/kvm_asm.h | 13 ++++++++++++
 arch/arm64/kernel/image-vars.h   | 12 +++++++++++
 arch/arm64/kvm/hyp/Makefile      |  4 ++--
 arch/arm64/kvm/hyp/nvhe/Makefile | 35 ++++++++++++++++++++++++++++++++
 scripts/kallsyms.c               |  1 +
 5 files changed, 63 insertions(+), 2 deletions(-)
 create mode 100644 arch/arm64/kvm/hyp/nvhe/Makefile

diff --git a/arch/arm64/include/asm/kvm_asm.h b/arch/arm64/include/asm/kvm_asm.h
index 7c7eeeaab9fa..99ab204519ca 100644
--- a/arch/arm64/include/asm/kvm_asm.h
+++ b/arch/arm64/include/asm/kvm_asm.h
@@ -42,6 +42,12 @@
 #include

+/* Translate the name of @sym to its nVHE equivalent. */
+#define kvm_nvhe_sym(sym)	__hyp_text_##sym
+
+/* Choose between VHE and nVHE variants of a symbol. */
+#define kvm_hyp_sym(sym)	(has_vhe() ? sym : kvm_nvhe_sym(sym))
+
 /* Translate a kernel address of @sym into its equivalent linear mapping */
 #define kvm_ksym_ref(sym)						\
 	({								\
@@ -51,6 +57,13 @@
 		val;							\
 	 })

+/*
+ * Translate a kernel address of @sym into its equivalent linear mapping,
+ * choosing between VHE and nVHE variant of @sym accordingly.
+ */
+#define kvm_hyp_ref(sym) \
+	(has_vhe() ? kvm_ksym_ref(sym) : kvm_ksym_ref(kvm_nvhe_sym(sym)))
+
 struct kvm;
 struct kvm_vcpu;
diff --git a/arch/arm64/kernel/image-vars.h b/arch/arm64/kernel/image-vars.h
index 7f06ad93fc95..13850134fc28 100644
--- a/arch/arm64/kernel/image-vars.h
+++ b/arch/arm64/kernel/image-vars.h
@@ -51,4 +51,16 @@ __efistub__ctype = _ctype;

 #endif

+#ifdef CONFIG_KVM
+
+/*
+ * KVM nVHE code has its own symbol namespace prefixed by __hyp_text_, to
+ * isolate it from the kernel proper. The following symbols are legally
+ * accessed by it, therefore provide aliases to make them linkable.
+ * Do not include symbols which may not be safely accessed under hypervisor
+ * memory mappings.
+ */
+
+#endif /* CONFIG_KVM */
+
 #endif /* __ARM64_KERNEL_IMAGE_VARS_H */
diff --git a/arch/arm64/kvm/hyp/Makefile b/arch/arm64/kvm/hyp/Makefile
index 29e2b2cd2fbc..79bf822a484b 100644
--- a/arch/arm64/kvm/hyp/Makefile
+++ b/arch/arm64/kvm/hyp/Makefile
@@ -6,9 +6,9 @@ ccflags-y += -fno-stack-protector -DDISABLE_BRANCH_PROFILING \
 	$(DISABLE_STACKLEAK_PLUGIN)

-obj-$(CONFIG_KVM) += hyp.o
+obj-$(CONFIG_KVM) += vhe.o nvhe/

-hyp-y := vgic-v3-sr.o timer-sr.o aarch32.o vgic-v2-cpuif-proxy.o sysreg-sr.o \
+vhe-y := vgic-v3-sr.o timer-sr.o aarch32.o vgic-v2-cpuif-proxy.o sysreg-sr.o \
 	debug-sr.o entry.o switch.o fpsimd.o tlb.o host_hypercall.o hyp-entry.o

 # KVM code is run at a different exception code with a different map, so
diff --git a/arch/arm64/kvm/hyp/nvhe/Makefile b/arch/arm64/kvm/hyp/nvhe/Makefile
new file mode 100644
index 000000000000..873d3ab0fd68
--- /dev/null
+++ b/arch/arm64/kvm/hyp/nvhe/Makefile
@@ -0,0 +1,35 @@
+# SPDX-License-Identifier: GPL-2.0
+#
+# Makefile for Kernel-based Virtual Machine module, HYP/nVHE part
+#
+
+asflags-y := -D__HYPERVISOR__
+ccflags-y := -D__HYPERVISOR__ -fno-stack-protector -DDISABLE_BRANCH_PROFILING \
+	     $(DISABLE_STACKLEAK_PLUGIN)
+
+obj-y :=
+
+obj-y := $(patsubst %.o,%.hyp.o,$(obj-y))
+extra-y := $(patsubst %.hyp.o,%.hyp.tmp.o,$(obj-y))
+
+$(obj)/%.hyp.tmp.o: $(src)/%.c FORCE
+	$(call if_changed_rule,cc_o_c)
+$(obj)/%.hyp.tmp.o: $(src)/%.S FORCE
+	$(call if_changed_rule,as_o_S)
+$(obj)/%.hyp.o: $(obj)/%.hyp.tmp.o FORCE
+	$(call if_changed,hypcopy)
+
+quiet_cmd_hypcopy = HYPCOPY $@
+      cmd_hypcopy = $(OBJCOPY) --prefix-symbols=__hyp_text_ $< $@
+
+# KVM nVHE code is run at a different exception code with a different map, so
+# compiler instrumentation that inserts callbacks or checks into the code may
+# cause crashes. Just disable it.
+GCOV_PROFILE	:= n
+KASAN_SANITIZE	:= n
+UBSAN_SANITIZE	:= n
+KCOV_INSTRUMENT	:= n
+
+# Skip objtool checking for this directory because nVHE code is compiled with
+# non-standard build rules.
+OBJECT_FILES_NON_STANDARD := y
diff --git a/scripts/kallsyms.c b/scripts/kallsyms.c
index 3e8dea6e0a95..b5c9dc6de38d 100644
--- a/scripts/kallsyms.c
+++ b/scripts/kallsyms.c
@@ -109,6 +109,7 @@ static bool is_ignored_symbol(const char *name, char type)
 		".LASANPC",		/* s390 kasan local symbols */
 		"__crc_",		/* modversions */
 		"__efistub_",		/* arm64 EFI stub namespace */
+		"__hyp_text_",		/* arm64 non-VHE KVM namespace */
 		NULL
 	};
From patchwork Thu Apr 30 14:48:21 2020
X-Patchwork-Submitter: David Brazdil
X-Patchwork-Id: 11520607
From: David Brazdil
To: Catalin Marinas, James Morse, Julien Thierry, Marc Zyngier,
 Suzuki K Poulose, Will Deacon
Cc: David Brazdil, kvmarm@lists.cs.columbia.edu,
 linux-arm-kernel@lists.infradead.org, linux-kernel@vger.kernel.org
Subject: [PATCH 05/15] arm64: kvm: Build hyp-entry.S separately for VHE/nVHE
Date: Thu, 30 Apr 2020 15:48:21 +0100
Message-Id: <20200430144831.59194-6-dbrazdil@google.com>
In-Reply-To: <20200430144831.59194-1-dbrazdil@google.com>
References: <20200430144831.59194-1-dbrazdil@google.com>

This patch is part of a series which builds KVM's non-VHE hyp code
separately from VHE and the rest of the kernel.

hyp-entry.S contains the implementation of KVM's hyp vectors. This code
is mostly shared between VHE and nVHE, so compile it under both sets of
build rules, with the small differences hidden behind
'#ifdef __HYPERVISOR__'. These are:
 * only nVHE should handle host HVCs, VHE will now panic,
 * only nVHE needs kvm_hcall_table, so move host_hypercall.c to nvhe/,
 * __smccc_workaround_1_smc is not needed by nVHE, only cpu_errata.c
   in kernel proper.

Adjust the code which selects which KVM hyp vectors to install so that
it chooses the correct VHE/nVHE symbol; a sketch of the '#ifdef' pattern
follows below.
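A minimal C sketch of the pattern (the example_* names are hypothetical;
hyp-entry.S does the equivalent in assembly): one source file is
compiled twice, and because only the nVHE build rules pass
-D__HYPERVISOR__, each object keeps just the branch meant for it.

	void example_dispatch_host_hcall(void);	/* hypothetical helper */
	void hyp_panic(void);

	static void example_handle_host_hvc(void)
	{
	#ifdef __HYPERVISOR__
		/* nVHE object: host HVCs are expected; dispatch them. */
		example_dispatch_host_hcall();
	#else
		/* VHE object: there is no nVHE host; this must not happen. */
		hyp_panic();
	#endif
	}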
hyp-entry.S contains implementation of KVM hyp vectors. This code is mostly shared between VHE/nVHE, therefore compile it under both VHE and nVHE build rules, with small differences hidden behind '#ifdef __HYPERVISOR__'. These are: * only nVHE should handle host HVCs, VHE will now panic, * only nVHE needs kvm_hcall_table, so move host_hypcall.c to nvhe/, * __smccc_workaround_1_smc is not needed by nVHE, only cpu_errata.c in kernel proper. Adjust code which selects which KVM hyp vecs to install to choose the correct VHE/nVHE symbol. Signed-off-by: David Brazdil --- arch/arm64/include/asm/kvm_asm.h | 7 +++++ arch/arm64/include/asm/kvm_mmu.h | 13 +++++---- arch/arm64/include/asm/mmu.h | 7 ----- arch/arm64/kernel/cpu_errata.c | 2 +- arch/arm64/kernel/image-vars.h | 28 +++++++++++++++++++ arch/arm64/kvm/hyp/Makefile | 2 +- arch/arm64/kvm/hyp/hyp-entry.S | 27 ++++++++++++------ arch/arm64/kvm/hyp/nvhe/Makefile | 2 +- .../arm64/kvm/hyp/{ => nvhe}/host_hypercall.c | 0 arch/arm64/kvm/va_layout.c | 2 +- 10 files changed, 65 insertions(+), 25 deletions(-) rename arch/arm64/kvm/hyp/{ => nvhe}/host_hypercall.c (100%) diff --git a/arch/arm64/include/asm/kvm_asm.h b/arch/arm64/include/asm/kvm_asm.h index 99ab204519ca..cdaf3df8085d 100644 --- a/arch/arm64/include/asm/kvm_asm.h +++ b/arch/arm64/include/asm/kvm_asm.h @@ -71,6 +71,13 @@ extern char __kvm_hyp_init[]; extern char __kvm_hyp_init_end[]; extern char __kvm_hyp_vector[]; +extern char kvm_nvhe_sym(__kvm_hyp_vector)[]; + +#ifdef CONFIG_KVM_INDIRECT_VECTORS +extern char __bp_harden_hyp_vecs[]; +extern char kvm_nvhe_sym(__bp_harden_hyp_vecs)[]; +extern atomic_t arm64_el2_vector_last_slot; +#endif extern void __kvm_flush_vm_context(void); extern void __kvm_tlb_flush_vmid_ipa(struct kvm *kvm, phys_addr_t ipa); diff --git a/arch/arm64/include/asm/kvm_mmu.h b/arch/arm64/include/asm/kvm_mmu.h index 30b0e8d6b895..0a5fa033422c 100644 --- a/arch/arm64/include/asm/kvm_mmu.h +++ b/arch/arm64/include/asm/kvm_mmu.h @@ -468,7 +468,7 @@ static inline int kvm_write_guest_lock(struct kvm *kvm, gpa_t gpa, * VHE, as we don't have hypervisor-specific mappings. If the system * is VHE and yet selects this capability, it will be ignored. 
*/ -#include +#include extern void *__kvm_bp_vect_base; extern int __kvm_harden_el2_vector_slot; @@ -477,11 +477,11 @@ extern int __kvm_harden_el2_vector_slot; static inline void *kvm_get_hyp_vector(void) { struct bp_hardening_data *data = arm64_get_bp_hardening_data(); - void *vect = kern_hyp_va(kvm_ksym_ref(__kvm_hyp_vector)); + void *vect = kern_hyp_va(kvm_hyp_ref(__kvm_hyp_vector)); int slot = -1; if (cpus_have_const_cap(ARM64_HARDEN_BRANCH_PREDICTOR) && data->fn) { - vect = kern_hyp_va(kvm_ksym_ref(__bp_harden_hyp_vecs)); + vect = kern_hyp_va(kvm_hyp_ref(__bp_harden_hyp_vecs)); slot = data->hyp_vectors_slot; } @@ -510,12 +510,13 @@ static inline int kvm_map_vectors(void) * HBP + HEL2 -> use hardened vertors and use exec mapping */ if (cpus_have_const_cap(ARM64_HARDEN_BRANCH_PREDICTOR)) { - __kvm_bp_vect_base = kvm_ksym_ref(__bp_harden_hyp_vecs); + __kvm_bp_vect_base = kvm_hyp_ref(__bp_harden_hyp_vecs); __kvm_bp_vect_base = kern_hyp_va(__kvm_bp_vect_base); } if (cpus_have_const_cap(ARM64_HARDEN_EL2_VECTORS)) { - phys_addr_t vect_pa = __pa_symbol(__bp_harden_hyp_vecs); + phys_addr_t vect_pa = + __pa_symbol(kvm_nvhe_sym(__bp_harden_hyp_vecs)); unsigned long size = __BP_HARDEN_HYP_VECS_SZ; /* @@ -534,7 +535,7 @@ static inline int kvm_map_vectors(void) #else static inline void *kvm_get_hyp_vector(void) { - return kern_hyp_va(kvm_ksym_ref(__kvm_hyp_vector)); + return kern_hyp_va(kvm_hyp_ref(__kvm_hyp_vector)); } static inline int kvm_map_vectors(void) diff --git a/arch/arm64/include/asm/mmu.h b/arch/arm64/include/asm/mmu.h index 68140fdd89d6..4d913f6dd366 100644 --- a/arch/arm64/include/asm/mmu.h +++ b/arch/arm64/include/asm/mmu.h @@ -42,13 +42,6 @@ struct bp_hardening_data { bp_hardening_cb_t fn; }; -#if (defined(CONFIG_HARDEN_BRANCH_PREDICTOR) || \ - defined(CONFIG_HARDEN_EL2_VECTORS)) - -extern char __bp_harden_hyp_vecs[]; -extern atomic_t arm64_el2_vector_last_slot; -#endif /* CONFIG_HARDEN_BRANCH_PREDICTOR || CONFIG_HARDEN_EL2_VECTORS */ - #ifdef CONFIG_HARDEN_BRANCH_PREDICTOR DECLARE_PER_CPU_READ_MOSTLY(struct bp_hardening_data, bp_hardening_data); diff --git a/arch/arm64/kernel/cpu_errata.c b/arch/arm64/kernel/cpu_errata.c index a102321fc8a2..b4b41552c6df 100644 --- a/arch/arm64/kernel/cpu_errata.c +++ b/arch/arm64/kernel/cpu_errata.c @@ -117,7 +117,7 @@ DEFINE_PER_CPU_READ_MOSTLY(struct bp_hardening_data, bp_hardening_data); static void __copy_hyp_vect_bpi(int slot, const char *hyp_vecs_start, const char *hyp_vecs_end) { - void *dst = lm_alias(__bp_harden_hyp_vecs + slot * SZ_2K); + void *dst = lm_alias(kvm_hyp_sym(__bp_harden_hyp_vecs) + slot * SZ_2K); int i; for (i = 0; i < SZ_2K; i += 0x80) diff --git a/arch/arm64/kernel/image-vars.h b/arch/arm64/kernel/image-vars.h index 13850134fc28..59c53566eceb 100644 --- a/arch/arm64/kernel/image-vars.h +++ b/arch/arm64/kernel/image-vars.h @@ -61,6 +61,34 @@ __efistub__ctype = _ctype; * memory mappings. 
*/ +__hyp_text___guest_exit = __guest_exit; +__hyp_text___kvm_enable_ssbs = __kvm_enable_ssbs; +__hyp_text___kvm_flush_vm_context = __kvm_flush_vm_context; +__hyp_text___kvm_get_mdcr_el2 = __kvm_get_mdcr_el2; +__hyp_text___kvm_handle_stub_hvc = __kvm_handle_stub_hvc; +__hyp_text___kvm_timer_set_cntvoff = __kvm_timer_set_cntvoff; +__hyp_text___kvm_tlb_flush_local_vmid = __kvm_tlb_flush_local_vmid; +__hyp_text___kvm_tlb_flush_vmid = __kvm_tlb_flush_vmid; +__hyp_text___kvm_tlb_flush_vmid_ipa = __kvm_tlb_flush_vmid_ipa; +__hyp_text___kvm_vcpu_run_nvhe = __kvm_vcpu_run_nvhe; +__hyp_text___vgic_v3_get_ich_vtr_el2 = __vgic_v3_get_ich_vtr_el2; +__hyp_text___vgic_v3_init_lrs = __vgic_v3_init_lrs; +__hyp_text___vgic_v3_read_vmcr = __vgic_v3_read_vmcr; +__hyp_text___vgic_v3_restore_aprs = __vgic_v3_restore_aprs; +__hyp_text___vgic_v3_save_aprs = __vgic_v3_save_aprs; +__hyp_text___vgic_v3_write_vmcr = __vgic_v3_write_vmcr; +__hyp_text_abort_guest_exit_end = abort_guest_exit_end; +__hyp_text_abort_guest_exit_start = abort_guest_exit_start; +__hyp_text_arm64_enable_wa2_handling = arm64_enable_wa2_handling; +__hyp_text_arm64_ssbd_callback_required = arm64_ssbd_callback_required; +__hyp_text_hyp_panic = hyp_panic; +__hyp_text_kimage_voffset = kimage_voffset; +__hyp_text_kvm_host_data = kvm_host_data; +__hyp_text_kvm_patch_vector_branch = kvm_patch_vector_branch; +__hyp_text_kvm_update_va_mask = kvm_update_va_mask; +__hyp_text_panic = panic; +__hyp_text_physvirt_offset = physvirt_offset; + #endif /* CONFIG_KVM */ #endif /* __ARM64_KERNEL_IMAGE_VARS_H */ diff --git a/arch/arm64/kvm/hyp/Makefile b/arch/arm64/kvm/hyp/Makefile index 79bf822a484b..a6883f4fed9e 100644 --- a/arch/arm64/kvm/hyp/Makefile +++ b/arch/arm64/kvm/hyp/Makefile @@ -9,7 +9,7 @@ ccflags-y += -fno-stack-protector -DDISABLE_BRANCH_PROFILING \ obj-$(CONFIG_KVM) += vhe.o nvhe/ vhe-y := vgic-v3-sr.o timer-sr.o aarch32.o vgic-v2-cpuif-proxy.o sysreg-sr.o \ - debug-sr.o entry.o switch.o fpsimd.o tlb.o host_hypercall.o hyp-entry.o + debug-sr.o entry.o switch.o fpsimd.o tlb.o hyp-entry.o # KVM code is run at a different exception code with a different map, so # compiler instrumentation that inserts callbacks or checks into the code may diff --git a/arch/arm64/kvm/hyp/hyp-entry.S b/arch/arm64/kvm/hyp/hyp-entry.S index 1a51a0958504..5986e1d78d3f 100644 --- a/arch/arm64/kvm/hyp/hyp-entry.S +++ b/arch/arm64/kvm/hyp/hyp-entry.S @@ -36,12 +36,6 @@ ldr lr, [sp], #16 .endm -/* kern_hyp_va(lm_alias(ksym)) */ -.macro ksym_hyp_va ksym, lm_offset - sub \ksym, \ksym, \lm_offset - kern_hyp_va \ksym -.endm - el1_sync: // Guest trapped into EL2 mrs x0, esr_el2 @@ -53,7 +47,15 @@ el1_sync: // Guest trapped into EL2 mrs x1, vttbr_el2 // If vttbr is valid, the guest cbnz x1, el1_hvc_guest // called HVC - /* Here, we're pretty sure the host called HVC. */ +#ifdef __HYPERVISOR__ + +/* kern_hyp_va(lm_alias(ksym)) */ +.macro ksym_hyp_va ksym, lm_offset + sub \ksym, \ksym, \lm_offset + kern_hyp_va \ksym +.endm + + /* nVHE: Here, we're pretty sure the host called HVC. */ ldp x0, x1, [sp], #16 /* Check for a stub HVC call */ @@ -107,6 +109,13 @@ el1_sync: // Guest trapped into EL2 eret sb +#else /* __HYPERVISOR__ */ + + /* VHE: Guest called HVC but vttbr is not valid. */ + b __hyp_panic + +#endif /* __HYPERVISOR__ */ + el1_hvc_guest: /* * Fastest possible path for ARM_SMCCC_ARCH_WORKAROUND_1. 
@@ -354,6 +363,7 @@ SYM_CODE_END(__bp_harden_hyp_vecs) .popsection +#ifndef __HYPERVISOR__ SYM_CODE_START(__smccc_workaround_1_smc) esb sub sp, sp, #(8 * 4) @@ -367,4 +377,5 @@ SYM_CODE_START(__smccc_workaround_1_smc) 1: .org __smccc_workaround_1_smc + __SMCCC_WORKAROUND_1_SMC_SZ .org 1b SYM_CODE_END(__smccc_workaround_1_smc) -#endif +#endif /* __HYPERVISOR__ */ +#endif /* CONFIG_KVM_INDIRECT_VECTORS */ diff --git a/arch/arm64/kvm/hyp/nvhe/Makefile b/arch/arm64/kvm/hyp/nvhe/Makefile index 873d3ab0fd68..fa20b2723652 100644 --- a/arch/arm64/kvm/hyp/nvhe/Makefile +++ b/arch/arm64/kvm/hyp/nvhe/Makefile @@ -7,7 +7,7 @@ asflags-y := -D__HYPERVISOR__ ccflags-y := -D__HYPERVISOR__ -fno-stack-protector -DDISABLE_BRANCH_PROFILING \ $(DISABLE_STACKLEAK_PLUGIN) -obj-y := +obj-y := host_hypercall.o ../hyp-entry.o obj-y := $(patsubst %.o,%.hyp.o,$(obj-y)) extra-y := $(patsubst %.hyp.o,%.hyp.tmp.o,$(obj-y)) diff --git a/arch/arm64/kvm/hyp/host_hypercall.c b/arch/arm64/kvm/hyp/nvhe/host_hypercall.c similarity index 100% rename from arch/arm64/kvm/hyp/host_hypercall.c rename to arch/arm64/kvm/hyp/nvhe/host_hypercall.c diff --git a/arch/arm64/kvm/va_layout.c b/arch/arm64/kvm/va_layout.c index a4f48c1ac28c..8d941d19085d 100644 --- a/arch/arm64/kvm/va_layout.c +++ b/arch/arm64/kvm/va_layout.c @@ -150,7 +150,7 @@ void kvm_patch_vector_branch(struct alt_instr *alt, /* * Compute HYP VA by using the same computation as kern_hyp_va() */ - addr = (uintptr_t)kvm_ksym_ref(__kvm_hyp_vector); + addr = (uintptr_t)kvm_hyp_ref(__kvm_hyp_vector); addr &= va_mask; addr |= tag_val << tag_lsb;
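For readers unfamiliar with the single-source, dual-build pattern used above, here is a minimal sketch in plain C (hypothetical file and function names, not the kernel code): the same file is compiled once with -D__HYPERVISOR__ for the "nVHE" object and once without it for the "VHE" object, and the macro selects the per-variant behaviour.

/* dual_build_demo.c — illustrative sketch only. Build twice:
 *   cc -D__HYPERVISOR__ dual_build_demo.c   (the "nVHE" object)
 *   cc dual_build_demo.c                    (the "VHE" object)
 */
#include <stdio.h>
#include <stdlib.h>

static void handle_host_hvc(void)
{
#ifdef __HYPERVISOR__
	/* nVHE build: host HVCs are expected; dispatch them. */
	printf("nVHE: dispatching host hypercall\n");
#else
	/* VHE build: the host never issues HVCs; treat one as fatal. */
	fprintf(stderr, "VHE: unexpected host HVC, panicking\n");
	abort();
#endif
}

int main(void)
{
	handle_host_hvc();
	return 0;
}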
From patchwork Thu Apr 30 14:48:22 2020
From: David Brazdil
To: Catalin Marinas, James Morse, Julien Thierry, Marc Zyngier, Suzuki K Poulose, Will Deacon
Subject: [PATCH 06/15] arm64: kvm: Move __smccc_workaround_1_smc to .rodata
Date: Thu, 30 Apr 2020 15:48:22 +0100
Message-Id: <20200430144831.59194-7-dbrazdil@google.com>
In-Reply-To: <20200430144831.59194-1-dbrazdil@google.com>
References: <20200430144831.59194-1-dbrazdil@google.com>
Cc: David Brazdil, kvmarm@lists.cs.columbia.edu, linux-arm-kernel@lists.infradead.org, linux-kernel@vger.kernel.org

This snippet of assembly is used by cpu_errata.c to overwrite parts of the KVM hyp vector. It is never executed directly, so move it from .text to .rodata.

Signed-off-by: David Brazdil
---
arch/arm64/kvm/hyp/hyp-entry.S | 6 ++++++ 1 file changed, 6 insertions(+) diff --git a/arch/arm64/kvm/hyp/hyp-entry.S b/arch/arm64/kvm/hyp/hyp-entry.S index 5986e1d78d3f..7e5f386c5c2d 100644 --- a/arch/arm64/kvm/hyp/hyp-entry.S +++ b/arch/arm64/kvm/hyp/hyp-entry.S @@ -364,6 +364,11 @@ SYM_CODE_END(__bp_harden_hyp_vecs) .popsection #ifndef __HYPERVISOR__ + /* + * This is not executed directly and is instead copied into the vectors + * by install_bp_hardening_cb(). + */ + .pushsection .rodata SYM_CODE_START(__smccc_workaround_1_smc) esb sub sp, sp, #(8 * 4) @@ -377,5 +382,6 @@ SYM_CODE_START(__smccc_workaround_1_smc) 1: .org __smccc_workaround_1_smc + __SMCCC_WORKAROUND_1_SMC_SZ .org 1b SYM_CODE_END(__smccc_workaround_1_smc) + .popsection #endif /* __HYPERVISOR__ */ #endif /* CONFIG_KVM_INDIRECT_VECTORS */
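The idea behind the patch above — an instruction sequence that is only ever copied into place, never executed at its home address — can be sketched in plain C (hypothetical names; in the kernel the copy is performed by the slot-copying code in cpu_errata.c):

/* rodata_template_demo.c — illustrative sketch only. A template that is
 * never executed where it lives can sit in read-only memory; only the
 * destination it is copied into needs to be executable. */
#include <stdint.h>
#include <string.h>

/* 'static const' places the template in .rodata, analogous to the
 * .pushsection .rodata used for __smccc_workaround_1_smc. */
static const uint32_t template_insns[] = {
	0xd503201f,	/* AArch64 NOP encoding */
	0xd65f03c0,	/* AArch64 RET encoding */
};

void install_template(uint32_t *slot)
{
	/* Copy the bytes into the (executable) vector slot. Real code must
	 * also perform I-cache/D-cache maintenance on the destination. */
	memcpy(slot, template_insns, sizeof(template_insns));
}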
From patchwork Thu Apr 30 14:48:23 2020
From: David Brazdil
To: Catalin Marinas, James Morse, Julien Thierry, Marc Zyngier, Suzuki K Poulose, Will Deacon
Subject: [PATCH 07/15] arm64: kvm: Split hyp/tlb.c to VHE/nVHE
Date: Thu, 30 Apr 2020 15:48:23 +0100
Message-Id: <20200430144831.59194-8-dbrazdil@google.com>
In-Reply-To: <20200430144831.59194-1-dbrazdil@google.com>
References: <20200430144831.59194-1-dbrazdil@google.com>
Cc: David Brazdil, kvmarm@lists.cs.columbia.edu, linux-arm-kernel@lists.infradead.org, linux-kernel@vger.kernel.org

This patch is part of a series which builds KVM's non-VHE hyp code separately from VHE and the rest of the kernel.

tlb.c contains code for flushing the TLB, with parts shared between VHE and nVHE. The common routines are moved into a new header file, tlb.h; VHE-specific code remains in tlb.c, and nVHE-specific code is moved to nvhe/tlb.c. The header expects each of its users to implement the two helper functions declared at its top, __tlb_switch_to_guest() and __tlb_switch_to_host().
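The shape of that arrangement can be sketched in plain C (a minimal, hypothetical two-file example, not the kernel code): the header declares the required static helpers and defines the shared routine in terms of them, so every .c file that includes it must supply its own definitions.

/* flush_common.h — hypothetical sketch of the tlb.h pattern. */
static void ctx_enter(void);	/* each includer must define these */
static void ctx_exit(void);

static inline void flush_common(void)
{
	ctx_enter();
	/* ...shared flush sequence, identical for every variant... */
	ctx_exit();
}

/* variant_nvhe.c — one possible includer. */
#include <stdio.h>
#include "flush_common.h"

static void ctx_enter(void) { puts("nvhe: switch to guest context"); }
static void ctx_exit(void)  { puts("nvhe: switch back to host"); }

int main(void)
{
	flush_common();	/* runs with this file's helper definitions */
	return 0;
}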
Signed-off-by: David Brazdil --- arch/arm64/kernel/image-vars.h | 8 +- arch/arm64/kvm/hyp/nvhe/Makefile | 2 +- arch/arm64/kvm/hyp/nvhe/tlb.c | 69 +++++++++++++ arch/arm64/kvm/hyp/tlb.c | 170 +++---------------------------- arch/arm64/kvm/hyp/tlb.h | 129 +++++++++++++++++++++++ 5 files changed, 216 insertions(+), 162 deletions(-) create mode 100644 arch/arm64/kvm/hyp/nvhe/tlb.c create mode 100644 arch/arm64/kvm/hyp/tlb.h diff --git a/arch/arm64/kernel/image-vars.h b/arch/arm64/kernel/image-vars.h index 59c53566eceb..c0a0ec238854 100644 --- a/arch/arm64/kernel/image-vars.h +++ b/arch/arm64/kernel/image-vars.h @@ -62,14 +62,11 @@ __efistub__ctype = _ctype; */ __hyp_text___guest_exit = __guest_exit; +__hyp_text___icache_flags = __icache_flags; __hyp_text___kvm_enable_ssbs = __kvm_enable_ssbs; -__hyp_text___kvm_flush_vm_context = __kvm_flush_vm_context; __hyp_text___kvm_get_mdcr_el2 = __kvm_get_mdcr_el2; __hyp_text___kvm_handle_stub_hvc = __kvm_handle_stub_hvc; __hyp_text___kvm_timer_set_cntvoff = __kvm_timer_set_cntvoff; -__hyp_text___kvm_tlb_flush_local_vmid = __kvm_tlb_flush_local_vmid; -__hyp_text___kvm_tlb_flush_vmid = __kvm_tlb_flush_vmid; -__hyp_text___kvm_tlb_flush_vmid_ipa = __kvm_tlb_flush_vmid_ipa; __hyp_text___kvm_vcpu_run_nvhe = __kvm_vcpu_run_nvhe; __hyp_text___vgic_v3_get_ich_vtr_el2 = __vgic_v3_get_ich_vtr_el2; __hyp_text___vgic_v3_init_lrs = __vgic_v3_init_lrs; @@ -79,8 +76,11 @@ __hyp_text___vgic_v3_save_aprs = __vgic_v3_save_aprs; __hyp_text___vgic_v3_write_vmcr = __vgic_v3_write_vmcr; __hyp_text_abort_guest_exit_end = abort_guest_exit_end; __hyp_text_abort_guest_exit_start = abort_guest_exit_start; +__hyp_text_arm64_const_caps_ready = arm64_const_caps_ready; __hyp_text_arm64_enable_wa2_handling = arm64_enable_wa2_handling; __hyp_text_arm64_ssbd_callback_required = arm64_ssbd_callback_required; +__hyp_text_cpu_hwcap_keys = cpu_hwcap_keys; +__hyp_text_cpu_hwcaps = cpu_hwcaps; __hyp_text_hyp_panic = hyp_panic; __hyp_text_kimage_voffset = kimage_voffset; __hyp_text_kvm_host_data = kvm_host_data; diff --git a/arch/arm64/kvm/hyp/nvhe/Makefile b/arch/arm64/kvm/hyp/nvhe/Makefile index fa20b2723652..0da836cb5580 100644 --- a/arch/arm64/kvm/hyp/nvhe/Makefile +++ b/arch/arm64/kvm/hyp/nvhe/Makefile @@ -7,7 +7,7 @@ asflags-y := -D__HYPERVISOR__ ccflags-y := -D__HYPERVISOR__ -fno-stack-protector -DDISABLE_BRANCH_PROFILING \ $(DISABLE_STACKLEAK_PLUGIN) -obj-y := host_hypercall.o ../hyp-entry.o +obj-y := tlb.o host_hypercall.o ../hyp-entry.o obj-y := $(patsubst %.o,%.hyp.o,$(obj-y)) extra-y := $(patsubst %.hyp.o,%.hyp.tmp.o,$(obj-y)) diff --git a/arch/arm64/kvm/hyp/nvhe/tlb.c b/arch/arm64/kvm/hyp/nvhe/tlb.c new file mode 100644 index 000000000000..1b8f4000f98c --- /dev/null +++ b/arch/arm64/kvm/hyp/nvhe/tlb.c @@ -0,0 +1,69 @@ +// SPDX-License-Identifier: GPL-2.0-only +/* + * Copyright (C) 2015 - ARM Ltd + * Author: Marc Zyngier + */ + +#include + +#include +#include +#include + +#include "../tlb.h" + +static void __hyp_text __tlb_switch_to_guest(struct kvm *kvm, + struct tlb_inv_context *cxt) +{ + if (cpus_have_final_cap(ARM64_WORKAROUND_SPECULATIVE_AT_NVHE)) { + u64 val; + + /* + * For CPUs that are affected by ARM 1319367, we need to + * avoid a host Stage-1 walk while we have the guest's + * VMID set in the VTTBR in order to invalidate TLBs. + * We're guaranteed that the S1 MMU is enabled, so we can + * simply set the EPD bits to avoid any further TLB fill. 
+ */ + val = cxt->tcr = read_sysreg_el1(SYS_TCR); + val |= TCR_EPD1_MASK | TCR_EPD0_MASK; + write_sysreg_el1(val, SYS_TCR); + isb(); + } + + __load_guest_stage2(kvm); + isb(); +} + +static void __hyp_text __tlb_switch_to_host(struct kvm *kvm, + struct tlb_inv_context *cxt) +{ + write_sysreg(0, vttbr_el2); + + if (cpus_have_final_cap(ARM64_WORKAROUND_SPECULATIVE_AT_NVHE)) { + /* Ensure write of the host VMID */ + isb(); + /* Restore the host's TCR_EL1 */ + write_sysreg_el1(cxt->tcr, SYS_TCR); + } +} + +void __hyp_text __kvm_tlb_flush_vmid_ipa(struct kvm *kvm, phys_addr_t ipa) +{ + __tlb_flush_vmid_ipa(kvm, ipa); +} + +void __hyp_text __kvm_tlb_flush_vmid(struct kvm *kvm) +{ + __tlb_flush_vmid(kvm); +} + +void __hyp_text __kvm_tlb_flush_local_vmid(struct kvm_vcpu *vcpu) +{ + __tlb_flush_local_vmid(vcpu); +} + +void __hyp_text __kvm_flush_vm_context(void) +{ + __tlb_flush_vm_context(); +} diff --git a/arch/arm64/kvm/hyp/tlb.c b/arch/arm64/kvm/hyp/tlb.c index ceaddbe4279f..ab55b0c4a80c 100644 --- a/arch/arm64/kvm/hyp/tlb.c +++ b/arch/arm64/kvm/hyp/tlb.c @@ -10,14 +10,10 @@ #include #include -struct tlb_inv_context { - unsigned long flags; - u64 tcr; - u64 sctlr; -}; +#include "tlb.h" -static void __hyp_text __tlb_switch_to_guest_vhe(struct kvm *kvm, - struct tlb_inv_context *cxt) +static void __hyp_text __tlb_switch_to_guest(struct kvm *kvm, + struct tlb_inv_context *cxt) { u64 val; @@ -60,40 +56,8 @@ static void __hyp_text __tlb_switch_to_guest_vhe(struct kvm *kvm, isb(); } -static void __hyp_text __tlb_switch_to_guest_nvhe(struct kvm *kvm, - struct tlb_inv_context *cxt) -{ - if (cpus_have_final_cap(ARM64_WORKAROUND_SPECULATIVE_AT_NVHE)) { - u64 val; - - /* - * For CPUs that are affected by ARM 1319367, we need to - * avoid a host Stage-1 walk while we have the guest's - * VMID set in the VTTBR in order to invalidate TLBs. - * We're guaranteed that the S1 MMU is enabled, so we can - * simply set the EPD bits to avoid any further TLB fill. 
- */ - val = cxt->tcr = read_sysreg_el1(SYS_TCR); - val |= TCR_EPD1_MASK | TCR_EPD0_MASK; - write_sysreg_el1(val, SYS_TCR); - isb(); - } - - __load_guest_stage2(kvm); - isb(); -} - -static void __hyp_text __tlb_switch_to_guest(struct kvm *kvm, - struct tlb_inv_context *cxt) -{ - if (has_vhe()) - __tlb_switch_to_guest_vhe(kvm, cxt); - else - __tlb_switch_to_guest_nvhe(kvm, cxt); -} - -static void __hyp_text __tlb_switch_to_host_vhe(struct kvm *kvm, - struct tlb_inv_context *cxt) +static void __hyp_text __tlb_switch_to_host(struct kvm *kvm, + struct tlb_inv_context *cxt) { /* * We're done with the TLB operation, let's restore the host's @@ -112,130 +76,22 @@ static void __hyp_text __tlb_switch_to_host_vhe(struct kvm *kvm, local_irq_restore(cxt->flags); } -static void __hyp_text __tlb_switch_to_host_nvhe(struct kvm *kvm, - struct tlb_inv_context *cxt) +void __kvm_tlb_flush_vmid_ipa(struct kvm *kvm, phys_addr_t ipa) { - write_sysreg(0, vttbr_el2); - - if (cpus_have_final_cap(ARM64_WORKAROUND_SPECULATIVE_AT_NVHE)) { - /* Ensure write of the host VMID */ - isb(); - /* Restore the host's TCR_EL1 */ - write_sysreg_el1(cxt->tcr, SYS_TCR); - } + __tlb_flush_vmid_ipa(kvm, ipa); } -static void __hyp_text __tlb_switch_to_host(struct kvm *kvm, - struct tlb_inv_context *cxt) +void __kvm_tlb_flush_vmid(struct kvm *kvm) { - if (has_vhe()) - __tlb_switch_to_host_vhe(kvm, cxt); - else - __tlb_switch_to_host_nvhe(kvm, cxt); + __tlb_flush_vmid(kvm); } -void __hyp_text __kvm_tlb_flush_vmid_ipa(struct kvm *kvm, phys_addr_t ipa) +void __kvm_tlb_flush_local_vmid(struct kvm_vcpu *vcpu) { - struct tlb_inv_context cxt; - - dsb(ishst); - - /* Switch to requested VMID */ - kvm = kern_hyp_va(kvm); - __tlb_switch_to_guest(kvm, &cxt); - - /* - * We could do so much better if we had the VA as well. - * Instead, we invalidate Stage-2 for this IPA, and the - * whole of Stage-1. Weep... - */ - ipa >>= 12; - __tlbi(ipas2e1is, ipa); - - /* - * We have to ensure completion of the invalidation at Stage-2, - * since a table walk on another CPU could refill a TLB with a - * complete (S1 + S2) walk based on the old Stage-2 mapping if - * the Stage-1 invalidation happened first. - */ - dsb(ish); - __tlbi(vmalle1is); - dsb(ish); - isb(); - - /* - * If the host is running at EL1 and we have a VPIPT I-cache, - * then we must perform I-cache maintenance at EL2 in order for - * it to have an effect on the guest. Since the guest cannot hit - * I-cache lines allocated with a different VMID, we don't need - * to worry about junk out of guest reset (we nuke the I-cache on - * VMID rollover), but we do need to be careful when remapping - * executable pages for the same guest. This can happen when KSM - * takes a CoW fault on an executable page, copies the page into - * a page that was previously mapped in the guest and then needs - * to invalidate the guest view of the I-cache for that page - * from EL1. To solve this, we invalidate the entire I-cache when - * unmapping a page from a guest if we have a VPIPT I-cache but - * the host is running at EL1. As above, we could do better if - * we had the VA. - * - * The moral of this story is: if you have a VPIPT I-cache, then - * you should be running with VHE enabled. 
- */ - if (!has_vhe() && icache_is_vpipt()) - __flush_icache_all(); - - __tlb_switch_to_host(kvm, &cxt); + __tlb_flush_local_vmid(vcpu); } -void __hyp_text __kvm_tlb_flush_vmid(struct kvm *kvm) +void __kvm_flush_vm_context(void) { - struct tlb_inv_context cxt; - - dsb(ishst); - - /* Switch to requested VMID */ - kvm = kern_hyp_va(kvm); - __tlb_switch_to_guest(kvm, &cxt); - - __tlbi(vmalls12e1is); - dsb(ish); - isb(); - - __tlb_switch_to_host(kvm, &cxt); -} - -void __hyp_text __kvm_tlb_flush_local_vmid(struct kvm_vcpu *vcpu) -{ - struct kvm *kvm = kern_hyp_va(kern_hyp_va(vcpu)->kvm); - struct tlb_inv_context cxt; - - /* Switch to requested VMID */ - __tlb_switch_to_guest(kvm, &cxt); - - __tlbi(vmalle1); - dsb(nsh); - isb(); - - __tlb_switch_to_host(kvm, &cxt); -} - -void __hyp_text __kvm_flush_vm_context(void) -{ - dsb(ishst); - __tlbi(alle1is); - - /* - * VIPT and PIPT caches are not affected by VMID, so no maintenance - * is necessary across a VMID rollover. - * - * VPIPT caches constrain lookup and maintenance to the active VMID, - * so we need to invalidate lines with a stale VMID to avoid an ABA - * race after multiple rollovers. - * - */ - if (icache_is_vpipt()) - asm volatile("ic ialluis"); - - dsb(ish); + __tlb_flush_vm_context(); } diff --git a/arch/arm64/kvm/hyp/tlb.h b/arch/arm64/kvm/hyp/tlb.h new file mode 100644 index 000000000000..f62ed96cb896 --- /dev/null +++ b/arch/arm64/kvm/hyp/tlb.h @@ -0,0 +1,129 @@ +// SPDX-License-Identifier: GPL-2.0-only +/* + * Copyright (C) 2015 - ARM Ltd + * Author: Marc Zyngier + */ + +#include + +#include +#include +#include + +struct tlb_inv_context { + unsigned long flags; + u64 tcr; + u64 sctlr; +}; + +static void __hyp_text __tlb_switch_to_guest(struct kvm *kvm, + struct tlb_inv_context *cxt); +static void __hyp_text __tlb_switch_to_host(struct kvm *kvm, + struct tlb_inv_context *cxt); + +static inline void __hyp_text +__tlb_flush_vmid_ipa(struct kvm *kvm, phys_addr_t ipa) +{ + struct tlb_inv_context cxt; + + dsb(ishst); + + /* Switch to requested VMID */ + kvm = kern_hyp_va(kvm); + __tlb_switch_to_guest(kvm, &cxt); + + /* + * We could do so much better if we had the VA as well. + * Instead, we invalidate Stage-2 for this IPA, and the + * whole of Stage-1. Weep... + */ + ipa >>= 12; + __tlbi(ipas2e1is, ipa); + + /* + * We have to ensure completion of the invalidation at Stage-2, + * since a table walk on another CPU could refill a TLB with a + * complete (S1 + S2) walk based on the old Stage-2 mapping if + * the Stage-1 invalidation happened first. + */ + dsb(ish); + __tlbi(vmalle1is); + dsb(ish); + isb(); + + /* + * If the host is running at EL1 and we have a VPIPT I-cache, + * then we must perform I-cache maintenance at EL2 in order for + * it to have an effect on the guest. Since the guest cannot hit + * I-cache lines allocated with a different VMID, we don't need + * to worry about junk out of guest reset (we nuke the I-cache on + * VMID rollover), but we do need to be careful when remapping + * executable pages for the same guest. This can happen when KSM + * takes a CoW fault on an executable page, copies the page into + * a page that was previously mapped in the guest and then needs + * to invalidate the guest view of the I-cache for that page + * from EL1. To solve this, we invalidate the entire I-cache when + * unmapping a page from a guest if we have a VPIPT I-cache but + * the host is running at EL1. As above, we could do better if + * we had the VA. 
+ * + * The moral of this story is: if you have a VPIPT I-cache, then + * you should be running with VHE enabled. + */ + if (!has_vhe() && icache_is_vpipt()) + __flush_icache_all(); + + __tlb_switch_to_host(kvm, &cxt); +} + +static inline void __hyp_text __tlb_flush_vmid(struct kvm *kvm) +{ + struct tlb_inv_context cxt; + + dsb(ishst); + + /* Switch to requested VMID */ + kvm = kern_hyp_va(kvm); + __tlb_switch_to_guest(kvm, &cxt); + + __tlbi(vmalls12e1is); + dsb(ish); + isb(); + + __tlb_switch_to_host(kvm, &cxt); +} + +static inline void __hyp_text __tlb_flush_local_vmid(struct kvm_vcpu *vcpu) +{ + struct kvm *kvm = kern_hyp_va(kern_hyp_va(vcpu)->kvm); + struct tlb_inv_context cxt; + + /* Switch to requested VMID */ + __tlb_switch_to_guest(kvm, &cxt); + + __tlbi(vmalle1); + dsb(nsh); + isb(); + + __tlb_switch_to_host(kvm, &cxt); +} + +static inline void __hyp_text __tlb_flush_vm_context(void) +{ + dsb(ishst); + __tlbi(alle1is); + + /* + * VIPT and PIPT caches are not affected by VMID, so no maintenance + * is necessary across a VMID rollover. + * + * VPIPT caches constrain lookup and maintenance to the active VMID, + * so we need to invalidate lines with a stale VMID to avoid an ABA + * race after multiple rollovers. + * + */ + if (icache_is_vpipt()) + asm volatile("ic ialluis"); + + dsb(ish); +}
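To make the effect of the split concrete: before this patch, hyp/tlb.c picked the VHE or nVHE path at runtime, roughly as in the hypothetical sketch below; after it, each build of the file defines its own __tlb_switch_to_guest()/__tlb_switch_to_host() directly, so the runtime branch disappears from the hyp code.

/* runtime_dispatch_demo.c — illustrative sketch only. This mirrors the
 * deleted wrappers in hyp/tlb.c, which branched on has_vhe() per call. */
#include <stdbool.h>
#include <stdio.h>

static bool has_vhe(void)
{
	return true;	/* stand-in for the real CPU capability check */
}

static void switch_to_guest_vhe(void)  { puts("VHE context switch");  }
static void switch_to_guest_nvhe(void) { puts("nVHE context switch"); }

/* Pre-split shape: one object carries both paths and dispatches at
 * runtime. Post-split, the VHE and nVHE objects each compile only
 * their own variant. */
static void switch_to_guest(void)
{
	if (has_vhe())
		switch_to_guest_vhe();
	else
		switch_to_guest_nvhe();
}

int main(void)
{
	switch_to_guest();
	return 0;
}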
From patchwork Thu Apr 30 14:48:24 2020
From: David Brazdil
To: Catalin Marinas, James Morse, Julien Thierry, Marc Zyngier, Suzuki K Poulose, Will Deacon
Subject: [PATCH 08/15] arm64: kvm: Split hyp/switch.c to VHE/nVHE
Date: Thu, 30 Apr 2020 15:48:24 +0100
Message-Id: <20200430144831.59194-9-dbrazdil@google.com>
In-Reply-To: <20200430144831.59194-1-dbrazdil@google.com>
References: <20200430144831.59194-1-dbrazdil@google.com>
Cc: David Brazdil, kvmarm@lists.cs.columbia.edu, linux-arm-kernel@lists.infradead.org, linux-kernel@vger.kernel.org

This patch is part of a series which builds KVM's non-VHE hyp code separately from VHE and the rest of the kernel.

switch.c implements context-switching for KVM, with large parts shared between VHE and nVHE. The common routines are moved to switch.h; VHE-specific code is left in switch.c, and nVHE-specific code is moved to nvhe/switch.c. Previously, __kvm_vcpu_run needed a different symbol name for VHE and nVHE. This is cleaned up, and the caller in arm.c is simplified.

Signed-off-by: David Brazdil
---
arch/arm64/include/asm/kvm_asm.h | 4 +- arch/arm64/include/asm/kvm_host_hypercalls.h | 4 +- arch/arm64/include/asm/kvm_hyp.h | 5 + arch/arm64/kernel/image-vars.h | 25 +- arch/arm64/kvm/arm.c | 6 +- arch/arm64/kvm/hyp/hyp-entry.S | 2 + arch/arm64/kvm/hyp/nvhe/Makefile | 2 +- arch/arm64/kvm/hyp/nvhe/switch.c | 271 ++++++++ arch/arm64/kvm/hyp/switch.c | 688 +------------------ arch/arm64/kvm/hyp/switch.h | 441 ++++++++++++ arch/arm64/kvm/hyp/sysreg-sr.c | 4 +- 11 files changed, 764 insertions(+), 688 deletions(-) create mode 100644 arch/arm64/kvm/hyp/nvhe/switch.c create mode 100644 arch/arm64/kvm/hyp/switch.h diff --git a/arch/arm64/include/asm/kvm_asm.h b/arch/arm64/include/asm/kvm_asm.h index cdaf3df8085d..0cb229b9e148 100644 --- a/arch/arm64/include/asm/kvm_asm.h +++ b/arch/arm64/include/asm/kvm_asm.h @@ -86,9 +86,7 @@ extern void __kvm_tlb_flush_local_vmid(struct kvm_vcpu *vcpu); extern void __kvm_timer_set_cntvoff(u32 cntvoff_low, u32 cntvoff_high); -extern int kvm_vcpu_run_vhe(struct kvm_vcpu *vcpu); - -extern int __kvm_vcpu_run_nvhe(struct kvm_vcpu *vcpu); +extern int __kvm_vcpu_run(struct kvm_vcpu *vcpu); extern u64 __vgic_v3_get_ich_vtr_el2(void); extern u64 __vgic_v3_read_vmcr(void); diff --git a/arch/arm64/include/asm/kvm_host_hypercalls.h b/arch/arm64/include/asm/kvm_host_hypercalls.h index af8ad505d816..cc45930fdc76 100644 --- a/arch/arm64/include/asm/kvm_host_hypercalls.h +++ b/arch/arm64/include/asm/kvm_host_hypercalls.h @@ -22,8 +22,8 @@ __KVM_HOST_HCALL(__kvm_tlb_flush_local_vmid) #define __KVM_HOST_HCALL_TABLE_IDX___kvm_flush_vm_context 4 __KVM_HOST_HCALL(__kvm_flush_vm_context) -#define __KVM_HOST_HCALL_TABLE_IDX___kvm_vcpu_run_nvhe 5 -__KVM_HOST_HCALL(__kvm_vcpu_run_nvhe) +#define __KVM_HOST_HCALL_TABLE_IDX___kvm_vcpu_run 5 +__KVM_HOST_HCALL(__kvm_vcpu_run) #define __KVM_HOST_HCALL_TABLE_IDX___kvm_tlb_flush_vmid 6 __KVM_HOST_HCALL(__kvm_tlb_flush_vmid) diff --git a/arch/arm64/include/asm/kvm_hyp.h b/arch/arm64/include/asm/kvm_hyp.h index fe57f60f06a8..b5895040c16a 100644 --- a/arch/arm64/include/asm/kvm_hyp.h +++ b/arch/arm64/include/asm/kvm_hyp.h @@ -82,11 +82,16 @@ void __debug_switch_to_host(struct kvm_vcpu *vcpu); void __fpsimd_save_state(struct user_fpsimd_state *fp_regs); void __fpsimd_restore_state(struct user_fpsimd_state *fp_regs); +#ifndef __HYPERVISOR__ void
activate_traps_vhe_load(struct kvm_vcpu *vcpu); void deactivate_traps_vhe_put(void); +#endif u64 __guest_enter(struct kvm_vcpu *vcpu, struct kvm_cpu_context *host_ctxt); + +#ifdef __HYPERVISOR__ void __noreturn __hyp_do_panic(unsigned long, ...); +#endif /* * Must be called from hyp code running at EL2 with an updated VTTBR diff --git a/arch/arm64/kernel/image-vars.h b/arch/arm64/kernel/image-vars.h index c0a0ec238854..e2282a4304c5 100644 --- a/arch/arm64/kernel/image-vars.h +++ b/arch/arm64/kernel/image-vars.h @@ -61,18 +61,34 @@ __efistub__ctype = _ctype; * memory mappings. */ +__hyp_text___debug_switch_to_guest = __debug_switch_to_guest; +__hyp_text___debug_switch_to_host = __debug_switch_to_host; +__hyp_text___fpsimd_restore_state = __fpsimd_restore_state; +__hyp_text___fpsimd_save_state = __fpsimd_save_state; +__hyp_text___guest_enter = __guest_enter; __hyp_text___guest_exit = __guest_exit; __hyp_text___icache_flags = __icache_flags; __hyp_text___kvm_enable_ssbs = __kvm_enable_ssbs; __hyp_text___kvm_get_mdcr_el2 = __kvm_get_mdcr_el2; __hyp_text___kvm_handle_stub_hvc = __kvm_handle_stub_hvc; __hyp_text___kvm_timer_set_cntvoff = __kvm_timer_set_cntvoff; -__hyp_text___kvm_vcpu_run_nvhe = __kvm_vcpu_run_nvhe; +__hyp_text___sysreg32_restore_state = __sysreg32_restore_state; +__hyp_text___sysreg32_save_state = __sysreg32_save_state; +__hyp_text___sysreg_restore_state_nvhe = __sysreg_restore_state_nvhe; +__hyp_text___sysreg_save_state_nvhe = __sysreg_save_state_nvhe; +__hyp_text___timer_disable_traps = __timer_disable_traps; +__hyp_text___timer_enable_traps = __timer_enable_traps; +__hyp_text___vgic_v2_perform_cpuif_access = __vgic_v2_perform_cpuif_access; +__hyp_text___vgic_v3_activate_traps = __vgic_v3_activate_traps; +__hyp_text___vgic_v3_deactivate_traps = __vgic_v3_deactivate_traps; __hyp_text___vgic_v3_get_ich_vtr_el2 = __vgic_v3_get_ich_vtr_el2; __hyp_text___vgic_v3_init_lrs = __vgic_v3_init_lrs; +__hyp_text___vgic_v3_perform_cpuif_access = __vgic_v3_perform_cpuif_access; __hyp_text___vgic_v3_read_vmcr = __vgic_v3_read_vmcr; __hyp_text___vgic_v3_restore_aprs = __vgic_v3_restore_aprs; +__hyp_text___vgic_v3_restore_state = __vgic_v3_restore_state; __hyp_text___vgic_v3_save_aprs = __vgic_v3_save_aprs; +__hyp_text___vgic_v3_save_state = __vgic_v3_save_state; __hyp_text___vgic_v3_write_vmcr = __vgic_v3_write_vmcr; __hyp_text_abort_guest_exit_end = abort_guest_exit_end; __hyp_text_abort_guest_exit_start = abort_guest_exit_start; @@ -81,13 +97,18 @@ __hyp_text_arm64_enable_wa2_handling = arm64_enable_wa2_handling; __hyp_text_arm64_ssbd_callback_required = arm64_ssbd_callback_required; __hyp_text_cpu_hwcap_keys = cpu_hwcap_keys; __hyp_text_cpu_hwcaps = cpu_hwcaps; -__hyp_text_hyp_panic = hyp_panic; __hyp_text_kimage_voffset = kimage_voffset; __hyp_text_kvm_host_data = kvm_host_data; __hyp_text_kvm_patch_vector_branch = kvm_patch_vector_branch; +__hyp_text_kvm_skip_instr32 = kvm_skip_instr32; __hyp_text_kvm_update_va_mask = kvm_update_va_mask; +__hyp_text_kvm_vgic_global_state = kvm_vgic_global_state; __hyp_text_panic = panic; __hyp_text_physvirt_offset = physvirt_offset; +__hyp_text_sve_load_state = sve_load_state; +__hyp_text_sve_save_state = sve_save_state; +__hyp_text_vgic_v2_cpuif_trap = vgic_v2_cpuif_trap; +__hyp_text_vgic_v3_cpuif_trap = vgic_v3_cpuif_trap; #endif /* CONFIG_KVM */ diff --git a/arch/arm64/kvm/arm.c b/arch/arm64/kvm/arm.c index c958bb37b769..dea249dc82b3 100644 --- a/arch/arm64/kvm/arm.c +++ b/arch/arm64/kvm/arm.c @@ -749,11 +749,7 @@ int kvm_arch_vcpu_ioctl_run(struct 
kvm_vcpu *vcpu, struct kvm_run *run) trace_kvm_entry(*vcpu_pc(vcpu)); guest_enter_irqoff(); - if (has_vhe()) { - ret = kvm_vcpu_run_vhe(vcpu); - } else { - ret = kvm_call_hyp_ret(__kvm_vcpu_run_nvhe, vcpu); - } + ret = kvm_call_hyp_ret(__kvm_vcpu_run, vcpu); vcpu->mode = OUTSIDE_GUEST_MODE; vcpu->stat.exits++; diff --git a/arch/arm64/kvm/hyp/hyp-entry.S b/arch/arm64/kvm/hyp/hyp-entry.S index 7e5f386c5c2d..04ea0b83b728 100644 --- a/arch/arm64/kvm/hyp/hyp-entry.S +++ b/arch/arm64/kvm/hyp/hyp-entry.S @@ -221,6 +221,7 @@ el2_error: eret sb +#ifdef __HYPERVISOR__ SYM_FUNC_START(__hyp_do_panic) mov lr, #(PSR_F_BIT | PSR_I_BIT | PSR_A_BIT | PSR_D_BIT |\ PSR_MODE_EL1h) @@ -230,6 +231,7 @@ SYM_FUNC_START(__hyp_do_panic) eret sb SYM_FUNC_END(__hyp_do_panic) +#endif SYM_CODE_START(__hyp_panic) get_host_ctxt x0, x1 diff --git a/arch/arm64/kvm/hyp/nvhe/Makefile b/arch/arm64/kvm/hyp/nvhe/Makefile index 0da836cb5580..82fbd0b76501 100644 --- a/arch/arm64/kvm/hyp/nvhe/Makefile +++ b/arch/arm64/kvm/hyp/nvhe/Makefile @@ -7,7 +7,7 @@ asflags-y := -D__HYPERVISOR__ ccflags-y := -D__HYPERVISOR__ -fno-stack-protector -DDISABLE_BRANCH_PROFILING \ $(DISABLE_STACKLEAK_PLUGIN) -obj-y := tlb.o host_hypercall.o ../hyp-entry.o +obj-y := switch.o tlb.o host_hypercall.o ../hyp-entry.o obj-y := $(patsubst %.o,%.hyp.o,$(obj-y)) extra-y := $(patsubst %.hyp.o,%.hyp.tmp.o,$(obj-y)) diff --git a/arch/arm64/kvm/hyp/nvhe/switch.c b/arch/arm64/kvm/hyp/nvhe/switch.c new file mode 100644 index 000000000000..4294beed3dc1 --- /dev/null +++ b/arch/arm64/kvm/hyp/nvhe/switch.c @@ -0,0 +1,271 @@ +// SPDX-License-Identifier: GPL-2.0-only +/* + * Copyright (C) 2015 - ARM Ltd + * Author: Marc Zyngier + */ + +#include +#include +#include +#include +#include + +#include + +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include + +#include "../switch.h" + +static void __hyp_text __activate_traps(struct kvm_vcpu *vcpu) +{ + u64 val; + + ___activate_traps(vcpu); + __activate_traps_common(vcpu); + + val = CPTR_EL2_DEFAULT; + val |= CPTR_EL2_TTA | CPTR_EL2_TZ | CPTR_EL2_TAM; + if (!update_fp_enabled(vcpu)) { + val |= CPTR_EL2_TFP; + __activate_traps_fpsimd32(vcpu); + } + + write_sysreg(val, cptr_el2); + + if (cpus_have_final_cap(ARM64_WORKAROUND_SPECULATIVE_AT_NVHE)) { + struct kvm_cpu_context *ctxt = &vcpu->arch.ctxt; + + isb(); + /* + * At this stage, and thanks to the above isb(), S2 is + * configured and enabled. We can now restore the guest's S1 + * configuration: SCTLR, and only then TCR. + */ + write_sysreg_el1(ctxt->sys_regs[SCTLR_EL1], SYS_SCTLR); + isb(); + write_sysreg_el1(ctxt->sys_regs[TCR_EL1], SYS_TCR); + } +} + +static void __hyp_text __deactivate_traps(struct kvm_vcpu *vcpu) +{ + u64 mdcr_el2; + + ___deactivate_traps(vcpu); + + mdcr_el2 = read_sysreg(mdcr_el2); + + if (cpus_have_final_cap(ARM64_WORKAROUND_SPECULATIVE_AT_NVHE)) { + u64 val; + + /* + * Set the TCR and SCTLR registers in the exact opposite + * sequence as __activate_traps (first prevent walks, + * then force the MMU on). A generous sprinkling of isb() + * ensure that things happen in this exact order. 
+ */ + val = read_sysreg_el1(SYS_TCR); + write_sysreg_el1(val | TCR_EPD1_MASK | TCR_EPD0_MASK, SYS_TCR); + isb(); + val = read_sysreg_el1(SYS_SCTLR); + write_sysreg_el1(val | SCTLR_ELx_M, SYS_SCTLR); + isb(); + } + + __deactivate_traps_common(); + + mdcr_el2 &= MDCR_EL2_HPMN_MASK; + mdcr_el2 |= MDCR_EL2_E2PB_MASK << MDCR_EL2_E2PB_SHIFT; + + write_sysreg(mdcr_el2, mdcr_el2); + write_sysreg(HCR_HOST_NVHE_FLAGS, hcr_el2); + write_sysreg(CPTR_EL2_DEFAULT, cptr_el2); +} + +static void __hyp_text __deactivate_vm(struct kvm_vcpu *vcpu) +{ + write_sysreg(0, vttbr_el2); +} + +/* Save VGICv3 state on non-VHE systems */ +static void __hyp_text __hyp_vgic_save_state(struct kvm_vcpu *vcpu) +{ + if (static_branch_unlikely(&kvm_vgic_global_state.gicv3_cpuif)) { + __vgic_v3_save_state(vcpu); + __vgic_v3_deactivate_traps(vcpu); + } +} + +/* Restore VGICv3 state on non_VEH systems */ +static void __hyp_text __hyp_vgic_restore_state(struct kvm_vcpu *vcpu) +{ + if (static_branch_unlikely(&kvm_vgic_global_state.gicv3_cpuif)) { + __vgic_v3_activate_traps(vcpu); + __vgic_v3_restore_state(vcpu); + } +} + +/** + * Disable host events, enable guest events + */ +static bool __hyp_text __pmu_switch_to_guest(struct kvm_cpu_context *host_ctxt) +{ + struct kvm_host_data *host; + struct kvm_pmu_events *pmu; + + host = container_of(host_ctxt, struct kvm_host_data, host_ctxt); + pmu = &host->pmu_events; + + if (pmu->events_host) + write_sysreg(pmu->events_host, pmcntenclr_el0); + + if (pmu->events_guest) + write_sysreg(pmu->events_guest, pmcntenset_el0); + + return (pmu->events_host || pmu->events_guest); +} + +/** + * Disable guest events, enable host events + */ +static void __hyp_text __pmu_switch_to_host(struct kvm_cpu_context *host_ctxt) +{ + struct kvm_host_data *host; + struct kvm_pmu_events *pmu; + + host = container_of(host_ctxt, struct kvm_host_data, host_ctxt); + pmu = &host->pmu_events; + + if (pmu->events_guest) + write_sysreg(pmu->events_guest, pmcntenclr_el0); + + if (pmu->events_host) + write_sysreg(pmu->events_host, pmcntenset_el0); +} + +/* Switch to the guest for legacy non-VHE systems */ +int __hyp_text __kvm_vcpu_run(struct kvm_vcpu *vcpu) +{ + struct kvm_cpu_context *host_ctxt; + struct kvm_cpu_context *guest_ctxt; + bool pmu_switch_needed; + u64 exit_code; + + /* + * Having IRQs masked via PMR when entering the guest means the GIC + * will not signal the CPU of interrupts of lower priority, and the + * only way to get out will be via guest exceptions. + * Naturally, we want to avoid this. + */ + if (system_uses_irq_prio_masking()) { + gic_write_pmr(GIC_PRIO_IRQON | GIC_PRIO_PSR_I_SET); + pmr_sync(); + } + + vcpu = kern_hyp_va(vcpu); + + host_ctxt = kern_hyp_va(vcpu->arch.host_cpu_context); + host_ctxt->__hyp_running_vcpu = vcpu; + guest_ctxt = &vcpu->arch.ctxt; + + pmu_switch_needed = __pmu_switch_to_guest(host_ctxt); + + __sysreg_save_state_nvhe(host_ctxt); + + /* + * We must restore the 32-bit state before the sysregs, thanks + * to erratum #852523 (Cortex-A57) or #853709 (Cortex-A72). + * + * Also, and in order to be able to deal with erratum #1319537 (A57) + * and #1319367 (A72), we must ensure that all VM-related sysreg are + * restored before we enable S2 translation. + */ + __sysreg32_restore_state(vcpu); + __sysreg_restore_state_nvhe(guest_ctxt); + + __activate_vm(kern_hyp_va(vcpu->kvm)); + __activate_traps(vcpu); + + __hyp_vgic_restore_state(vcpu); + __timer_enable_traps(vcpu); + + __debug_switch_to_guest(vcpu); + + __set_guest_arch_workaround_state(vcpu); + + do { + /* Jump in the fire! 
*/ + exit_code = __guest_enter(vcpu, host_ctxt); + + /* And we're baaack! */ + } while (fixup_guest_exit(vcpu, &exit_code)); + + __set_host_arch_workaround_state(vcpu); + + __sysreg_save_state_nvhe(guest_ctxt); + __sysreg32_save_state(vcpu); + __timer_disable_traps(vcpu); + __hyp_vgic_save_state(vcpu); + + __deactivate_traps(vcpu); + __deactivate_vm(vcpu); + + __sysreg_restore_state_nvhe(host_ctxt); + + if (vcpu->arch.flags & KVM_ARM64_FP_ENABLED) + __fpsimd_save_fpexc32(vcpu); + + /* + * This must come after restoring the host sysregs, since a non-VHE + * system may enable SPE here and make use of the TTBRs. + */ + __debug_switch_to_host(vcpu); + + if (pmu_switch_needed) + __pmu_switch_to_host(host_ctxt); + + /* Returning to host will clear PSR.I, remask PMR if needed */ + if (system_uses_irq_prio_masking()) + gic_write_pmr(GIC_PRIO_IRQOFF); + + return exit_code; +} + +void __hyp_text __noreturn hyp_panic(struct kvm_cpu_context *host_ctxt) +{ + u64 spsr = read_sysreg_el2(SYS_SPSR); + u64 elr = read_sysreg_el2(SYS_ELR); + u64 par = read_sysreg(par_el1); + struct kvm_vcpu *vcpu = host_ctxt->__hyp_running_vcpu; + unsigned long str_va; + + if (read_sysreg(vttbr_el2)) { + __timer_disable_traps(vcpu); + __deactivate_traps(vcpu); + __deactivate_vm(vcpu); + __sysreg_restore_state_nvhe(host_ctxt); + } + + /* + * Force the panic string to be loaded from the literal pool, + * making sure it is a kernel address and not a PC-relative + * reference. + */ + asm volatile("ldr %0, =%1" : "=r" (str_va) : "S" (__hyp_panic_string)); + + __hyp_do_panic(str_va, + spsr, elr, + read_sysreg(esr_el2), read_sysreg_el2(SYS_FAR), + read_sysreg(hpfar_el2), par, vcpu); + unreachable(); +} diff --git a/arch/arm64/kvm/hyp/switch.c b/arch/arm64/kvm/hyp/switch.c index 7a7c08029d81..1d03c5bf0b18 100644 --- a/arch/arm64/kvm/hyp/switch.c +++ b/arch/arm64/kvm/hyp/switch.c @@ -24,76 +24,14 @@ #include #include -/* Check whether the FP regs were dirtied while in the host-side run loop: */ -static bool __hyp_text update_fp_enabled(struct kvm_vcpu *vcpu) -{ - /* - * When the system doesn't support FP/SIMD, we cannot rely on - * the _TIF_FOREIGN_FPSTATE flag. However, we always inject an - * abort on the very first access to FP and thus we should never - * see KVM_ARM64_FP_ENABLED. For added safety, make sure we always - * trap the accesses. - */ - if (!system_supports_fpsimd() || - vcpu->arch.host_thread_info->flags & _TIF_FOREIGN_FPSTATE) - vcpu->arch.flags &= ~(KVM_ARM64_FP_ENABLED | - KVM_ARM64_FP_HOST); - - return !!(vcpu->arch.flags & KVM_ARM64_FP_ENABLED); -} - -/* Save the 32-bit only FPSIMD system register state */ -static void __hyp_text __fpsimd_save_fpexc32(struct kvm_vcpu *vcpu) -{ - if (!vcpu_el1_is_32bit(vcpu)) - return; - - vcpu->arch.ctxt.sys_regs[FPEXC32_EL2] = read_sysreg(fpexc32_el2); -} - -static void __hyp_text __activate_traps_fpsimd32(struct kvm_vcpu *vcpu) -{ - /* - * We are about to set CPTR_EL2.TFP to trap all floating point - * register accesses to EL2, however, the ARM ARM clearly states that - * traps are only taken to EL2 if the operation would not otherwise - * trap to EL1. Therefore, always make sure that for 32-bit guests, - * we set FPEXC.EN to prevent traps to EL1, when setting the TFP bit. - * If FP/ASIMD is not implemented, FPEXC is UNDEFINED and any access to - * it will cause an exception. 
- */ - if (vcpu_el1_is_32bit(vcpu) && system_supports_fpsimd()) { - write_sysreg(1 << 30, fpexc32_el2); - isb(); - } -} - -static void __hyp_text __activate_traps_common(struct kvm_vcpu *vcpu) -{ - /* Trap on AArch32 cp15 c15 (impdef sysregs) accesses (EL1 or EL0) */ - write_sysreg(1 << 15, hstr_el2); - - /* - * Make sure we trap PMU access from EL0 to EL2. Also sanitize - * PMSELR_EL0 to make sure it never contains the cycle - * counter, which could make a PMXEVCNTR_EL0 access UNDEF at - * EL1 instead of being trapped to EL2. - */ - write_sysreg(0, pmselr_el0); - write_sysreg(ARMV8_PMU_USERENR_MASK, pmuserenr_el0); - write_sysreg(vcpu->arch.mdcr_el2, mdcr_el2); -} - -static void __hyp_text __deactivate_traps_common(void) -{ - write_sysreg(0, hstr_el2); - write_sysreg(0, pmuserenr_el0); -} +#include "switch.h" -static void activate_traps_vhe(struct kvm_vcpu *vcpu) +static void __activate_traps(struct kvm_vcpu *vcpu) { u64 val; + ___activate_traps(vcpu); + val = read_sysreg(cpacr_el1); val |= CPACR_EL1_TTA; val &= ~CPACR_EL1_ZEN; @@ -121,59 +59,14 @@ static void activate_traps_vhe(struct kvm_vcpu *vcpu) write_sysreg(kvm_get_hyp_vector(), vbar_el1); } -NOKPROBE_SYMBOL(activate_traps_vhe); - -static void __hyp_text __activate_traps_nvhe(struct kvm_vcpu *vcpu) -{ - u64 val; - - __activate_traps_common(vcpu); - - val = CPTR_EL2_DEFAULT; - val |= CPTR_EL2_TTA | CPTR_EL2_TZ | CPTR_EL2_TAM; - if (!update_fp_enabled(vcpu)) { - val |= CPTR_EL2_TFP; - __activate_traps_fpsimd32(vcpu); - } - - write_sysreg(val, cptr_el2); - - if (cpus_have_final_cap(ARM64_WORKAROUND_SPECULATIVE_AT_NVHE)) { - struct kvm_cpu_context *ctxt = &vcpu->arch.ctxt; - - isb(); - /* - * At this stage, and thanks to the above isb(), S2 is - * configured and enabled. We can now restore the guest's S1 - * configuration: SCTLR, and only then TCR. - */ - write_sysreg_el1(ctxt->sys_regs[SCTLR_EL1], SYS_SCTLR); - isb(); - write_sysreg_el1(ctxt->sys_regs[TCR_EL1], SYS_TCR); - } -} +NOKPROBE_SYMBOL(__activate_traps); -static void __hyp_text __activate_traps(struct kvm_vcpu *vcpu) +static void __deactivate_traps(struct kvm_vcpu *vcpu) { - u64 hcr = vcpu->arch.hcr_el2; - - if (cpus_have_final_cap(ARM64_WORKAROUND_CAVIUM_TX2_219_TVM)) - hcr |= HCR_TVM; - - write_sysreg(hcr, hcr_el2); - - if (cpus_have_final_cap(ARM64_HAS_RAS_EXTN) && (hcr & HCR_VSE)) - write_sysreg_s(vcpu->arch.vsesr_el2, SYS_VSESR_EL2); + extern char vectors[]; /* kernel exception vectors */ - if (has_vhe()) - activate_traps_vhe(vcpu); - else - __activate_traps_nvhe(vcpu); -} + ___deactivate_traps(vcpu); -static void deactivate_traps_vhe(void) -{ - extern char vectors[]; /* kernel exception vectors */ write_sysreg(HCR_HOST_VHE_FLAGS, hcr_el2); /* @@ -186,57 +79,7 @@ static void deactivate_traps_vhe(void) write_sysreg(CPACR_EL1_DEFAULT, cpacr_el1); write_sysreg(vectors, vbar_el1); } -NOKPROBE_SYMBOL(deactivate_traps_vhe); - -static void __hyp_text __deactivate_traps_nvhe(void) -{ - u64 mdcr_el2 = read_sysreg(mdcr_el2); - - if (cpus_have_final_cap(ARM64_WORKAROUND_SPECULATIVE_AT_NVHE)) { - u64 val; - - /* - * Set the TCR and SCTLR registers in the exact opposite - * sequence as __activate_traps_nvhe (first prevent walks, - * then force the MMU on). A generous sprinkling of isb() - * ensure that things happen in this exact order. 
- */ - val = read_sysreg_el1(SYS_TCR); - write_sysreg_el1(val | TCR_EPD1_MASK | TCR_EPD0_MASK, SYS_TCR); - isb(); - val = read_sysreg_el1(SYS_SCTLR); - write_sysreg_el1(val | SCTLR_ELx_M, SYS_SCTLR); - isb(); - } - - __deactivate_traps_common(); - - mdcr_el2 &= MDCR_EL2_HPMN_MASK; - mdcr_el2 |= MDCR_EL2_E2PB_MASK << MDCR_EL2_E2PB_SHIFT; - - write_sysreg(mdcr_el2, mdcr_el2); - write_sysreg(HCR_HOST_NVHE_FLAGS, hcr_el2); - write_sysreg(CPTR_EL2_DEFAULT, cptr_el2); -} - -static void __hyp_text __deactivate_traps(struct kvm_vcpu *vcpu) -{ - /* - * If we pended a virtual abort, preserve it until it gets - * cleared. See D1.14.3 (Virtual Interrupts) for details, but - * the crucial bit is "On taking a vSError interrupt, - * HCR_EL2.VSE is cleared to 0." - */ - if (vcpu->arch.hcr_el2 & HCR_VSE) { - vcpu->arch.hcr_el2 &= ~HCR_VSE; - vcpu->arch.hcr_el2 |= read_sysreg(hcr_el2) & HCR_VSE; - } - - if (has_vhe()) - deactivate_traps_vhe(); - else - __deactivate_traps_nvhe(); -} +NOKPROBE_SYMBOL(__deactivate_traps); void activate_traps_vhe_load(struct kvm_vcpu *vcpu) { @@ -256,385 +99,6 @@ void deactivate_traps_vhe_put(void) __deactivate_traps_common(); } -static void __hyp_text __activate_vm(struct kvm *kvm) -{ - __load_guest_stage2(kvm); -} - -static void __hyp_text __deactivate_vm(struct kvm_vcpu *vcpu) -{ - write_sysreg(0, vttbr_el2); -} - -/* Save VGICv3 state on non-VHE systems */ -static void __hyp_text __hyp_vgic_save_state(struct kvm_vcpu *vcpu) -{ - if (static_branch_unlikely(&kvm_vgic_global_state.gicv3_cpuif)) { - __vgic_v3_save_state(vcpu); - __vgic_v3_deactivate_traps(vcpu); - } -} - -/* Restore VGICv3 state on non_VEH systems */ -static void __hyp_text __hyp_vgic_restore_state(struct kvm_vcpu *vcpu) -{ - if (static_branch_unlikely(&kvm_vgic_global_state.gicv3_cpuif)) { - __vgic_v3_activate_traps(vcpu); - __vgic_v3_restore_state(vcpu); - } -} - -static bool __hyp_text __translate_far_to_hpfar(u64 far, u64 *hpfar) -{ - u64 par, tmp; - - /* - * Resolve the IPA the hard way using the guest VA. - * - * Stage-1 translation already validated the memory access - * rights. As such, we can use the EL1 translation regime, and - * don't have to distinguish between EL0 and EL1 access. - * - * We do need to save/restore PAR_EL1 though, as we haven't - * saved the guest context yet, and we may return early... - */ - par = read_sysreg(par_el1); - asm volatile("at s1e1r, %0" : : "r" (far)); - isb(); - - tmp = read_sysreg(par_el1); - write_sysreg(par, par_el1); - - if (unlikely(tmp & SYS_PAR_EL1_F)) - return false; /* Translation failed, back to guest */ - - /* Convert PAR to HPFAR format */ - *hpfar = PAR_TO_HPFAR(tmp); - return true; -} - -static bool __hyp_text __populate_fault_info(struct kvm_vcpu *vcpu) -{ - u8 ec; - u64 esr; - u64 hpfar, far; - - esr = vcpu->arch.fault.esr_el2; - ec = ESR_ELx_EC(esr); - - if (ec != ESR_ELx_EC_DABT_LOW && ec != ESR_ELx_EC_IABT_LOW) - return true; - - far = read_sysreg_el2(SYS_FAR); - - /* - * The HPFAR can be invalid if the stage 2 fault did not - * happen during a stage 1 page table walk (the ESR_EL2.S1PTW - * bit is clear) and one of the two following cases are true: - * 1. The fault was due to a permission fault - * 2. The processor carries errata 834220 - * - * Therefore, for all non S1PTW faults where we either have a - * permission fault or the errata workaround is enabled, we - * resolve the IPA using the AT instruction. 
- */ - if (!(esr & ESR_ELx_S1PTW) && - (cpus_have_final_cap(ARM64_WORKAROUND_834220) || - (esr & ESR_ELx_FSC_TYPE) == FSC_PERM)) { - if (!__translate_far_to_hpfar(far, &hpfar)) - return false; - } else { - hpfar = read_sysreg(hpfar_el2); - } - - vcpu->arch.fault.far_el2 = far; - vcpu->arch.fault.hpfar_el2 = hpfar; - return true; -} - -/* Check for an FPSIMD/SVE trap and handle as appropriate */ -static bool __hyp_text __hyp_handle_fpsimd(struct kvm_vcpu *vcpu) -{ - bool vhe, sve_guest, sve_host; - u8 hsr_ec; - - if (!system_supports_fpsimd()) - return false; - - if (system_supports_sve()) { - sve_guest = vcpu_has_sve(vcpu); - sve_host = vcpu->arch.flags & KVM_ARM64_HOST_SVE_IN_USE; - vhe = true; - } else { - sve_guest = false; - sve_host = false; - vhe = has_vhe(); - } - - hsr_ec = kvm_vcpu_trap_get_class(vcpu); - if (hsr_ec != ESR_ELx_EC_FP_ASIMD && - hsr_ec != ESR_ELx_EC_SVE) - return false; - - /* Don't handle SVE traps for non-SVE vcpus here: */ - if (!sve_guest) - if (hsr_ec != ESR_ELx_EC_FP_ASIMD) - return false; - - /* Valid trap. Switch the context: */ - - if (vhe) { - u64 reg = read_sysreg(cpacr_el1) | CPACR_EL1_FPEN; - - if (sve_guest) - reg |= CPACR_EL1_ZEN; - - write_sysreg(reg, cpacr_el1); - } else { - write_sysreg(read_sysreg(cptr_el2) & ~(u64)CPTR_EL2_TFP, - cptr_el2); - } - - isb(); - - if (vcpu->arch.flags & KVM_ARM64_FP_HOST) { - /* - * In the SVE case, VHE is assumed: it is enforced by - * Kconfig and kvm_arch_init(). - */ - if (sve_host) { - struct thread_struct *thread = container_of( - vcpu->arch.host_fpsimd_state, - struct thread_struct, uw.fpsimd_state); - - sve_save_state(sve_pffr(thread), - &vcpu->arch.host_fpsimd_state->fpsr); - } else { - __fpsimd_save_state(vcpu->arch.host_fpsimd_state); - } - - vcpu->arch.flags &= ~KVM_ARM64_FP_HOST; - } - - if (sve_guest) { - sve_load_state(vcpu_sve_pffr(vcpu), - &vcpu->arch.ctxt.gp_regs.fp_regs.fpsr, - sve_vq_from_vl(vcpu->arch.sve_max_vl) - 1); - write_sysreg_s(vcpu->arch.ctxt.sys_regs[ZCR_EL1], SYS_ZCR_EL12); - } else { - __fpsimd_restore_state(&vcpu->arch.ctxt.gp_regs.fp_regs); - } - - /* Skip restoring fpexc32 for AArch64 guests */ - if (!(read_sysreg(hcr_el2) & HCR_RW)) - write_sysreg(vcpu->arch.ctxt.sys_regs[FPEXC32_EL2], - fpexc32_el2); - - vcpu->arch.flags |= KVM_ARM64_FP_ENABLED; - - return true; -} - -static bool __hyp_text handle_tx2_tvm(struct kvm_vcpu *vcpu) -{ - u32 sysreg = esr_sys64_to_sysreg(kvm_vcpu_get_hsr(vcpu)); - int rt = kvm_vcpu_sys_get_rt(vcpu); - u64 val = vcpu_get_reg(vcpu, rt); - - /* - * The normal sysreg handling code expects to see the traps, - * let's not do anything here. 
- */ - if (vcpu->arch.hcr_el2 & HCR_TVM) - return false; - - switch (sysreg) { - case SYS_SCTLR_EL1: - write_sysreg_el1(val, SYS_SCTLR); - break; - case SYS_TTBR0_EL1: - write_sysreg_el1(val, SYS_TTBR0); - break; - case SYS_TTBR1_EL1: - write_sysreg_el1(val, SYS_TTBR1); - break; - case SYS_TCR_EL1: - write_sysreg_el1(val, SYS_TCR); - break; - case SYS_ESR_EL1: - write_sysreg_el1(val, SYS_ESR); - break; - case SYS_FAR_EL1: - write_sysreg_el1(val, SYS_FAR); - break; - case SYS_AFSR0_EL1: - write_sysreg_el1(val, SYS_AFSR0); - break; - case SYS_AFSR1_EL1: - write_sysreg_el1(val, SYS_AFSR1); - break; - case SYS_MAIR_EL1: - write_sysreg_el1(val, SYS_MAIR); - break; - case SYS_AMAIR_EL1: - write_sysreg_el1(val, SYS_AMAIR); - break; - case SYS_CONTEXTIDR_EL1: - write_sysreg_el1(val, SYS_CONTEXTIDR); - break; - default: - return false; - } - - __kvm_skip_instr(vcpu); - return true; -} - -/* - * Return true when we were able to fixup the guest exit and should return to - * the guest, false when we should restore the host state and return to the - * main run loop. - */ -static bool __hyp_text fixup_guest_exit(struct kvm_vcpu *vcpu, u64 *exit_code) -{ - if (ARM_EXCEPTION_CODE(*exit_code) != ARM_EXCEPTION_IRQ) - vcpu->arch.fault.esr_el2 = read_sysreg_el2(SYS_ESR); - - /* - * We're using the raw exception code in order to only process - * the trap if no SError is pending. We will come back to the - * same PC once the SError has been injected, and replay the - * trapping instruction. - */ - if (*exit_code != ARM_EXCEPTION_TRAP) - goto exit; - - if (cpus_have_final_cap(ARM64_WORKAROUND_CAVIUM_TX2_219_TVM) && - kvm_vcpu_trap_get_class(vcpu) == ESR_ELx_EC_SYS64 && - handle_tx2_tvm(vcpu)) - return true; - - /* - * We trap the first access to the FP/SIMD to save the host context - * and restore the guest context lazily. - * If FP/SIMD is not implemented, handle the trap and inject an - * undefined instruction exception to the guest. - * Similarly for trapped SVE accesses. - */ - if (__hyp_handle_fpsimd(vcpu)) - return true; - - if (!__populate_fault_info(vcpu)) - return true; - - if (static_branch_unlikely(&vgic_v2_cpuif_trap)) { - bool valid; - - valid = kvm_vcpu_trap_get_class(vcpu) == ESR_ELx_EC_DABT_LOW && - kvm_vcpu_trap_get_fault_type(vcpu) == FSC_FAULT && - kvm_vcpu_dabt_isvalid(vcpu) && - !kvm_vcpu_dabt_isextabt(vcpu) && - !kvm_vcpu_dabt_iss1tw(vcpu); - - if (valid) { - int ret = __vgic_v2_perform_cpuif_access(vcpu); - - if (ret == 1) - return true; - - /* Promote an illegal access to an SError.*/ - if (ret == -1) - *exit_code = ARM_EXCEPTION_EL1_SERROR; - - goto exit; - } - } - - if (static_branch_unlikely(&vgic_v3_cpuif_trap) && - (kvm_vcpu_trap_get_class(vcpu) == ESR_ELx_EC_SYS64 || - kvm_vcpu_trap_get_class(vcpu) == ESR_ELx_EC_CP15_32)) { - int ret = __vgic_v3_perform_cpuif_access(vcpu); - - if (ret == 1) - return true; - } - -exit: - /* Return to the host kernel and handle the exit */ - return false; -} - -static inline bool __hyp_text __needs_ssbd_off(struct kvm_vcpu *vcpu) -{ - if (!cpus_have_final_cap(ARM64_SSBD)) - return false; - - return !(vcpu->arch.workaround_flags & VCPU_WORKAROUND_2_FLAG); -} - -static void __hyp_text __set_guest_arch_workaround_state(struct kvm_vcpu *vcpu) -{ -#ifdef CONFIG_ARM64_SSBD - /* - * The host runs with the workaround always present. If the - * guest wants it disabled, so be it... 
- */ - if (__needs_ssbd_off(vcpu) && - __hyp_this_cpu_read(arm64_ssbd_callback_required)) - arm_smccc_1_1_smc(ARM_SMCCC_ARCH_WORKAROUND_2, 0, NULL); -#endif -} - -static void __hyp_text __set_host_arch_workaround_state(struct kvm_vcpu *vcpu) -{ -#ifdef CONFIG_ARM64_SSBD - /* - * If the guest has disabled the workaround, bring it back on. - */ - if (__needs_ssbd_off(vcpu) && - __hyp_this_cpu_read(arm64_ssbd_callback_required)) - arm_smccc_1_1_smc(ARM_SMCCC_ARCH_WORKAROUND_2, 1, NULL); -#endif -} - -/** - * Disable host events, enable guest events - */ -static bool __hyp_text __pmu_switch_to_guest(struct kvm_cpu_context *host_ctxt) -{ - struct kvm_host_data *host; - struct kvm_pmu_events *pmu; - - host = container_of(host_ctxt, struct kvm_host_data, host_ctxt); - pmu = &host->pmu_events; - - if (pmu->events_host) - write_sysreg(pmu->events_host, pmcntenclr_el0); - - if (pmu->events_guest) - write_sysreg(pmu->events_guest, pmcntenset_el0); - - return (pmu->events_host || pmu->events_guest); -} - -/** - * Disable guest events, enable host events - */ -static void __hyp_text __pmu_switch_to_host(struct kvm_cpu_context *host_ctxt) -{ - struct kvm_host_data *host; - struct kvm_pmu_events *pmu; - - host = container_of(host_ctxt, struct kvm_host_data, host_ctxt); - pmu = &host->pmu_events; - - if (pmu->events_guest) - write_sysreg(pmu->events_guest, pmcntenclr_el0); - - if (pmu->events_host) - write_sysreg(pmu->events_host, pmcntenset_el0); -} - /* Switch to the guest for VHE systems running in EL2 */ static int __kvm_vcpu_run_vhe(struct kvm_vcpu *vcpu) { @@ -691,7 +155,7 @@ static int __kvm_vcpu_run_vhe(struct kvm_vcpu *vcpu) } NOKPROBE_SYMBOL(__kvm_vcpu_run_vhe); -int kvm_vcpu_run_vhe(struct kvm_vcpu *vcpu) +int __kvm_vcpu_run(struct kvm_vcpu *vcpu) { int ret; @@ -726,126 +190,8 @@ int kvm_vcpu_run_vhe(struct kvm_vcpu *vcpu) return ret; } -/* Switch to the guest for legacy non-VHE systems */ -int __hyp_text __kvm_vcpu_run_nvhe(struct kvm_vcpu *vcpu) -{ - struct kvm_cpu_context *host_ctxt; - struct kvm_cpu_context *guest_ctxt; - bool pmu_switch_needed; - u64 exit_code; - - /* - * Having IRQs masked via PMR when entering the guest means the GIC - * will not signal the CPU of interrupts of lower priority, and the - * only way to get out will be via guest exceptions. - * Naturally, we want to avoid this. - */ - if (system_uses_irq_prio_masking()) { - gic_write_pmr(GIC_PRIO_IRQON | GIC_PRIO_PSR_I_SET); - pmr_sync(); - } - - vcpu = kern_hyp_va(vcpu); - - host_ctxt = kern_hyp_va(vcpu->arch.host_cpu_context); - host_ctxt->__hyp_running_vcpu = vcpu; - guest_ctxt = &vcpu->arch.ctxt; - - pmu_switch_needed = __pmu_switch_to_guest(host_ctxt); - - __sysreg_save_state_nvhe(host_ctxt); - - /* - * We must restore the 32-bit state before the sysregs, thanks - * to erratum #852523 (Cortex-A57) or #853709 (Cortex-A72). - * - * Also, and in order to be able to deal with erratum #1319537 (A57) - * and #1319367 (A72), we must ensure that all VM-related sysreg are - * restored before we enable S2 translation. - */ - __sysreg32_restore_state(vcpu); - __sysreg_restore_state_nvhe(guest_ctxt); - - __activate_vm(kern_hyp_va(vcpu->kvm)); - __activate_traps(vcpu); - - __hyp_vgic_restore_state(vcpu); - __timer_enable_traps(vcpu); - - __debug_switch_to_guest(vcpu); - - __set_guest_arch_workaround_state(vcpu); - - do { - /* Jump in the fire! */ - exit_code = __guest_enter(vcpu, host_ctxt); - - /* And we're baaack! 
*/ - } while (fixup_guest_exit(vcpu, &exit_code)); - - __set_host_arch_workaround_state(vcpu); - - __sysreg_save_state_nvhe(guest_ctxt); - __sysreg32_save_state(vcpu); - __timer_disable_traps(vcpu); - __hyp_vgic_save_state(vcpu); - - __deactivate_traps(vcpu); - __deactivate_vm(vcpu); - - __sysreg_restore_state_nvhe(host_ctxt); - - if (vcpu->arch.flags & KVM_ARM64_FP_ENABLED) - __fpsimd_save_fpexc32(vcpu); - - /* - * This must come after restoring the host sysregs, since a non-VHE - * system may enable SPE here and make use of the TTBRs. - */ - __debug_switch_to_host(vcpu); - - if (pmu_switch_needed) - __pmu_switch_to_host(host_ctxt); - - /* Returning to host will clear PSR.I, remask PMR if needed */ - if (system_uses_irq_prio_masking()) - gic_write_pmr(GIC_PRIO_IRQOFF); - - return exit_code; -} - -static const char __hyp_panic_string[] = "HYP panic:\nPS:%08llx PC:%016llx ESR:%08llx\nFAR:%016llx HPFAR:%016llx PAR:%016llx\nVCPU:%p\n"; - -static void __hyp_text __hyp_call_panic_nvhe(u64 spsr, u64 elr, u64 par, - struct kvm_cpu_context *__host_ctxt) -{ - struct kvm_vcpu *vcpu; - unsigned long str_va; - - vcpu = __host_ctxt->__hyp_running_vcpu; - - if (read_sysreg(vttbr_el2)) { - __timer_disable_traps(vcpu); - __deactivate_traps(vcpu); - __deactivate_vm(vcpu); - __sysreg_restore_state_nvhe(__host_ctxt); - } - - /* - * Force the panic string to be loaded from the literal pool, - * making sure it is a kernel address and not a PC-relative - * reference. - */ - asm volatile("ldr %0, =%1" : "=r" (str_va) : "S" (__hyp_panic_string)); - - __hyp_do_panic(str_va, - spsr, elr, - read_sysreg(esr_el2), read_sysreg_el2(SYS_FAR), - read_sysreg(hpfar_el2), par, vcpu); -} - -static void __hyp_call_panic_vhe(u64 spsr, u64 elr, u64 par, - struct kvm_cpu_context *host_ctxt) +static void __hyp_call_panic(u64 spsr, u64 elr, u64 par, + struct kvm_cpu_context *host_ctxt) { struct kvm_vcpu *vcpu; vcpu = host_ctxt->__hyp_running_vcpu; @@ -858,18 +204,14 @@ static void __hyp_call_panic_vhe(u64 spsr, u64 elr, u64 par, read_sysreg_el2(SYS_ESR), read_sysreg_el2(SYS_FAR), read_sysreg(hpfar_el2), par, vcpu); } -NOKPROBE_SYMBOL(__hyp_call_panic_vhe); +NOKPROBE_SYMBOL(__hyp_call_panic); -void __hyp_text __noreturn hyp_panic(struct kvm_cpu_context *host_ctxt) +void __noreturn hyp_panic(struct kvm_cpu_context *host_ctxt) { u64 spsr = read_sysreg_el2(SYS_SPSR); u64 elr = read_sysreg_el2(SYS_ELR); u64 par = read_sysreg(par_el1); - if (!has_vhe()) - __hyp_call_panic_nvhe(spsr, elr, par, host_ctxt); - else - __hyp_call_panic_vhe(spsr, elr, par, host_ctxt); - + __hyp_call_panic(spsr, elr, par, host_ctxt); unreachable(); } diff --git a/arch/arm64/kvm/hyp/switch.h b/arch/arm64/kvm/hyp/switch.h new file mode 100644 index 000000000000..00ecb0111e79 --- /dev/null +++ b/arch/arm64/kvm/hyp/switch.h @@ -0,0 +1,441 @@ +// SPDX-License-Identifier: GPL-2.0-only +/* + * Copyright (C) 2015 - ARM Ltd + * Author: Marc Zyngier + */ + +#include +#include +#include +#include +#include + +#include + +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include + +static const char __hyp_panic_string[] = "HYP panic:\nPS:%08llx PC:%016llx ESR:%08llx\nFAR:%016llx HPFAR:%016llx PAR:%016llx\nVCPU:%p\n"; + +/* Check whether the FP regs were dirtied while in the host-side run loop: */ +static inline bool __hyp_text update_fp_enabled(struct kvm_vcpu *vcpu) +{ + /* + * When the system doesn't support FP/SIMD, we cannot rely on + * the _TIF_FOREIGN_FPSTATE flag. 
However, we always inject an + * abort on the very first access to FP and thus we should never + * see KVM_ARM64_FP_ENABLED. For added safety, make sure we always + * trap the accesses. + */ + if (!system_supports_fpsimd() || + vcpu->arch.host_thread_info->flags & _TIF_FOREIGN_FPSTATE) + vcpu->arch.flags &= ~(KVM_ARM64_FP_ENABLED | + KVM_ARM64_FP_HOST); + + return !!(vcpu->arch.flags & KVM_ARM64_FP_ENABLED); +} + +/* Save the 32-bit only FPSIMD system register state */ +static inline void __hyp_text __fpsimd_save_fpexc32(struct kvm_vcpu *vcpu) +{ + if (!vcpu_el1_is_32bit(vcpu)) + return; + + vcpu->arch.ctxt.sys_regs[FPEXC32_EL2] = read_sysreg(fpexc32_el2); +} + +static inline void __hyp_text __activate_traps_fpsimd32(struct kvm_vcpu *vcpu) +{ + /* + * We are about to set CPTR_EL2.TFP to trap all floating point + * register accesses to EL2, however, the ARM ARM clearly states that + * traps are only taken to EL2 if the operation would not otherwise + * trap to EL1. Therefore, always make sure that for 32-bit guests, + * we set FPEXC.EN to prevent traps to EL1, when setting the TFP bit. + * If FP/ASIMD is not implemented, FPEXC is UNDEFINED and any access to + * it will cause an exception. + */ + if (vcpu_el1_is_32bit(vcpu) && system_supports_fpsimd()) { + write_sysreg(1 << 30, fpexc32_el2); + isb(); + } +} + +static inline void __hyp_text __activate_traps_common(struct kvm_vcpu *vcpu) +{ + /* Trap on AArch32 cp15 c15 (impdef sysregs) accesses (EL1 or EL0) */ + write_sysreg(1 << 15, hstr_el2); + + /* + * Make sure we trap PMU access from EL0 to EL2. Also sanitize + * PMSELR_EL0 to make sure it never contains the cycle + * counter, which could make a PMXEVCNTR_EL0 access UNDEF at + * EL1 instead of being trapped to EL2. + */ + write_sysreg(0, pmselr_el0); + write_sysreg(ARMV8_PMU_USERENR_MASK, pmuserenr_el0); + write_sysreg(vcpu->arch.mdcr_el2, mdcr_el2); +} + +static inline void __hyp_text __deactivate_traps_common(void) +{ + write_sysreg(0, hstr_el2); + write_sysreg(0, pmuserenr_el0); +} + +static inline void __hyp_text ___activate_traps(struct kvm_vcpu *vcpu) +{ + u64 hcr = vcpu->arch.hcr_el2; + + if (cpus_have_final_cap(ARM64_WORKAROUND_CAVIUM_TX2_219_TVM)) + hcr |= HCR_TVM; + + write_sysreg(hcr, hcr_el2); + + if (cpus_have_final_cap(ARM64_HAS_RAS_EXTN) && (hcr & HCR_VSE)) + write_sysreg_s(vcpu->arch.vsesr_el2, SYS_VSESR_EL2); +} + +static inline void __hyp_text ___deactivate_traps(struct kvm_vcpu *vcpu) +{ + /* + * If we pended a virtual abort, preserve it until it gets + * cleared. See D1.14.3 (Virtual Interrupts) for details, but + * the crucial bit is "On taking a vSError interrupt, + * HCR_EL2.VSE is cleared to 0." + */ + if (vcpu->arch.hcr_el2 & HCR_VSE) { + vcpu->arch.hcr_el2 &= ~HCR_VSE; + vcpu->arch.hcr_el2 |= read_sysreg(hcr_el2) & HCR_VSE; + } +} + +static inline void __hyp_text __activate_vm(struct kvm *kvm) +{ + __load_guest_stage2(kvm); +} + +static inline bool __hyp_text __translate_far_to_hpfar(u64 far, u64 *hpfar) +{ + u64 par, tmp; + + /* + * Resolve the IPA the hard way using the guest VA. + * + * Stage-1 translation already validated the memory access + * rights. As such, we can use the EL1 translation regime, and + * don't have to distinguish between EL0 and EL1 access. + * + * We do need to save/restore PAR_EL1 though, as we haven't + * saved the guest context yet, and we may return early... 
+ */ + par = read_sysreg(par_el1); + asm volatile("at s1e1r, %0" : : "r" (far)); + isb(); + + tmp = read_sysreg(par_el1); + write_sysreg(par, par_el1); + + if (unlikely(tmp & SYS_PAR_EL1_F)) + return false; /* Translation failed, back to guest */ + + /* Convert PAR to HPFAR format */ + *hpfar = PAR_TO_HPFAR(tmp); + return true; +} + +static inline bool __hyp_text __populate_fault_info(struct kvm_vcpu *vcpu) +{ + u8 ec; + u64 esr; + u64 hpfar, far; + + esr = vcpu->arch.fault.esr_el2; + ec = ESR_ELx_EC(esr); + + if (ec != ESR_ELx_EC_DABT_LOW && ec != ESR_ELx_EC_IABT_LOW) + return true; + + far = read_sysreg_el2(SYS_FAR); + + /* + * The HPFAR can be invalid if the stage 2 fault did not + * happen during a stage 1 page table walk (the ESR_EL2.S1PTW + * bit is clear) and one of the two following cases are true: + * 1. The fault was due to a permission fault + * 2. The processor carries errata 834220 + * + * Therefore, for all non S1PTW faults where we either have a + * permission fault or the errata workaround is enabled, we + * resolve the IPA using the AT instruction. + */ + if (!(esr & ESR_ELx_S1PTW) && + (cpus_have_final_cap(ARM64_WORKAROUND_834220) || + (esr & ESR_ELx_FSC_TYPE) == FSC_PERM)) { + if (!__translate_far_to_hpfar(far, &hpfar)) + return false; + } else { + hpfar = read_sysreg(hpfar_el2); + } + + vcpu->arch.fault.far_el2 = far; + vcpu->arch.fault.hpfar_el2 = hpfar; + return true; +} + +/* Check for an FPSIMD/SVE trap and handle as appropriate */ +static inline bool __hyp_text __hyp_handle_fpsimd(struct kvm_vcpu *vcpu) +{ + bool vhe, sve_guest, sve_host; + u8 hsr_ec; + + if (!system_supports_fpsimd()) + return false; + + if (system_supports_sve()) { + sve_guest = vcpu_has_sve(vcpu); + sve_host = vcpu->arch.flags & KVM_ARM64_HOST_SVE_IN_USE; + vhe = true; + } else { + sve_guest = false; + sve_host = false; + vhe = has_vhe(); + } + + hsr_ec = kvm_vcpu_trap_get_class(vcpu); + if (hsr_ec != ESR_ELx_EC_FP_ASIMD && + hsr_ec != ESR_ELx_EC_SVE) + return false; + + /* Don't handle SVE traps for non-SVE vcpus here: */ + if (!sve_guest) + if (hsr_ec != ESR_ELx_EC_FP_ASIMD) + return false; + + /* Valid trap. Switch the context: */ + + if (vhe) { + u64 reg = read_sysreg(cpacr_el1) | CPACR_EL1_FPEN; + + if (sve_guest) + reg |= CPACR_EL1_ZEN; + + write_sysreg(reg, cpacr_el1); + } else { + write_sysreg(read_sysreg(cptr_el2) & ~(u64)CPTR_EL2_TFP, + cptr_el2); + } + + isb(); + + if (vcpu->arch.flags & KVM_ARM64_FP_HOST) { + /* + * In the SVE case, VHE is assumed: it is enforced by + * Kconfig and kvm_arch_init(). 
+ */ + if (sve_host) { + struct thread_struct *thread = container_of( + vcpu->arch.host_fpsimd_state, + struct thread_struct, uw.fpsimd_state); + + sve_save_state(sve_pffr(thread), + &vcpu->arch.host_fpsimd_state->fpsr); + } else { + __fpsimd_save_state(vcpu->arch.host_fpsimd_state); + } + + vcpu->arch.flags &= ~KVM_ARM64_FP_HOST; + } + + if (sve_guest) { + sve_load_state(vcpu_sve_pffr(vcpu), + &vcpu->arch.ctxt.gp_regs.fp_regs.fpsr, + sve_vq_from_vl(vcpu->arch.sve_max_vl) - 1); + write_sysreg_s(vcpu->arch.ctxt.sys_regs[ZCR_EL1], SYS_ZCR_EL12); + } else { + __fpsimd_restore_state(&vcpu->arch.ctxt.gp_regs.fp_regs); + } + + /* Skip restoring fpexc32 for AArch64 guests */ + if (!(read_sysreg(hcr_el2) & HCR_RW)) + write_sysreg(vcpu->arch.ctxt.sys_regs[FPEXC32_EL2], + fpexc32_el2); + + vcpu->arch.flags |= KVM_ARM64_FP_ENABLED; + + return true; +} + +static inline bool __hyp_text handle_tx2_tvm(struct kvm_vcpu *vcpu) +{ + u32 sysreg = esr_sys64_to_sysreg(kvm_vcpu_get_hsr(vcpu)); + int rt = kvm_vcpu_sys_get_rt(vcpu); + u64 val = vcpu_get_reg(vcpu, rt); + + /* + * The normal sysreg handling code expects to see the traps, + * let's not do anything here. + */ + if (vcpu->arch.hcr_el2 & HCR_TVM) + return false; + + switch (sysreg) { + case SYS_SCTLR_EL1: + write_sysreg_el1(val, SYS_SCTLR); + break; + case SYS_TTBR0_EL1: + write_sysreg_el1(val, SYS_TTBR0); + break; + case SYS_TTBR1_EL1: + write_sysreg_el1(val, SYS_TTBR1); + break; + case SYS_TCR_EL1: + write_sysreg_el1(val, SYS_TCR); + break; + case SYS_ESR_EL1: + write_sysreg_el1(val, SYS_ESR); + break; + case SYS_FAR_EL1: + write_sysreg_el1(val, SYS_FAR); + break; + case SYS_AFSR0_EL1: + write_sysreg_el1(val, SYS_AFSR0); + break; + case SYS_AFSR1_EL1: + write_sysreg_el1(val, SYS_AFSR1); + break; + case SYS_MAIR_EL1: + write_sysreg_el1(val, SYS_MAIR); + break; + case SYS_AMAIR_EL1: + write_sysreg_el1(val, SYS_AMAIR); + break; + case SYS_CONTEXTIDR_EL1: + write_sysreg_el1(val, SYS_CONTEXTIDR); + break; + default: + return false; + } + + __kvm_skip_instr(vcpu); + return true; +} + +/* + * Return true when we were able to fixup the guest exit and should return to + * the guest, false when we should restore the host state and return to the + * main run loop. + */ +static inline bool __hyp_text +fixup_guest_exit(struct kvm_vcpu *vcpu, u64 *exit_code) +{ + if (ARM_EXCEPTION_CODE(*exit_code) != ARM_EXCEPTION_IRQ) + vcpu->arch.fault.esr_el2 = read_sysreg_el2(SYS_ESR); + + /* + * We're using the raw exception code in order to only process + * the trap if no SError is pending. We will come back to the + * same PC once the SError has been injected, and replay the + * trapping instruction. + */ + if (*exit_code != ARM_EXCEPTION_TRAP) + goto exit; + + if (cpus_have_final_cap(ARM64_WORKAROUND_CAVIUM_TX2_219_TVM) && + kvm_vcpu_trap_get_class(vcpu) == ESR_ELx_EC_SYS64 && + handle_tx2_tvm(vcpu)) + return true; + + /* + * We trap the first access to the FP/SIMD to save the host context + * and restore the guest context lazily. + * If FP/SIMD is not implemented, handle the trap and inject an + * undefined instruction exception to the guest. + * Similarly for trapped SVE accesses. 
+ */ + if (__hyp_handle_fpsimd(vcpu)) + return true; + + if (!__populate_fault_info(vcpu)) + return true; + + if (static_branch_unlikely(&vgic_v2_cpuif_trap)) { + bool valid; + + valid = kvm_vcpu_trap_get_class(vcpu) == ESR_ELx_EC_DABT_LOW && + kvm_vcpu_trap_get_fault_type(vcpu) == FSC_FAULT && + kvm_vcpu_dabt_isvalid(vcpu) && + !kvm_vcpu_dabt_isextabt(vcpu) && + !kvm_vcpu_dabt_iss1tw(vcpu); + + if (valid) { + int ret = __vgic_v2_perform_cpuif_access(vcpu); + + if (ret == 1) + return true; + + /* Promote an illegal access to an SError.*/ + if (ret == -1) + *exit_code = ARM_EXCEPTION_EL1_SERROR; + + goto exit; + } + } + + if (static_branch_unlikely(&vgic_v3_cpuif_trap) && + (kvm_vcpu_trap_get_class(vcpu) == ESR_ELx_EC_SYS64 || + kvm_vcpu_trap_get_class(vcpu) == ESR_ELx_EC_CP15_32)) { + int ret = __vgic_v3_perform_cpuif_access(vcpu); + + if (ret == 1) + return true; + } + +exit: + /* Return to the host kernel and handle the exit */ + return false; +} + +static inline bool __hyp_text __needs_ssbd_off(struct kvm_vcpu *vcpu) +{ + if (!cpus_have_final_cap(ARM64_SSBD)) + return false; + + return !(vcpu->arch.workaround_flags & VCPU_WORKAROUND_2_FLAG); +} + +static inline void __hyp_text +__set_guest_arch_workaround_state(struct kvm_vcpu *vcpu) +{ +#ifdef CONFIG_ARM64_SSBD + /* + * The host runs with the workaround always present. If the + * guest wants it disabled, so be it... + */ + if (__needs_ssbd_off(vcpu) && + __hyp_this_cpu_read(arm64_ssbd_callback_required)) + arm_smccc_1_1_smc(ARM_SMCCC_ARCH_WORKAROUND_2, 0, NULL); +#endif +} + +static inline void __hyp_text +__set_host_arch_workaround_state(struct kvm_vcpu *vcpu) +{ +#ifdef CONFIG_ARM64_SSBD + /* + * If the guest has disabled the workaround, bring it back on. + */ + if (__needs_ssbd_off(vcpu) && + __hyp_this_cpu_read(arm64_ssbd_callback_required)) + arm_smccc_1_1_smc(ARM_SMCCC_ARCH_WORKAROUND_2, 1, NULL); +#endif +} diff --git a/arch/arm64/kvm/hyp/sysreg-sr.c b/arch/arm64/kvm/hyp/sysreg-sr.c index 75b1925763f1..7a261ace2405 100644 --- a/arch/arm64/kvm/hyp/sysreg-sr.c +++ b/arch/arm64/kvm/hyp/sysreg-sr.c @@ -125,7 +125,7 @@ static void __hyp_text __sysreg_restore_el1_state(struct kvm_cpu_context *ctxt) /* * Must only be done for guest registers, hence the context * test. We're coming from the host, so SCTLR.M is already - * set. Pairs with __activate_traps_nvhe(). + * set. Pairs with nVHE's __activate_traps(). */ write_sysreg_el1((ctxt->sys_regs[TCR_EL1] | TCR_EPD1_MASK | TCR_EPD0_MASK), @@ -153,7 +153,7 @@ static void __hyp_text __sysreg_restore_el1_state(struct kvm_cpu_context *ctxt) ctxt->__hyp_running_vcpu) { /* * Must only be done for host registers, hence the context - * test. Pairs with __deactivate_traps_nvhe(). + * test. Pairs with nVHE's __deactivate_traps(). 
*/ isb(); /*

From patchwork Thu Apr 30 14:48:25 2020
X-Patchwork-Submitter: David Brazdil
X-Patchwork-Id: 11520615
From: David Brazdil
To: Catalin Marinas, James Morse, Julien Thierry, Marc Zyngier, Suzuki K Poulose, Will Deacon
Cc: David Brazdil, kvmarm@lists.cs.columbia.edu, linux-arm-kernel@lists.infradead.org, linux-kernel@vger.kernel.org
Subject: [PATCH 09/15] arm64: kvm: Split hyp/debug-sr.c to VHE/nVHE
Date: Thu, 30 Apr 2020 15:48:25 +0100
Message-Id: <20200430144831.59194-10-dbrazdil@google.com>
In-Reply-To: <20200430144831.59194-1-dbrazdil@google.com>
References: <20200430144831.59194-1-dbrazdil@google.com>

This patch is part of a series which builds KVM's non-VHE hyp code separately from VHE and the rest of the kernel. debug-sr.c contains KVM's code for context-switching debug registers, with some parts shared between VHE/nVHE. These common routines are moved to debug-sr.h, VHE-specific code is left in debug-sr.c and nVHE-specific code is moved to nvhe/debug-sr.c. Functions are slightly refactored to move code hidden behind `has_vhe()` checks to the corresponding .c file.
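As a stand-alone illustration of that refactoring (every name here -- switch_debug_common(), do_spe_save(), the vhe_/nvhe_ prefixes -- is invented for this sketch, not taken from the kernel): the shared body becomes a static inline in the common header, and each variant's .c file calls it directly instead of branching on has_vhe() at run time, so the nVHE object never references VHE-only code.

	/* Minimal sketch of the has_vhe() split; compiles as plain C. */
	#include <stdio.h>

	/* Common header (cf. debug-sr.h): shared routine, inlined into both builds. */
	static inline void switch_debug_common(int vcpu_id)
	{
		printf("vcpu %d: context-switch shared debug registers\n", vcpu_id);
	}

	/* nVHE-only helper (cf. nvhe/debug-sr.c): previously guarded by !has_vhe(). */
	static void do_spe_save(int vcpu_id)
	{
		printf("vcpu %d: drain and disable SPE\n", vcpu_id);
	}

	/* nVHE entry point: the SPE step is now unconditional here. */
	static void nvhe_debug_switch_to_guest(int vcpu_id)
	{
		do_spe_save(vcpu_id);
		switch_debug_common(vcpu_id);
	}

	/* VHE entry point (cf. debug-sr.c): only the common work remains. */
	static void vhe_debug_switch_to_guest(int vcpu_id)
	{
		switch_debug_common(vcpu_id);
	}

	int main(void)
	{
		nvhe_debug_switch_to_guest(0);
		vhe_debug_switch_to_guest(1);
		return 0;
	}

Each variant's .c file is compiled into a separate object (nvhe/Makefile adds its own -D__HYPERVISOR__ flags), so the split is enforced at build/link time rather than by a run-time has_vhe() branch.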
Signed-off-by: David Brazdil --- arch/arm64/kernel/image-vars.h | 3 - arch/arm64/kvm/hyp/debug-sr.c | 214 +---------------------------- arch/arm64/kvm/hyp/debug-sr.h | 167 ++++++++++++++++++++++ arch/arm64/kvm/hyp/nvhe/Makefile | 2 +- arch/arm64/kvm/hyp/nvhe/debug-sr.c | 77 +++++++++++ 5 files changed, 251 insertions(+), 212 deletions(-) create mode 100644 arch/arm64/kvm/hyp/debug-sr.h create mode 100644 arch/arm64/kvm/hyp/nvhe/debug-sr.c diff --git a/arch/arm64/kernel/image-vars.h b/arch/arm64/kernel/image-vars.h index e2282a4304c5..0e9ede9c473a 100644 --- a/arch/arm64/kernel/image-vars.h +++ b/arch/arm64/kernel/image-vars.h @@ -61,15 +61,12 @@ __efistub__ctype = _ctype; * memory mappings. */ -__hyp_text___debug_switch_to_guest = __debug_switch_to_guest; -__hyp_text___debug_switch_to_host = __debug_switch_to_host; __hyp_text___fpsimd_restore_state = __fpsimd_restore_state; __hyp_text___fpsimd_save_state = __fpsimd_save_state; __hyp_text___guest_enter = __guest_enter; __hyp_text___guest_exit = __guest_exit; __hyp_text___icache_flags = __icache_flags; __hyp_text___kvm_enable_ssbs = __kvm_enable_ssbs; -__hyp_text___kvm_get_mdcr_el2 = __kvm_get_mdcr_el2; __hyp_text___kvm_handle_stub_hvc = __kvm_handle_stub_hvc; __hyp_text___kvm_timer_set_cntvoff = __kvm_timer_set_cntvoff; __hyp_text___sysreg32_restore_state = __sysreg32_restore_state; diff --git a/arch/arm64/kvm/hyp/debug-sr.c b/arch/arm64/kvm/hyp/debug-sr.c index 0fc9872a1467..39e0b9bcc8b7 100644 --- a/arch/arm64/kvm/hyp/debug-sr.c +++ b/arch/arm64/kvm/hyp/debug-sr.c @@ -4,221 +4,19 @@ * Author: Marc Zyngier */ -#include -#include +#include "debug-sr.h" -#include -#include -#include -#include - -#define read_debug(r,n) read_sysreg(r##n##_el1) -#define write_debug(v,r,n) write_sysreg(v, r##n##_el1) - -#define save_debug(ptr,reg,nr) \ - switch (nr) { \ - case 15: ptr[15] = read_debug(reg, 15); \ - /* Fall through */ \ - case 14: ptr[14] = read_debug(reg, 14); \ - /* Fall through */ \ - case 13: ptr[13] = read_debug(reg, 13); \ - /* Fall through */ \ - case 12: ptr[12] = read_debug(reg, 12); \ - /* Fall through */ \ - case 11: ptr[11] = read_debug(reg, 11); \ - /* Fall through */ \ - case 10: ptr[10] = read_debug(reg, 10); \ - /* Fall through */ \ - case 9: ptr[9] = read_debug(reg, 9); \ - /* Fall through */ \ - case 8: ptr[8] = read_debug(reg, 8); \ - /* Fall through */ \ - case 7: ptr[7] = read_debug(reg, 7); \ - /* Fall through */ \ - case 6: ptr[6] = read_debug(reg, 6); \ - /* Fall through */ \ - case 5: ptr[5] = read_debug(reg, 5); \ - /* Fall through */ \ - case 4: ptr[4] = read_debug(reg, 4); \ - /* Fall through */ \ - case 3: ptr[3] = read_debug(reg, 3); \ - /* Fall through */ \ - case 2: ptr[2] = read_debug(reg, 2); \ - /* Fall through */ \ - case 1: ptr[1] = read_debug(reg, 1); \ - /* Fall through */ \ - default: ptr[0] = read_debug(reg, 0); \ - } - -#define restore_debug(ptr,reg,nr) \ - switch (nr) { \ - case 15: write_debug(ptr[15], reg, 15); \ - /* Fall through */ \ - case 14: write_debug(ptr[14], reg, 14); \ - /* Fall through */ \ - case 13: write_debug(ptr[13], reg, 13); \ - /* Fall through */ \ - case 12: write_debug(ptr[12], reg, 12); \ - /* Fall through */ \ - case 11: write_debug(ptr[11], reg, 11); \ - /* Fall through */ \ - case 10: write_debug(ptr[10], reg, 10); \ - /* Fall through */ \ - case 9: write_debug(ptr[9], reg, 9); \ - /* Fall through */ \ - case 8: write_debug(ptr[8], reg, 8); \ - /* Fall through */ \ - case 7: write_debug(ptr[7], reg, 7); \ - /* Fall through */ \ - case 6: write_debug(ptr[6], reg, 6); \ - /* 
Fall through */ \ - case 5: write_debug(ptr[5], reg, 5); \ - /* Fall through */ \ - case 4: write_debug(ptr[4], reg, 4); \ - /* Fall through */ \ - case 3: write_debug(ptr[3], reg, 3); \ - /* Fall through */ \ - case 2: write_debug(ptr[2], reg, 2); \ - /* Fall through */ \ - case 1: write_debug(ptr[1], reg, 1); \ - /* Fall through */ \ - default: write_debug(ptr[0], reg, 0); \ - } - -static void __hyp_text __debug_save_spe_nvhe(u64 *pmscr_el1) -{ - u64 reg; - - /* Clear pmscr in case of early return */ - *pmscr_el1 = 0; - - /* SPE present on this CPU? */ - if (!cpuid_feature_extract_unsigned_field(read_sysreg(id_aa64dfr0_el1), - ID_AA64DFR0_PMSVER_SHIFT)) - return; - - /* Yes; is it owned by EL3? */ - reg = read_sysreg_s(SYS_PMBIDR_EL1); - if (reg & BIT(SYS_PMBIDR_EL1_P_SHIFT)) - return; - - /* No; is the host actually using the thing? */ - reg = read_sysreg_s(SYS_PMBLIMITR_EL1); - if (!(reg & BIT(SYS_PMBLIMITR_EL1_E_SHIFT))) - return; - - /* Yes; save the control register and disable data generation */ - *pmscr_el1 = read_sysreg_s(SYS_PMSCR_EL1); - write_sysreg_s(0, SYS_PMSCR_EL1); - isb(); - - /* Now drain all buffered data to memory */ - psb_csync(); - dsb(nsh); -} - -static void __hyp_text __debug_restore_spe_nvhe(u64 pmscr_el1) -{ - if (!pmscr_el1) - return; - - /* The host page table is installed, but not yet synchronised */ - isb(); - - /* Re-enable data generation */ - write_sysreg_s(pmscr_el1, SYS_PMSCR_EL1); -} - -static void __hyp_text __debug_save_state(struct kvm_vcpu *vcpu, - struct kvm_guest_debug_arch *dbg, - struct kvm_cpu_context *ctxt) -{ - u64 aa64dfr0; - int brps, wrps; - - aa64dfr0 = read_sysreg(id_aa64dfr0_el1); - brps = (aa64dfr0 >> 12) & 0xf; - wrps = (aa64dfr0 >> 20) & 0xf; - - save_debug(dbg->dbg_bcr, dbgbcr, brps); - save_debug(dbg->dbg_bvr, dbgbvr, brps); - save_debug(dbg->dbg_wcr, dbgwcr, wrps); - save_debug(dbg->dbg_wvr, dbgwvr, wrps); - - ctxt->sys_regs[MDCCINT_EL1] = read_sysreg(mdccint_el1); -} - -static void __hyp_text __debug_restore_state(struct kvm_vcpu *vcpu, - struct kvm_guest_debug_arch *dbg, - struct kvm_cpu_context *ctxt) -{ - u64 aa64dfr0; - int brps, wrps; - - aa64dfr0 = read_sysreg(id_aa64dfr0_el1); - - brps = (aa64dfr0 >> 12) & 0xf; - wrps = (aa64dfr0 >> 20) & 0xf; - - restore_debug(dbg->dbg_bcr, dbgbcr, brps); - restore_debug(dbg->dbg_bvr, dbgbvr, brps); - restore_debug(dbg->dbg_wcr, dbgwcr, wrps); - restore_debug(dbg->dbg_wvr, dbgwvr, wrps); - - write_sysreg(ctxt->sys_regs[MDCCINT_EL1], mdccint_el1); -} - -void __hyp_text __debug_switch_to_guest(struct kvm_vcpu *vcpu) +void __debug_switch_to_guest(struct kvm_vcpu *vcpu) { - struct kvm_cpu_context *host_ctxt; - struct kvm_cpu_context *guest_ctxt; - struct kvm_guest_debug_arch *host_dbg; - struct kvm_guest_debug_arch *guest_dbg; - - /* - * Non-VHE: Disable and flush SPE data generation - * VHE: The vcpu can run, but it can't hide. 
- */ - if (!has_vhe()) - __debug_save_spe_nvhe(&vcpu->arch.host_debug_state.pmscr_el1); - - if (!(vcpu->arch.flags & KVM_ARM64_DEBUG_DIRTY)) - return; - - host_ctxt = kern_hyp_va(vcpu->arch.host_cpu_context); - guest_ctxt = &vcpu->arch.ctxt; - host_dbg = &vcpu->arch.host_debug_state.regs; - guest_dbg = kern_hyp_va(vcpu->arch.debug_ptr); - - __debug_save_state(vcpu, host_dbg, host_ctxt); - __debug_restore_state(vcpu, guest_dbg, guest_ctxt); + __debug_switch_to_guest_common(vcpu); } -void __hyp_text __debug_switch_to_host(struct kvm_vcpu *vcpu) +void __debug_switch_to_host(struct kvm_vcpu *vcpu) { - struct kvm_cpu_context *host_ctxt; - struct kvm_cpu_context *guest_ctxt; - struct kvm_guest_debug_arch *host_dbg; - struct kvm_guest_debug_arch *guest_dbg; - - if (!has_vhe()) - __debug_restore_spe_nvhe(vcpu->arch.host_debug_state.pmscr_el1); - - if (!(vcpu->arch.flags & KVM_ARM64_DEBUG_DIRTY)) - return; - - host_ctxt = kern_hyp_va(vcpu->arch.host_cpu_context); - guest_ctxt = &vcpu->arch.ctxt; - host_dbg = &vcpu->arch.host_debug_state.regs; - guest_dbg = kern_hyp_va(vcpu->arch.debug_ptr); - - __debug_save_state(vcpu, guest_dbg, guest_ctxt); - __debug_restore_state(vcpu, host_dbg, host_ctxt); - - vcpu->arch.flags &= ~KVM_ARM64_DEBUG_DIRTY; + __debug_switch_to_host_common(vcpu); } -u32 __hyp_text __kvm_get_mdcr_el2(void) +u32 __kvm_get_mdcr_el2(void) { return read_sysreg(mdcr_el2); } diff --git a/arch/arm64/kvm/hyp/debug-sr.h b/arch/arm64/kvm/hyp/debug-sr.h new file mode 100644 index 000000000000..1e849f31cb74 --- /dev/null +++ b/arch/arm64/kvm/hyp/debug-sr.h @@ -0,0 +1,167 @@ +// SPDX-License-Identifier: GPL-2.0-only +/* + * Copyright (C) 2015 - ARM Ltd + * Author: Marc Zyngier + */ + +#include +#include + +#include +#include +#include +#include + +#define read_debug(r,n) read_sysreg(r##n##_el1) +#define write_debug(v,r,n) write_sysreg(v, r##n##_el1) + +#define save_debug(ptr,reg,nr) \ + switch (nr) { \ + case 15: ptr[15] = read_debug(reg, 15); \ + /* Fall through */ \ + case 14: ptr[14] = read_debug(reg, 14); \ + /* Fall through */ \ + case 13: ptr[13] = read_debug(reg, 13); \ + /* Fall through */ \ + case 12: ptr[12] = read_debug(reg, 12); \ + /* Fall through */ \ + case 11: ptr[11] = read_debug(reg, 11); \ + /* Fall through */ \ + case 10: ptr[10] = read_debug(reg, 10); \ + /* Fall through */ \ + case 9: ptr[9] = read_debug(reg, 9); \ + /* Fall through */ \ + case 8: ptr[8] = read_debug(reg, 8); \ + /* Fall through */ \ + case 7: ptr[7] = read_debug(reg, 7); \ + /* Fall through */ \ + case 6: ptr[6] = read_debug(reg, 6); \ + /* Fall through */ \ + case 5: ptr[5] = read_debug(reg, 5); \ + /* Fall through */ \ + case 4: ptr[4] = read_debug(reg, 4); \ + /* Fall through */ \ + case 3: ptr[3] = read_debug(reg, 3); \ + /* Fall through */ \ + case 2: ptr[2] = read_debug(reg, 2); \ + /* Fall through */ \ + case 1: ptr[1] = read_debug(reg, 1); \ + /* Fall through */ \ + default: ptr[0] = read_debug(reg, 0); \ + } + +#define restore_debug(ptr,reg,nr) \ + switch (nr) { \ + case 15: write_debug(ptr[15], reg, 15); \ + /* Fall through */ \ + case 14: write_debug(ptr[14], reg, 14); \ + /* Fall through */ \ + case 13: write_debug(ptr[13], reg, 13); \ + /* Fall through */ \ + case 12: write_debug(ptr[12], reg, 12); \ + /* Fall through */ \ + case 11: write_debug(ptr[11], reg, 11); \ + /* Fall through */ \ + case 10: write_debug(ptr[10], reg, 10); \ + /* Fall through */ \ + case 9: write_debug(ptr[9], reg, 9); \ + /* Fall through */ \ + case 8: write_debug(ptr[8], reg, 8); \ + /* Fall through */ \ + case 7: 
write_debug(ptr[7], reg, 7); \ + /* Fall through */ \ + case 6: write_debug(ptr[6], reg, 6); \ + /* Fall through */ \ + case 5: write_debug(ptr[5], reg, 5); \ + /* Fall through */ \ + case 4: write_debug(ptr[4], reg, 4); \ + /* Fall through */ \ + case 3: write_debug(ptr[3], reg, 3); \ + /* Fall through */ \ + case 2: write_debug(ptr[2], reg, 2); \ + /* Fall through */ \ + case 1: write_debug(ptr[1], reg, 1); \ + /* Fall through */ \ + default: write_debug(ptr[0], reg, 0); \ + } + +static inline void __hyp_text +__debug_save_state(struct kvm_vcpu *vcpu, struct kvm_guest_debug_arch *dbg, + struct kvm_cpu_context *ctxt) +{ + u64 aa64dfr0; + int brps, wrps; + + aa64dfr0 = read_sysreg(id_aa64dfr0_el1); + brps = (aa64dfr0 >> 12) & 0xf; + wrps = (aa64dfr0 >> 20) & 0xf; + + save_debug(dbg->dbg_bcr, dbgbcr, brps); + save_debug(dbg->dbg_bvr, dbgbvr, brps); + save_debug(dbg->dbg_wcr, dbgwcr, wrps); + save_debug(dbg->dbg_wvr, dbgwvr, wrps); + + ctxt->sys_regs[MDCCINT_EL1] = read_sysreg(mdccint_el1); +} + +static inline void __hyp_text +__debug_restore_state(struct kvm_vcpu *vcpu, struct kvm_guest_debug_arch *dbg, + struct kvm_cpu_context *ctxt) +{ + u64 aa64dfr0; + int brps, wrps; + + aa64dfr0 = read_sysreg(id_aa64dfr0_el1); + + brps = (aa64dfr0 >> 12) & 0xf; + wrps = (aa64dfr0 >> 20) & 0xf; + + restore_debug(dbg->dbg_bcr, dbgbcr, brps); + restore_debug(dbg->dbg_bvr, dbgbvr, brps); + restore_debug(dbg->dbg_wcr, dbgwcr, wrps); + restore_debug(dbg->dbg_wvr, dbgwvr, wrps); + + write_sysreg(ctxt->sys_regs[MDCCINT_EL1], mdccint_el1); +} + +static inline void __hyp_text +__debug_switch_to_guest_common(struct kvm_vcpu *vcpu) +{ + struct kvm_cpu_context *host_ctxt; + struct kvm_cpu_context *guest_ctxt; + struct kvm_guest_debug_arch *host_dbg; + struct kvm_guest_debug_arch *guest_dbg; + + if (!(vcpu->arch.flags & KVM_ARM64_DEBUG_DIRTY)) + return; + + host_ctxt = kern_hyp_va(vcpu->arch.host_cpu_context); + guest_ctxt = &vcpu->arch.ctxt; + host_dbg = &vcpu->arch.host_debug_state.regs; + guest_dbg = kern_hyp_va(vcpu->arch.debug_ptr); + + __debug_save_state(vcpu, host_dbg, host_ctxt); + __debug_restore_state(vcpu, guest_dbg, guest_ctxt); +} + +static inline void __hyp_text +__debug_switch_to_host_common(struct kvm_vcpu *vcpu) +{ + struct kvm_cpu_context *host_ctxt; + struct kvm_cpu_context *guest_ctxt; + struct kvm_guest_debug_arch *host_dbg; + struct kvm_guest_debug_arch *guest_dbg; + + if (!(vcpu->arch.flags & KVM_ARM64_DEBUG_DIRTY)) + return; + + host_ctxt = kern_hyp_va(vcpu->arch.host_cpu_context); + guest_ctxt = &vcpu->arch.ctxt; + host_dbg = &vcpu->arch.host_debug_state.regs; + guest_dbg = kern_hyp_va(vcpu->arch.debug_ptr); + + __debug_save_state(vcpu, guest_dbg, guest_ctxt); + __debug_restore_state(vcpu, host_dbg, host_ctxt); + + vcpu->arch.flags &= ~KVM_ARM64_DEBUG_DIRTY; +} diff --git a/arch/arm64/kvm/hyp/nvhe/Makefile b/arch/arm64/kvm/hyp/nvhe/Makefile index 82fbd0b76501..bca3b862927c 100644 --- a/arch/arm64/kvm/hyp/nvhe/Makefile +++ b/arch/arm64/kvm/hyp/nvhe/Makefile @@ -7,7 +7,7 @@ asflags-y := -D__HYPERVISOR__ ccflags-y := -D__HYPERVISOR__ -fno-stack-protector -DDISABLE_BRANCH_PROFILING \ $(DISABLE_STACKLEAK_PLUGIN) -obj-y := switch.o tlb.o host_hypercall.o ../hyp-entry.o +obj-y := debug-sr.o switch.o tlb.o host_hypercall.o ../hyp-entry.o obj-y := $(patsubst %.o,%.hyp.o,$(obj-y)) extra-y := $(patsubst %.hyp.o,%.hyp.tmp.o,$(obj-y)) diff --git a/arch/arm64/kvm/hyp/nvhe/debug-sr.c b/arch/arm64/kvm/hyp/nvhe/debug-sr.c new file mode 100644 index 000000000000..b3752cfdcf3d --- /dev/null +++ 
b/arch/arm64/kvm/hyp/nvhe/debug-sr.c @@ -0,0 +1,77 @@ +// SPDX-License-Identifier: GPL-2.0-only +/* + * Copyright (C) 2015 - ARM Ltd + * Author: Marc Zyngier + */ + +#include +#include + +#include +#include +#include +#include + +#include "../debug-sr.h" + +static void __hyp_text __debug_save_spe(u64 *pmscr_el1) +{ + u64 reg; + + /* Clear pmscr in case of early return */ + *pmscr_el1 = 0; + + /* SPE present on this CPU? */ + if (!cpuid_feature_extract_unsigned_field(read_sysreg(id_aa64dfr0_el1), + ID_AA64DFR0_PMSVER_SHIFT)) + return; + + /* Yes; is it owned by EL3? */ + reg = read_sysreg_s(SYS_PMBIDR_EL1); + if (reg & BIT(SYS_PMBIDR_EL1_P_SHIFT)) + return; + + /* No; is the host actually using the thing? */ + reg = read_sysreg_s(SYS_PMBLIMITR_EL1); + if (!(reg & BIT(SYS_PMBLIMITR_EL1_E_SHIFT))) + return; + + /* Yes; save the control register and disable data generation */ + *pmscr_el1 = read_sysreg_s(SYS_PMSCR_EL1); + write_sysreg_s(0, SYS_PMSCR_EL1); + isb(); + + /* Now drain all buffered data to memory */ + psb_csync(); + dsb(nsh); +} + +static void __hyp_text __debug_restore_spe(u64 pmscr_el1) +{ + if (!pmscr_el1) + return; + + /* The host page table is installed, but not yet synchronised */ + isb(); + + /* Re-enable data generation */ + write_sysreg_s(pmscr_el1, SYS_PMSCR_EL1); +} + +void __hyp_text __debug_switch_to_guest(struct kvm_vcpu *vcpu) +{ + /* Disable and flush SPE data generation */ + __debug_save_spe(&vcpu->arch.host_debug_state.pmscr_el1); + __debug_switch_to_guest_common(vcpu); +} + +void __hyp_text __debug_switch_to_host(struct kvm_vcpu *vcpu) +{ + __debug_restore_spe(vcpu->arch.host_debug_state.pmscr_el1); + __debug_switch_to_host_common(vcpu); +} + +u32 __hyp_text __kvm_get_mdcr_el2(void) +{ + return read_sysreg(mdcr_el2); }

From patchwork Thu Apr 30 14:48:26 2020
X-Patchwork-Submitter: David Brazdil
X-Patchwork-Id: 11520621
From: David Brazdil
To: Catalin Marinas, James Morse, Julien Thierry, Marc Zyngier, Suzuki K Poulose, Will Deacon
Cc: David Brazdil, kvmarm@lists.cs.columbia.edu, linux-arm-kernel@lists.infradead.org, linux-kernel@vger.kernel.org
Subject: [PATCH 10/15] arm64: kvm: Split hyp/sysreg-sr.c to VHE/nVHE
Date: Thu, 30 Apr 2020 15:48:26 +0100
Message-Id: <20200430144831.59194-11-dbrazdil@google.com>
In-Reply-To: <20200430144831.59194-1-dbrazdil@google.com>
References: <20200430144831.59194-1-dbrazdil@google.com>

This patch is part of a series which builds KVM's non-VHE hyp code separately from VHE and the rest of the kernel. sysreg-sr.c contains KVM's code for saving/restoring system registers, with some parts shared between VHE/nVHE. These common routines are moved to sysreg-sr.h, VHE-specific code is left in sysreg-sr.c and nVHE-specific code is moved to nvhe/sysreg-sr.c. Naming of helper functions is simplified as VHE/nVHE symbol names can no longer clash.

Signed-off-by: David Brazdil --- arch/arm64/include/asm/kvm_asm.h | 4 + arch/arm64/include/asm/kvm_host.h | 4 +- arch/arm64/include/asm/kvm_hyp.h | 4 + arch/arm64/kernel/image-vars.h | 5 - arch/arm64/kvm/hyp/nvhe/Makefile | 2 +- arch/arm64/kvm/hyp/nvhe/sysreg-sr.c | 56 +++++++ arch/arm64/kvm/hyp/sysreg-sr.c | 227 +--------------------------- arch/arm64/kvm/hyp/sysreg-sr.h | 218 ++++++++++++++++++++++++++ 8 files changed, 290 insertions(+), 230 deletions(-) create mode 100644 arch/arm64/kvm/hyp/nvhe/sysreg-sr.c create mode 100644 arch/arm64/kvm/hyp/sysreg-sr.h diff --git a/arch/arm64/include/asm/kvm_asm.h b/arch/arm64/include/asm/kvm_asm.h index 0cb229b9e148..646201a6576e 100644 --- a/arch/arm64/include/asm/kvm_asm.h +++ b/arch/arm64/include/asm/kvm_asm.h @@ -88,6 +88,10 @@ extern void __kvm_timer_set_cntvoff(u32 cntvoff_low, u32 cntvoff_high); extern int __kvm_vcpu_run(struct kvm_vcpu *vcpu); +#ifdef __HYPERVISOR__ +extern void __kvm_enable_ssbs(void); +#endif + extern u64 __vgic_v3_get_ich_vtr_el2(void); extern u64 __vgic_v3_read_vmcr(void); extern void __vgic_v3_write_vmcr(u32 vmcr); diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h index 5dec3b06f2b7..e4abe9d759bc 100644 --- a/arch/arm64/include/asm/kvm_host.h +++ b/arch/arm64/include/asm/kvm_host.h @@ -547,8 +547,6 @@ static inline void kvm_init_host_cpu_context(struct kvm_cpu_context *cpu_ctxt) cpu_ctxt->sys_regs[MPIDR_EL1] = read_cpuid_mpidr(); } -void __kvm_enable_ssbs(void); - static inline void __cpu_init_hyp_mode(phys_addr_t pgd_ptr, unsigned long hyp_stack_ptr, unsigned long vector_ptr) @@ -577,7 +575,7 @@ static inline void __cpu_init_hyp_mode(phys_addr_t pgd_ptr, */ if (!has_vhe() && this_cpu_has_cap(ARM64_SSBS) && arm64_get_ssbd_state() == ARM64_SSBD_FORCE_DISABLE) { - kvm_call_hyp(__kvm_enable_ssbs); + kvm_call_hyp_nvhe(__kvm_enable_ssbs); } } diff --git
a/arch/arm64/include/asm/kvm_hyp.h b/arch/arm64/include/asm/kvm_hyp.h index b5895040c16a..c2bcd5dea030 100644 --- a/arch/arm64/include/asm/kvm_hyp.h +++ b/arch/arm64/include/asm/kvm_hyp.h @@ -67,12 +67,16 @@ int __vgic_v3_perform_cpuif_access(struct kvm_vcpu *vcpu); void __timer_enable_traps(struct kvm_vcpu *vcpu); void __timer_disable_traps(struct kvm_vcpu *vcpu); +#ifdef __HYPERVISOR__ void __sysreg_save_state_nvhe(struct kvm_cpu_context *ctxt); void __sysreg_restore_state_nvhe(struct kvm_cpu_context *ctxt); +#else void sysreg_save_host_state_vhe(struct kvm_cpu_context *ctxt); void sysreg_restore_host_state_vhe(struct kvm_cpu_context *ctxt); void sysreg_save_guest_state_vhe(struct kvm_cpu_context *ctxt); void sysreg_restore_guest_state_vhe(struct kvm_cpu_context *ctxt); +#endif + void __sysreg32_save_state(struct kvm_vcpu *vcpu); void __sysreg32_restore_state(struct kvm_vcpu *vcpu); diff --git a/arch/arm64/kernel/image-vars.h b/arch/arm64/kernel/image-vars.h index 0e9ede9c473a..c4ff4a61eb5d 100644 --- a/arch/arm64/kernel/image-vars.h +++ b/arch/arm64/kernel/image-vars.h @@ -66,13 +66,8 @@ __hyp_text___fpsimd_save_state = __fpsimd_save_state; __hyp_text___guest_enter = __guest_enter; __hyp_text___guest_exit = __guest_exit; __hyp_text___icache_flags = __icache_flags; -__hyp_text___kvm_enable_ssbs = __kvm_enable_ssbs; __hyp_text___kvm_handle_stub_hvc = __kvm_handle_stub_hvc; __hyp_text___kvm_timer_set_cntvoff = __kvm_timer_set_cntvoff; -__hyp_text___sysreg32_restore_state = __sysreg32_restore_state; -__hyp_text___sysreg32_save_state = __sysreg32_save_state; -__hyp_text___sysreg_restore_state_nvhe = __sysreg_restore_state_nvhe; -__hyp_text___sysreg_save_state_nvhe = __sysreg_save_state_nvhe; __hyp_text___timer_disable_traps = __timer_disable_traps; __hyp_text___timer_enable_traps = __timer_enable_traps; __hyp_text___vgic_v2_perform_cpuif_access = __vgic_v2_perform_cpuif_access; diff --git a/arch/arm64/kvm/hyp/nvhe/Makefile b/arch/arm64/kvm/hyp/nvhe/Makefile index bca3b862927c..cfb55c01b3ff 100644 --- a/arch/arm64/kvm/hyp/nvhe/Makefile +++ b/arch/arm64/kvm/hyp/nvhe/Makefile @@ -7,7 +7,7 @@ asflags-y := -D__HYPERVISOR__ ccflags-y := -D__HYPERVISOR__ -fno-stack-protector -DDISABLE_BRANCH_PROFILING \ $(DISABLE_STACKLEAK_PLUGIN) -obj-y := debug-sr.o switch.o tlb.o host_hypercall.o ../hyp-entry.o +obj-y := sysreg-sr.o debug-sr.o switch.o tlb.o host_hypercall.o ../hyp-entry.o obj-y := $(patsubst %.o,%.hyp.o,$(obj-y)) extra-y := $(patsubst %.hyp.o,%.hyp.tmp.o,$(obj-y)) diff --git a/arch/arm64/kvm/hyp/nvhe/sysreg-sr.c b/arch/arm64/kvm/hyp/nvhe/sysreg-sr.c new file mode 100644 index 000000000000..55ab924d841a --- /dev/null +++ b/arch/arm64/kvm/hyp/nvhe/sysreg-sr.c @@ -0,0 +1,56 @@ +// SPDX-License-Identifier: GPL-2.0-only +/* + * Copyright (C) 2012-2015 - ARM Ltd + * Author: Marc Zyngier + */ + +#include +#include + +#include +#include +#include +#include + +#include "../sysreg-sr.h" + +/* + * Non-VHE: Both host and guest must save everything. 
+ */ + +void __hyp_text __sysreg_save_state_nvhe(struct kvm_cpu_context *ctxt) +{ + __sysreg_save_el1_state(ctxt); + __sysreg_save_common_state(ctxt); + __sysreg_save_user_state(ctxt); + __sysreg_save_el2_return_state(ctxt); +} + +void __hyp_text __sysreg_restore_state_nvhe(struct kvm_cpu_context *ctxt) +{ + __sysreg_restore_el1_state(ctxt); + __sysreg_restore_common_state(ctxt); + __sysreg_restore_user_state(ctxt); + __sysreg_restore_el2_return_state(ctxt); +} + +void __hyp_text __sysreg32_save_state(struct kvm_vcpu *vcpu) +{ + ___sysreg32_save_state(vcpu); +} + +void __hyp_text __sysreg32_restore_state(struct kvm_vcpu *vcpu) +{ + ___sysreg32_restore_state(vcpu); +} + +void __hyp_text __kvm_enable_ssbs(void) +{ + u64 tmp; + + asm volatile( + "mrs %0, sctlr_el2\n" + "orr %0, %0, %1\n" + "msr sctlr_el2, %0" + : "=&r" (tmp) : "L" (SCTLR_ELx_DSSBS)); +} diff --git a/arch/arm64/kvm/hyp/sysreg-sr.c b/arch/arm64/kvm/hyp/sysreg-sr.c index 7a261ace2405..7d4b946739e9 100644 --- a/arch/arm64/kvm/hyp/sysreg-sr.c +++ b/arch/arm64/kvm/hyp/sysreg-sr.c @@ -12,9 +12,9 @@ #include #include +#include "sysreg-sr.h" + /* - * Non-VHE: Both host and guest must save everything. - * * VHE: Host and guest must save mdscr_el1 and sp_el0 (and the PC and pstate, * which are handled as part of the el2 return state) on every switch. * tpidr_el0 and tpidrro_el0 only need to be switched when going @@ -23,66 +23,6 @@ * classes are handled as part of kvm_arch_vcpu_load and kvm_arch_vcpu_put. */ -static void __hyp_text __sysreg_save_common_state(struct kvm_cpu_context *ctxt) -{ - ctxt->sys_regs[MDSCR_EL1] = read_sysreg(mdscr_el1); - - /* - * The host arm64 Linux uses sp_el0 to point to 'current' and it must - * therefore be saved/restored on every entry/exit to/from the guest. - */ - ctxt->gp_regs.regs.sp = read_sysreg(sp_el0); -} - -static void __hyp_text __sysreg_save_user_state(struct kvm_cpu_context *ctxt) -{ - ctxt->sys_regs[TPIDR_EL0] = read_sysreg(tpidr_el0); - ctxt->sys_regs[TPIDRRO_EL0] = read_sysreg(tpidrro_el0); -} - -static void __hyp_text __sysreg_save_el1_state(struct kvm_cpu_context *ctxt) -{ - ctxt->sys_regs[CSSELR_EL1] = read_sysreg(csselr_el1); - ctxt->sys_regs[SCTLR_EL1] = read_sysreg_el1(SYS_SCTLR); - ctxt->sys_regs[ACTLR_EL1] = read_sysreg(actlr_el1); - ctxt->sys_regs[CPACR_EL1] = read_sysreg_el1(SYS_CPACR); - ctxt->sys_regs[TTBR0_EL1] = read_sysreg_el1(SYS_TTBR0); - ctxt->sys_regs[TTBR1_EL1] = read_sysreg_el1(SYS_TTBR1); - ctxt->sys_regs[TCR_EL1] = read_sysreg_el1(SYS_TCR); - ctxt->sys_regs[ESR_EL1] = read_sysreg_el1(SYS_ESR); - ctxt->sys_regs[AFSR0_EL1] = read_sysreg_el1(SYS_AFSR0); - ctxt->sys_regs[AFSR1_EL1] = read_sysreg_el1(SYS_AFSR1); - ctxt->sys_regs[FAR_EL1] = read_sysreg_el1(SYS_FAR); - ctxt->sys_regs[MAIR_EL1] = read_sysreg_el1(SYS_MAIR); - ctxt->sys_regs[VBAR_EL1] = read_sysreg_el1(SYS_VBAR); - ctxt->sys_regs[CONTEXTIDR_EL1] = read_sysreg_el1(SYS_CONTEXTIDR); - ctxt->sys_regs[AMAIR_EL1] = read_sysreg_el1(SYS_AMAIR); - ctxt->sys_regs[CNTKCTL_EL1] = read_sysreg_el1(SYS_CNTKCTL); - ctxt->sys_regs[PAR_EL1] = read_sysreg(par_el1); - ctxt->sys_regs[TPIDR_EL1] = read_sysreg(tpidr_el1); - - ctxt->gp_regs.sp_el1 = read_sysreg(sp_el1); - ctxt->gp_regs.elr_el1 = read_sysreg_el1(SYS_ELR); - ctxt->gp_regs.spsr[KVM_SPSR_EL1]= read_sysreg_el1(SYS_SPSR); -} - -static void __hyp_text __sysreg_save_el2_return_state(struct kvm_cpu_context *ctxt) -{ - ctxt->gp_regs.regs.pc = read_sysreg_el2(SYS_ELR); - ctxt->gp_regs.regs.pstate = read_sysreg_el2(SYS_SPSR); - - if (cpus_have_final_cap(ARM64_HAS_RAS_EXTN)) - 
ctxt->sys_regs[DISR_EL1] = read_sysreg_s(SYS_VDISR_EL2); -} - -void __hyp_text __sysreg_save_state_nvhe(struct kvm_cpu_context *ctxt) -{ - __sysreg_save_el1_state(ctxt); - __sysreg_save_common_state(ctxt); - __sysreg_save_user_state(ctxt); - __sysreg_save_el2_return_state(ctxt); -} - void sysreg_save_host_state_vhe(struct kvm_cpu_context *ctxt) { __sysreg_save_common_state(ctxt); @@ -96,116 +36,6 @@ void sysreg_save_guest_state_vhe(struct kvm_cpu_context *ctxt) } NOKPROBE_SYMBOL(sysreg_save_guest_state_vhe); -static void __hyp_text __sysreg_restore_common_state(struct kvm_cpu_context *ctxt) -{ - write_sysreg(ctxt->sys_regs[MDSCR_EL1], mdscr_el1); - - /* - * The host arm64 Linux uses sp_el0 to point to 'current' and it must - * therefore be saved/restored on every entry/exit to/from the guest. - */ - write_sysreg(ctxt->gp_regs.regs.sp, sp_el0); -} - -static void __hyp_text __sysreg_restore_user_state(struct kvm_cpu_context *ctxt) -{ - write_sysreg(ctxt->sys_regs[TPIDR_EL0], tpidr_el0); - write_sysreg(ctxt->sys_regs[TPIDRRO_EL0], tpidrro_el0); -} - -static void __hyp_text __sysreg_restore_el1_state(struct kvm_cpu_context *ctxt) -{ - write_sysreg(ctxt->sys_regs[MPIDR_EL1], vmpidr_el2); - write_sysreg(ctxt->sys_regs[CSSELR_EL1], csselr_el1); - - if (!cpus_have_final_cap(ARM64_WORKAROUND_SPECULATIVE_AT_NVHE)) { - write_sysreg_el1(ctxt->sys_regs[SCTLR_EL1], SYS_SCTLR); - write_sysreg_el1(ctxt->sys_regs[TCR_EL1], SYS_TCR); - } else if (!ctxt->__hyp_running_vcpu) { - /* - * Must only be done for guest registers, hence the context - * test. We're coming from the host, so SCTLR.M is already - * set. Pairs with nVHE's __activate_traps(). - */ - write_sysreg_el1((ctxt->sys_regs[TCR_EL1] | - TCR_EPD1_MASK | TCR_EPD0_MASK), - SYS_TCR); - isb(); - } - - write_sysreg(ctxt->sys_regs[ACTLR_EL1], actlr_el1); - write_sysreg_el1(ctxt->sys_regs[CPACR_EL1], SYS_CPACR); - write_sysreg_el1(ctxt->sys_regs[TTBR0_EL1], SYS_TTBR0); - write_sysreg_el1(ctxt->sys_regs[TTBR1_EL1], SYS_TTBR1); - write_sysreg_el1(ctxt->sys_regs[ESR_EL1], SYS_ESR); - write_sysreg_el1(ctxt->sys_regs[AFSR0_EL1], SYS_AFSR0); - write_sysreg_el1(ctxt->sys_regs[AFSR1_EL1], SYS_AFSR1); - write_sysreg_el1(ctxt->sys_regs[FAR_EL1], SYS_FAR); - write_sysreg_el1(ctxt->sys_regs[MAIR_EL1], SYS_MAIR); - write_sysreg_el1(ctxt->sys_regs[VBAR_EL1], SYS_VBAR); - write_sysreg_el1(ctxt->sys_regs[CONTEXTIDR_EL1],SYS_CONTEXTIDR); - write_sysreg_el1(ctxt->sys_regs[AMAIR_EL1], SYS_AMAIR); - write_sysreg_el1(ctxt->sys_regs[CNTKCTL_EL1], SYS_CNTKCTL); - write_sysreg(ctxt->sys_regs[PAR_EL1], par_el1); - write_sysreg(ctxt->sys_regs[TPIDR_EL1], tpidr_el1); - - if (cpus_have_final_cap(ARM64_WORKAROUND_SPECULATIVE_AT_NVHE) && - ctxt->__hyp_running_vcpu) { - /* - * Must only be done for host registers, hence the context - * test. Pairs with nVHE's __deactivate_traps(). - */ - isb(); - /* - * At this stage, and thanks to the above isb(), S2 is - * deconfigured and disabled. We can now restore the host's - * S1 configuration: SCTLR, and only then TCR. 
- */ - write_sysreg_el1(ctxt->sys_regs[SCTLR_EL1], SYS_SCTLR); - isb(); - write_sysreg_el1(ctxt->sys_regs[TCR_EL1], SYS_TCR); - } - - write_sysreg(ctxt->gp_regs.sp_el1, sp_el1); - write_sysreg_el1(ctxt->gp_regs.elr_el1, SYS_ELR); - write_sysreg_el1(ctxt->gp_regs.spsr[KVM_SPSR_EL1],SYS_SPSR); -} - -static void __hyp_text -__sysreg_restore_el2_return_state(struct kvm_cpu_context *ctxt) -{ - u64 pstate = ctxt->gp_regs.regs.pstate; - u64 mode = pstate & PSR_AA32_MODE_MASK; - - /* - * Safety check to ensure we're setting the CPU up to enter the guest - * in a less privileged mode. - * - * If we are attempting a return to EL2 or higher in AArch64 state, - * program SPSR_EL2 with M=EL2h and the IL bit set which ensures that - * we'll take an illegal exception state exception immediately after - * the ERET to the guest. Attempts to return to AArch32 Hyp will - * result in an illegal exception return because EL2's execution state - * is determined by SCR_EL3.RW. - */ - if (!(mode & PSR_MODE32_BIT) && mode >= PSR_MODE_EL2t) - pstate = PSR_MODE_EL2h | PSR_IL_BIT; - - write_sysreg_el2(ctxt->gp_regs.regs.pc, SYS_ELR); - write_sysreg_el2(pstate, SYS_SPSR); - - if (cpus_have_final_cap(ARM64_HAS_RAS_EXTN)) - write_sysreg_s(ctxt->sys_regs[DISR_EL1], SYS_VDISR_EL2); -} - -void __hyp_text __sysreg_restore_state_nvhe(struct kvm_cpu_context *ctxt) -{ - __sysreg_restore_el1_state(ctxt); - __sysreg_restore_common_state(ctxt); - __sysreg_restore_user_state(ctxt); - __sysreg_restore_el2_return_state(ctxt); -} - void sysreg_restore_host_state_vhe(struct kvm_cpu_context *ctxt) { __sysreg_restore_common_state(ctxt); @@ -219,48 +49,14 @@ void sysreg_restore_guest_state_vhe(struct kvm_cpu_context *ctxt) } NOKPROBE_SYMBOL(sysreg_restore_guest_state_vhe); -void __hyp_text __sysreg32_save_state(struct kvm_vcpu *vcpu) +void __sysreg32_save_state(struct kvm_vcpu *vcpu) { - u64 *spsr, *sysreg; - - if (!vcpu_el1_is_32bit(vcpu)) - return; - - spsr = vcpu->arch.ctxt.gp_regs.spsr; - sysreg = vcpu->arch.ctxt.sys_regs; - - spsr[KVM_SPSR_ABT] = read_sysreg(spsr_abt); - spsr[KVM_SPSR_UND] = read_sysreg(spsr_und); - spsr[KVM_SPSR_IRQ] = read_sysreg(spsr_irq); - spsr[KVM_SPSR_FIQ] = read_sysreg(spsr_fiq); - - sysreg[DACR32_EL2] = read_sysreg(dacr32_el2); - sysreg[IFSR32_EL2] = read_sysreg(ifsr32_el2); - - if (has_vhe() || vcpu->arch.flags & KVM_ARM64_DEBUG_DIRTY) - sysreg[DBGVCR32_EL2] = read_sysreg(dbgvcr32_el2); + ___sysreg32_save_state(vcpu); } -void __hyp_text __sysreg32_restore_state(struct kvm_vcpu *vcpu) +void __sysreg32_restore_state(struct kvm_vcpu *vcpu) { - u64 *spsr, *sysreg; - - if (!vcpu_el1_is_32bit(vcpu)) - return; - - spsr = vcpu->arch.ctxt.gp_regs.spsr; - sysreg = vcpu->arch.ctxt.sys_regs; - - write_sysreg(spsr[KVM_SPSR_ABT], spsr_abt); - write_sysreg(spsr[KVM_SPSR_UND], spsr_und); - write_sysreg(spsr[KVM_SPSR_IRQ], spsr_irq); - write_sysreg(spsr[KVM_SPSR_FIQ], spsr_fiq); - - write_sysreg(sysreg[DACR32_EL2], dacr32_el2); - write_sysreg(sysreg[IFSR32_EL2], ifsr32_el2); - - if (has_vhe() || vcpu->arch.flags & KVM_ARM64_DEBUG_DIRTY) - write_sysreg(sysreg[DBGVCR32_EL2], dbgvcr32_el2); + ___sysreg32_restore_state(vcpu); } /** @@ -329,14 +125,3 @@ void kvm_vcpu_put_sysregs(struct kvm_vcpu *vcpu) vcpu->arch.sysregs_loaded_on_cpu = false; } - -void __hyp_text __kvm_enable_ssbs(void) -{ - u64 tmp; - - asm volatile( - "mrs %0, sctlr_el2\n" - "orr %0, %0, %1\n" - "msr sctlr_el2, %0" - : "=&r" (tmp) : "L" (SCTLR_ELx_DSSBS)); -} diff --git a/arch/arm64/kvm/hyp/sysreg-sr.h b/arch/arm64/kvm/hyp/sysreg-sr.h new file mode 100644 
index 000000000000..f667d3fb680f --- /dev/null +++ b/arch/arm64/kvm/hyp/sysreg-sr.h @@ -0,0 +1,218 @@ +// SPDX-License-Identifier: GPL-2.0-only +/* + * Copyright (C) 2012-2015 - ARM Ltd + * Author: Marc Zyngier + */ + +#include +#include + +#include +#include +#include +#include + +static inline void __hyp_text +__sysreg_save_common_state(struct kvm_cpu_context *ctxt) +{ + ctxt->sys_regs[MDSCR_EL1] = read_sysreg(mdscr_el1); + + /* + * The host arm64 Linux uses sp_el0 to point to 'current' and it must + * therefore be saved/restored on every entry/exit to/from the guest. + */ + ctxt->gp_regs.regs.sp = read_sysreg(sp_el0); +} + +static inline void __hyp_text +__sysreg_save_user_state(struct kvm_cpu_context *ctxt) +{ + ctxt->sys_regs[TPIDR_EL0] = read_sysreg(tpidr_el0); + ctxt->sys_regs[TPIDRRO_EL0] = read_sysreg(tpidrro_el0); +} + +static inline void __hyp_text +__sysreg_save_el1_state(struct kvm_cpu_context *ctxt) +{ + ctxt->sys_regs[CSSELR_EL1] = read_sysreg(csselr_el1); + ctxt->sys_regs[SCTLR_EL1] = read_sysreg_el1(SYS_SCTLR); + ctxt->sys_regs[ACTLR_EL1] = read_sysreg(actlr_el1); + ctxt->sys_regs[CPACR_EL1] = read_sysreg_el1(SYS_CPACR); + ctxt->sys_regs[TTBR0_EL1] = read_sysreg_el1(SYS_TTBR0); + ctxt->sys_regs[TTBR1_EL1] = read_sysreg_el1(SYS_TTBR1); + ctxt->sys_regs[TCR_EL1] = read_sysreg_el1(SYS_TCR); + ctxt->sys_regs[ESR_EL1] = read_sysreg_el1(SYS_ESR); + ctxt->sys_regs[AFSR0_EL1] = read_sysreg_el1(SYS_AFSR0); + ctxt->sys_regs[AFSR1_EL1] = read_sysreg_el1(SYS_AFSR1); + ctxt->sys_regs[FAR_EL1] = read_sysreg_el1(SYS_FAR); + ctxt->sys_regs[MAIR_EL1] = read_sysreg_el1(SYS_MAIR); + ctxt->sys_regs[VBAR_EL1] = read_sysreg_el1(SYS_VBAR); + ctxt->sys_regs[CONTEXTIDR_EL1] = read_sysreg_el1(SYS_CONTEXTIDR); + ctxt->sys_regs[AMAIR_EL1] = read_sysreg_el1(SYS_AMAIR); + ctxt->sys_regs[CNTKCTL_EL1] = read_sysreg_el1(SYS_CNTKCTL); + ctxt->sys_regs[PAR_EL1] = read_sysreg(par_el1); + ctxt->sys_regs[TPIDR_EL1] = read_sysreg(tpidr_el1); + + ctxt->gp_regs.sp_el1 = read_sysreg(sp_el1); + ctxt->gp_regs.elr_el1 = read_sysreg_el1(SYS_ELR); + ctxt->gp_regs.spsr[KVM_SPSR_EL1]= read_sysreg_el1(SYS_SPSR); +} + +static inline void __hyp_text +__sysreg_save_el2_return_state(struct kvm_cpu_context *ctxt) +{ + ctxt->gp_regs.regs.pc = read_sysreg_el2(SYS_ELR); + ctxt->gp_regs.regs.pstate = read_sysreg_el2(SYS_SPSR); + + if (cpus_have_final_cap(ARM64_HAS_RAS_EXTN)) + ctxt->sys_regs[DISR_EL1] = read_sysreg_s(SYS_VDISR_EL2); +} + +static inline void __hyp_text +__sysreg_restore_common_state(struct kvm_cpu_context *ctxt) +{ + write_sysreg(ctxt->sys_regs[MDSCR_EL1], mdscr_el1); + + /* + * The host arm64 Linux uses sp_el0 to point to 'current' and it must + * therefore be saved/restored on every entry/exit to/from the guest. + */ + write_sysreg(ctxt->gp_regs.regs.sp, sp_el0); +} + +static inline void __hyp_text +__sysreg_restore_user_state(struct kvm_cpu_context *ctxt) +{ + write_sysreg(ctxt->sys_regs[TPIDR_EL0], tpidr_el0); + write_sysreg(ctxt->sys_regs[TPIDRRO_EL0], tpidrro_el0); +} + +static inline void __hyp_text +__sysreg_restore_el1_state(struct kvm_cpu_context *ctxt) +{ + write_sysreg(ctxt->sys_regs[MPIDR_EL1], vmpidr_el2); + write_sysreg(ctxt->sys_regs[CSSELR_EL1], csselr_el1); + + if (!cpus_have_final_cap(ARM64_WORKAROUND_SPECULATIVE_AT_NVHE)) { + write_sysreg_el1(ctxt->sys_regs[SCTLR_EL1], SYS_SCTLR); + write_sysreg_el1(ctxt->sys_regs[TCR_EL1], SYS_TCR); + } else if (!ctxt->__hyp_running_vcpu) { + /* + * Must only be done for guest registers, hence the context + * test. 
We're coming from the host, so SCTLR.M is already + * set. Pairs with nVHE's __activate_traps(). + */ + write_sysreg_el1((ctxt->sys_regs[TCR_EL1] | + TCR_EPD1_MASK | TCR_EPD0_MASK), + SYS_TCR); + isb(); + } + + write_sysreg(ctxt->sys_regs[ACTLR_EL1], actlr_el1); + write_sysreg_el1(ctxt->sys_regs[CPACR_EL1], SYS_CPACR); + write_sysreg_el1(ctxt->sys_regs[TTBR0_EL1], SYS_TTBR0); + write_sysreg_el1(ctxt->sys_regs[TTBR1_EL1], SYS_TTBR1); + write_sysreg_el1(ctxt->sys_regs[ESR_EL1], SYS_ESR); + write_sysreg_el1(ctxt->sys_regs[AFSR0_EL1], SYS_AFSR0); + write_sysreg_el1(ctxt->sys_regs[AFSR1_EL1], SYS_AFSR1); + write_sysreg_el1(ctxt->sys_regs[FAR_EL1], SYS_FAR); + write_sysreg_el1(ctxt->sys_regs[MAIR_EL1], SYS_MAIR); + write_sysreg_el1(ctxt->sys_regs[VBAR_EL1], SYS_VBAR); + write_sysreg_el1(ctxt->sys_regs[CONTEXTIDR_EL1],SYS_CONTEXTIDR); + write_sysreg_el1(ctxt->sys_regs[AMAIR_EL1], SYS_AMAIR); + write_sysreg_el1(ctxt->sys_regs[CNTKCTL_EL1], SYS_CNTKCTL); + write_sysreg(ctxt->sys_regs[PAR_EL1], par_el1); + write_sysreg(ctxt->sys_regs[TPIDR_EL1], tpidr_el1); + + if (cpus_have_final_cap(ARM64_WORKAROUND_SPECULATIVE_AT_NVHE) && + ctxt->__hyp_running_vcpu) { + /* + * Must only be done for host registers, hence the context + * test. Pairs with nVHE's __deactivate_traps(). + */ + isb(); + /* + * At this stage, and thanks to the above isb(), S2 is + * deconfigured and disabled. We can now restore the host's + * S1 configuration: SCTLR, and only then TCR. + */ + write_sysreg_el1(ctxt->sys_regs[SCTLR_EL1], SYS_SCTLR); + isb(); + write_sysreg_el1(ctxt->sys_regs[TCR_EL1], SYS_TCR); + } + + write_sysreg(ctxt->gp_regs.sp_el1, sp_el1); + write_sysreg_el1(ctxt->gp_regs.elr_el1, SYS_ELR); + write_sysreg_el1(ctxt->gp_regs.spsr[KVM_SPSR_EL1],SYS_SPSR); +} + +static inline void __hyp_text +__sysreg_restore_el2_return_state(struct kvm_cpu_context *ctxt) +{ + u64 pstate = ctxt->gp_regs.regs.pstate; + u64 mode = pstate & PSR_AA32_MODE_MASK; + + /* + * Safety check to ensure we're setting the CPU up to enter the guest + * in a less privileged mode. + * + * If we are attempting a return to EL2 or higher in AArch64 state, + * program SPSR_EL2 with M=EL2h and the IL bit set which ensures that + * we'll take an illegal exception state exception immediately after + * the ERET to the guest. Attempts to return to AArch32 Hyp will + * result in an illegal exception return because EL2's execution state + * is determined by SCR_EL3.RW. 
+ */ + if (!(mode & PSR_MODE32_BIT) && mode >= PSR_MODE_EL2t) + pstate = PSR_MODE_EL2h | PSR_IL_BIT; + + write_sysreg_el2(ctxt->gp_regs.regs.pc, SYS_ELR); + write_sysreg_el2(pstate, SYS_SPSR); + + if (cpus_have_final_cap(ARM64_HAS_RAS_EXTN)) + write_sysreg_s(ctxt->sys_regs[DISR_EL1], SYS_VDISR_EL2); +} + +static inline void __hyp_text ___sysreg32_save_state(struct kvm_vcpu *vcpu) +{ + u64 *spsr, *sysreg; + + if (!vcpu_el1_is_32bit(vcpu)) + return; + + spsr = vcpu->arch.ctxt.gp_regs.spsr; + sysreg = vcpu->arch.ctxt.sys_regs; + + spsr[KVM_SPSR_ABT] = read_sysreg(spsr_abt); + spsr[KVM_SPSR_UND] = read_sysreg(spsr_und); + spsr[KVM_SPSR_IRQ] = read_sysreg(spsr_irq); + spsr[KVM_SPSR_FIQ] = read_sysreg(spsr_fiq); + + sysreg[DACR32_EL2] = read_sysreg(dacr32_el2); + sysreg[IFSR32_EL2] = read_sysreg(ifsr32_el2); + + if (has_vhe() || vcpu->arch.flags & KVM_ARM64_DEBUG_DIRTY) + sysreg[DBGVCR32_EL2] = read_sysreg(dbgvcr32_el2); +} + +static inline void __hyp_text ___sysreg32_restore_state(struct kvm_vcpu *vcpu) +{ + u64 *spsr, *sysreg; + + if (!vcpu_el1_is_32bit(vcpu)) + return; + + spsr = vcpu->arch.ctxt.gp_regs.spsr; + sysreg = vcpu->arch.ctxt.sys_regs; + + write_sysreg(spsr[KVM_SPSR_ABT], spsr_abt); + write_sysreg(spsr[KVM_SPSR_UND], spsr_und); + write_sysreg(spsr[KVM_SPSR_IRQ], spsr_irq); + write_sysreg(spsr[KVM_SPSR_FIQ], spsr_fiq); + + write_sysreg(sysreg[DACR32_EL2], dacr32_el2); + write_sysreg(sysreg[IFSR32_EL2], ifsr32_el2); + + if (has_vhe() || vcpu->arch.flags & KVM_ARM64_DEBUG_DIRTY) + write_sysreg(sysreg[DBGVCR32_EL2], dbgvcr32_el2); +}
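The split itself is mechanical, but the sharing pattern is worth spelling out. Below is a minimal, self-contained user-space sketch of it, not kernel code: a common save routine lives in a header as a static inline, and each build wraps it in its own entry point. The struct, field and function names are illustrative stand-ins for the real ones in hyp/sysreg-sr.h.

/* Stand-alone mock of the sysreg-sr.h sharing pattern. */
#include <stdio.h>

struct mock_cpu_context {
	unsigned long mdscr_el1;
	unsigned long sp_el0;
};

/* Shared helper, header-style: 'static inline' gives every object file
 * that includes it a private copy, so the VHE and nVHE builds can both
 * use it without their symbol names clashing. */
static inline void mock_sysreg_save_common_state(struct mock_cpu_context *ctxt)
{
	ctxt->mdscr_el1 = 0x1;	/* would be read_sysreg(mdscr_el1) */
	ctxt->sp_el0 = 0x2;	/* would be read_sysreg(sp_el0) */
}

/* nVHE-style wrapper: non-VHE must save everything on every switch. */
static void mock_sysreg_save_state_nvhe(struct mock_cpu_context *ctxt)
{
	mock_sysreg_save_common_state(ctxt);
	/* ... EL1, user and EL2-return state would be saved here too ... */
}

int main(void)
{
	struct mock_cpu_context ctxt = { 0 };

	mock_sysreg_save_state_nvhe(&ctxt);
	printf("mdscr_el1=%#lx sp_el0=%#lx\n", ctxt.mdscr_el1, ctxt.sp_el0);
	return 0;
}

Because the helper is static inline, each configuration gets a private copy and no __sysreg_* helper symbol has to be shared across the VHE/nVHE boundary, which is what lets the patch drop the image-vars.h aliases for them.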
From patchwork Thu Apr 30 14:48:27 2020
X-Patchwork-Submitter: David Brazdil
X-Patchwork-Id: 11520619
From: David Brazdil
To: Catalin Marinas, James Morse, Julien Thierry, Marc Zyngier, Suzuki K Poulose, Will Deacon
Subject: [PATCH 11/15] arm64: kvm: Split hyp/timer-sr.c to VHE/nVHE
Date: Thu, 30 Apr 2020 15:48:27 +0100
Message-Id: <20200430144831.59194-12-dbrazdil@google.com>
In-Reply-To: <20200430144831.59194-1-dbrazdil@google.com>
References: <20200430144831.59194-1-dbrazdil@google.com>
Cc: David Brazdil, kvmarm@lists.cs.columbia.edu, linux-arm-kernel@lists.infradead.org, linux-kernel@vger.kernel.org

This patch is part of a series which builds KVM's non-VHE hyp code separately from VHE and the rest of the kernel.

timer-sr.c contains an HVC handler for setting CNTVOFF_EL2 and two helper functions for controlling access to the physical counter. The former is shared between VHE/nVHE and is kept in timer-sr.c, but compiled under both configurations. The latter are nVHE-specific and are moved to nvhe/timer-sr.c.

Signed-off-by: David Brazdil
--- arch/arm64/include/asm/kvm_hyp.h | 2 ++ arch/arm64/kernel/image-vars.h | 3 --- arch/arm64/kvm/hyp/nvhe/Makefile | 3 ++- arch/arm64/kvm/hyp/nvhe/timer-sr.c | 43 ++++++++++++++++++++++++++++++ arch/arm64/kvm/hyp/timer-sr.c | 36 ------------------------- 5 files changed, 47 insertions(+), 40 deletions(-) create mode 100644 arch/arm64/kvm/hyp/nvhe/timer-sr.c diff --git a/arch/arm64/include/asm/kvm_hyp.h b/arch/arm64/include/asm/kvm_hyp.h index c2bcd5dea030..3320a22a5fb1 100644 --- a/arch/arm64/include/asm/kvm_hyp.h +++ b/arch/arm64/include/asm/kvm_hyp.h @@ -64,8 +64,10 @@ void __vgic_v3_save_aprs(struct kvm_vcpu *vcpu); void __vgic_v3_restore_aprs(struct kvm_vcpu *vcpu); int __vgic_v3_perform_cpuif_access(struct kvm_vcpu *vcpu); +#ifdef __HYPERVISOR__ void __timer_enable_traps(struct kvm_vcpu *vcpu); void __timer_disable_traps(struct kvm_vcpu *vcpu); +#endif #ifdef __HYPERVISOR__ void __sysreg_save_state_nvhe(struct kvm_cpu_context *ctxt); diff --git a/arch/arm64/kernel/image-vars.h b/arch/arm64/kernel/image-vars.h index c4ff4a61eb5d..b3de24d7ecd1 100644 --- a/arch/arm64/kernel/image-vars.h +++ b/arch/arm64/kernel/image-vars.h @@ -67,9 +67,6 @@ __hyp_text___guest_enter = __guest_enter; __hyp_text___guest_exit = __guest_exit; __hyp_text___icache_flags = __icache_flags; __hyp_text___kvm_handle_stub_hvc = __kvm_handle_stub_hvc; -__hyp_text___kvm_timer_set_cntvoff = __kvm_timer_set_cntvoff; -__hyp_text___timer_disable_traps = __timer_disable_traps; -__hyp_text___timer_enable_traps = __timer_enable_traps; __hyp_text___vgic_v2_perform_cpuif_access = __vgic_v2_perform_cpuif_access; __hyp_text___vgic_v3_activate_traps = __vgic_v3_activate_traps; __hyp_text___vgic_v3_deactivate_traps = __vgic_v3_deactivate_traps; diff --git a/arch/arm64/kvm/hyp/nvhe/Makefile b/arch/arm64/kvm/hyp/nvhe/Makefile index cfb55c01b3ff..2b8286ee8138 100644 --- a/arch/arm64/kvm/hyp/nvhe/Makefile +++ b/arch/arm64/kvm/hyp/nvhe/Makefile @@ -7,7 +7,8 @@ asflags-y := -D__HYPERVISOR__ ccflags-y := -D__HYPERVISOR__ -fno-stack-protector -DDISABLE_BRANCH_PROFILING \ $(DISABLE_STACKLEAK_PLUGIN) -obj-y :=
sysreg-sr.o debug-sr.o switch.o tlb.o host_hypercall.o ../hyp-entry.o +obj-y := ../timer-sr.o timer-sr.o sysreg-sr.o debug-sr.o switch.o tlb.o \ + host_hypercall.o ../hyp-entry.o obj-y := $(patsubst %.o,%.hyp.o,$(obj-y)) extra-y := $(patsubst %.hyp.o,%.hyp.tmp.o,$(obj-y)) diff --git a/arch/arm64/kvm/hyp/nvhe/timer-sr.c b/arch/arm64/kvm/hyp/nvhe/timer-sr.c new file mode 100644 index 000000000000..f0e694743883 --- /dev/null +++ b/arch/arm64/kvm/hyp/nvhe/timer-sr.c @@ -0,0 +1,43 @@ +// SPDX-License-Identifier: GPL-2.0-only +/* + * Copyright (C) 2012-2015 - ARM Ltd + * Author: Marc Zyngier + */ + +#include +#include +#include + +#include + +/* + * Should only be called on non-VHE systems. + * VHE systems use EL2 timers and configure EL1 timers in kvm_timer_init_vhe(). + */ +void __hyp_text __timer_disable_traps(struct kvm_vcpu *vcpu) +{ + u64 val; + + /* Allow physical timer/counter access for the host */ + val = read_sysreg(cnthctl_el2); + val |= CNTHCTL_EL1PCTEN | CNTHCTL_EL1PCEN; + write_sysreg(val, cnthctl_el2); +} + +/* + * Should only be called on non-VHE systems. + * VHE systems use EL2 timers and configure EL1 timers in kvm_timer_init_vhe(). + */ +void __hyp_text __timer_enable_traps(struct kvm_vcpu *vcpu) +{ + u64 val; + + /* + * Disallow physical timer access for the guest + * Physical counter access is allowed + */ + val = read_sysreg(cnthctl_el2); + val &= ~CNTHCTL_EL1PCEN; + val |= CNTHCTL_EL1PCTEN; + write_sysreg(val, cnthctl_el2); +} diff --git a/arch/arm64/kvm/hyp/timer-sr.c b/arch/arm64/kvm/hyp/timer-sr.c index ff76e6845fe4..46e303281a2c 100644 --- a/arch/arm64/kvm/hyp/timer-sr.c +++ b/arch/arm64/kvm/hyp/timer-sr.c @@ -4,10 +4,6 @@ * Author: Marc Zyngier */ -#include -#include -#include - #include void __hyp_text __kvm_timer_set_cntvoff(u32 cntvoff_low, u32 cntvoff_high) @@ -15,35 +11,3 @@ void __hyp_text __kvm_timer_set_cntvoff(u32 cntvoff_low, u32 cntvoff_high) u64 cntvoff = (u64)cntvoff_high << 32 | cntvoff_low; write_sysreg(cntvoff, cntvoff_el2); } - -/* - * Should only be called on non-VHE systems. - * VHE systems use EL2 timers and configure EL1 timers in kvm_timer_init_vhe(). - */ -void __hyp_text __timer_disable_traps(struct kvm_vcpu *vcpu) -{ - u64 val; - - /* Allow physical timer/counter access for the host */ - val = read_sysreg(cnthctl_el2); - val |= CNTHCTL_EL1PCTEN | CNTHCTL_EL1PCEN; - write_sysreg(val, cnthctl_el2); -} - -/* - * Should only be called on non-VHE systems. - * VHE systems use EL2 timers and configure EL1 timers in kvm_timer_init_vhe(). 
- */ -void __hyp_text __timer_enable_traps(struct kvm_vcpu *vcpu) -{ - u64 val; - - /* - * Disallow physical timer access for the guest - * Physical counter access is allowed - */ - val = read_sysreg(cnthctl_el2); - val &= ~CNTHCTL_EL1PCEN; - val |= CNTHCTL_EL1PCTEN; - write_sysreg(val, cnthctl_el2); -}
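For readers without the architecture manual to hand, the two moved helpers only flip trap-control bits in CNTHCTL_EL2. A small self-contained C sketch of that bit logic follows; the bit positions match the non-VHE layout of CNTHCTL_EL2, but the defines and the mock register value are illustrative, not taken from this series.

#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

/* Illustrative trap controls: EL1PCTEN gates EL1 access to the physical
 * counter, EL1PCEN gates EL1 access to the physical timer. */
#define CNTHCTL_EL1PCTEN (UINT64_C(1) << 0)
#define CNTHCTL_EL1PCEN  (UINT64_C(1) << 1)

/* Mirrors __timer_enable_traps(): the guest may still read the physical
 * counter, but physical timer accesses trap to EL2. */
static uint64_t timer_enable_traps(uint64_t cnthctl)
{
	cnthctl &= ~CNTHCTL_EL1PCEN;
	cnthctl |= CNTHCTL_EL1PCTEN;
	return cnthctl;
}

/* Mirrors __timer_disable_traps(): the host gets both kinds of access back. */
static uint64_t timer_disable_traps(uint64_t cnthctl)
{
	return cnthctl | CNTHCTL_EL1PCTEN | CNTHCTL_EL1PCEN;
}

int main(void)
{
	uint64_t val = timer_disable_traps(0);

	printf("host:  cnthctl_el2 = %#" PRIx64 "\n", val);
	val = timer_enable_traps(val);
	printf("guest: cnthctl_el2 = %#" PRIx64 "\n", val);
	return 0;
}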
From patchwork Thu Apr 30 14:48:28 2020
X-Patchwork-Submitter: David Brazdil
X-Patchwork-Id: 11520617
From: David Brazdil
To: Catalin Marinas, James Morse, Julien Thierry, Marc Zyngier, Suzuki K Poulose, Will Deacon
Subject: [PATCH 12/15] arm64: kvm: Compile remaining hyp/ files for both VHE/nVHE
Date: Thu, 30 Apr 2020 15:48:28 +0100
Message-Id: <20200430144831.59194-13-dbrazdil@google.com>
In-Reply-To: <20200430144831.59194-1-dbrazdil@google.com>
References: <20200430144831.59194-1-dbrazdil@google.com>
Cc: David Brazdil, kvmarm@lists.cs.columbia.edu, linux-arm-kernel@lists.infradead.org, linux-kernel@vger.kernel.org

This patch is part of a series which builds KVM's non-VHE hyp code separately from VHE and the rest of the kernel.
The following files in hyp/ contain only code shared by VHE/nVHE: vgic-v3-sr.c, aarch32.c, vgic-v2-cpuif-proxy.c, entry.S, fpsimd.S. Compile them under both configurations. Deletions in image-vars.h reflect the eliminated dependencies of nVHE code on the rest of the kernel.

Signed-off-by: David Brazdil
--- arch/arm64/kernel/image-vars.h | 19 ------------------- arch/arm64/kvm/hyp/nvhe/Makefile | 5 +++-- 2 files changed, 3 insertions(+), 21 deletions(-) diff --git a/arch/arm64/kernel/image-vars.h b/arch/arm64/kernel/image-vars.h index b3de24d7ecd1..e272eedfe19a 100644 --- a/arch/arm64/kernel/image-vars.h +++ b/arch/arm64/kernel/image-vars.h @@ -61,26 +61,8 @@ __efistub__ctype = _ctype; * memory mappings. */ -__hyp_text___fpsimd_restore_state = __fpsimd_restore_state; -__hyp_text___fpsimd_save_state = __fpsimd_save_state; -__hyp_text___guest_enter = __guest_enter; -__hyp_text___guest_exit = __guest_exit; __hyp_text___icache_flags = __icache_flags; __hyp_text___kvm_handle_stub_hvc = __kvm_handle_stub_hvc; -__hyp_text___vgic_v2_perform_cpuif_access = __vgic_v2_perform_cpuif_access; -__hyp_text___vgic_v3_activate_traps = __vgic_v3_activate_traps; -__hyp_text___vgic_v3_deactivate_traps = __vgic_v3_deactivate_traps; -__hyp_text___vgic_v3_get_ich_vtr_el2 = __vgic_v3_get_ich_vtr_el2; -__hyp_text___vgic_v3_init_lrs = __vgic_v3_init_lrs; -__hyp_text___vgic_v3_perform_cpuif_access = __vgic_v3_perform_cpuif_access; -__hyp_text___vgic_v3_read_vmcr = __vgic_v3_read_vmcr; -__hyp_text___vgic_v3_restore_aprs = __vgic_v3_restore_aprs; -__hyp_text___vgic_v3_restore_state = __vgic_v3_restore_state; -__hyp_text___vgic_v3_save_aprs = __vgic_v3_save_aprs; -__hyp_text___vgic_v3_save_state = __vgic_v3_save_state; -__hyp_text___vgic_v3_write_vmcr = __vgic_v3_write_vmcr; -__hyp_text_abort_guest_exit_end = abort_guest_exit_end; -__hyp_text_abort_guest_exit_start = abort_guest_exit_start; __hyp_text_arm64_const_caps_ready = arm64_const_caps_ready; __hyp_text_arm64_enable_wa2_handling = arm64_enable_wa2_handling; __hyp_text_arm64_ssbd_callback_required = arm64_ssbd_callback_required; @@ -89,7 +71,6 @@ __hyp_text_cpu_hwcaps = cpu_hwcaps; __hyp_text_kimage_voffset = kimage_voffset; __hyp_text_kvm_host_data = kvm_host_data; __hyp_text_kvm_patch_vector_branch = kvm_patch_vector_branch; -__hyp_text_kvm_skip_instr32 = kvm_skip_instr32; __hyp_text_kvm_update_va_mask = kvm_update_va_mask; __hyp_text_kvm_vgic_global_state = kvm_vgic_global_state; __hyp_text_panic = panic; diff --git a/arch/arm64/kvm/hyp/nvhe/Makefile b/arch/arm64/kvm/hyp/nvhe/Makefile index 2b8286ee8138..41018d25118c 100644 --- a/arch/arm64/kvm/hyp/nvhe/Makefile +++ b/arch/arm64/kvm/hyp/nvhe/Makefile @@ -7,8 +7,9 @@ asflags-y := -D__HYPERVISOR__ ccflags-y := -D__HYPERVISOR__ -fno-stack-protector -DDISABLE_BRANCH_PROFILING \ $(DISABLE_STACKLEAK_PLUGIN) -obj-y := ../timer-sr.o timer-sr.o sysreg-sr.o debug-sr.o switch.o tlb.o \ - host_hypercall.o ../hyp-entry.o +obj-y := ../vgic-v3-sr.o ../timer-sr.o timer-sr.o ../aarch32.o \ + ../vgic-v2-cpuif-proxy.o sysreg-sr.o debug-sr.o ../entry.o switch.o \ + ../fpsimd.o tlb.o host_hypercall.o ../hyp-entry.o obj-y := $(patsubst %.o,%.hyp.o,$(obj-y)) extra-y := $(patsubst %.hyp.o,%.hyp.tmp.o,$(obj-y))
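The "compile once per configuration" approach rests on the -D__HYPERVISOR__ define that nvhe/Makefile adds. A minimal sketch, assuming a hypothetical shared.c, of how one source file yields two different objects; the file name, function and strings are invented for illustration:

/* shared.c: built twice, once normally and once with -D__HYPERVISOR__,
 * the way hyp/ sources are now compiled for both VHE and nVHE. */
#include <stdio.h>

static const char *build_flavour(void)
{
#ifdef __HYPERVISOR__
	return "nVHE object (built via nvhe/Makefile)";
#else
	return "VHE object (built via the regular rules)";
#endif
}

int main(void)
{
	/* cc shared.c && ./a.out                  -> VHE object
	 * cc -D__HYPERVISOR__ shared.c && ./a.out -> nVHE object */
	puts(build_flavour());
	return 0;
}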
From patchwork Thu Apr 30 14:48:29 2020
X-Patchwork-Submitter: David Brazdil
X-Patchwork-Id: 11520625
From: David Brazdil
To: Catalin Marinas, James Morse, Julien Thierry, Marc Zyngier, Suzuki K Poulose, Will Deacon
Subject: [PATCH 13/15] arm64: kvm: Add comments around __hyp_text_ symbol aliases
Date: Thu, 30 Apr 2020 15:48:29 +0100
Message-Id: <20200430144831.59194-14-dbrazdil@google.com>
In-Reply-To: <20200430144831.59194-1-dbrazdil@google.com>
References: <20200430144831.59194-1-dbrazdil@google.com>
Cc: David Brazdil, kvmarm@lists.cs.columbia.edu, linux-arm-kernel@lists.infradead.org, linux-kernel@vger.kernel.org

This patch is part of a series which builds KVM's non-VHE hyp code separately from VHE and the rest of the kernel.

With all source files split between VHE/nVHE, add comments around the list of symbols where nVHE code still links against the kernel proper. Split them into groups and explain how each group is currently used. Some of these dependencies will be removed in the future.

Signed-off-by: David Brazdil
--- arch/arm64/kernel/image-vars.h | 49 +++++++++++++++++++++------------- 1 file changed, 31 insertions(+), 18 deletions(-) diff --git a/arch/arm64/kernel/image-vars.h b/arch/arm64/kernel/image-vars.h index e272eedfe19a..04a3ee21e694 100644 --- a/arch/arm64/kernel/image-vars.h +++ b/arch/arm64/kernel/image-vars.h @@ -61,24 +61,37 @@ __efistub__ctype = _ctype; * memory mappings.
*/ -__hyp_text___icache_flags = __icache_flags; -__hyp_text___kvm_handle_stub_hvc = __kvm_handle_stub_hvc; -__hyp_text_arm64_const_caps_ready = arm64_const_caps_ready; -__hyp_text_arm64_enable_wa2_handling = arm64_enable_wa2_handling; -__hyp_text_arm64_ssbd_callback_required = arm64_ssbd_callback_required; -__hyp_text_cpu_hwcap_keys = cpu_hwcap_keys; -__hyp_text_cpu_hwcaps = cpu_hwcaps; -__hyp_text_kimage_voffset = kimage_voffset; -__hyp_text_kvm_host_data = kvm_host_data; -__hyp_text_kvm_patch_vector_branch = kvm_patch_vector_branch; -__hyp_text_kvm_update_va_mask = kvm_update_va_mask; -__hyp_text_kvm_vgic_global_state = kvm_vgic_global_state; -__hyp_text_panic = panic; -__hyp_text_physvirt_offset = physvirt_offset; -__hyp_text_sve_load_state = sve_load_state; -__hyp_text_sve_save_state = sve_save_state; -__hyp_text_vgic_v2_cpuif_trap = vgic_v2_cpuif_trap; -__hyp_text_vgic_v3_cpuif_trap = vgic_v3_cpuif_trap; +/* If nVHE code panics, it ERETs into panic() in EL1. */ +__hyp_text_panic = panic; + +/* Stub HVC IDs are routed to a handler in .hyp.idmap.text. Executed in EL2. */ +__hyp_text___kvm_handle_stub_hvc = __kvm_handle_stub_hvc; + +/* Alternative callbacks, referenced in .altinstructions. Executed in EL1. */ +__hyp_text_arm64_enable_wa2_handling = arm64_enable_wa2_handling; +__hyp_text_kvm_patch_vector_branch = kvm_patch_vector_branch; +__hyp_text_kvm_update_va_mask = kvm_update_va_mask; + +/* Values used to convert between memory mappings, read-only after init. */ +__hyp_text_kimage_voffset = kimage_voffset; +__hyp_text_physvirt_offset = physvirt_offset; + +/* Data shared with the kernel. */ +__hyp_text_cpu_hwcaps = cpu_hwcaps; +__hyp_text_cpu_hwcap_keys = cpu_hwcap_keys; +__hyp_text___icache_flags = __icache_flags; +__hyp_text_kvm_vgic_global_state = kvm_vgic_global_state; +__hyp_text_arm64_ssbd_callback_required = arm64_ssbd_callback_required; +__hyp_text_kvm_host_data = kvm_host_data; + +/* Static keys shared with the kernel. */ +__hyp_text_arm64_const_caps_ready = arm64_const_caps_ready; +__hyp_text_vgic_v2_cpuif_trap = vgic_v2_cpuif_trap; +__hyp_text_vgic_v3_cpuif_trap = vgic_v3_cpuif_trap; + +/* SVE support, currently unused by nVHE. 
*/ +__hyp_text_sve_save_state = sve_save_state; +__hyp_text_sve_load_state = sve_load_state; #endif /* CONFIG_KVM */
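The __hyp_text_ lines above are linker-level symbol aliases. As a rough compiler-level analogy, here is a self-contained sketch using GCC's alias attribute; note the kernel does this in image-vars.h at link time, not with attributes, and the names below are illustrative:

#include <stdio.h>

/* Stand-in for a function defined in the kernel image. */
void panic_impl(void)
{
	puts("kernel panic()");
}

/* A second symbol resolving to the same code, similar in spirit to the
 * __hyp_text_panic = panic; line in image-vars.h: it lets code linked
 * against the __hyp_text_-prefixed name reach the kernel symbol.
 * Requires GCC or Clang targeting ELF. */
void hyp_text_panic(void) __attribute__((alias("panic_impl")));

int main(void)
{
	hyp_text_panic();	/* runs the same body as panic_impl() */
	return 0;
}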
From patchwork Thu Apr 30 14:48:30 2020
X-Patchwork-Submitter: David Brazdil
X-Patchwork-Id: 11520629
From: David Brazdil
To: Catalin Marinas, James Morse, Julien Thierry, Marc Zyngier, Suzuki K Poulose, Will Deacon
Subject: [PATCH 14/15] arm64: kvm: Remove __hyp_text macro, use build rules instead
Date: Thu, 30 Apr 2020 15:48:30 +0100
Message-Id: <20200430144831.59194-15-dbrazdil@google.com>
In-Reply-To: <20200430144831.59194-1-dbrazdil@google.com>
References: <20200430144831.59194-1-dbrazdil@google.com>
Cc: David Brazdil, kvmarm@lists.cs.columbia.edu, linux-arm-kernel@lists.infradead.org, linux-kernel@vger.kernel.org

With nVHE code now fully separated from the rest of the kernel, the effects of the __hyp_text macro (which had to be applied on all nVHE code) can be achieved with build rules instead.
The macro used to have two effects: (a) it moved code into the .hyp.text ELF section, which is now done by renaming .text with `objcopy`, and (b) its `notrace` attribute negated the effects of CC_FLAGS_FTRACE, which is now achieved by erasing those flags from KBUILD_CFLAGS (the same way as in the EFI stub). Note that by removing __hyp_text from code shared with VHE, all VHE code is now compiled into .text and without `notrace`. The use of '.pushsection .hyp.text' is also removed from the assembly files, as it too is now covered by the build rules.

For MAINTAINERS: if this ever needs to be re-run, the uses of the macro were removed with the following command; formatting was fixed up manually.

find arch/arm64/kvm/hyp -type f -name '*.c' -o -name '*.h' \
    -exec sed -i 's/ __hyp_text//g' {} +

Signed-off-by: David Brazdil
---
 arch/arm64/include/asm/kvm_emulate.h     |   2 +-
 arch/arm64/include/asm/kvm_hyp.h         |   4 +-
 arch/arm64/kvm/hyp/aarch32.c             |   6 +-
 arch/arm64/kvm/hyp/debug-sr.h            |  18 ++--
 arch/arm64/kvm/hyp/entry.S               |   1 -
 arch/arm64/kvm/hyp/fpsimd.S              |   1 -
 arch/arm64/kvm/hyp/hyp-entry.S           |   3 -
 arch/arm64/kvm/hyp/nvhe/Makefile         |   7 +-
 arch/arm64/kvm/hyp/nvhe/debug-sr.c       |  10 +-
 arch/arm64/kvm/hyp/nvhe/host_hypercall.c |   2 +-
 arch/arm64/kvm/hyp/nvhe/switch.c         |  18 ++--
 arch/arm64/kvm/hyp/nvhe/sysreg-sr.c      |  10 +-
 arch/arm64/kvm/hyp/nvhe/timer-sr.c       |   4 +-
 arch/arm64/kvm/hyp/nvhe/tlb.c            |  14 ++-
 arch/arm64/kvm/hyp/switch.h              |  35 +++---
 arch/arm64/kvm/hyp/sysreg-sr.h           |  27 ++---
 arch/arm64/kvm/hyp/timer-sr.c            |   2 +-
 arch/arm64/kvm/hyp/tlb.c                 |   6 +-
 arch/arm64/kvm/hyp/tlb.h                 |  15 ++-
 arch/arm64/kvm/hyp/vgic-v2-cpuif-proxy.c |   4 +-
 arch/arm64/kvm/hyp/vgic-v3-sr.c          | 130 ++++++++++-------
 21 files changed, 142 insertions(+), 177 deletions(-)

diff --git a/arch/arm64/include/asm/kvm_emulate.h b/arch/arm64/include/asm/kvm_emulate.h index a30b4eec7cb4..1666ecbfaac7 100644 --- a/arch/arm64/include/asm/kvm_emulate.h +++ b/arch/arm64/include/asm/kvm_emulate.h @@ -520,7 +520,7 @@ static __always_inline void kvm_skip_instr(struct kvm_vcpu *vcpu, bool is_wide_i * Skip an instruction which has been emulated at hyp while most guest sysregs * are live. */ -static __always_inline void __hyp_text __kvm_skip_instr(struct kvm_vcpu *vcpu) +static __always_inline void __kvm_skip_instr(struct kvm_vcpu *vcpu) { *vcpu_pc(vcpu) = read_sysreg_el2(SYS_ELR); vcpu->arch.ctxt.gp_regs.regs.pstate = read_sysreg_el2(SYS_SPSR); diff --git a/arch/arm64/include/asm/kvm_hyp.h b/arch/arm64/include/asm/kvm_hyp.h index 3320a22a5fb1..ab0bc4fd5f47 100644 --- a/arch/arm64/include/asm/kvm_hyp.h +++ b/arch/arm64/include/asm/kvm_hyp.h @@ -13,8 +13,6 @@ #include #include -#define __hyp_text __section(.hyp.text) notrace - #define read_sysreg_elx(r,nvh,vh) \ ({ \ u64 reg; \ @@ -103,7 +101,7 @@ void __noreturn __hyp_do_panic(unsigned long, ...); * Must be called from hyp code running at EL2 with an updated VTTBR * and interrupts disabled. */ -static __always_inline void __hyp_text __load_guest_stage2(struct kvm *kvm) +static __always_inline void __load_guest_stage2(struct kvm *kvm) { write_sysreg(kvm->arch.vtcr, vtcr_el2); write_sysreg(kvm_get_vttbr(kvm), vttbr_el2); diff --git a/arch/arm64/kvm/hyp/aarch32.c b/arch/arm64/kvm/hyp/aarch32.c index d31f267961e7..44fecab99bbe 100644 --- a/arch/arm64/kvm/hyp/aarch32.c +++ b/arch/arm64/kvm/hyp/aarch32.c @@ -44,7 +44,7 @@ static const unsigned short cc_map[16] = { /* * Check if a trapped instruction should have been executed or not.
*/ -bool __hyp_text kvm_condition_valid32(const struct kvm_vcpu *vcpu) +bool kvm_condition_valid32(const struct kvm_vcpu *vcpu) { unsigned long cpsr; u32 cpsr_cond; @@ -93,7 +93,7 @@ bool __hyp_text kvm_condition_valid32(const struct kvm_vcpu *vcpu) * * IT[7:0] -> CPSR[26:25],CPSR[15:10] */ -static void __hyp_text kvm_adjust_itstate(struct kvm_vcpu *vcpu) +static void kvm_adjust_itstate(struct kvm_vcpu *vcpu) { unsigned long itbits, cond; unsigned long cpsr = *vcpu_cpsr(vcpu); @@ -123,7 +123,7 @@ static void __hyp_text kvm_adjust_itstate(struct kvm_vcpu *vcpu) * kvm_skip_instr - skip a trapped instruction and proceed to the next * @vcpu: The vcpu pointer */ -void __hyp_text kvm_skip_instr32(struct kvm_vcpu *vcpu, bool is_wide_instr) +void kvm_skip_instr32(struct kvm_vcpu *vcpu, bool is_wide_instr) { bool is_thumb; diff --git a/arch/arm64/kvm/hyp/debug-sr.h b/arch/arm64/kvm/hyp/debug-sr.h index 1e849f31cb74..be2651bffc1a 100644 --- a/arch/arm64/kvm/hyp/debug-sr.h +++ b/arch/arm64/kvm/hyp/debug-sr.h @@ -85,9 +85,9 @@ default: write_debug(ptr[0], reg, 0); \ } -static inline void __hyp_text -__debug_save_state(struct kvm_vcpu *vcpu, struct kvm_guest_debug_arch *dbg, - struct kvm_cpu_context *ctxt) +static inline void __debug_save_state(struct kvm_vcpu *vcpu, + struct kvm_guest_debug_arch *dbg, + struct kvm_cpu_context *ctxt) { u64 aa64dfr0; int brps, wrps; @@ -104,9 +104,9 @@ __debug_save_state(struct kvm_vcpu *vcpu, struct kvm_guest_debug_arch *dbg, ctxt->sys_regs[MDCCINT_EL1] = read_sysreg(mdccint_el1); } -static inline void __hyp_text -__debug_restore_state(struct kvm_vcpu *vcpu, struct kvm_guest_debug_arch *dbg, - struct kvm_cpu_context *ctxt) +static inline void __debug_restore_state(struct kvm_vcpu *vcpu, + struct kvm_guest_debug_arch *dbg, + struct kvm_cpu_context *ctxt) { u64 aa64dfr0; int brps, wrps; @@ -124,8 +124,7 @@ __debug_restore_state(struct kvm_vcpu *vcpu, struct kvm_guest_debug_arch *dbg, write_sysreg(ctxt->sys_regs[MDCCINT_EL1], mdccint_el1); } -static inline void __hyp_text -__debug_switch_to_guest_common(struct kvm_vcpu *vcpu) +static inline void __debug_switch_to_guest_common(struct kvm_vcpu *vcpu) { struct kvm_cpu_context *host_ctxt; struct kvm_cpu_context *guest_ctxt; @@ -144,8 +143,7 @@ __debug_switch_to_guest_common(struct kvm_vcpu *vcpu) __debug_restore_state(vcpu, guest_dbg, guest_ctxt); } -static inline void __hyp_text -__debug_switch_to_host_common(struct kvm_vcpu *vcpu) +static inline void __debug_switch_to_host_common(struct kvm_vcpu *vcpu) { struct kvm_cpu_context *host_ctxt; struct kvm_cpu_context *guest_ctxt; diff --git a/arch/arm64/kvm/hyp/entry.S b/arch/arm64/kvm/hyp/entry.S index d22d0534dd60..01b946af75b9 100644 --- a/arch/arm64/kvm/hyp/entry.S +++ b/arch/arm64/kvm/hyp/entry.S @@ -20,7 +20,6 @@ #define CPU_XREG_OFFSET(x) CPU_GP_REG_OFFSET(CPU_USER_PT_REGS + 8*x) .text - .pushsection .hyp.text, "ax" /* * We treat x18 as callee-saved as the host may use it as a platform diff --git a/arch/arm64/kvm/hyp/fpsimd.S b/arch/arm64/kvm/hyp/fpsimd.S index 5b8ff517ff10..01f114aa47b0 100644 --- a/arch/arm64/kvm/hyp/fpsimd.S +++ b/arch/arm64/kvm/hyp/fpsimd.S @@ -9,7 +9,6 @@ #include .text - .pushsection .hyp.text, "ax" SYM_FUNC_START(__fpsimd_save_state) fpsimd_save x0, 1 diff --git a/arch/arm64/kvm/hyp/hyp-entry.S b/arch/arm64/kvm/hyp/hyp-entry.S index 04ea0b83b728..e535db6e19e9 100644 --- a/arch/arm64/kvm/hyp/hyp-entry.S +++ b/arch/arm64/kvm/hyp/hyp-entry.S @@ -17,7 +17,6 @@ #include .text - .pushsection .hyp.text, "ax" .macro do_el2_call /* @@ -363,8 +362,6 @@ 
SYM_CODE_START(__bp_harden_hyp_vecs) .org 1b SYM_CODE_END(__bp_harden_hyp_vecs) - .popsection - #ifndef __HYPERVISOR__ /* * This is not executed directly and is instead copied into the vectors diff --git a/arch/arm64/kvm/hyp/nvhe/Makefile b/arch/arm64/kvm/hyp/nvhe/Makefile index 41018d25118c..68c0d844369f 100644 --- a/arch/arm64/kvm/hyp/nvhe/Makefile +++ b/arch/arm64/kvm/hyp/nvhe/Makefile @@ -22,7 +22,12 @@ $(obj)/%.hyp.o: $(obj)/%.hyp.tmp.o FORCE $(call if_changed,hypcopy) quiet_cmd_hypcopy = HYPCOPY $@ - cmd_hypcopy = $(OBJCOPY) --prefix-symbols=__hyp_text_ $< $@ + cmd_hypcopy = $(OBJCOPY) --prefix-symbols=__hyp_text_ \ + --rename-section=.text=.hyp.text \ + $< $@ + +# Remove ftrace CFLAGS, this is equivalent to the 'notrace' annotation. +KBUILD_CFLAGS := $(subst $(CC_FLAGS_FTRACE),,$(KBUILD_CFLAGS)) # KVM nVHE code is run at a different exception code with a different map, so # compiler instrumentation that inserts callbacks or checks into the code may diff --git a/arch/arm64/kvm/hyp/nvhe/debug-sr.c b/arch/arm64/kvm/hyp/nvhe/debug-sr.c index b3752cfdcf3d..bb5c529da394 100644 --- a/arch/arm64/kvm/hyp/nvhe/debug-sr.c +++ b/arch/arm64/kvm/hyp/nvhe/debug-sr.c @@ -14,7 +14,7 @@ #include "../debug-sr.h" -static void __hyp_text __debug_save_spe(u64 *pmscr_el1) +static void __debug_save_spe(u64 *pmscr_el1) { u64 reg; @@ -46,7 +46,7 @@ static void __hyp_text __debug_save_spe(u64 *pmscr_el1) dsb(nsh); } -static void __hyp_text __debug_restore_spe(u64 pmscr_el1) +static void __debug_restore_spe(u64 pmscr_el1) { if (!pmscr_el1) return; @@ -58,20 +58,20 @@ static void __hyp_text __debug_restore_spe(u64 pmscr_el1) write_sysreg_s(pmscr_el1, SYS_PMSCR_EL1); } -void __hyp_text __debug_switch_to_guest(struct kvm_vcpu *vcpu) +void __debug_switch_to_guest(struct kvm_vcpu *vcpu) { /* Disable and flush SPE data generation */ __debug_save_spe(&vcpu->arch.host_debug_state.pmscr_el1); __debug_switch_to_guest_common(vcpu); } -void __hyp_text __debug_switch_to_host(struct kvm_vcpu *vcpu) +void __debug_switch_to_host(struct kvm_vcpu *vcpu) { __debug_restore_spe(vcpu->arch.host_debug_state.pmscr_el1); __debug_switch_to_host_common(vcpu); } -u32 __hyp_text __kvm_get_mdcr_el2(void) +u32 __kvm_get_mdcr_el2(void) { return read_sysreg(mdcr_el2); } diff --git a/arch/arm64/kvm/hyp/nvhe/host_hypercall.c b/arch/arm64/kvm/hyp/nvhe/host_hypercall.c index 6b31310f36a8..43926ca79e6d 100644 --- a/arch/arm64/kvm/hyp/nvhe/host_hypercall.c +++ b/arch/arm64/kvm/hyp/nvhe/host_hypercall.c @@ -7,7 +7,7 @@ typedef long (*kvm_hcall_fn_t)(void); -static long __hyp_text kvm_hc_ni(void) +static long kvm_hc_ni(void) { return -ENOSYS; } diff --git a/arch/arm64/kvm/hyp/nvhe/switch.c b/arch/arm64/kvm/hyp/nvhe/switch.c index 4294beed3dc1..ffea4efe8d92 100644 --- a/arch/arm64/kvm/hyp/nvhe/switch.c +++ b/arch/arm64/kvm/hyp/nvhe/switch.c @@ -26,7 +26,7 @@ #include "../switch.h" -static void __hyp_text __activate_traps(struct kvm_vcpu *vcpu) +static void __activate_traps(struct kvm_vcpu *vcpu) { u64 val; @@ -57,7 +57,7 @@ static void __hyp_text __activate_traps(struct kvm_vcpu *vcpu) } } -static void __hyp_text __deactivate_traps(struct kvm_vcpu *vcpu) +static void __deactivate_traps(struct kvm_vcpu *vcpu) { u64 mdcr_el2; @@ -92,13 +92,13 @@ static void __hyp_text __deactivate_traps(struct kvm_vcpu *vcpu) write_sysreg(CPTR_EL2_DEFAULT, cptr_el2); } -static void __hyp_text __deactivate_vm(struct kvm_vcpu *vcpu) +static void __deactivate_vm(struct kvm_vcpu *vcpu) { write_sysreg(0, vttbr_el2); } /* Save VGICv3 state on non-VHE systems */ -static void 
__hyp_text __hyp_vgic_save_state(struct kvm_vcpu *vcpu) +static void __hyp_vgic_save_state(struct kvm_vcpu *vcpu) { if (static_branch_unlikely(&kvm_vgic_global_state.gicv3_cpuif)) { __vgic_v3_save_state(vcpu); @@ -107,7 +107,7 @@ static void __hyp_text __hyp_vgic_save_state(struct kvm_vcpu *vcpu) } /* Restore VGICv3 state on non-VHE systems */ -static void __hyp_text __hyp_vgic_restore_state(struct kvm_vcpu *vcpu) +static void __hyp_vgic_restore_state(struct kvm_vcpu *vcpu) { if (static_branch_unlikely(&kvm_vgic_global_state.gicv3_cpuif)) { __vgic_v3_activate_traps(vcpu); @@ -118,7 +118,7 @@ static void __hyp_text __hyp_vgic_restore_state(struct kvm_vcpu *vcpu) /** * Disable host events, enable guest events */ -static bool __hyp_text __pmu_switch_to_guest(struct kvm_cpu_context *host_ctxt) +static bool __pmu_switch_to_guest(struct kvm_cpu_context *host_ctxt) { struct kvm_host_data *host; struct kvm_pmu_events *pmu; @@ -138,7 +138,7 @@ static bool __hyp_text __pmu_switch_to_guest(struct kvm_cpu_context *host_ctxt) /** * Disable guest events, enable host events */ -static void __hyp_text __pmu_switch_to_host(struct kvm_cpu_context *host_ctxt) +static void __pmu_switch_to_host(struct kvm_cpu_context *host_ctxt) { struct kvm_host_data *host; struct kvm_pmu_events *pmu; @@ -154,7 +154,7 @@ static void __hyp_text __pmu_switch_to_host(struct kvm_cpu_context *host_ctxt) } /* Switch to the guest for legacy non-VHE systems */ -int __hyp_text __kvm_vcpu_run(struct kvm_vcpu *vcpu) +int __kvm_vcpu_run(struct kvm_vcpu *vcpu) { struct kvm_cpu_context *host_ctxt; struct kvm_cpu_context *guest_ctxt; @@ -241,7 +241,7 @@ int __hyp_text __kvm_vcpu_run(struct kvm_vcpu *vcpu) return exit_code; } -void __hyp_text __noreturn hyp_panic(struct kvm_cpu_context *host_ctxt) +void __noreturn hyp_panic(struct kvm_cpu_context *host_ctxt) { u64 spsr = read_sysreg_el2(SYS_SPSR); u64 elr = read_sysreg_el2(SYS_ELR); diff --git a/arch/arm64/kvm/hyp/nvhe/sysreg-sr.c b/arch/arm64/kvm/hyp/nvhe/sysreg-sr.c index 55ab924d841a..b1da891bf307 100644 --- a/arch/arm64/kvm/hyp/nvhe/sysreg-sr.c +++ b/arch/arm64/kvm/hyp/nvhe/sysreg-sr.c @@ -18,7 +18,7 @@ * Non-VHE: Both host and guest must save everything. */ -void __hyp_text __sysreg_save_state_nvhe(struct kvm_cpu_context *ctxt) +void __sysreg_save_state_nvhe(struct kvm_cpu_context *ctxt) { __sysreg_save_el1_state(ctxt); __sysreg_save_common_state(ctxt); @@ -26,7 +26,7 @@ void __hyp_text __sysreg_save_state_nvhe(struct kvm_cpu_context *ctxt) __sysreg_save_el2_return_state(ctxt); } -void __hyp_text __sysreg_restore_state_nvhe(struct kvm_cpu_context *ctxt) +void __sysreg_restore_state_nvhe(struct kvm_cpu_context *ctxt) { __sysreg_restore_el1_state(ctxt); __sysreg_restore_common_state(ctxt); @@ -34,17 +34,17 @@ void __hyp_text __sysreg_restore_state_nvhe(struct kvm_cpu_context *ctxt) __sysreg_restore_el2_return_state(ctxt); } -void __hyp_text __sysreg32_save_state(struct kvm_vcpu *vcpu) +void __sysreg32_save_state(struct kvm_vcpu *vcpu) { ___sysreg32_save_state(vcpu); } -void __hyp_text __sysreg32_restore_state(struct kvm_vcpu *vcpu) +void __sysreg32_restore_state(struct kvm_vcpu *vcpu) { ___sysreg32_restore_state(vcpu); } -void __hyp_text __kvm_enable_ssbs(void) +void __kvm_enable_ssbs(void) { u64 tmp; diff --git a/arch/arm64/kvm/hyp/nvhe/timer-sr.c b/arch/arm64/kvm/hyp/nvhe/timer-sr.c index f0e694743883..8b80a4c4c4c6 100644 --- a/arch/arm64/kvm/hyp/nvhe/timer-sr.c +++ b/arch/arm64/kvm/hyp/nvhe/timer-sr.c @@ -14,7 +14,7 @@ * Should only be called on non-VHE systems.
* VHE systems use EL2 timers and configure EL1 timers in kvm_timer_init_vhe(). */ -void __hyp_text __timer_disable_traps(struct kvm_vcpu *vcpu) +void __timer_disable_traps(struct kvm_vcpu *vcpu) { u64 val; @@ -28,7 +28,7 @@ void __hyp_text __timer_disable_traps(struct kvm_vcpu *vcpu) * Should only be called on non-VHE systems. * VHE systems use EL2 timers and configure EL1 timers in kvm_timer_init_vhe(). */ -void __hyp_text __timer_enable_traps(struct kvm_vcpu *vcpu) +void __timer_enable_traps(struct kvm_vcpu *vcpu) { u64 val; diff --git a/arch/arm64/kvm/hyp/nvhe/tlb.c b/arch/arm64/kvm/hyp/nvhe/tlb.c index 1b8f4000f98c..151fc9cc2553 100644 --- a/arch/arm64/kvm/hyp/nvhe/tlb.c +++ b/arch/arm64/kvm/hyp/nvhe/tlb.c @@ -12,8 +12,7 @@ #include "../tlb.h" -static void __hyp_text __tlb_switch_to_guest(struct kvm *kvm, - struct tlb_inv_context *cxt) +static void __tlb_switch_to_guest(struct kvm *kvm, struct tlb_inv_context *cxt) { if (cpus_have_final_cap(ARM64_WORKAROUND_SPECULATIVE_AT_NVHE)) { u64 val; @@ -35,8 +34,7 @@ static void __hyp_text __tlb_switch_to_guest(struct kvm *kvm, isb(); } -static void __hyp_text __tlb_switch_to_host(struct kvm *kvm, - struct tlb_inv_context *cxt) +static void __tlb_switch_to_host(struct kvm *kvm, struct tlb_inv_context *cxt) { write_sysreg(0, vttbr_el2); @@ -48,22 +46,22 @@ static void __hyp_text __tlb_switch_to_host(struct kvm *kvm, } } -void __hyp_text __kvm_tlb_flush_vmid_ipa(struct kvm *kvm, phys_addr_t ipa) +void __kvm_tlb_flush_vmid_ipa(struct kvm *kvm, phys_addr_t ipa) { __tlb_flush_vmid_ipa(kvm, ipa); } -void __hyp_text __kvm_tlb_flush_vmid(struct kvm *kvm) +void __kvm_tlb_flush_vmid(struct kvm *kvm) { __tlb_flush_vmid(kvm); } -void __hyp_text __kvm_tlb_flush_local_vmid(struct kvm_vcpu *vcpu) +void __kvm_tlb_flush_local_vmid(struct kvm_vcpu *vcpu) { __tlb_flush_local_vmid(vcpu); } -void __hyp_text __kvm_flush_vm_context(void) +void __kvm_flush_vm_context(void) { __tlb_flush_vm_context(); } diff --git a/arch/arm64/kvm/hyp/switch.h b/arch/arm64/kvm/hyp/switch.h index 00ecb0111e79..12fb838324bc 100644 --- a/arch/arm64/kvm/hyp/switch.h +++ b/arch/arm64/kvm/hyp/switch.h @@ -27,7 +27,7 @@ static const char __hyp_panic_string[] = "HYP panic:\nPS:%08llx PC:%016llx ESR:%08llx\nFAR:%016llx HPFAR:%016llx PAR:%016llx\nVCPU:%p\n"; /* Check whether the FP regs were dirtied while in the host-side run loop: */ -static inline bool __hyp_text update_fp_enabled(struct kvm_vcpu *vcpu) +static inline bool update_fp_enabled(struct kvm_vcpu *vcpu) { /* * When the system doesn't support FP/SIMD, we cannot rely on @@ -45,7 +45,7 @@ static inline bool __hyp_text update_fp_enabled(struct kvm_vcpu *vcpu) } /* Save the 32-bit only FPSIMD system register state */ -static inline void __hyp_text __fpsimd_save_fpexc32(struct kvm_vcpu *vcpu) +static inline void __fpsimd_save_fpexc32(struct kvm_vcpu *vcpu) { if (!vcpu_el1_is_32bit(vcpu)) return; @@ -53,7 +53,7 @@ static inline void __hyp_text __fpsimd_save_fpexc32(struct kvm_vcpu *vcpu) vcpu->arch.ctxt.sys_regs[FPEXC32_EL2] = read_sysreg(fpexc32_el2); } -static inline void __hyp_text __activate_traps_fpsimd32(struct kvm_vcpu *vcpu) +static inline void __activate_traps_fpsimd32(struct kvm_vcpu *vcpu) { /* * We are about to set CPTR_EL2.TFP to trap all floating point @@ -70,7 +70,7 @@ static inline void __hyp_text __activate_traps_fpsimd32(struct kvm_vcpu *vcpu) } } -static inline void __hyp_text __activate_traps_common(struct kvm_vcpu *vcpu) +static inline void __activate_traps_common(struct kvm_vcpu *vcpu) { /* Trap on AArch32 cp15 c15 
(impdef sysregs) accesses (EL1 or EL0) */ write_sysreg(1 << 15, hstr_el2); @@ -86,13 +86,13 @@ static inline void __hyp_text __activate_traps_common(struct kvm_vcpu *vcpu) write_sysreg(vcpu->arch.mdcr_el2, mdcr_el2); } -static inline void __hyp_text __deactivate_traps_common(void) +static inline void __deactivate_traps_common(void) { write_sysreg(0, hstr_el2); write_sysreg(0, pmuserenr_el0); } -static inline void __hyp_text ___activate_traps(struct kvm_vcpu *vcpu) +static inline void ___activate_traps(struct kvm_vcpu *vcpu) { u64 hcr = vcpu->arch.hcr_el2; @@ -105,7 +105,7 @@ static inline void __hyp_text ___activate_traps(struct kvm_vcpu *vcpu) write_sysreg_s(vcpu->arch.vsesr_el2, SYS_VSESR_EL2); } -static inline void __hyp_text ___deactivate_traps(struct kvm_vcpu *vcpu) +static inline void ___deactivate_traps(struct kvm_vcpu *vcpu) { /* * If we pended a virtual abort, preserve it until it gets @@ -119,12 +119,12 @@ static inline void __hyp_text ___deactivate_traps(struct kvm_vcpu *vcpu) } } -static inline void __hyp_text __activate_vm(struct kvm *kvm) +static inline void __activate_vm(struct kvm *kvm) { __load_guest_stage2(kvm); } -static inline bool __hyp_text __translate_far_to_hpfar(u64 far, u64 *hpfar) +static inline bool __translate_far_to_hpfar(u64 far, u64 *hpfar) { u64 par, tmp; @@ -153,7 +153,7 @@ static inline bool __hyp_text __translate_far_to_hpfar(u64 far, u64 *hpfar) return true; } -static inline bool __hyp_text __populate_fault_info(struct kvm_vcpu *vcpu) +static inline bool __populate_fault_info(struct kvm_vcpu *vcpu) { u8 ec; u64 esr; @@ -193,7 +193,7 @@ static inline bool __hyp_text __populate_fault_info(struct kvm_vcpu *vcpu) } /* Check for an FPSIMD/SVE trap and handle as appropriate */ -static inline bool __hyp_text __hyp_handle_fpsimd(struct kvm_vcpu *vcpu) +static inline bool __hyp_handle_fpsimd(struct kvm_vcpu *vcpu) { bool vhe, sve_guest, sve_host; u8 hsr_ec; @@ -275,7 +275,7 @@ static inline bool __hyp_text __hyp_handle_fpsimd(struct kvm_vcpu *vcpu) return true; } -static inline bool __hyp_text handle_tx2_tvm(struct kvm_vcpu *vcpu) +static inline bool handle_tx2_tvm(struct kvm_vcpu *vcpu) { u32 sysreg = esr_sys64_to_sysreg(kvm_vcpu_get_hsr(vcpu)); int rt = kvm_vcpu_sys_get_rt(vcpu); @@ -335,8 +335,7 @@ static inline bool __hyp_text handle_tx2_tvm(struct kvm_vcpu *vcpu) * the guest, false when we should restore the host state and return to the * main run loop. 
*/ -static inline bool __hyp_text -fixup_guest_exit(struct kvm_vcpu *vcpu, u64 *exit_code) +static inline bool fixup_guest_exit(struct kvm_vcpu *vcpu, u64 *exit_code) { if (ARM_EXCEPTION_CODE(*exit_code) != ARM_EXCEPTION_IRQ) vcpu->arch.fault.esr_el2 = read_sysreg_el2(SYS_ESR); @@ -405,7 +404,7 @@ fixup_guest_exit(struct kvm_vcpu *vcpu, u64 *exit_code) return false; } -static inline bool __hyp_text __needs_ssbd_off(struct kvm_vcpu *vcpu) +static inline bool __needs_ssbd_off(struct kvm_vcpu *vcpu) { if (!cpus_have_final_cap(ARM64_SSBD)) return false; @@ -413,8 +412,7 @@ static inline bool __hyp_text __needs_ssbd_off(struct kvm_vcpu *vcpu) return !(vcpu->arch.workaround_flags & VCPU_WORKAROUND_2_FLAG); } -static inline void __hyp_text -__set_guest_arch_workaround_state(struct kvm_vcpu *vcpu) +static inline void __set_guest_arch_workaround_state(struct kvm_vcpu *vcpu) { #ifdef CONFIG_ARM64_SSBD /* @@ -427,8 +425,7 @@ __set_guest_arch_workaround_state(struct kvm_vcpu *vcpu) #endif } -static inline void __hyp_text -__set_host_arch_workaround_state(struct kvm_vcpu *vcpu) +static inline void __set_host_arch_workaround_state(struct kvm_vcpu *vcpu) { #ifdef CONFIG_ARM64_SSBD /* diff --git a/arch/arm64/kvm/hyp/sysreg-sr.h b/arch/arm64/kvm/hyp/sysreg-sr.h index f667d3fb680f..d57e077bd03c 100644 --- a/arch/arm64/kvm/hyp/sysreg-sr.h +++ b/arch/arm64/kvm/hyp/sysreg-sr.h @@ -12,8 +12,7 @@ #include #include -static inline void __hyp_text -__sysreg_save_common_state(struct kvm_cpu_context *ctxt) +static inline void __sysreg_save_common_state(struct kvm_cpu_context *ctxt) { ctxt->sys_regs[MDSCR_EL1] = read_sysreg(mdscr_el1); @@ -24,15 +23,13 @@ __sysreg_save_common_state(struct kvm_cpu_context *ctxt) ctxt->gp_regs.regs.sp = read_sysreg(sp_el0); } -static inline void __hyp_text -__sysreg_save_user_state(struct kvm_cpu_context *ctxt) +static inline void __sysreg_save_user_state(struct kvm_cpu_context *ctxt) { ctxt->sys_regs[TPIDR_EL0] = read_sysreg(tpidr_el0); ctxt->sys_regs[TPIDRRO_EL0] = read_sysreg(tpidrro_el0); } -static inline void __hyp_text -__sysreg_save_el1_state(struct kvm_cpu_context *ctxt) +static inline void __sysreg_save_el1_state(struct kvm_cpu_context *ctxt) { ctxt->sys_regs[CSSELR_EL1] = read_sysreg(csselr_el1); ctxt->sys_regs[SCTLR_EL1] = read_sysreg_el1(SYS_SCTLR); @@ -58,8 +55,7 @@ __sysreg_save_el1_state(struct kvm_cpu_context *ctxt) ctxt->gp_regs.spsr[KVM_SPSR_EL1]= read_sysreg_el1(SYS_SPSR); } -static inline void __hyp_text -__sysreg_save_el2_return_state(struct kvm_cpu_context *ctxt) +static inline void __sysreg_save_el2_return_state(struct kvm_cpu_context *ctxt) { ctxt->gp_regs.regs.pc = read_sysreg_el2(SYS_ELR); ctxt->gp_regs.regs.pstate = read_sysreg_el2(SYS_SPSR); @@ -68,8 +64,7 @@ __sysreg_save_el2_return_state(struct kvm_cpu_context *ctxt) ctxt->sys_regs[DISR_EL1] = read_sysreg_s(SYS_VDISR_EL2); } -static inline void __hyp_text -__sysreg_restore_common_state(struct kvm_cpu_context *ctxt) +static inline void __sysreg_restore_common_state(struct kvm_cpu_context *ctxt) { write_sysreg(ctxt->sys_regs[MDSCR_EL1], mdscr_el1); @@ -80,15 +75,13 @@ __sysreg_restore_common_state(struct kvm_cpu_context *ctxt) write_sysreg(ctxt->gp_regs.regs.sp, sp_el0); } -static inline void __hyp_text -__sysreg_restore_user_state(struct kvm_cpu_context *ctxt) +static inline void __sysreg_restore_user_state(struct kvm_cpu_context *ctxt) { write_sysreg(ctxt->sys_regs[TPIDR_EL0], tpidr_el0); write_sysreg(ctxt->sys_regs[TPIDRRO_EL0], tpidrro_el0); } -static inline void __hyp_text 
-__sysreg_restore_el1_state(struct kvm_cpu_context *ctxt) +static inline void __sysreg_restore_el1_state(struct kvm_cpu_context *ctxt) { write_sysreg(ctxt->sys_regs[MPIDR_EL1], vmpidr_el2); write_sysreg(ctxt->sys_regs[CSSELR_EL1], csselr_el1); @@ -146,7 +139,7 @@ __sysreg_restore_el1_state(struct kvm_cpu_context *ctxt) write_sysreg_el1(ctxt->gp_regs.spsr[KVM_SPSR_EL1],SYS_SPSR); } -static inline void __hyp_text +static inline void __sysreg_restore_el2_return_state(struct kvm_cpu_context *ctxt) { u64 pstate = ctxt->gp_regs.regs.pstate; @@ -173,7 +166,7 @@ __sysreg_restore_el2_return_state(struct kvm_cpu_context *ctxt) write_sysreg_s(ctxt->sys_regs[DISR_EL1], SYS_VDISR_EL2); } -static inline void __hyp_text ___sysreg32_save_state(struct kvm_vcpu *vcpu) +static inline void ___sysreg32_save_state(struct kvm_vcpu *vcpu) { u64 *spsr, *sysreg; @@ -195,7 +188,7 @@ static inline void __hyp_text ___sysreg32_save_state(struct kvm_vcpu *vcpu) sysreg[DBGVCR32_EL2] = read_sysreg(dbgvcr32_el2); } -static inline void __hyp_text ___sysreg32_restore_state(struct kvm_vcpu *vcpu) +static inline void ___sysreg32_restore_state(struct kvm_vcpu *vcpu) { u64 *spsr, *sysreg; diff --git a/arch/arm64/kvm/hyp/timer-sr.c b/arch/arm64/kvm/hyp/timer-sr.c index 46e303281a2c..ab4b2a214309 100644 --- a/arch/arm64/kvm/hyp/timer-sr.c +++ b/arch/arm64/kvm/hyp/timer-sr.c @@ -6,7 +6,7 @@ #include -void __hyp_text __kvm_timer_set_cntvoff(u32 cntvoff_low, u32 cntvoff_high) +void __kvm_timer_set_cntvoff(u32 cntvoff_low, u32 cntvoff_high) { u64 cntvoff = (u64)cntvoff_high << 32 | cntvoff_low; write_sysreg(cntvoff, cntvoff_el2); diff --git a/arch/arm64/kvm/hyp/tlb.c b/arch/arm64/kvm/hyp/tlb.c index ab55b0c4a80c..d39fa06fdfe8 100644 --- a/arch/arm64/kvm/hyp/tlb.c +++ b/arch/arm64/kvm/hyp/tlb.c @@ -12,8 +12,7 @@ #include "tlb.h" -static void __hyp_text __tlb_switch_to_guest(struct kvm *kvm, - struct tlb_inv_context *cxt) +static void __tlb_switch_to_guest(struct kvm *kvm, struct tlb_inv_context *cxt) { u64 val; @@ -56,8 +55,7 @@ static void __hyp_text __tlb_switch_to_guest(struct kvm *kvm, isb(); } -static void __hyp_text __tlb_switch_to_host(struct kvm *kvm, - struct tlb_inv_context *cxt) +static void __tlb_switch_to_host(struct kvm *kvm, struct tlb_inv_context *cxt) { /* * We're done with the TLB operation, let's restore the host's diff --git a/arch/arm64/kvm/hyp/tlb.h b/arch/arm64/kvm/hyp/tlb.h index f62ed96cb896..c90f1c32cb29 100644 --- a/arch/arm64/kvm/hyp/tlb.h +++ b/arch/arm64/kvm/hyp/tlb.h @@ -16,13 +16,10 @@ struct tlb_inv_context { u64 sctlr; }; -static void __hyp_text __tlb_switch_to_guest(struct kvm *kvm, - struct tlb_inv_context *cxt); -static void __hyp_text __tlb_switch_to_host(struct kvm *kvm, - struct tlb_inv_context *cxt); +static void __tlb_switch_to_guest(struct kvm *kvm, struct tlb_inv_context *cxt); +static void __tlb_switch_to_host(struct kvm *kvm, struct tlb_inv_context *cxt); -static inline void __hyp_text -__tlb_flush_vmid_ipa(struct kvm *kvm, phys_addr_t ipa) +static inline void __tlb_flush_vmid_ipa(struct kvm *kvm, phys_addr_t ipa) { struct tlb_inv_context cxt; @@ -76,7 +73,7 @@ __tlb_flush_vmid_ipa(struct kvm *kvm, phys_addr_t ipa) __tlb_switch_to_host(kvm, &cxt); } -static inline void __hyp_text __tlb_flush_vmid(struct kvm *kvm) +static inline void __tlb_flush_vmid(struct kvm *kvm) { struct tlb_inv_context cxt; @@ -93,7 +90,7 @@ static inline void __hyp_text __tlb_flush_vmid(struct kvm *kvm) __tlb_switch_to_host(kvm, &cxt); } -static inline void __hyp_text __tlb_flush_local_vmid(struct kvm_vcpu *vcpu) 
+static inline void __tlb_flush_local_vmid(struct kvm_vcpu *vcpu) { struct kvm *kvm = kern_hyp_va(kern_hyp_va(vcpu)->kvm); struct tlb_inv_context cxt; @@ -108,7 +105,7 @@ static inline void __hyp_text __tlb_flush_local_vmid(struct kvm_vcpu *vcpu) __tlb_switch_to_host(kvm, &cxt); } -static inline void __hyp_text __tlb_flush_vm_context(void) +static inline void __tlb_flush_vm_context(void) { dsb(ishst); __tlbi(alle1is); diff --git a/arch/arm64/kvm/hyp/vgic-v2-cpuif-proxy.c b/arch/arm64/kvm/hyp/vgic-v2-cpuif-proxy.c index 4f3a087e36d5..bd1bab551d48 100644 --- a/arch/arm64/kvm/hyp/vgic-v2-cpuif-proxy.c +++ b/arch/arm64/kvm/hyp/vgic-v2-cpuif-proxy.c @@ -13,7 +13,7 @@ #include #include -static bool __hyp_text __is_be(struct kvm_vcpu *vcpu) +static bool __is_be(struct kvm_vcpu *vcpu) { if (vcpu_mode_is_32bit(vcpu)) return !!(read_sysreg_el2(SYS_SPSR) & PSR_AA32_E_BIT); @@ -32,7 +32,7 @@ static bool __hyp_text __is_be(struct kvm_vcpu *vcpu) * 0: Not a GICV access * -1: Illegal GICV access successfully performed */ -int __hyp_text __vgic_v2_perform_cpuif_access(struct kvm_vcpu *vcpu) +int __vgic_v2_perform_cpuif_access(struct kvm_vcpu *vcpu) { struct kvm *kvm = kern_hyp_va(vcpu->kvm); struct vgic_dist *vgic = &kvm->arch.vgic; diff --git a/arch/arm64/kvm/hyp/vgic-v3-sr.c b/arch/arm64/kvm/hyp/vgic-v3-sr.c index 49fedf6710f9..d6628573b855 100644 --- a/arch/arm64/kvm/hyp/vgic-v3-sr.c +++ b/arch/arm64/kvm/hyp/vgic-v3-sr.c @@ -16,7 +16,7 @@ #define vtr_to_nr_pre_bits(v) ((((u32)(v) >> 26) & 7) + 1) #define vtr_to_nr_apr_regs(v) (1 << (vtr_to_nr_pre_bits(v) - 5)) -static u64 __hyp_text __gic_v3_get_lr(unsigned int lr) +static u64 __gic_v3_get_lr(unsigned int lr) { switch (lr & 0xf) { case 0: @@ -56,7 +56,7 @@ static u64 __hyp_text __gic_v3_get_lr(unsigned int lr) unreachable(); } -static void __hyp_text __gic_v3_set_lr(u64 val, int lr) +static void __gic_v3_set_lr(u64 val, int lr) { switch (lr & 0xf) { case 0: @@ -110,7 +110,7 @@ static void __hyp_text __gic_v3_set_lr(u64 val, int lr) } } -static void __hyp_text __vgic_v3_write_ap0rn(u32 val, int n) +static void __vgic_v3_write_ap0rn(u32 val, int n) { switch (n) { case 0: @@ -128,7 +128,7 @@ static void __hyp_text __vgic_v3_write_ap0rn(u32 val, int n) } } -static void __hyp_text __vgic_v3_write_ap1rn(u32 val, int n) +static void __vgic_v3_write_ap1rn(u32 val, int n) { switch (n) { case 0: @@ -146,7 +146,7 @@ static void __hyp_text __vgic_v3_write_ap1rn(u32 val, int n) } } -static u32 __hyp_text __vgic_v3_read_ap0rn(int n) +static u32 __vgic_v3_read_ap0rn(int n) { u32 val; @@ -170,7 +170,7 @@ static u32 __hyp_text __vgic_v3_read_ap0rn(int n) return val; } -static u32 __hyp_text __vgic_v3_read_ap1rn(int n) +static u32 __vgic_v3_read_ap1rn(int n) { u32 val; @@ -194,7 +194,7 @@ static u32 __hyp_text __vgic_v3_read_ap1rn(int n) return val; } -void __hyp_text __vgic_v3_save_state(struct kvm_vcpu *vcpu) +void __vgic_v3_save_state(struct kvm_vcpu *vcpu) { struct vgic_v3_cpu_if *cpu_if = &vcpu->arch.vgic_cpu.vgic_v3; u64 used_lrs = vcpu->arch.vgic_cpu.used_lrs; @@ -230,7 +230,7 @@ void __hyp_text __vgic_v3_save_state(struct kvm_vcpu *vcpu) } } -void __hyp_text __vgic_v3_restore_state(struct kvm_vcpu *vcpu) +void __vgic_v3_restore_state(struct kvm_vcpu *vcpu) { struct vgic_v3_cpu_if *cpu_if = &vcpu->arch.vgic_cpu.vgic_v3; u64 used_lrs = vcpu->arch.vgic_cpu.used_lrs; @@ -257,7 +257,7 @@ void __hyp_text __vgic_v3_restore_state(struct kvm_vcpu *vcpu) } } -void __hyp_text __vgic_v3_activate_traps(struct kvm_vcpu *vcpu) +void __vgic_v3_activate_traps(struct kvm_vcpu 
*vcpu) { struct vgic_v3_cpu_if *cpu_if = &vcpu->arch.vgic_cpu.vgic_v3; @@ -306,7 +306,7 @@ void __hyp_text __vgic_v3_activate_traps(struct kvm_vcpu *vcpu) write_gicreg(cpu_if->vgic_hcr, ICH_HCR_EL2); } -void __hyp_text __vgic_v3_deactivate_traps(struct kvm_vcpu *vcpu) +void __vgic_v3_deactivate_traps(struct kvm_vcpu *vcpu) { struct vgic_v3_cpu_if *cpu_if = &vcpu->arch.vgic_cpu.vgic_v3; u64 val; @@ -333,7 +333,7 @@ void __hyp_text __vgic_v3_deactivate_traps(struct kvm_vcpu *vcpu) write_gicreg(0, ICH_HCR_EL2); } -void __hyp_text __vgic_v3_save_aprs(struct kvm_vcpu *vcpu) +void __vgic_v3_save_aprs(struct kvm_vcpu *vcpu) { struct vgic_v3_cpu_if *cpu_if; u64 val; @@ -370,7 +370,7 @@ void __hyp_text __vgic_v3_save_aprs(struct kvm_vcpu *vcpu) } } -void __hyp_text __vgic_v3_restore_aprs(struct kvm_vcpu *vcpu) +void __vgic_v3_restore_aprs(struct kvm_vcpu *vcpu) { struct vgic_v3_cpu_if *cpu_if; u64 val; @@ -407,7 +407,7 @@ void __hyp_text __vgic_v3_restore_aprs(struct kvm_vcpu *vcpu) } } -void __hyp_text __vgic_v3_init_lrs(void) +void __vgic_v3_init_lrs(void) { int max_lr_idx = vtr_to_max_lr_idx(read_gicreg(ICH_VTR_EL2)); int i; @@ -416,28 +416,28 @@ void __hyp_text __vgic_v3_init_lrs(void) __gic_v3_set_lr(0, i); } -u64 __hyp_text __vgic_v3_get_ich_vtr_el2(void) +u64 __vgic_v3_get_ich_vtr_el2(void) { return read_gicreg(ICH_VTR_EL2); } -u64 __hyp_text __vgic_v3_read_vmcr(void) +u64 __vgic_v3_read_vmcr(void) { return read_gicreg(ICH_VMCR_EL2); } -void __hyp_text __vgic_v3_write_vmcr(u32 vmcr) +void __vgic_v3_write_vmcr(u32 vmcr) { write_gicreg(vmcr, ICH_VMCR_EL2); } -static int __hyp_text __vgic_v3_bpr_min(void) +static int __vgic_v3_bpr_min(void) { /* See Pseudocode for VPriorityGroup */ return 8 - vtr_to_nr_pre_bits(read_gicreg(ICH_VTR_EL2)); } -static int __hyp_text __vgic_v3_get_group(struct kvm_vcpu *vcpu) +static int __vgic_v3_get_group(struct kvm_vcpu *vcpu) { u32 esr = kvm_vcpu_get_hsr(vcpu); u8 crm = (esr & ESR_ELx_SYS64_ISS_CRM_MASK) >> ESR_ELx_SYS64_ISS_CRM_SHIFT; @@ -447,9 +447,8 @@ static int __hyp_text __vgic_v3_get_group(struct kvm_vcpu *vcpu) #define GICv3_IDLE_PRIORITY 0xff -static int __hyp_text __vgic_v3_highest_priority_lr(struct kvm_vcpu *vcpu, - u32 vmcr, - u64 *lr_val) +static int __vgic_v3_highest_priority_lr(struct kvm_vcpu *vcpu, u32 vmcr, + u64 *lr_val) { unsigned int used_lrs = vcpu->arch.vgic_cpu.used_lrs; u8 priority = GICv3_IDLE_PRIORITY; @@ -487,8 +486,8 @@ static int __hyp_text __vgic_v3_highest_priority_lr(struct kvm_vcpu *vcpu, return lr; } -static int __hyp_text __vgic_v3_find_active_lr(struct kvm_vcpu *vcpu, - int intid, u64 *lr_val) +static int __vgic_v3_find_active_lr(struct kvm_vcpu *vcpu, int intid, + u64 *lr_val) { unsigned int used_lrs = vcpu->arch.vgic_cpu.used_lrs; int i; @@ -507,7 +506,7 @@ static int __hyp_text __vgic_v3_find_active_lr(struct kvm_vcpu *vcpu, return -1; } -static int __hyp_text __vgic_v3_get_highest_active_priority(void) +static int __vgic_v3_get_highest_active_priority(void) { u8 nr_apr_regs = vtr_to_nr_apr_regs(read_gicreg(ICH_VTR_EL2)); u32 hap = 0; @@ -539,12 +538,12 @@ static int __hyp_text __vgic_v3_get_highest_active_priority(void) return GICv3_IDLE_PRIORITY; } -static unsigned int __hyp_text __vgic_v3_get_bpr0(u32 vmcr) +static unsigned int __vgic_v3_get_bpr0(u32 vmcr) { return (vmcr & ICH_VMCR_BPR0_MASK) >> ICH_VMCR_BPR0_SHIFT; } -static unsigned int __hyp_text __vgic_v3_get_bpr1(u32 vmcr) +static unsigned int __vgic_v3_get_bpr1(u32 vmcr) { unsigned int bpr; @@ -563,7 +562,7 @@ static unsigned int __hyp_text __vgic_v3_get_bpr1(u32 
vmcr) * Convert a priority to a preemption level, taking the relevant BPR * into account by zeroing the sub-priority bits. */ -static u8 __hyp_text __vgic_v3_pri_to_pre(u8 pri, u32 vmcr, int grp) +static u8 __vgic_v3_pri_to_pre(u8 pri, u32 vmcr, int grp) { unsigned int bpr; @@ -581,7 +580,7 @@ static u8 __hyp_text __vgic_v3_pri_to_pre(u8 pri, u32 vmcr, int grp) * matter what the guest does with its BPR, we can always set/get the * same value of a priority. */ -static void __hyp_text __vgic_v3_set_active_priority(u8 pri, u32 vmcr, int grp) +static void __vgic_v3_set_active_priority(u8 pri, u32 vmcr, int grp) { u8 pre, ap; u32 val; @@ -600,7 +599,7 @@ static void __hyp_text __vgic_v3_set_active_priority(u8 pri, u32 vmcr, int grp) } } -static int __hyp_text __vgic_v3_clear_highest_active_priority(void) +static int __vgic_v3_clear_highest_active_priority(void) { u8 nr_apr_regs = vtr_to_nr_apr_regs(read_gicreg(ICH_VTR_EL2)); u32 hap = 0; @@ -638,7 +637,7 @@ static int __hyp_text __vgic_v3_clear_highest_active_priority(void) return GICv3_IDLE_PRIORITY; } -static void __hyp_text __vgic_v3_read_iar(struct kvm_vcpu *vcpu, u32 vmcr, int rt) +static void __vgic_v3_read_iar(struct kvm_vcpu *vcpu, u32 vmcr, int rt) { u64 lr_val; u8 lr_prio, pmr; @@ -674,7 +673,7 @@ static void __hyp_text __vgic_v3_read_iar(struct kvm_vcpu *vcpu, u32 vmcr, int r vcpu_set_reg(vcpu, rt, ICC_IAR1_EL1_SPURIOUS); } -static void __hyp_text __vgic_v3_clear_active_lr(int lr, u64 lr_val) +static void __vgic_v3_clear_active_lr(int lr, u64 lr_val) { lr_val &= ~ICH_LR_ACTIVE_BIT; if (lr_val & ICH_LR_HW) { @@ -687,7 +686,7 @@ static void __hyp_text __vgic_v3_clear_active_lr(int lr, u64 lr_val) __gic_v3_set_lr(lr_val, lr); } -static void __hyp_text __vgic_v3_bump_eoicount(void) +static void __vgic_v3_bump_eoicount(void) { u32 hcr; @@ -696,8 +695,7 @@ static void __hyp_text __vgic_v3_bump_eoicount(void) write_gicreg(hcr, ICH_HCR_EL2); } -static void __hyp_text __vgic_v3_write_dir(struct kvm_vcpu *vcpu, - u32 vmcr, int rt) +static void __vgic_v3_write_dir(struct kvm_vcpu *vcpu, u32 vmcr, int rt) { u32 vid = vcpu_get_reg(vcpu, rt); u64 lr_val; @@ -720,7 +718,7 @@ static void __hyp_text __vgic_v3_write_dir(struct kvm_vcpu *vcpu, __vgic_v3_clear_active_lr(lr, lr_val); } -static void __hyp_text __vgic_v3_write_eoir(struct kvm_vcpu *vcpu, u32 vmcr, int rt) +static void __vgic_v3_write_eoir(struct kvm_vcpu *vcpu, u32 vmcr, int rt) { u32 vid = vcpu_get_reg(vcpu, rt); u64 lr_val; @@ -757,17 +755,17 @@ static void __hyp_text __vgic_v3_write_eoir(struct kvm_vcpu *vcpu, u32 vmcr, int __vgic_v3_clear_active_lr(lr, lr_val); } -static void __hyp_text __vgic_v3_read_igrpen0(struct kvm_vcpu *vcpu, u32 vmcr, int rt) +static void __vgic_v3_read_igrpen0(struct kvm_vcpu *vcpu, u32 vmcr, int rt) { vcpu_set_reg(vcpu, rt, !!(vmcr & ICH_VMCR_ENG0_MASK)); } -static void __hyp_text __vgic_v3_read_igrpen1(struct kvm_vcpu *vcpu, u32 vmcr, int rt) +static void __vgic_v3_read_igrpen1(struct kvm_vcpu *vcpu, u32 vmcr, int rt) { vcpu_set_reg(vcpu, rt, !!(vmcr & ICH_VMCR_ENG1_MASK)); } -static void __hyp_text __vgic_v3_write_igrpen0(struct kvm_vcpu *vcpu, u32 vmcr, int rt) +static void __vgic_v3_write_igrpen0(struct kvm_vcpu *vcpu, u32 vmcr, int rt) { u64 val = vcpu_get_reg(vcpu, rt); @@ -779,7 +777,7 @@ static void __hyp_text __vgic_v3_write_igrpen0(struct kvm_vcpu *vcpu, u32 vmcr, __vgic_v3_write_vmcr(vmcr); } -static void __hyp_text __vgic_v3_write_igrpen1(struct kvm_vcpu *vcpu, u32 vmcr, int rt) +static void __vgic_v3_write_igrpen1(struct kvm_vcpu *vcpu, u32 vmcr, 
int rt) { u64 val = vcpu_get_reg(vcpu, rt); @@ -791,17 +789,17 @@ static void __hyp_text __vgic_v3_write_igrpen1(struct kvm_vcpu *vcpu, u32 vmcr, __vgic_v3_write_vmcr(vmcr); } -static void __hyp_text __vgic_v3_read_bpr0(struct kvm_vcpu *vcpu, u32 vmcr, int rt) +static void __vgic_v3_read_bpr0(struct kvm_vcpu *vcpu, u32 vmcr, int rt) { vcpu_set_reg(vcpu, rt, __vgic_v3_get_bpr0(vmcr)); } -static void __hyp_text __vgic_v3_read_bpr1(struct kvm_vcpu *vcpu, u32 vmcr, int rt) +static void __vgic_v3_read_bpr1(struct kvm_vcpu *vcpu, u32 vmcr, int rt) { vcpu_set_reg(vcpu, rt, __vgic_v3_get_bpr1(vmcr)); } -static void __hyp_text __vgic_v3_write_bpr0(struct kvm_vcpu *vcpu, u32 vmcr, int rt) +static void __vgic_v3_write_bpr0(struct kvm_vcpu *vcpu, u32 vmcr, int rt) { u64 val = vcpu_get_reg(vcpu, rt); u8 bpr_min = __vgic_v3_bpr_min() - 1; @@ -818,7 +816,7 @@ static void __hyp_text __vgic_v3_write_bpr0(struct kvm_vcpu *vcpu, u32 vmcr, int __vgic_v3_write_vmcr(vmcr); } -static void __hyp_text __vgic_v3_write_bpr1(struct kvm_vcpu *vcpu, u32 vmcr, int rt) +static void __vgic_v3_write_bpr1(struct kvm_vcpu *vcpu, u32 vmcr, int rt) { u64 val = vcpu_get_reg(vcpu, rt); u8 bpr_min = __vgic_v3_bpr_min(); @@ -838,7 +836,7 @@ static void __hyp_text __vgic_v3_write_bpr1(struct kvm_vcpu *vcpu, u32 vmcr, int __vgic_v3_write_vmcr(vmcr); } -static void __hyp_text __vgic_v3_read_apxrn(struct kvm_vcpu *vcpu, int rt, int n) +static void __vgic_v3_read_apxrn(struct kvm_vcpu *vcpu, int rt, int n) { u32 val; @@ -850,7 +848,7 @@ static void __hyp_text __vgic_v3_read_apxrn(struct kvm_vcpu *vcpu, int rt, int n vcpu_set_reg(vcpu, rt, val); } -static void __hyp_text __vgic_v3_write_apxrn(struct kvm_vcpu *vcpu, int rt, int n) +static void __vgic_v3_write_apxrn(struct kvm_vcpu *vcpu, int rt, int n) { u32 val = vcpu_get_reg(vcpu, rt); @@ -860,56 +858,49 @@ static void __hyp_text __vgic_v3_write_apxrn(struct kvm_vcpu *vcpu, int rt, int __vgic_v3_write_ap1rn(val, n); } -static void __hyp_text __vgic_v3_read_apxr0(struct kvm_vcpu *vcpu, +static void __vgic_v3_read_apxr0(struct kvm_vcpu *vcpu, u32 vmcr, int rt) { __vgic_v3_read_apxrn(vcpu, rt, 0); } -static void __hyp_text __vgic_v3_read_apxr1(struct kvm_vcpu *vcpu, +static void __vgic_v3_read_apxr1(struct kvm_vcpu *vcpu, u32 vmcr, int rt) { __vgic_v3_read_apxrn(vcpu, rt, 1); } -static void __hyp_text __vgic_v3_read_apxr2(struct kvm_vcpu *vcpu, - u32 vmcr, int rt) +static void __vgic_v3_read_apxr2(struct kvm_vcpu *vcpu, u32 vmcr, int rt) { __vgic_v3_read_apxrn(vcpu, rt, 2); } -static void __hyp_text __vgic_v3_read_apxr3(struct kvm_vcpu *vcpu, - u32 vmcr, int rt) +static void __vgic_v3_read_apxr3(struct kvm_vcpu *vcpu, u32 vmcr, int rt) { __vgic_v3_read_apxrn(vcpu, rt, 3); } -static void __hyp_text __vgic_v3_write_apxr0(struct kvm_vcpu *vcpu, - u32 vmcr, int rt) +static void __vgic_v3_write_apxr0(struct kvm_vcpu *vcpu, u32 vmcr, int rt) { __vgic_v3_write_apxrn(vcpu, rt, 0); } -static void __hyp_text __vgic_v3_write_apxr1(struct kvm_vcpu *vcpu, - u32 vmcr, int rt) +static void __vgic_v3_write_apxr1(struct kvm_vcpu *vcpu, u32 vmcr, int rt) { __vgic_v3_write_apxrn(vcpu, rt, 1); } -static void __hyp_text __vgic_v3_write_apxr2(struct kvm_vcpu *vcpu, - u32 vmcr, int rt) +static void __vgic_v3_write_apxr2(struct kvm_vcpu *vcpu, u32 vmcr, int rt) { __vgic_v3_write_apxrn(vcpu, rt, 2); } -static void __hyp_text __vgic_v3_write_apxr3(struct kvm_vcpu *vcpu, - u32 vmcr, int rt) +static void __vgic_v3_write_apxr3(struct kvm_vcpu *vcpu, u32 vmcr, int rt) { __vgic_v3_write_apxrn(vcpu, rt, 3); } -static 
void __hyp_text __vgic_v3_read_hppir(struct kvm_vcpu *vcpu, - u32 vmcr, int rt) +static void __vgic_v3_read_hppir(struct kvm_vcpu *vcpu, u32 vmcr, int rt) { u64 lr_val; int lr, lr_grp, grp; @@ -928,16 +919,14 @@ static void __hyp_text __vgic_v3_read_hppir(struct kvm_vcpu *vcpu, vcpu_set_reg(vcpu, rt, lr_val & ICH_LR_VIRTUAL_ID_MASK); } -static void __hyp_text __vgic_v3_read_pmr(struct kvm_vcpu *vcpu, - u32 vmcr, int rt) +static void __vgic_v3_read_pmr(struct kvm_vcpu *vcpu, u32 vmcr, int rt) { vmcr &= ICH_VMCR_PMR_MASK; vmcr >>= ICH_VMCR_PMR_SHIFT; vcpu_set_reg(vcpu, rt, vmcr); } -static void __hyp_text __vgic_v3_write_pmr(struct kvm_vcpu *vcpu, - u32 vmcr, int rt) +static void __vgic_v3_write_pmr(struct kvm_vcpu *vcpu, u32 vmcr, int rt) { u32 val = vcpu_get_reg(vcpu, rt); @@ -949,15 +938,13 @@ static void __hyp_text __vgic_v3_write_pmr(struct kvm_vcpu *vcpu, write_gicreg(vmcr, ICH_VMCR_EL2); } -static void __hyp_text __vgic_v3_read_rpr(struct kvm_vcpu *vcpu, - u32 vmcr, int rt) +static void __vgic_v3_read_rpr(struct kvm_vcpu *vcpu, u32 vmcr, int rt) { u32 val = __vgic_v3_get_highest_active_priority(); vcpu_set_reg(vcpu, rt, val); } -static void __hyp_text __vgic_v3_read_ctlr(struct kvm_vcpu *vcpu, - u32 vmcr, int rt) +static void __vgic_v3_read_ctlr(struct kvm_vcpu *vcpu, u32 vmcr, int rt) { u32 vtr, val; @@ -978,8 +965,7 @@ static void __hyp_text __vgic_v3_read_ctlr(struct kvm_vcpu *vcpu, vcpu_set_reg(vcpu, rt, val); } -static void __hyp_text __vgic_v3_write_ctlr(struct kvm_vcpu *vcpu, - u32 vmcr, int rt) +static void __vgic_v3_write_ctlr(struct kvm_vcpu *vcpu, u32 vmcr, int rt) { u32 val = vcpu_get_reg(vcpu, rt); @@ -996,7 +982,7 @@ static void __hyp_text __vgic_v3_write_ctlr(struct kvm_vcpu *vcpu, write_gicreg(vmcr, ICH_VMCR_EL2); } -int __hyp_text __vgic_v3_perform_cpuif_access(struct kvm_vcpu *vcpu) +int __vgic_v3_perform_cpuif_access(struct kvm_vcpu *vcpu) { int rt; u32 esr;
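The two mechanisms introduced by the nvhe/Makefile hunk above (symbol prefixing plus section renaming via objcopy, and erasure of the ftrace flags from the compile flags) can be tried in isolation. Below is a minimal standalone sketch, not the kernel's actual kbuild integration: the file names (demo.c and friends) are hypothetical, CFLAGS_FTRACE merely stands in for the kernel's CC_FLAGS_FTRACE, and recipe lines must be tab-indented.

CC      ?= gcc
OBJCOPY ?= objcopy

# Stand-in for CC_FLAGS_FTRACE; with gcc this is typically -pg.
CFLAGS_FTRACE := -pg
CFLAGS        := -O2 $(CFLAGS_FTRACE)
# Erase the ftrace flags again; equivalent to marking all code 'notrace'.
CFLAGS        := $(subst $(CFLAGS_FTRACE),,$(CFLAGS))

# Prefix every symbol and move the code out of .text into .hyp.text,
# which is what the per-function __hyp_text attribute used to do.
demo.hyp.o: demo.hyp.tmp.o
	$(OBJCOPY) --prefix-symbols=__hyp_text_ \
		--rename-section=.text=.hyp.text $< $@

# Compile normally, minus the ftrace flags.
demo.hyp.tmp.o: demo.c
	$(CC) $(CFLAGS) -c -o $@ $<

If the sketch behaves like the kernel rule, `readelf -S demo.hyp.o` lists the code under .hyp.text and `nm demo.hyp.o` shows every symbol carrying the __hyp_text_ prefix. The prefixing is also why the linker-script aliases added earlier in the series exist (e.g. __hyp_text_sve_save_state = sve_save_state; at the top of this page): references from prefixed nVHE objects to ordinary kernel symbols must be pointed back at their unprefixed definitions.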
From patchwork Thu Apr 30 14:48:31 2020
X-Patchwork-Submitter: David Brazdil
X-Patchwork-Id: 11520627
From: David Brazdil
To: Catalin Marinas, James Morse, Julien Thierry, Marc Zyngier, Suzuki K Poulose, Will Deacon
Cc: David Brazdil, kvmarm@lists.cs.columbia.edu, linux-arm-kernel@lists.infradead.org, linux-kernel@vger.kernel.org
Subject: [PATCH 15/15] arm64: kvm: Lift instrumentation restrictions on VHE
Date: Thu, 30 Apr 2020 15:48:31 +0100
Message-Id: <20200430144831.59194-16-dbrazdil@google.com>
In-Reply-To: <20200430144831.59194-1-dbrazdil@google.com>
References: <20200430144831.59194-1-dbrazdil@google.com>
With VHE and nVHE executable code now completely separated, remove the build config that disabled GCOV/KASAN/UBSAN/KCOV instrumentation for VHE: VHE code now executes under the same memory mappings as the rest of the kernel, so the instrumentation callbacks are safe to run. No violations are currently being reported by either KASAN or UBSAN.

Signed-off-by: David Brazdil
---
 arch/arm64/kvm/hyp/Makefile | 8 --------
 1 file changed, 8 deletions(-)

diff --git a/arch/arm64/kvm/hyp/Makefile b/arch/arm64/kvm/hyp/Makefile
index a6883f4fed9e..f877ac404b46 100644
--- a/arch/arm64/kvm/hyp/Makefile
+++ b/arch/arm64/kvm/hyp/Makefile
@@ -10,11 +10,3 @@ obj-$(CONFIG_KVM) += vhe.o nvhe/
 
 vhe-y := vgic-v3-sr.o timer-sr.o aarch32.o vgic-v2-cpuif-proxy.o sysreg-sr.o \
	 debug-sr.o entry.o switch.o fpsimd.o tlb.o hyp-entry.o
-
-# KVM code is run at a different exception code with a different map, so
-# compiler instrumentation that inserts callbacks or checks into the code may
-# cause crashes. Just disable it.
-GCOV_PROFILE := n
-KASAN_SANITIZE := n
-UBSAN_SANITIZE := n
-KCOV_INSTRUMENT := n
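For context on the deleted lines: GCOV_PROFILE, KASAN_SANITIZE, UBSAN_SANITIZE and KCOV_INSTRUMENT are the standard per-Makefile kbuild switches for opting a directory's objects out of instrumentation. A minimal sketch of the pattern as it would still be used for code that cannot tolerate instrumentation follows; the inline comments are illustrative, not taken from the kernel source.

# Per-directory kbuild opt-outs, one per instrumentation facility.
# This patch removes them for VHE; they remain appropriate for code that
# runs outside the kernel's own mappings, such as the nVHE hyp objects.
GCOV_PROFILE    := n   # no gcov profiling counters
KASAN_SANITIZE  := n   # no KASAN shadow-memory checks
UBSAN_SANITIZE  := n   # no UBSAN check-and-report calls
KCOV_INSTRUMENT := n   # no KCOV coverage callbacks

The rationale follows the comment being deleted: instrumentation inserts calls to kernel-resident callbacks, which crash when executed under a different exception level with a different memory map. After this series only nVHE code runs that way, so only the nvhe/ Makefile keeps the restriction.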